WorldWideScience

Sample records for tvl1-l2 minimization algorithm

  1. TV-L1 optical flow for vector valued images

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Roholm, Lars; Nielsen, Mads

    2011-01-01

    The variational TV-L1 framework has become one of the most popular and successful approaches for calculating optical flow. One reason for the popularity is the very appealing properties of the two terms in the energy formulation of the problem: the robust L1-norm of the data fidelity term combined with the total variation (TV) regularization that smoothes the flow but preserves strong discontinuities such as edges. Specifically, the approach of Zach et al. [1] has provided a very clean and efficient algorithm for calculating TV-L1 optical flows between grayscale images. In this paper we propose...
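    As a hedged illustration of the duality-based scheme of Zach et al. [1] referenced above: each iteration alternates a TV denoising step on the flow with a pointwise thresholding step on the linearized data term. The NumPy sketch below is generic (array shapes and variable names are assumptions, not the authors' code); a full solver additionally needs the dual TV denoising step, image warping and a coarse-to-fine pyramid.

```python
import numpy as np

def tvl1_data_step(u, rho, grad_I, lam, theta):
    """Pointwise thresholding step on the TV-L1 data term (after Zach et al.).

    u      : current flow field, shape (2, H, W)
    rho    : linearized residual I1(x + u0) - I0(x) + (u - u0) . grad_I, shape (H, W)
    grad_I : spatial gradient of the warped second image, shape (2, H, W)
    lam    : data term weight lambda;  theta : quadratic coupling parameter
    """
    g2 = np.maximum((grad_I ** 2).sum(axis=0), 1e-12)  # |grad I|^2, guarded against /0
    case1 = rho < -lam * theta * g2   # residual very negative: step +lam*theta*grad
    case2 = rho > lam * theta * g2    # residual very positive: step -lam*theta*grad
    case3 = ~(case1 | case2)          # small residual: cancel it exactly
    v = u.copy()
    v += lam * theta * grad_I * case1
    v -= lam * theta * grad_I * case2
    v -= (rho / g2) * grad_I * case3
    return v
```

    This closed-form minimization of the coupled data term is what makes the duality-based algorithm so efficient: no inner iterations are needed for the L1 data fidelity.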

  2. Robust Non-Local TV-L1 Optical Flow Estimation with Occlusion Detection.

    Science.gov (United States)

    Zhang, Congxuan; Chen, Zhen; Wang, Mingrun; Li, Ming; Jiang, Shaofeng

    2017-06-05

    In this paper, we propose a robust non-local TV-L1 optical flow method with occlusion detection to address the problem of weak robustness of optical flow estimation under motion occlusion. Firstly, a TV-L1 form for flow estimation is defined using a combination of the brightness constancy and gradient constancy assumptions in the data term, and by varying the weight under the Charbonnier function in the smoothing term. Secondly, to handle the potential risk of outliers in the flow field, a general non-local term is added to the TV-L1 optical flow model to produce the typical non-local TV-L1 form. Thirdly, an occlusion detection method based on triangulation is presented to detect the occlusion regions of the sequence. The proposed non-local TV-L1 optical flow model is solved in a linearized iterative scheme using improved median filtering and a coarse-to-fine computing strategy. The results of comprehensive experiments indicate that the proposed method can overcome the significant influence of non-rigid motion, motion occlusion, and large displacement motion. Experiments comparing the proposed method with existing state-of-the-art methods on the Middlebury and MPI Sintel test sequences show that the proposed method has higher accuracy and better robustness.

  3. Duality based optical flow algorithms with applications

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    We consider the popular TV-L1 optical flow formulation and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total variation regularization, and related formulations of the optical flow problem that may be used with a duality based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X-ray images of different hands, taken using different imaging devices, are registered using a TV-L1 optical flow algorithm. We propose to regularize the input images using sparsity enhancing regularization of the image gradient to improve registration results. The second application is registration of 2D...

  4. Combine TV-L1 model with guided image filtering for wide and faint ring artifacts correction of in-line x-ray phase contrast computed tomography.

    Science.gov (United States)

    Ji, Dongjiang; Qu, Gangrong; Hu, Chunhong; Zhao, Yuqing; Chen, Xiaodong

    2018-01-01

    In practice, mis-calibrated detector pixels give rise to wide and faint ring artifacts in the reconstructed image of in-line phase-contrast computed tomography (IL-PC-CT). Ring artifact correction is essential in IL-PC-CT. In this study, a novel method for correcting wide and faint ring artifacts is presented, based on combining the TV-L1 model with guided image filtering (GIF) in the reconstructed image domain. The new correction method includes two main steps: the GIF step and the TV-L1 step. To validate the performance of this method, simulated data and real experimental synchrotron data are used. The results demonstrate that the TV-L1 model with the GIF step can effectively correct the wide and faint ring artifacts in IL-PC-CT.
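    For context, the GIF step above has a simple closed form. The sketch below is a generic pure-NumPy implementation of the gray-scale guided filter of He et al. (not the authors' code; `r` and `eps` denote the usual window radius and regularization parameters, and `box_mean` is a helper introduced here):

```python
import numpy as np

def box_mean(x, r):
    # Mean over a (2r+1)x(2r+1) window via an integral image (edge padding).
    k = 2 * r + 1
    q = np.pad(x, r, mode='edge')
    ii = np.zeros((q.shape[0] + 1, q.shape[1] + 1))
    ii[1:, 1:] = q.cumsum(0).cumsum(1)
    return (ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]) / k ** 2

def guided_filter(I, p, r, eps):
    # Gray-scale guided image filter (He et al.): the output is locally a linear
    # transform a*I + b of the guidance image I, fitted over each window.
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mI * mp
    var_I = box_mean(I * I, r) - mI * mI
    a = cov_Ip / (var_I + eps)   # eps trades edge preservation against smoothing
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)
```

    With `I = p` the filter acts as an edge-preserving smoother, which is what makes it suitable for isolating the smooth ring-artifact component before the TV-L1 step.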

  5. An optimal L1-minimization algorithm for stationary Hamilton-Jacobi equations

    KAUST Repository

    Guermond, Jean-Luc; Popov, Bojan

    2009-01-01

    We describe an algorithm for solving steady one-dimensional convex-like Hamilton-Jacobi equations using an L1-minimization technique on piecewise linear approximations. For a large class of convex Hamiltonians, the algorithm is proven to be convergent and of optimal complexity whenever the viscosity solution is q-semiconcave. Numerical results are presented to illustrate the performance of the method.

  7. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    Science.gov (United States)

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low dose and few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee global convergence of the proposed algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into the SIR framework for low dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
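    The l1 sparse coding subproblem that arises here, min_a 0.5*||D a - y||^2 + lam*||a||_1, is classically solved by iterative soft thresholding. A minimal proximal-gradient (ISTA) sketch, with assumed names `D` and `y` (this is the generic method, not necessarily the authors' exact solver):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1: shrink each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_sparse_code(D, y, lam, n_iter=200):
    # Minimize 0.5*||D a - y||^2 + lam*||a||_1 by proximal gradient descent (ISTA).
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the smooth gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a - (D.T @ (D @ a - y)) / L, lam / L)
    return a
```

    Replacing the l0 penalty by the convex l1 norm is precisely what makes this subproblem tractable and enables the global convergence argument for the overall alternating minimization.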

  8. A Fast and Accurate Algorithm for l1 Minimization Problems in Compressive Sampling (Preprint)

    Science.gov (United States)

    2013-01-22

    However, updating uk+1 via the formulation of Step 2 in Algorithm 1 can be implemented through the use of the component-wise Gauss-Seidel iteration which...may accelerate the rate of convergence of the algorithm and therefore reduce the total CPU-time consumed. The efficiency of component-wise Gauss-Seidel ...Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse Problems, 28 (2012), p

  9. Parallel algorithm of real-time infrared image restoration based on total variation theory

    Science.gov (United States)

    Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei

    2015-10-01

    Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods allow us to remove the noise but penalize too much the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional. It converts the restoration process into an optimization problem over a functional involving a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model exploits the remote sensing data sufficiently and preserves information at edges caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm can be easily parallelized. Therefore a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. Massive computation of image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis measuring the quality of the restored image compared to the input image is presented. Experimental results show that the TV-L1 filter can restore the varying background image reasonably, and that its performance can achieve the requirement of real-time image processing.

  10. Beam orientation optimization for intensity modulated radiation therapy using adaptive l2,1-minimization

    International Nuclear Information System (INIS)

    Jia Xun; Men Chunhua; Jiang, Steve B; Lou Yifei

    2011-01-01

    Beam orientation optimization (BOO) is a key component in the process of intensity modulated radiation therapy treatment planning. It determines to what degree one can achieve a good treatment plan in the subsequent plan optimization process. In this paper, we have developed a BOO algorithm via adaptive l2,1-minimization. Specifically, we introduce a sparsity objective function term into our model which contains weighting factors for each beam angle adaptively adjusted during the optimization process. Such an objective function favors a small number of beam angles. By optimizing a total objective function consisting of a dosimetric term and the sparsity term, we are able to identify unimportant beam angles and gradually remove them without largely sacrificing the dosimetric objective. In one typical prostate case, the convergence property of our algorithm, as well as how beam angles are selected during the optimization process, is demonstrated. Fluence map optimization (FMO) is then performed based on the optimized beam angles. The resulting plan quality is presented and is found to be better than that of equiangular beam orientations. We have further systematically validated our algorithm in the contexts of 5-9 coplanar beams for five prostate cases and one head and neck case. For each case, the final FMO objective function value is used to compare the optimized beam orientations with the equiangular ones. It is found that, in the majority of cases tested, our BOO algorithm leads to beam configurations which attain lower FMO objective function values than those of corresponding equiangular cases, indicating the effectiveness of our BOO algorithm. Superior plan qualities are also demonstrated by comparing DVH curves between BOO plans and equiangular plans.
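    The reason an l2,1 term removes whole beam angles is visible in its proximal operator, which shrinks each group to zero as a unit rather than entry by entry. A small illustrative NumPy sketch (generic, not the paper's notation; each row of `X` stands in for one beam angle's block of variables):

```python
import numpy as np

def prox_l21(X, t):
    """Proximal operator of t * sum_g ||x_g||_2, with rows of X as groups.

    Each row is shrunk toward zero by t in Euclidean norm; rows whose norm is
    at most t are set exactly to zero -- the group-level analogue of soft
    thresholding, which is what eliminates unimportant beam angles.
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```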

  11. Developing cross entropy genetic algorithm for solving Two-Dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP)

    Science.gov (United States)

    Paramestha, D. L.; Santosa, B.

    2018-04-01

    Two-dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP) is a combination of the Heterogeneous Fleet VRP and a packing problem well known as the Two-Dimensional Bin Packing Problem (BPP). 2L-HFVRP is a Heterogeneous Fleet VRP in which customer demands are formed by a set of two-dimensional rectangular weighted items. These demands must be served from the depot by a heterogeneous fleet of vehicles with fixed and variable costs. The objective of 2L-HFVRP is to minimize the total transportation cost. All formed routes must be consistent with the capacity and loading process of the vehicle. Sequential and unrestricted scenarios are considered in this paper. We propose a metaheuristic combining the Genetic Algorithm (GA) and the Cross Entropy (CE) method, named Cross Entropy Genetic Algorithm (CEGA), to solve the 2L-HFVRP. The mutation concept from GA is used to speed up the CE algorithm in finding the optimal solution. The mutation mechanism is based on local improvement (2-opt, 1-1 Exchange, and 1-0 Exchange). The probability transition matrix mechanism of CE is used to avoid getting stuck in local optima. The effectiveness of CEGA was tested on benchmark 2L-HFVRP instances. The experimental results are competitive with those of other algorithms.
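    Of the local improvement moves listed above, 2-opt is the classical one: it reverses a route segment whenever doing so shortens the tour. A generic sketch for a symmetric distance matrix (illustrative only, not the authors' implementation):

```python
def two_opt(route, dist):
    """One full pass of 2-opt local improvement, repeated until no move helps.

    route : list of node indices (a closed tour, returning to route[0])
    dist  : symmetric matrix of pairwise distances
    """
    n = len(route)
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = route[i - 1], route[i]
                c, d = route[j], route[(j + 1) % n]
                # Replacing edges (a,b) and (c,d) by (a,c) and (b,d) amounts
                # to reversing the segment route[i..j].
                if dist[a][b] + dist[c][d] > dist[a][c] + dist[b][d] + 1e-12:
                    route[i:j + 1] = reversed(route[i:j + 1])
                    improved = True
    return route
```

    For a unit square with the crossing tour [0, 2, 1, 3], a single 2-opt move uncrosses the route and recovers the optimal perimeter tour.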

  12. The Adjoint Method for the Inverse Problem of Option Pricing

    Directory of Open Access Journals (Sweden)

    Shou-Lei Wang

    2014-01-01

    The estimation of implied volatility is a typical PDE inverse problem. In this paper, we propose the TV-L1 model for identifying the implied volatility. The optimal volatility function is found by minimizing the cost functional measuring the discrepancy. The gradient is computed via the adjoint method, which provides us with an exact value of the gradient needed for the minimization procedure. We use the limited memory quasi-Newton algorithm (L-BFGS) to find the optimum, and numerical examples show the effectiveness of the presented method.

  13. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

    The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the k-th step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x_k. The auxiliary functions g_k(x): D ⊂ R^J → R_+ are nonnegative on the set D, each x_k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = D̄, the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) − G_{k-1}(x_{k-1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x_k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x_k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x_k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton–Raphson method. The proof techniques used for SUMMA can be extended to obtain related results.
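    As a toy instance of the penalty-function methods named above as special cases of SUMMA, one can take g_k(x) = k * p(x) with an increasing penalty weight k and solve each unconstrained problem by warm-started gradient descent. The sketch below minimizes f(x) = (x - 2)^2 subject to x ≤ 1; all parameter choices are illustrative.

```python
def penalty_minimize(f_grad, pen_grad, x0, ks=(1, 10, 100, 1000), lr=1e-4, steps=20000):
    # Sequential unconstrained minimization: for each penalty weight k, minimize
    # G_k(x) = f(x) + k * p(x) by gradient descent, warm-starting from the last solve.
    x = x0
    for k in ks:
        for _ in range(steps):
            x -= lr * (f_grad(x) + k * pen_grad(x))
    return x

# f(x) = (x - 2)^2, with the constraint x <= 1 encoded as p(x) = max(x - 1, 0)^2.
f_grad = lambda x: 2.0 * (x - 2.0)
pen_grad = lambda x: 2.0 * max(x - 1.0, 0.0)
x_star = penalty_minimize(f_grad, pen_grad, x0=0.0)
```

    As k grows, the unconstrained minimizers (2 + k)/(1 + k) approach the constrained minimizer x̂ = 1 from the infeasible side, illustrating the sequential-minimization pattern the abstract describes.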

  14. Surface Reconstruction and Image Enhancement via $L^1$-Minimization

    KAUST Repository

    Dobrev, Veselin; Guermond, Jean-Luc; Popov, Bojan

    2010-01-01

    A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced. The reconstruction algorithm is illustrated on various test cases including natural and urban terrain data, and enhancement of low-resolution or aliased images. Copyright © by SIAM.

  15. A Unified View of Exact Continuous Penalties for l2-l0 Minimization

    OpenAIRE

    Soubies, Emmanuel; Blanc-Féraud, Laure; Aubert, Gilles

    2017-01-01

    Numerous nonconvex continuous penalties have been proposed to approach the l0 pseudo-norm for optimization purposes. Apart from the theoretical results for the convex l1 relaxation under restrictive hypotheses, only a few works have been devoted to analyzing the consistency, in terms of minimizers, between the l0-regularized least squares functional and relaxed ones using continuous approximations. In this context, two questions are of fundamental importance: does relaxed functi...

  17. Improving oncoplastic breast tumor bed localization for radiotherapy planning using image registration algorithms

    Science.gov (United States)

    Wodzinski, Marek; Skalski, Andrzej; Ciepiela, Izabela; Kuszewski, Tomasz; Kedzierawski, Piotr; Gajda, Janusz

    2018-02-01

    Knowledge about tumor bed localization and its shape analysis is a crucial factor for preventing irradiation of healthy tissues during supportive radiotherapy and as a result, cancer recurrence. The localization process is especially hard for tumors placed nearby soft tissues, which undergo complex, nonrigid deformations. Among them, breast cancer can be considered as the most representative example. A natural approach to improving tumor bed localization is the use of image registration algorithms. However, this involves two unusual aspects which are not common in typical medical image registration: the real deformation field is discontinuous, and there is no direct correspondence between the cancer and its bed in the source and the target 3D images respectively. The tumor no longer exists during radiotherapy planning. Therefore, a traditional evaluation approach based on known, smooth deformations and target registration error are not directly applicable. In this work, we propose alternative artificial deformations which model the tumor bed creation process. We perform a comprehensive evaluation of the most commonly used deformable registration algorithms: B-Splines free form deformations (B-Splines FFD), different variants of the Demons and TV-L1 optical flow. The evaluation procedure includes quantitative assessment of the dedicated artificial deformations, target registration error calculation, 3D contour propagation and medical experts visual judgment. The results demonstrate that the currently, practically applied image registration (rigid registration and B-Splines FFD) are not able to correctly reconstruct discontinuous deformation fields. 
We show that the symmetric Demons provide the most accurate soft tissues alignment in terms of the ability to reconstruct the deformation field, target registration error and relative tumor volume change, while B-Splines FFD and TV-L1 optical flow are not an appropriate choice for the breast tumor bed localization problem.

  18. Exploiting the Error-Correcting Capabilities of Low Density Parity Check Codes in Distributed Video Coding using Optical Flow

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Søgaard, Jacob; Salmistraro, Matteo

    2012-01-01

    We consider Distributed Video Coding (DVC) in the presence of communication errors. First, we present DVC side information generation based on a new method of optical flow driven frame interpolation, where a highly optimized TV-L1 algorithm is used for the flow calculations, combining three flows. Thereafter, methods for exploiting the error-correcting capabilities of the LDPCA code in DVC are investigated. The proposed frame interpolation adds a symmetric flow constraint to the standard forward-backward frame interpolation scheme, which improves quality and handling of large motion. The three flows are combined in one solution. The proposed frame interpolation method consistently outperforms an overlapped block motion compensation scheme and a previous TV-L1 optical flow frame interpolation method, with average PSNR improvements of 1.3 dB and 2.3 dB respectively. For a GOP size of 2...

  19. Algorithm for finding minimal cut sets in a fault tree

    International Nuclear Information System (INIS)

    Rosenberg, Ladislav

    1996-01-01

    This paper presents several algorithms that have been used in a computer code for fault-tree analysis by the minimal cut sets method. The main algorithm is a more efficient version of the new CARA algorithm, which finds minimal cut sets with an auxiliary dynamical structure. The presented algorithm enables one to find the minimal cut sets according to defined requirements: the order of minimal cut sets, the number of minimal cut sets, or both. This algorithm is three to six times faster than the primary version of the CARA algorithm.
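    The abstract gives no implementation detail of CARA itself, so the following is only a generic sketch of how minimal cut sets are classically obtained from AND/OR gates: a top-down (MOCUS-style) expansion, followed by absorption to keep only the minimal sets.

```python
def minimal_cut_sets(gate, tree):
    """Top-down expansion of a fault tree into its minimal cut sets.

    tree maps gate names to ('AND' | 'OR', [children]); names not in tree
    are basic events. Returns a list of frozensets of basic events.
    """
    if gate not in tree:                 # basic event: a singleton cut set
        return [frozenset([gate])]
    op, children = tree[gate]
    child_sets = [minimal_cut_sets(c, tree) for c in children]
    if op == 'OR':                       # OR: union of the children's cut sets
        cuts = [cs for sets in child_sets for cs in sets]
    else:                                # AND: cross-product of the children's cut sets
        cuts = [frozenset()]
        for sets in child_sets:
            cuts = [a | b for a in cuts for b in sets]
    # Absorption: drop any cut set that strictly contains another one.
    return [c for c in cuts if not any(o < c for o in cuts)]
```

    For the tree TOP = AND(OR(A, B), OR(A, C)) this yields the minimal cut sets {A} and {B, C}; the candidate {A, C}, for instance, is absorbed by {A}.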

  20. Enhanced spatial resolution in fluorescence molecular tomography using restarted L1-regularized nonlinear conjugate gradient algorithm.

    Science.gov (United States)

    Shi, Junwei; Liu, Fei; Zhang, Guanglei; Luo, Jianwen; Bai, Jing

    2014-04-01

    Owing to the high degree of scattering of light through tissues, the ill-posedness of fluorescence molecular tomography (FMT) inverse problem causes relatively low spatial resolution in the reconstruction results. Unlike L2 regularization, L1 regularization can preserve the details and reduce the noise effectively. Reconstruction is obtained through a restarted L1 regularization-based nonlinear conjugate gradient (re-L1-NCG) algorithm, which has been proven to be able to increase the computational speed with low memory consumption. The algorithm consists of inner and outer iterations. In the inner iteration, L1-NCG is used to obtain the L1-regularized results. In the outer iteration, the restarted strategy is used to increase the convergence speed of L1-NCG. To demonstrate the performance of re-L1-NCG in terms of spatial resolution, simulation and physical phantom studies with fluorescent targets located with different edge-to-edge distances were carried out. The reconstruction results show that the re-L1-NCG algorithm has the ability to resolve targets with an edge-to-edge distance of 0.1 cm at a depth of 1.5 cm, which is a significant improvement for FMT.

  1. Exact solutions of the sl-boson system in the U(2l + 1) ↔ O(2l + 2) transitional region

    CERN Document Server

    Zhang Xin

    2002-01-01

    Exact eigen-energies and the corresponding wavefunctions of the interacting sl-boson system in the U(2l + 1) ↔ O(2l + 2) transitional region are obtained by using an algebraic Bethe ansatz with the infinite-dimensional Lie algebraic technique. A numerical algorithm for solving the Bethe ansatz equations using a mathematical package is also outlined.

  2. Minimal Supersymmetric $SU(4) \times SU(2)_L \times SU(2)_R$

    CERN Document Server

    King, S F

    1998-01-01

    We present a minimal string-inspired supersymmetric $SU(4) \times SU(2)_L \times SU(2)_R$ … potential in this model, based on a generalisation of that recently proposed by Dvali, Lazarides and Shafi. The model contains a global U(1) R-symmetry and reduces to the MSSM at low energies. However it improves on the MSSM since it explains the magnitude of its $\mu$ term and gives a prediction for $\tan \beta$ … both `cold' and `hot' dark matter candidates. A period of hybrid inflation above the symmetry breaking scale is also possible in this model. Finally it suggests the existence of `heavy' charge $\pm e/6$ (colored) and $\pm e/2$ (color singlet) states.

  3. Local Community Detection Algorithm Based on Minimal Cluster

    Directory of Open Access Journals (Sweden)

    Yong Zhou

    2016-01-01

    In order to discover the structure of local communities more effectively, this paper puts forward a new local community detection algorithm based on a minimal cluster. Most local community detection algorithms begin from one node. The agglomeration ability of a single node must be less than that of multiple nodes, so the community extension of the algorithm in this paper no longer begins from the initial node only, but from a node cluster containing this initial node, in which the nodes are relatively densely connected with each other. The algorithm mainly includes two phases: first it detects the minimal cluster, and then it finds the local community extended from the minimal cluster. Experimental results show that the quality of the local community detected by our algorithm is much better than that of other algorithms, in both real and simulated networks.

  4. An algorithm for reduct cardinality minimization

    KAUST Repository

    AbouEisha, Hassan M.; Al Farhan, Mohammed; Chikalov, Igor; Moshkov, Mikhail

    2013-12-01

    This paper is devoted to a new algorithm for reduct cardinality minimization. The algorithm transforms the initial table into a decision table of a special kind, simplifies this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. Results of computer experiments with decision tables from the UCI ML Repository are discussed. © 2013 IEEE.

  6. L 1 Generalized Procrustes 2D Shape Alignment

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2008-01-01

    on the orientation of the coordinate system, i.e. it is not rotationally invariant. However, by simultaneously minimizing the city block distances in a series of rotated coordinate systems we are able to approximate the circular equidistance curves of Euclidean distances with a regular polygonal equidistance curve to the precision needed. Using 3 coordinate systems rotated 30 degrees we get a 12-sided regular polygon, with which we achieve deviations from Euclidean distances of less than 2% over all directions. This new formulation allows for minimization in the L1-norm using LP. We demonstrate that the use of the L1-norm...

  7. ILUCG algorithm which minimizes in the Euclidean norm

    International Nuclear Information System (INIS)

    Petravic, M.; Kuo-Petravic, G.

    1978-07-01

    An algorithm is presented which solves sparse systems of linear equations of the form Ax = y, where A is non-symmetric, by the Incomplete LU Decomposition-Conjugate Gradient (ILUCG) method. The algorithm minimizes the error in the Euclidean norm ||x_i − x||_2, where x_i is the solution vector after the i-th iteration and x the exact solution vector. The results of a test on one real problem indicate that the algorithm is likely to be competitive with the best existing algorithms of its type.

  8. A Fast Alternating Minimization Algorithm for Nonlocal Vectorial Total Variational Multichannel Image Denoising

    Directory of Open Access Journals (Sweden)

    Rubing Xi

    2014-01-01

    The variational models with nonlocal regularization offer superior image restoration quality over traditional methods, but the processing speed remains a bottleneck due to the computational load of the recent iterative algorithms. In this paper, a fast algorithm is proposed to restore multichannel images in the presence of additive Gaussian noise by minimizing an energy function consisting of an l2-norm fidelity term and a nonlocal vectorial total variation regularization term. This algorithm is based on the variable splitting and penalty techniques in optimization. Following our previous work on the proof of the existence and the uniqueness of the solution of the model, we establish and prove the convergence properties of this algorithm, which are finite convergence for some variables and q-linear convergence for the rest. Experiments show that this model has a fabulous texture-preserving property in restoring color images. Both the theoretical derivation of the computational complexity analysis and the experimental results show that the proposed algorithm performs favorably in comparison to the widely used fixed point algorithm.

  9. On the Link Between L1-PCA and ICA.

    Science.gov (United States)

    Martin-Clemente, Ruben; Zarzoso, Vicente

    2017-03-01

    Principal component analysis (PCA) based on L1-norm maximization is an emerging technique that has drawn growing interest in the signal processing and machine learning research communities, especially due to its robustness to outliers. The present work proves that L1-norm PCA can perform independent component analysis (ICA) under the whitening assumption. However, when the source probability distributions fulfil certain conditions, the L1-norm criterion needs to be minimized rather than maximized, which can be accomplished by simple modifications on existing optimal algorithms for L1-PCA. If the sources have symmetric distributions, we show in addition that L1-PCA is linked to kurtosis optimization. A number of numerical experiments illustrate the theoretical results and analyze the comparative performance of different algorithms for ICA via L1-PCA. Although our analysis is asymptotic in the sample size, this equivalence opens interesting new perspectives for performing ICA using optimal algorithms for L1-PCA with guaranteed global convergence while inheriting the increased robustness to outliers of the L1-norm criterion.
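    The "simple modifications on existing optimal algorithms" mentioned above act on sign-based fixed-point schemes for L1-PCA. As a generic illustration (after Kwak's classical fixed-point algorithm, with an assumed data layout where columns of `X` are samples; this is not the authors' exact algorithm), the first L1 principal component can be computed as:

```python
import numpy as np

def l1_pca_component(X, n_iter=100, seed=0):
    """First L1-norm principal component by the sign fixed-point iteration.

    Maximizes ||X^T w||_1 over unit vectors w (cf. Kwak's L1-PCA algorithm).
    X has shape (d, n): one column per sample.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        w_new = X @ np.sign(X.T @ w)   # weight samples by their current sign only
        w = w_new / np.linalg.norm(w_new)
    return w
```

    The paper's observation is that, for certain source distributions, loops of this kind need only small changes so that the L1 criterion is minimized rather than maximized, which is what links L1-PCA to ICA.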

  10. Passive shimming of a superconducting magnet using the L1-norm regularized least square algorithm.

    Science.gov (United States)

    Kong, Xia; Zhu, Minhua; Xia, Ling; Wang, Qiuliang; Li, Yi; Zhu, Xuchen; Liu, Feng; Crozier, Stuart

    2016-02-01

    The uniformity of the static magnetic field B0 is of prime importance for an MRI system. The passive shimming technique is usually applied to improve the uniformity of the static field by optimizing the layout of a series of steel shims. The steel pieces are fixed in the drawers in the inner bore of the superconducting magnet, and produce a magnetizing field in the imaging region to compensate for the inhomogeneity of the B0 field. In practice, the total mass of steel used for shimming should be minimized, in addition to the field uniformity requirement. This is because the presence of steel shims may introduce a thermal stability problem. The passive shimming procedure is typically realized using the linear programming (LP) method. The LP approach however, is generally slow and also has difficulty balancing the field quality and the total amount of steel for shimming. In this paper, we have developed a new algorithm that is better able to balance the dual constraints of field uniformity and the total mass of the shims. The least square method is used to minimize the magnetic field inhomogeneity over the imaging surface with the total mass of steel being controlled by an L1-norm based constraint. The proposed algorithm has been tested with practical field data, and the results show that, with similar computational cost and mass of shim material, the new algorithm achieves superior field uniformity (43% better for the test case) compared with the conventional linear programming approach. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Installation Restoration Program. Phase 2. Confirmation/Quantification. Stage 1. Reese Air Force Base, Lubbock, Texas. Volume 2. Appendices

    Science.gov (United States)

    1988-04-01

    [Scanned drilling-log and well-completion forms (site name, site I.D., site location, hole designation and location, ground and top-of-casing elevations); OCR largely unrecoverable.] ... require a complete respect for safety by all team members to prevent injury or loss of life. 15.1 ORGANIZATION AND RESPONSIBILITIES There are eight roles

  12. TITRATION: A Randomized Study to Assess 2 Treatment Algorithms with New Insulin Glargine 300 units/mL.

    Science.gov (United States)

    Yale, Jean-François; Berard, Lori; Groleau, Mélanie; Javadi, Pasha; Stewart, John; Harris, Stewart B

    2017-10-01

    It was uncertain whether an algorithm that involves increasing insulin dosages by 1 unit/day may cause more hypoglycemia with the longer-acting insulin glargine 300 units/mL (GLA-300). The objective of this study was to compare safety and efficacy of 2 titration algorithms, INSIGHT and EDITION, for GLA-300 in people with uncontrolled type 2 diabetes mellitus, mainly in a primary care setting. This was a 12-week, open-label, randomized, multicentre pilot study. Participants were randomly assigned to 1 of 2 algorithms: they either increased their dosage by 1 unit/day (INSIGHT, n=108) or the dose was adjusted by the investigator at least once weekly, but no more often than every 3 days (EDITION, n=104). The target fasting self-monitored blood glucose was in the range of 4.4 to 5.6 mmol/L. The percentages of participants reaching the primary endpoint of fasting self-monitored blood glucose ≤5.6 mmol/L without nocturnal hypoglycemia were 19.4% (INSIGHT) and 18.3% (EDITION). At week 12, 26.9% (INSIGHT) and 28.8% (EDITION) of participants achieved a glycated hemoglobin value of ≤7%. No differences in the incidence of hypoglycemia of any category were noted between algorithms. Participants in both arms of the study were much more satisfied with their new treatment as assessed by the Diabetes Treatment Satisfaction Questionnaire. Most health-care professionals (86%) preferred the INSIGHT over the EDITION algorithm. The frequency of adverse events was similar between algorithms. A patient-driven titration algorithm of 1 unit/day with GLA-300 is effective and comparable to the previously tested EDITION algorithm and is preferred by health-care professionals. Copyright © 2017 Diabetes Canada. Published by Elsevier Inc. All rights reserved.
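The patient-driven INSIGHT rule described above, increase the dose by 1 unit/day until fasting glucose reaches the 4.4-5.6 mmol/L target band, can be sketched as a one-step function. This is illustrative only: the abstract does not specify hypoglycemia handling or down-titration, so those branches are omitted.

```python
def insight_titration_step(dose_units, fasting_smbg_mmol_l):
    """One day of an INSIGHT-style patient-driven titration rule,
    sketched from the abstract: add 1 unit/day while fasting
    self-monitored blood glucose is above the 5.6 mmol/L target
    ceiling, otherwise hold. Hypothetical simplification; not a
    clinical protocol."""
    if fasting_smbg_mmol_l > 5.6:
        return dose_units + 1   # above target band: increase by 1 unit
    return dose_units           # within or below band: hold
```

The competing EDITION algorithm differs only in cadence and actor: the investigator adjusts the dose at least weekly, no more often than every 3 days.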

  13. Wavelet Decomposition Method for $L_2$/TV-Image Deblurring

    KAUST Repository

    Fornasier, M.

    2012-07-17

    In this paper, we show additional properties of the limit of a sequence produced by the subspace correction algorithm proposed by Fornasier and Schönlieb [SIAM J. Numer. Anal., 47 (2009), pp. 3397-3428] for L2/TV-minimization problems. An important but missing property of such a limiting sequence in that paper is the convergence to a minimizer of the original minimization problem, which was obtained in [M. Fornasier, A. Langer, and C.-B. Schönlieb, Numer. Math., 116 (2010), pp. 645-685] with an additional condition of overlapping subdomains. We can now determine when the limit is indeed a minimizer of the original problem. Inspired by the work of Vonesch and Unser [IEEE Trans. Image Process., 18 (2009), pp. 509-523], we adapt and specify this algorithm to the case of an orthogonal wavelet space decomposition for deblurring problems and provide an equivalence condition for the convergence of such a limiting sequence to a minimizer. We also provide a counterexample of a limiting sequence produced by the algorithm that does not converge to a minimizer, which shows the necessity of our analysis of the minimizing algorithm. © 2012 Society for Industrial and Applied Mathematics.

  14. A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models

    Science.gov (United States)

    Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng

    2012-09-01

    Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.
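The proximity operators that the fixed-point characterization above is built from have a closed form for the L1 norm: componentwise soft thresholding. A minimal sketch (standard definition, not the paper's accelerated scheme):

```python
import numpy as np

def prox_l1(v, gamma):
    """Proximity operator of gamma*||.||_1, i.e.
    prox(v) = argmin_x 0.5*||x - v||_2^2 + gamma*||x||_1,
    which is componentwise soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

v = np.array([3.0, -0.2, 0.5, -2.0])
p = prox_l1(v, 0.5)   # entries shrink toward zero by 0.5; small ones vanish
```

The Gauss-Seidel acceleration in the paper applies such componentwise updates sequentially, reusing each freshly updated component within the same sweep, rather than in one parallel (Jacobi-style) step.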

  15. A proximity algorithm accelerated by Gauss–Seidel iterations for L1/TV denoising models

    International Nuclear Information System (INIS)

    Li, Qia; Shen, Lixin; Xu, Yuesheng; Micchelli, Charles A

    2012-01-01

    Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss–Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed. (paper)

  16. Sensitivity computation of the l1 minimization problem and its application to dictionary design of ill-posed problems

    International Nuclear Information System (INIS)

    Horesh, L; Haber, E

    2009-01-01

    The l1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.

  17. Tydskrif vir letterkunde - Vol 36, No 1-2 (2002)

    African Journals Online (AJOL)

    Evaluating the role of Adult Based Education and Training (ABET), in terms of fulfilling the need for literacy in English, in the private sector. B Vivian. http://dx.doi.org/10.4314/tvl.v36i1-2.53818

  18. Wavelet Decomposition Method for $L_2$/TV-Image Deblurring

    KAUST Repository

    Fornasier, M.; Kim, Y.; Langer, A.; Schönlieb, C.-B.

    2012-01-01

    In this paper, we show additional properties of the limit of a sequence produced by the subspace correction algorithm proposed by Fornasier and Schönlieb [SIAM J. Numer. Anal., 47 (2009), pp. 3397-3428] for L2/TV-minimization problems. An important

  19. Subspace Correction Methods for Total Variation and $\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on ℓ1-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.

  20. Adaptive L1/2 Shooting Regularization Method for Survival Analysis Using Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Xiao-Ying Liu

    2013-01-01

    Full Text Available: A new adaptive L1/2 shooting regularization method for variable selection, based on Cox's proportional hazards model, is proposed. This adaptive L1/2 shooting algorithm can be easily obtained by the optimization of a reweighted iterative series of L1 penalties and a shooting strategy of the L1/2 penalty. Simulation results based on high dimensional artificial data show that the adaptive L1/2 shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. The results from a real gene expression dataset (DLBCL) also indicate that the L1/2 regularization method performs competitively.
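The "shooting" strategy named above is, in its original L1 form, Fu's coordinate-descent algorithm for the lasso; the adaptive L1/2 scheme iterates and reweights such solves. A sketch of the plain L1 building block on synthetic data (not the authors' Cox-model code):

```python
import numpy as np

def lasso_shooting(A, b, lam, n_sweeps=100):
    """Fu's 'shooting' algorithm (cyclic coordinate descent) for
    min 0.5*||A x - b||_2^2 + lam*||x||_1. Each coordinate is updated
    by soft-thresholding its partial correlation with the residual."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = np.sum(A * A, axis=0)           # per-column squared norms
    for _ in range(n_sweeps):
        for j in range(n):
            r = b - A @ x + A[:, j] * x[j]   # residual excluding column j
            rho = A[:, j] @ r
            x[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 10))
x_true = np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0, 1.0, 0])
b = A @ x_true
x_hat = lasso_shooting(A, b, lam=0.1)
```

Wrapping this solver in an outer loop that rescales each penalty by a coefficient-dependent weight yields the reweighted-L1 approximation to the L1/2 penalty that the abstract describes.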

  1. A TVSCAD approach for image deblurring with impulsive noise

    Science.gov (United States)

    Gu, Guoyong; Jiang, Suhong; Yang, Junfeng

    2017-12-01

    We consider the image deblurring problem in the presence of impulsive noise. It is known that total variation (TV) regularization with L1-norm penalized data fitting (TVL1 for short) works reasonably well only when the level of impulsive noise is relatively low. For high-level impulsive noise, TVL1 works poorly. The reason is that all data, both corrupted and noise-free, are equally penalized in data fitting, leading to insurmountable difficulty in balancing regularization and data fitting. In this paper, we propose to combine TV regularization with the nonconvex smoothly clipped absolute deviation (SCAD) penalty for data fitting (TVSCAD for short). Our motivation is simply that data fitting should be enforced only when an observed datum is not severely corrupted, while for those data more likely to be severely corrupted, less or even no penalization should be enforced. A difference-of-convex-functions algorithm is adopted to solve the nonconvex TVSCAD model, resulting in solving a sequence of TVL1-equivalent problems, each of which can then be solved efficiently by the alternating direction method of multipliers. Theoretically, we establish global convergence to a critical point of the nonconvex objective function. R-linear and at-least-sublinear convergence rate results are derived for the cases of anisotropic and isotropic TV, respectively. Numerically, experimental results are given to show that the TVSCAD approach improves on TVL1 significantly, especially for cases with high-level impulsive noise, and is comparable with the recently proposed iteratively corrected TVL1 method (Bai et al 2016 Inverse Problems 32 085004).
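The SCAD penalty that replaces the L1 data-fitting term has the standard Fan-Li piecewise form: linear near zero, quadratic in a transition band, and constant beyond a*lam, which is exactly why severely corrupted residuals receive no additional penalization. A sketch of the standard definition (the paper applies it to data-fitting residuals, not regression coefficients):

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """Fan-Li SCAD penalty, evaluated elementwise on |t|:
    lam*|t| on [0, lam]; a concave quadratic on (lam, a*lam];
    and the constant lam^2*(a+1)/2 beyond a*lam."""
    t = np.abs(np.asarray(t, dtype=float))
    linear = lam * t
    middle = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))
    flat = lam**2 * (a + 1) / 2
    return np.where(t <= lam, linear,
                    np.where(t <= a * lam, middle, flat))
```

Because the penalty is flat for |t| > a*lam, a grossly corrupted pixel contributes a fixed cost no matter how large its residual, unlike the L1 term in TVL1, which keeps growing.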

  2. Blind spectrum reconstruction algorithm with L0-sparse representation

    International Nuclear Information System (INIS)

    Liu, Hai; Zhang, Zhaoli; Liu, Sanyan; Shu, Jiangbo; Liu, Tingting; Zhang, Tianxu

    2015-01-01

    Raman spectrum often suffers from band overlap and Poisson noise. This paper presents a new blind Poissonian Raman spectrum reconstruction method, which incorporates the L0-sparse prior together with the total variation constraint into the maximum a posteriori framework. Furthermore, the greedy analysis pursuit algorithm is adopted to solve the L0-based minimization problem. Simulated and real spectrum experimental results show that the proposed method can effectively preserve spectral structure and suppress noise. The reconstructed Raman spectra are easily used for interpreting unknown chemical mixtures. (paper)
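The elementary operation underlying L0-based greedy pursuit methods is hard thresholding, i.e. projecting onto the set of k-sparse vectors by keeping only the largest-magnitude entries. A minimal illustrative sketch (not the paper's greedy analysis pursuit implementation):

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v and zero the rest:
    the Euclidean projection onto the set of k-sparse vectors that
    greedy L0 pursuit methods build on."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]   # indices of the k largest magnitudes
    out[idx] = v[idx]
    return out

v = np.array([0.1, -3.0, 0.4, 2.0, -0.2])
h = hard_threshold(v, 2)   # keeps -3.0 and 2.0
```

Greedy analysis pursuit works in the analysis (cosparse) setting, iteratively identifying rows of the analysis operator on which the signal is (near-)zero, but the support-selection step has the same keep-the-largest flavor.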

  3. Predictions for the neutrino parameters in the minimal gauged U(1)_{Lμ−Lτ} model

    Energy Technology Data Exchange (ETDEWEB)

    Asai, Kento; Nagata, Natsumi [University of Tokyo, Department of Physics, Bunkyo-ku, Tokyo (Japan); Hamaguchi, Koichi [University of Tokyo, Department of Physics, Bunkyo-ku, Tokyo (Japan); University of Tokyo, Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), Kashiwa (Japan)

    2017-11-15

    We study the structure of the neutrino-mass matrix in the minimal gauged U(1)_{Lμ−Lτ} model, where three right-handed neutrinos are added to the Standard Model in order to obtain non-zero masses for the active neutrinos. Because of the U(1)_{Lμ−Lτ} gauge symmetry, the structure of both the Dirac and Majorana mass terms of the neutrinos is tightly restricted. In particular, the inverse of the neutrino-mass matrix has zeros in the (μ,μ) and (τ,τ) components; namely, this model offers a symmetric realization of the so-called two-zero-minor structure in the neutrino-mass matrix. Due to these constraints, all the CP phases - the Dirac CP phase δ and the Majorana CP phases α2 and α3 - as well as the mass eigenvalues of the light neutrinos mi are uniquely determined as functions of the neutrino mixing angles θ12, θ23, and θ13, and the squared mass differences Δm21² and Δm31². We find that this model predicts the Dirac CP phase δ to be δ ≅ 1.59π-1.70π (1.54π-1.78π), the sum of the neutrino masses to be Σi mi ≅ 0.14-0.22 eV (0.12-0.40 eV), and the effective mass for neutrinoless double-beta decay to be ⟨mββ⟩ ≅ 0.024-0.055 eV (0.017-0.12 eV) at the 1σ (2σ) level, which are totally consistent with the current experimental limits. These predictions can soon be tested in future neutrino experiments. Implications for leptogenesis are also discussed. (orig.)

  4. A new recursive incremental algorithm for building minimal acyclic deterministic finite automata

    NARCIS (Netherlands)

    Watson, B.W.; Martin-Vide, C.; Mitrana, V.

    2003-01-01

    This chapter presents a new algorithm for incrementally building minimal acyclic deterministic finite automata. Such minimal automata are a compact representation of a finite set of words (e.g. in a spell checker). The incremental aspect of such algorithms (where the intermediate automaton is

  5. The minimally invasive spinal deformity surgery algorithm: a reproducible rational framework for decision making in minimally invasive spinal deformity surgery.

    Science.gov (United States)

    Mummaneni, Praveen V; Shaffrey, Christopher I; Lenke, Lawrence G; Park, Paul; Wang, Michael Y; La Marca, Frank; Smith, Justin S; Mundis, Gregory M; Okonkwo, David O; Moal, Bertrand; Fessler, Richard G; Anand, Neel; Uribe, Juan S; Kanter, Adam S; Akbarnia, Behrooz; Fu, Kai-Ming G

    2014-05-01

    Minimally invasive surgery (MIS) is an alternative to open deformity surgery for the treatment of patients with adult spinal deformity. However, at this time MIS techniques are not as versatile as open deformity techniques, and MIS techniques have been reported to result in suboptimal sagittal plane correction or pseudarthrosis when used for severe deformities. The minimally invasive spinal deformity surgery (MISDEF) algorithm was created to provide a framework for rational decision making for surgeons who are considering MIS versus open spine surgery. A team of experienced spinal deformity surgeons developed the MISDEF algorithm that incorporates a patient's preoperative radiographic parameters and leads to one of 3 general plans ranging from MIS direct or indirect decompression to open deformity surgery with osteotomies. The authors surveyed fellowship-trained spine surgeons experienced with spinal deformity surgery to validate the algorithm using a set of 20 cases to establish interobserver reliability. They then resurveyed the same surgeons 2 months later with the same cases presented in a different sequence to establish intraobserver reliability. Responses were collected and tabulated. Fleiss' analysis was performed using MATLAB software. Over a 3-month period, 11 surgeons completed the surveys. Responses for MISDEF algorithm case review demonstrated an interobserver kappa of 0.58 for the first round of surveys and an interobserver kappa of 0.69 for the second round of surveys, consistent with substantial agreement. In at least 10 cases there was perfect agreement between the reviewing surgeons. The mean intraobserver kappa for the 2 surveys was 0.86 ± 0.15 (± SD) and ranged from 0.62 to 1. The use of the MISDEF algorithm provides consistent and straightforward guidance for surgeons who are considering either an MIS or an open approach for the treatment of patients with adult spinal deformity. The MISDEF algorithm was found to have substantial inter- and

  6. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    Science.gov (United States)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In the process of image restoration, the restored result can differ greatly from the real image because of noise. To address this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds a function to the prior knowledge, namely the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. We consider that information in the gradient domain is better suited to estimating the blur kernel, so the kernel is estimated in the gradient domain. This problem can be implemented quickly in the frequency domain via the fast Fourier transform. In addition, to improve the effectiveness of the algorithm, a multi-scale iterative optimization scheme is added. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution in the image restoration process, which not only preserves the edges and details of the image but also ensures the accuracy of the results.
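The L1/L2 ratio used as the prior is the scale-invariant "normalized sparsity" measure on image gradients: sharp images have sparse gradients and hence a lower ratio than their blurred counterparts. A small sketch computing it on finite differences (illustrative, not the paper's implementation):

```python
import numpy as np

def l1_over_l2(img):
    """Normalized sparsity measure ||grad I||_1 / ||grad I||_2 on
    forward finite differences. Lower values indicate sparser
    (sharper) gradient content; the ratio is invariant to scaling
    the image intensities."""
    gx = np.diff(img, axis=1).ravel()
    gy = np.diff(img, axis=0).ravel()
    g = np.concatenate([gx, gy])
    l2 = np.linalg.norm(g)
    return np.abs(g).sum() / l2 if l2 > 0 else 0.0

# A sharp step edge has a lower ratio than a blurred copy of the same edge.
sharp = np.zeros((8, 8)); sharp[:, 4:] = 1.0
blurred = np.zeros((8, 8))
blurred[:, 3] = 1/3; blurred[:, 4] = 2/3; blurred[:, 5:] = 1.0
```

Minimizing this ratio therefore favors deblurred solutions, whereas a plain L1 gradient penalty alone would prefer the blurred image (blur reduces the L1 norm of the gradients).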

  7. l=1,2 high-beta stellarator

    International Nuclear Information System (INIS)

    Bartsch, R.R.; Cantrell, E.L.; Gribble, R.F.; Klare, K.A.; Kutac, K.J.; Miller, G.; Siemon, R.E.

    1978-01-01

    The final Scyllac experiments are described. These experiments utilized a feedback-stabilized, l=1,2 high-beta stellarator configuration and, like the previous feedback-stabilization experiments, were carried out in a toroidal sector rather than a complete torus. The energy confinement time, obtained from excluded flux measurements, agrees with a two-dimensional calculation of particle end loss from a straight theta pinch. Because simple end loss was dominant, the energy confinement time was independent of whether equilibrium adjustment or feedback stabilization fields were applied. The dynamical characteristics of the toroidal equilibrium were improved by elimination of the l=0 field used previously, as expected from theory. A modal rather than local feedback control algorithm was used. Although feedback clearly decreased the m=1 motion of the plasma, the experimental test of modal feedback, which is expected from theory to be superior to local feedback, is considered inconclusive because of the limitations imposed by the sector configuration

  8. A constrained optimization algorithm for total energy minimization in electronic structure calculations

    International Nuclear Information System (INIS)

    Yang Chao; Meza, Juan C.; Wang Linwang

    2006-01-01

    A new direct constrained optimization algorithm for minimizing the Kohn-Sham (KS) total energy functional is presented in this paper. The key ingredients of this algorithm involve projecting the total energy functional into a sequence of subspaces of small dimensions and seeking the minimizer of total energy functional within each subspace. The minimizer of a subspace energy functional not only provides a search direction along which the KS total energy functional decreases but also gives an optimal 'step-length' to move along this search direction. Numerical examples are provided to demonstrate that this new direct constrained optimization algorithm can be more efficient than the self-consistent field (SCF) iteration

  9. Improving IMRT delivery efficiency with reweighted L1-minimization for inverse planning

    International Nuclear Information System (INIS)

    Kim, Hojin; Becker, Stephen; Lee, Rena; Lee, Soonhyouk; Shin, Sukyoung; Candès, Emmanuel; Xing Lei; Li Ruijiang

    2013-01-01

    Purpose: This study presents an improved technique to further simplify the fluence-map in intensity modulated radiation therapy (IMRT) inverse planning, thereby reducing plan complexity and improving delivery efficiency, while maintaining the plan quality. Methods: First-order total-variation (TV) minimization (min.) based on the L1-norm has been proposed to reduce the complexity of the fluence-map in IMRT by generating sparse fluence-map variations. However, with stronger dose sparing to the critical structures, the inevitable increase in the fluence-map complexity can lead to inefficient dose delivery. Theoretically, L0-min. is the ideal solution for the sparse signal recovery problem, yet practically intractable due to the nonconvexity of the objective function. As an alternative, the authors use the iteratively reweighted L1-min. technique to incorporate the benefits of the L0-norm into the tractability of L1-min. The weight multiplied to each element is inversely related to the magnitude of the corresponding element, which is iteratively updated by the reweighting process. The proposed penalizing process combined with TV min. further improves sparsity in the fluence-map variations, hence ultimately enhancing the delivery efficiency. To validate the proposed method, this work compares three treatment plans obtained from quadratic min. (generally used in clinical IMRT), conventional TV min., and our proposed reweighted TV min. techniques, implemented by a large-scale L1-solver (template for first-order conic solver), for five patients' clinical data. Criteria such as conformation number (CN), modulation index (MI), and estimated treatment time are employed to assess the relationship between the plan quality and delivery efficiency. Results: The proposed method yields simpler fluence-maps than the quadratic and conventional TV based techniques. To attain a given CN and dose sparing to the critical organs for 5 clinical cases, the proposed method reduces the number of segments
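The reweighting idea described above (each element's weight inversely related to its current magnitude) can be sketched in isolation on a simple denoising prox. This is a generic Candes-Wakin-Boyd style reweighted-L1 update, not the IMRT planning code:

```python
import numpy as np

def reweighted_soft_threshold(v, lam, n_rounds=3, eps=1e-2):
    """Iteratively reweighted L1 applied to a soft-thresholding prox:
    each round uses weights w_i = 1/(|x_i| + eps), so small entries are
    penalized hard (pushed to zero) while large entries are barely
    shrunk, mimicking the L0 penalty with tractable L1 steps."""
    x = v.copy()
    for _ in range(n_rounds):
        w = 1.0 / (np.abs(x) + eps)                        # update weights
        x = np.sign(v) * np.maximum(np.abs(v) - lam * w, 0.0)
    return x

v = np.array([2.0, 0.3, -1.5, 0.05])
x = reweighted_soft_threshold(v, lam=0.1)
```

After a few rounds the small entries are exactly zero while the large entries lose almost none of their magnitude, which is the sparsity-without-bias behavior that plain L1 shrinkage lacks.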

  10. Improving IMRT delivery efficiency with reweighted L1-minimization for inverse planning

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hojin [Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 and Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States); Becker, Stephen [Laboratoire Jacques-Louis Lions, Universite Pierre et Marie Curie, Paris 6, 75005 France (France); Lee, Rena; Lee, Soonhyouk [Department of Radiation Oncology, School of Medicine, Ewha Womans University, Seoul 158-710 (Korea, Republic of); Shin, Sukyoung [Medtronic CV RDN R and D, Santa Rosa, California 95403 (United States); Candes, Emmanuel [Department of Statistics, Stanford University, Stanford, California 94305-4065 (United States); Xing Lei; Li Ruijiang [Department of Radiation Oncology, Stanford University, Stanford, California 94305-5304 (United States)

    2013-07-15

    Purpose: This study presents an improved technique to further simplify the fluence-map in intensity modulated radiation therapy (IMRT) inverse planning, thereby reducing plan complexity and improving delivery efficiency, while maintaining the plan quality. Methods: First-order total-variation (TV) minimization (min.) based on the L1-norm has been proposed to reduce the complexity of the fluence-map in IMRT by generating sparse fluence-map variations. However, with stronger dose sparing to the critical structures, the inevitable increase in the fluence-map complexity can lead to inefficient dose delivery. Theoretically, L0-min. is the ideal solution for the sparse signal recovery problem, yet practically intractable due to the nonconvexity of the objective function. As an alternative, the authors use the iteratively reweighted L1-min. technique to incorporate the benefits of the L0-norm into the tractability of L1-min. The weight multiplied to each element is inversely related to the magnitude of the corresponding element, which is iteratively updated by the reweighting process. The proposed penalizing process combined with TV min. further improves sparsity in the fluence-map variations, hence ultimately enhancing the delivery efficiency. To validate the proposed method, this work compares three treatment plans obtained from quadratic min. (generally used in clinical IMRT), conventional TV min., and our proposed reweighted TV min. techniques, implemented by a large-scale L1-solver (template for first-order conic solver), for five patients' clinical data. Criteria such as conformation number (CN), modulation index (MI), and estimated treatment time are employed to assess the relationship between the plan quality and delivery efficiency. Results: The proposed method yields simpler fluence-maps than the quadratic and conventional TV based techniques. To attain a given CN and dose sparing to the critical organs for 5 clinical cases, the proposed method reduces the number of segments

  11. An Algorithm for Determining Minimal Reduced—Coverings of Acyclic Database Schemes

    Institute of Scientific and Technical Information of China (English)

    刘铁英; 叶新铭

    1996-01-01

    This paper reports an algorithm (DTV) for determining the minimal reduced-covering of an acyclic database scheme over a specified subset of attributes. The output of this algorithm contains not only the minimum number of attributes but also the minimum number of partial relation schemes. The algorithm has complexity O(|N|·|E|²), where |N| is the number of attributes and |E| the number of relation schemes. It is also proved that for Berge, γ- or β-acyclic database schemes, the output of algorithm DTV maintains the acyclicity correspondence.

  12. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    Directory of Open Access Journals (Sweden)

    Yunyi Li

    2017-12-01

    Full Text Available: Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than the L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, which combines iterative reweighted algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm, and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we conduct some applications of sparse image recovery and obtain good results by comparison with related work.

  13. Soil Moisture Active Passive (SMAP) Project Algorithm Theoretical Basis Document SMAP L1B Radiometer Data Product: L1B_TB

    Science.gov (United States)

    Piepmeier, Jeffrey; Mohammed, Priscilla; De Amici, Giovanni; Kim, Edward; Peng, Jinzheng; Ruf, Christopher; Hanna, Maher; Yueh, Simon; Entekhabi, Dara

    2016-01-01

    The purpose of the Soil Moisture Active Passive (SMAP) radiometer calibration algorithm is to convert Level 0 (L0) radiometer digital counts data into calibrated estimates of brightness temperatures referenced to the Earth's surface within the main beam. The algorithm theory in most respects is similar to what has been developed and implemented for decades for other satellite radiometers; however, SMAP includes two key features heretofore absent from most satellite borne radiometers: radio frequency interference (RFI) detection and mitigation, and measurement of the third and fourth Stokes parameters using digital correlation. The purpose of this document is to describe the SMAP radiometer and forward model, explain the SMAP calibration algorithm, including approximations, errors, and biases, provide all necessary equations for implementing the calibration algorithm and detail the RFI detection and mitigation process. Section 2 provides a summary of algorithm objectives and driving requirements. Section 3 is a description of the instrument and Section 4 covers the forward models, upon which the algorithm is based. Section 5 gives the retrieval algorithm and theory. Section 6 describes the orbit simulator, which implements the forward model and is the key for deriving antenna pattern correction coefficients and testing the overall algorithm.
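At the heart of any radiometer calibration algorithm is the counts-to-temperature transfer function; a generic two-point (cold/hot reference) linear calibration can be sketched as follows. The reference values below are hypothetical, and SMAP's actual algorithm adds nonlinearity corrections, RFI detection and mitigation, and antenna-pattern corrections on top of this basic step.

```python
def counts_to_tb(counts, c_cold, c_hot, t_cold, t_hot):
    """Generic two-point radiometer calibration: fit the linear
    counts -> brightness-temperature transfer through the cold and
    hot reference looks. Illustrative sketch only; not SMAP's
    documented calibration equations."""
    gain = (t_hot - t_cold) / (c_hot - c_cold)     # kelvin per count
    return t_cold + gain * (counts - c_cold)

# Hypothetical reference looks: 1000 counts at 80 K, 3000 counts at 300 K.
tb = counts_to_tb(2000, c_cold=1000, c_hot=3000, t_cold=80.0, t_hot=300.0)
# midway in counts maps to midway in temperature: 190 K
```

The RFI detection and digital-correlation Stokes measurements highlighted in the abstract operate on the raw samples before and alongside this conversion, flagging and excising contaminated sub-band/sub-frame data.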

  14. Orthography-Induced Length Contrasts in the Second Language Phonological Systems of L2 Speakers of English: Evidence from Minimal Pairs.

    Science.gov (United States)

    Bassetti, Bene; Sokolović-Perović, Mirjana; Mairano, Paolo; Cerni, Tania

    2018-06-01

    Research shows that the orthographic forms ("spellings") of second language (L2) words affect speech production in L2 speakers. This study investigated whether English orthographic forms lead L2 speakers to produce English homophonic word pairs as phonological minimal pairs. Targets were 33 orthographic minimal pairs, that is to say homophonic words that would be pronounced as phonological minimal pairs if orthography affects pronunciation. Word pairs contained the same target sound spelled with one letter or two, such as the /n/ in finish and Finnish (both /'fɪnɪʃ/ in Standard British English). To test for effects of length and type of L2 exposure, we compared Italian instructed learners of English, Italian-English late bilinguals with lengthy naturalistic exposure, and English natives. A reading-aloud task revealed that Italian speakers of English L2 produce two English homophonic words as a minimal pair distinguished by different consonant or vowel length, for instance producing the target /'fɪnɪʃ/ with a short [n] or a long [nː] to reflect the number of consonant letters in the spelling of the words finish and Finnish. Similar effects were found on the pronunciation of vowels, for instance in the orthographic pair scene-seen (both /siːn/). Naturalistic exposure did not reduce orthographic effects, as effects were found both in learners and in late bilinguals living in an English-speaking environment. It appears that the orthographic form of L2 words can result in the establishment of a phonological contrast that does not exist in the target language. Results have implications for models of L2 phonological development.

  15. Minimal algorithm for running an internal combustion engine

    Science.gov (United States)

    Stoica, V.; Borborean, A.; Ciocan, A.; Manciu, C.

    2018-01-01

    The control of internal combustion engines is a well-known topic within the automotive industry and is widely applied. In research laboratories and universities, however, a commercial engine control system is not the best solution, because its operating algorithms and calibrations are predetermined (accessible only to the manufacturer) and allow little intervention from outside, while dedicated laboratory solutions on the market are very expensive. Consequently, in this paper we present the minimal algorithm required to start up and run an internal combustion engine. The presented solution can be adapted to run on high-performance microcontrollers currently available on the market at an affordable price. The presented algorithm was implemented in LabVIEW and runs on a CompactRIO hardware platform.

  16. l1- and l2-Norm Joint Regularization Based Sparse Signal Reconstruction Scheme

    Directory of Open Access Journals (Sweden)

    Chanzi Liu

    2016-01-01

    Many problems in signal processing and statistical inference involve finding a sparse solution to an underdetermined linear system of equations. This is also the application condition of compressive sensing (CS), which can find the sparse solution from measurements far fewer than the original signal. In this paper, we propose an l1- and l2-norm joint regularization based reconstruction framework to approach the original l0-norm based sparseness-inducing constrained sparse signal reconstruction problem. Firstly, it is shown that, by employing the simple conjugate gradient algorithm, the new formulation provides an effective framework to deduce the solution of the original sparse signal reconstruction problem with an l0-norm regularization term. Secondly, an upper limit on the reconstruction error is presented for the proposed framework, and it is shown that in most cases a smaller reconstruction error than that of l1-norm relaxation approaches can be realized by using the proposed scheme. Finally, simulation results are presented to validate the proposed sparse signal reconstruction approach.
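    The record above solves a joint l1+l2 (elastic-net style) penalized problem with a conjugate gradient method. As a rough illustration of how such a combined penalty shapes a solution, here is a minimal proximal-gradient (ISTA) sketch in plain Python; the function name, the step rule, and the toy data are illustrative assumptions, not the authors' algorithm.

```python
def elastic_net_ista(A, b, lam1, lam2, step=1.0, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam1*||x||_1 + 0.5*lam2*||x||_2^2 by ISTA."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - b and gradient of the smooth part g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        z = [x[j] - step * g[j] for j in range(n)]
        # elastic-net proximal step: soft-threshold (l1), then shrink (l2)
        x = [max(abs(v) - step * lam1, 0.0) * (1.0 if v > 0 else -1.0)
             / (1.0 + step * lam2) for v in z]
    return x
```

The soft-threshold zeroes small coefficients (sparsity from the l1 term) while the division shrinks the survivors (stability from the l2 term).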

  17. Computational acceleration for MR image reconstruction in partially parallel imaging.

    Science.gov (United States)

    Ye, Xiaojing; Chen, Yunmei; Huang, Feng

    2011-05-01

    In this paper, we present a fast numerical algorithm for solving total variation and l1 (TVL1) based image reconstruction with application in partially parallel magnetic resonance imaging. Our algorithm uses a variable splitting method to reduce computational cost. Moreover, the Barzilai-Borwein step size selection method is adopted in our algorithm for much faster convergence. Experimental results on clinical partially parallel imaging data demonstrate that the proposed algorithm requires far fewer iterations and/or less computational cost than recently developed operator splitting and Bregman operator splitting methods, which can deal with a general sensing matrix in the reconstruction framework, to obtain similar or even better quality of reconstructed images.
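    The Barzilai-Borwein (BB) rule mentioned above picks the step size from the last two iterates instead of a line search. A minimal sketch on a smooth objective (not the paper's TVL1 solver; names and the fallback step are assumptions):

```python
def bb_descent(grad, x0, iters=200, alpha0=1e-3):
    """Gradient descent with the Barzilai-Borwein step size.

    alpha_k = (s . s) / (s . y), with s = x_k - x_{k-1} and
    y = grad(x_k) - grad(x_{k-1}); alpha0 seeds the first step.
    """
    x = list(x0)
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        ss = sum(si * si for si in s)
        if abs(sy) > 1e-30:          # keep the old step if s.y degenerates
            alpha = ss / sy
        x, g = x_new, g_new
    return x
```

On quadratics the BB step mimics a crude spectral estimate of the Hessian, which is why convergence is typically much faster than fixed-step descent.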

  18. How Does L1 and L2 Exposure Impact L1 Performance in Bilingual Children? Evidence from Polish-English Migrants to the United Kingdom

    Directory of Open Access Journals (Sweden)

    Ewa Haman

    2017-09-01

    Most studies on bilingual language development focus on children's second language (L2). Here, we investigated the first language (L1) development of Polish-English early migrant bilinguals in four domains: vocabulary, grammar, phonological processing, and discourse. We first compared Polish language skills between bilinguals and their Polish non-migrant monolingual peers, and then investigated the influence of cumulative exposure to L1 and L2 on the bilinguals' performance. We then examined whether high exposure to L1 could minimize the gap between monolinguals and bilinguals. We analyzed data from 233 typically developing children (88 bilingual and 145 monolingual) aged 4;0 to 7;5 (years;months) on six language measures in Polish: receptive vocabulary, productive vocabulary, receptive grammar, productive grammar (sentence repetition), phonological processing (non-word repetition), and discourse abilities (narration). Information about language exposure was obtained via parental questionnaires. For each language task, we analyzed the data from the subsample of bilinguals who had completed all the tasks in question and from monolinguals matched one-on-one to the bilingual group on age, SES (measured by years of mother's education), gender, non-verbal IQ, and short-term memory. The bilingual children scored lower than monolinguals in all language domains except discourse. The group differences were more pronounced on the productive tasks (vocabulary, grammar, and phonological processing) and moderate on the receptive tasks (vocabulary and grammar). L1 exposure correlated positively with vocabulary size and phonological processing. Grammar scores were not related to the levels of L1 exposure, but were predicted by general cognitive abilities. L2 exposure negatively influenced productive grammar in L1, suggesting possible L2 transfer effects on L1 grammatical performance. Children's narrative skills benefitted from exposure to two languages.

  19. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, among which the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, containing the low and high frequencies, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed-data probabilities using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, it is compared with the JPEG and JPEG2000 algorithms through the 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to reconstruct surface patches in 3D more accurately.
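    The pipeline above hinges on the DWT's split of a signal into low-frequency (approximation) and high-frequency (detail) parts; a second level just transforms the approximation again. A minimal one-level Haar DWT with perfect reconstruction illustrates the idea (illustrative only, not the authors' codec):

```python
def haar_dwt(signal):
    """One level of the orthonormal Haar wavelet transform.

    Returns (approx, detail): pairwise averages and differences, scaled
    by 1/sqrt(2) so the transform preserves energy.  Even-length input.
    """
    s = 2.0 ** 0.5
    approx = [(a + b) / s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse of haar_dwt."""
    s = 2.0 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out
```

Applying `haar_dwt` again to `approx` gives the second decomposition level; compression schemes then quantize or prune the detail coefficients, which are typically small for smooth data.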

  20. Generation of Isolines using GPS/L1L2 data and Artificial Neural Network technique

    Directory of Open Access Journals (Sweden)

    Elaine Cristine Barros de Souza

    2006-07-01

    The objective of this work is to present an alternative to the interpolation process used in Digital Terrain Modeling. GPS survey data collected by kinematic relative positioning were interpolated by means of an Artificial Neural Network (ANN). The performance of the generated grid was compared to data obtained with the Inverse Square Distance interpolation algorithm, through both qualitative (isolines and DTM) and quantitative (residuals) analyses. In conclusion, the tested method is viable when compared to the Inverse Square Distance algorithm; therefore, the use of ANN is suitable for the interpolation of GPS data obtained from L1 and L2 carrier-phase processing.

  1. An improved algorithm for finding all minimal paths in a network

    International Nuclear Information System (INIS)

    Bai, Guanghan; Tian, Zhigang; Zuo, Ming J.

    2016-01-01

    Minimal paths (MPs) play an important role in network reliability evaluation. In this paper, we report an efficient recursive algorithm for finding all MPs in two-terminal networks, which consist of a source node and a sink node. A linked path structure indexed by nodes is introduced, which accepts both directed and undirected networks. The distance between each node and the sink node is defined, and a simple recursive algorithm is presented for labeling the distance of each node. Based on these distances, additional conditions for backtracking are incorporated to reduce the number of search branches. With the newly introduced linked node structure, the node-to-sink distances, and the additional backtracking conditions, an improved backtracking algorithm for searching for all MPs is developed. In addition, the proposed algorithm can be adapted to search for all minimal paths for each source-sink pair in networks consisting of multiple source nodes and/or multiple sink nodes. Through computational experiments, it is demonstrated that the proposed algorithm is more efficient than existing algorithms when the network size is not too small, and it becomes more advantageous as the size of the network grows.
    - Highlights:
      • A linked path structure indexed by nodes is introduced to represent networks.
      • Additional conditions for backtracking are proposed based on the distance of each node.
      • An efficient algorithm is developed to find all MPs for two-terminal networks.
      • The computational efficiency of the algorithm for two-terminal networks is investigated.
      • The computational efficiency of the algorithm for multi-terminal networks is investigated.
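    For a two-terminal network, a minimal path is a simple source-to-sink path, and the distance labels can prune branches. A minimal backtracking sketch in that spirit (illustrative Python, not the authors' implementation; the pruning here is weaker than the paper's conditions):

```python
from collections import deque

def all_minimal_paths(adj, source, sink):
    """Enumerate all simple source-sink paths (minimal paths) by backtracking.

    adj: dict mapping node -> set of neighbour nodes (undirected network).
    Nodes are first labeled with their BFS distance to the sink; nodes that
    cannot reach the sink are then never entered, a simplified version of
    the paper's distance-based backtracking conditions.
    """
    dist = {sink: 0}                      # distance-to-sink labels
    q = deque([sink])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)

    paths, stack = [], [source]
    def backtrack(u):
        if u == sink:
            paths.append(list(stack))
            return
        for v in sorted(adj[u]):
            if v not in stack and v in dist:   # simple path, sink reachable
                stack.append(v)
                backtrack(v)
                stack.pop()
    backtrack(source)
    return paths
```

On the classic bridge network (source s, sink t, bridge edge a-b) this yields the four minimal paths s-a-t, s-b-t, s-a-b-t and s-b-a-t.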

  2. Parameter-free Network Sparsification and Data Reduction by Minimal Algorithmic Information Loss

    KAUST Repository

    Zenil, Hector

    2018-02-16

    The study of large and complex datasets, or big data, organized as networks has emerged as one of the central challenges in most areas of science and technology. Cellular and molecular networks in biology are among the prime examples. Hence, a number of techniques for data dimensionality reduction, especially in the context of networks, have been developed. Yet, current techniques require a predefined metric upon which to minimize the data size. Here we introduce a family of parameter-free algorithms based on (algorithmic) information theory that are designed to minimize the loss of any (enumerable computable) property contributing to the object's algorithmic content, and thus important to preserve when a data dimension reduction process forces the algorithm to delete the least important features first. Being independent of any particular criterion, they are universal in a fundamental mathematical sense. Using suboptimal approximations of efficient (polynomial) estimations, we demonstrate how to preserve network properties, outperforming other (leading) algorithms for network dimension reduction. Our method preserves all graph-theoretic indices measured, ranging from degree distribution and clustering coefficient to edge betweenness and degree and eigenvector centralities. We conclude and demonstrate numerically that our parameter-free Minimal Information Loss Sparsification (MILS) method is robust, has the potential to maximize the preservation of all recursively enumerable features in data and networks, and achieves results equal to or significantly better than those of other data reduction and network sparsification methods.

  3. Do L2 Writing Courses Affect the Improvement of L1 Writing Skills via Skills Transfer from L2 to L1?

    Science.gov (United States)

    Gonca, Altmisdort

    2016-01-01

    This study investigates the relationship of second language (L2) writing skills proficiency with the first language (L1) writing skills, in light of the language transfer. The study aims to analyze the positive effects of L2 writing proficiency on L1 writing proficiency. Forty native Turkish-speaking university students participated in the study.…

  4. Sp1 and CREB regulate basal transcription of the human SNF2L gene

    International Nuclear Information System (INIS)

    Xia Yu; Jiang Baichun; Zou Yongxin; Gao Guimin; Shang Linshan; Chen Bingxi; Liu Qiji; Gong Yaoqin

    2008-01-01

    Imitation Switch (ISWI) is a member of the SWI2/SNF2 superfamily of ATP-dependent chromatin remodelers, which are involved in multiple nuclear functions, including transcriptional regulation, replication, and chromatin assembly. Mammalian genomes encode two ISWI orthologs, SNF2H and SNF2L. In order to clarify the molecular mechanisms governing the expression of human SNF2L gene, we functionally examined the transcriptional regulation of human SNF2L promoter. Reporter gene assays demonstrated that the minimal SNF2L promoter was located between positions -152 to -86 relative to the transcription start site. In this region we have identified a cAMP-response element (CRE) located at -99 to -92 and a Sp1-binding site at -145 to -135 that play a critical role in regulating basal activity of human SNF2L gene, which were proven by deletion and mutation of specific binding sites, EMSA, and down-regulating Sp1 and CREB via RNAi. This study provides the first insight into the mechanisms that control basal expression of human SNF2L gene

  5. Tydskrif vir letterkunde - Vol 48, No 1 (2011)

    African Journals Online (AJOL)

    Endogenous and exogenous factors in national development: inferences from the metaphor of witchcraft (Àjé) in Olátúbòsún Ọládàpọ̀'s poetry. GO Ajibade. http://dx.doi.org/10.4314/tvl.v48i1.63828

  6. L1 and L2 Distance Effects in Learning L3 Dutch

    Science.gov (United States)

    Schepens, Job J.; van der Slik, Frans; van Hout, Roeland

    2016-01-01

    Many people speak more than two languages. How do languages acquired earlier affect the learnability of additional languages? We show that linguistic distances between speakers' first (L1) and second (L2) languages and their third (L3) language play a role. Larger distances from the L1 to the L3 and from the L2 to the L3 correlate with lower…

  7. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yunlong; Wang, Aiping; Guo, Lei; Wang, Hong

    2017-07-09

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
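    Parzen windowing with a Gaussian kernel gives a closed-form sample estimate of Renyi's quadratic entropy of the error, via the so-called information potential (the mean of pairwise kernel evaluations); minimizing the entropy is equivalent to maximizing that potential. A minimal sketch of the estimator (illustrative, not the paper's recursive controller design):

```python
import math

def quadratic_entropy(errors, sigma=0.5):
    """Parzen-window estimate of Renyi's quadratic entropy H2 = -log V.

    V is the information potential: the average Gaussian kernel (with
    variance 2*sigma^2, from convolving two width-sigma kernels) over
    all pairs of error samples.  Tightly clustered errors give a large
    V and hence a small entropy.
    """
    n = len(errors)
    two_var = 2.0 * sigma * sigma
    norm = 1.0 / math.sqrt(2.0 * math.pi * two_var)
    v = sum(norm * math.exp(-(ei - ej) ** 2 / (2.0 * two_var))
            for ei in errors for ej in errors) / (n * n)
    return -math.log(v)
```

A controller that drives this quantity down concentrates the whole error distribution, not just its second moment, which is the point of the error-entropy criterion for non-Gaussian noise.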

  8. Firefly algorithm based solution to minimize the real power loss in a power system

    Directory of Open Access Journals (Sweden)

    P. Balachennaiah

    2018-03-01

    This paper proposes a method to minimize the real power loss (RPL) of a power system transmission network using a new meta-heuristic known as the firefly algorithm (FA), by optimizing control variables such as transformer taps, UPFC location, and UPFC series injected voltage magnitude and phase angle. A software program is developed in the MATLAB environment for FA to minimize the RPL by optimizing (i) only the transformer tap values, (ii) only the UPFC location and its variables with optimized tap values, and (iii) the UPFC location and its variables along with transformer tap setting values simultaneously. The interior point successive linear programming (IPSLP) technique and a real coded genetic algorithm (RCGA) are considered here to compare the results and to show the efficiency and superiority of the proposed FA for the minimization of RPL. Also in this paper, the bacteria foraging algorithm (BFA) is adopted to validate the results of the proposed algorithm.
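    The firefly algorithm's core rule is that dimmer fireflies move toward brighter ones with an attraction that decays with distance, plus a damped random walk. A generic minimization sketch follows (applied to an arbitrary objective, not the paper's RPL model; all parameter values are illustrative):

```python
import math
import random

def firefly_minimize(objective, bounds, n_fireflies=15, iters=60,
                     beta0=1.0, gamma=0.01, alpha=0.2, seed=1):
    """Basic firefly algorithm: lower objective value = brighter firefly."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_fireflies)]
    light = [objective(x) for x in pop]
    best = min(zip(light, pop))
    for t in range(iters):
        a = alpha * (0.97 ** t)               # slowly damp the random walk
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] < light[i]:       # j is brighter than i
                    r2 = sum((p - q) ** 2 for p, q in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # distance-decayed pull
                    pop[i] = [min(max(p + beta * (q - p) + a * (rng.random() - 0.5),
                                      lo), hi)
                              for p, q, (lo, hi) in zip(pop[i], pop[j], bounds)]
                    light[i] = objective(pop[i])
                    if light[i] < best[0]:
                        best = (light[i], list(pop[i]))
    return best                               # (best value, best position)
```

In the paper the objective would be the RPL from a load-flow computation and the position vector would encode tap settings and UPFC variables; here a simple sphere function stands in for it.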

  9. A fast algorithm for identifying friends-of-friends halos

    Science.gov (United States)

    Feng, Y.; Modi, C.

    2017-07-01

    We describe a simple and fast algorithm for identifying friends-of-friends features and prove its correctness. The algorithm avoids unnecessary expensive neighbor queries, uses minimal memory overhead, and avoids slowdown in high over-density regions. We define our algorithm formally based on pair enumeration, a problem that has been heavily studied in fast 2-point correlation codes, and our reference implementation employs a dual KD-tree correlation function code. We construct features in a hierarchical tree structure, and use a splay operation to reduce the average cost of identifying the root of a feature from O[log L] to O[1] (L is the size of a feature) without additional memory costs. This reduces the overall time complexity of merging trees from O[L log L] to O[L], reducing the number of operations per splay by orders of magnitude. We next introduce a pruning operation that skips merge operations between two fully self-connected KD-tree nodes. This improves the robustness of the algorithm, reducing the number of merge operations in high density peaks from O[δ²] to O[δ]. We show that for cosmological data sets the algorithm eliminates more than half of the merge operations for typically used linking lengths b ∼ 0.2 (relative to mean separation). Furthermore, our algorithm is extremely simple and easy to implement on top of an existing pair enumeration code, reusing the optimization effort that has been invested in fast correlation function codes.
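    The splay-based root lookup plays the same role that path compression plays in a standard disjoint-set (union-find) structure: repeated root queries become effectively O[1]. A minimal friends-of-friends sketch built on union-find with path compression (illustrative; the paper replaces the naive O(n²) pair loop with a dual KD-tree):

```python
def friends_of_friends(points, linking_length):
    """Group points into FoF features: any pair closer than the linking
    length belongs to the same feature (transitively)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        root = i
        while parent[root] != root:
            root = parent[root]
        while parent[i] != root:          # path compression: flatten the
            parent[i], i = root, parent[i]  # chain so later lookups are O(1)
        return root

    b2 = linking_length ** 2
    for i in range(n):                    # naive pair enumeration; a dual
        for j in range(i + 1, n):         # KD-tree avoids the O(n^2) cost
            d2 = sum((a - c) ** 2 for a, c in zip(points[i], points[j]))
            if d2 < b2:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

The pruning idea in the paper corresponds to skipping the inner merge entirely when two tree nodes are already known to be fully linked.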

  10. Worm Algorithm for CP(N-1) Model

    CERN Document Server

    Rindlisbacher, Tobias

    2017-01-01

    The CP(N-1) model in 2D is an interesting toy model for 4D QCD as it possesses confinement, asymptotic freedom and a non-trivial vacuum structure. Due to the lower dimensionality and the absence of fermions, the computational cost for simulating 2D CP(N-1) on the lattice is much lower than that for simulating 4D QCD. However, to our knowledge, no efficient algorithm for simulating the lattice CP(N-1) model has been tested so far, which also works at finite density. To this end we propose a new type of worm algorithm which is appropriate to simulate the lattice CP(N-1) model in a dual, flux-variables based representation, in which the introduction of a chemical potential does not give rise to any complications. In addition to the usual worm moves where a defect is just moved from one lattice site to the next, our algorithm additionally allows for worm-type moves in the internal variable space of single links, which accelerates the Monte Carlo evolution. We use our algorithm to compare the two popular CP(N-1) l...

  11. Perturbation of convex risk minimization and its application in differential private learning algorithms

    Directory of Open Access Journals (Sweden)

    Weilin Nie

    2017-01-01

    Convex risk minimization is a commonly used setting in learning theory. In this paper, we first give a perturbation analysis for such algorithms, and then apply this result to differentially private learning algorithms. Our analysis requires the objective functions to be strongly convex. This leads to an extension of our previous analysis to non-differentiable loss functions when constructing differentially private algorithms. Finally, an error analysis is provided to guide the selection of the parameters.

  12. Overhead-Aware-Best-Fit (OABF) Resource Allocation Algorithm for Minimizing VM Launching Overhead

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Hao [IIT; Garzoglio, Gabriele [Fermilab; Ren, Shangping [IIT, Chicago; Timm, Steven [Fermilab; Noh, Seo Young [KISTI, Daejeon

    2014-11-11

    FermiCloud is a private cloud developed at Fermi National Accelerator Laboratory to provide elastic and on-demand resources for different scientific research experiments. The design goal of FermiCloud is to automatically allocate resources for different scientific applications so that the QoS required by these applications is met and the operational cost of FermiCloud is minimized. Our earlier research shows that VM launching overhead has large variations. If such variations are not taken into consideration when making resource allocation decisions, they may lead to poor performance and resource waste. In this paper, we show how a VM launching overhead reference model may be used to minimize VM launching overhead. In particular, we first present a training algorithm that automatically tunes a given reference model to accurately reflect the FermiCloud environment. Based on the tuned reference model for virtual machine launching overhead, we develop an overhead-aware-best-fit resource allocation algorithm that decides where and when to allocate resources so that the average virtual machine launching overhead is minimized. The experimental results indicate that the developed overhead-aware-best-fit resource allocation algorithm can significantly improve the VM launching time when a large number of VMs are launched simultaneously.
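    The decision rule can be pictured as: among hosts with enough free capacity, pick the one whose tuned reference model predicts the lowest launch overhead, breaking ties best-fit (tightest remaining capacity). This is a hypothetical simplification of the OABF idea, not Fermilab's implementation; all names are illustrative.

```python
def oabf_choose_host(vm_demand, hosts, predict_overhead):
    """Pick a host for a new VM.

    hosts: dict host-name -> free capacity (abstract units).
    predict_overhead(host, demand): predicted launch overhead in seconds,
    standing in for the paper's tuned reference model.
    Feasible hosts only; minimal predicted overhead first; smallest
    leftover capacity (best fit) as the tie-break.
    """
    feasible = [h for h, free in hosts.items() if free >= vm_demand]
    if not feasible:
        return None
    return min(feasible,
               key=lambda h: (predict_overhead(h, vm_demand),
                              hosts[h] - vm_demand))
```

Plain best-fit would use only the leftover-capacity key; prepending the overhead prediction is what makes the allocation "overhead-aware".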

  13. Determining the Minimal Required Radioactivity of 18F-FDG for Reliable Semiquantification in PET/CT Imaging: A Phantom Study.

    Science.gov (United States)

    Chen, Ming-Kai; Menard, David H; Cheng, David W

    2016-03-01

    In pursuit of as-low-as-reasonably-achievable (ALARA) doses, this study investigated the minimal required radioactivity and corresponding imaging time for reliable semiquantification in PET/CT imaging. Using a phantom containing spheres of various diameters (3.4, 2.1, 1.5, 1.2, and 1.0 cm) filled with a fixed (18)F-FDG concentration of 165 kBq/mL and a background concentration of 23.3 kBq/mL, we performed PET/CT at multiple time points over 20 h of radioactive decay. The images were acquired for 10 min at a single bed position for each of 10 half-lives of decay using 3-dimensional list mode and were reconstructed into 1-, 2-, 3-, 4-, 5-, and 10-min acquisitions per bed position using an ordered-subsets expectation maximization algorithm with 24 subsets and 2 iterations and a Gaussian 2-mm filter. SUVmax and SUVavg were measured for each sphere. The minimal required activity (±10%) for precise SUVmax semiquantification in the spheres was 1.8 kBq/mL for an acquisition of 10 min, 3.7 kBq/mL for 3-5 min, 7.9 kBq/mL for 2 min, and 17.4 kBq/mL for 1 min. The minimal required activity concentration-acquisition time product per bed position was 10-15 kBq/mL⋅min for reproducible SUV measurements within the spheres without overestimation. Using the total radioactivity and counting rate from the entire phantom, we found that the minimal required total activity-time product was 17 MBq⋅min and the minimal required counting rate-time product was 100 kcps⋅min. Our phantom study determined a threshold for minimal radioactivity and acquisition time for precise semiquantification in (18)F-FDG PET imaging that can serve as a guide in pursuit of achieving ALARA doses. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
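    The operational threshold above is a concentration-time product of roughly 10-15 kBq/mL·min, so for a chosen acquisition time the implied minimum activity concentration is a one-line division. A trivial sketch (the 15 kBq/mL·min default is the upper end of the range reported in the abstract, not an independent recommendation):

```python
def min_activity_kbq_per_ml(acq_minutes, product_kbq_min_per_ml=15.0):
    """Minimum activity concentration for reliable SUV semiquantification,
    given the activity-time product threshold from the phantom study."""
    return product_kbq_min_per_ml / acq_minutes
```

For example, a 4-min acquisition implies about 3.75 kBq/mL, consistent with the 3.7 kBq/mL the study reports for 3-5 min acquisitions.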

  14. MOCUS, Minimal Cut Sets and Minimal Path Sets from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.; Henry, E.B.; Marshall, N.H.

    1976-01-01

    1 - Description of problem or function: From a description of the Boolean failure logic of a system, called a fault tree, and control parameters specifying the minimal cut set length to be obtained, MOCUS determines the system failure modes, or minimal cut sets, and the system success modes, or minimal path sets. 2 - Method of solution: MOCUS uses direct resolution of the fault tree into the cut and path sets. The algorithm starts with the main failure of interest, the top event, and proceeds to basic independent component failures, called primary events, to resolve the fault tree and obtain the minimal sets. A key point of the algorithm is that an AND gate alone always increases the number of path sets, while an OR gate alone always increases the number of cut sets and increases the size of path sets. Other types of logic gates must be described in terms of AND and OR gates. 3 - Restrictions on the complexity of the problem: Output from MOCUS can include minimal cut and path sets for up to 20 gates.
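    The resolution rule for cut sets can be stated compactly: replacing an AND gate expands a cut set in place with all of the gate's inputs, while replacing an OR gate duplicates the cut set, one copy per input. A minimal MOCUS-style sketch (illustrative data structures, not the original FORTRAN code):

```python
def mocus_cut_sets(gates, top):
    """Top-down MOCUS expansion of a fault tree into minimal cut sets.

    gates: dict gate-name -> ('AND' | 'OR', [inputs]); any name absent
    from `gates` is a primary event.  AND replaces a gate inside a cut
    set by all of its inputs; OR splits the set, one copy per input.
    """
    sets = [{top}]
    while True:
        work = next((s for s in sets if any(e in gates for e in s)), None)
        if work is None:
            break                              # only primary events remain
        gate = next(e for e in work if e in gates)
        op, inputs = gates[gate]
        rest = work - {gate}
        sets.remove(work)
        if op == 'AND':
            sets.append(rest | set(inputs))
        else:  # 'OR'
            sets.extend(rest | {inp} for inp in inputs)
    # keep only minimal sets: drop any proper superset, then dedupe
    minimal = [s for s in sets if not any(o < s for o in sets)]
    out = []
    for s in minimal:
        if s not in out:
            out.append(s)
    return out
```

For TOP = AND(G1, A) with G1 = OR(B, C), the expansion yields the two minimal cut sets {A, B} and {A, C}; path sets follow from the same procedure with AND and OR swapped.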

  15. A novel particle swarm optimization algorithm for permutation flow-shop scheduling to minimize makespan

    International Nuclear Information System (INIS)

    Lian Zhigang; Gu Xingsheng; Jiao Bin

    2008-01-01

    It is well known that the flow-shop scheduling problem (FSSP) is a branch of production scheduling and is NP-hard. Many different approaches have been applied to permutation flow-shop scheduling to minimize makespan, but with current algorithms even moderate-size problems cannot be solved with guaranteed optimality. Several studies apply PSO to continuous optimization problems, yet few address discrete scheduling problems. In this paper, according to the discrete characteristics of FSSP, a novel particle swarm optimization (NPSO) algorithm is presented and successfully applied to permutation flow-shop scheduling to minimize makespan. Computational experiments on seven representative instances (Taillard) based on practical data were carried out; comparing NPSO with a standard GA, we find that NPSO is clearly more effective than the standard GA for FSSP under the makespan criterion.
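    The quantity any such metaheuristic evaluates per candidate permutation is the makespan, given by the standard completion-time recurrence C[j][m] = max(C[j-1][m], C[j][m-1]) + p[j][m]. A minimal sketch of that evaluation (illustrative; the NPSO search itself is not reproduced here):

```python
def makespan(perm, proc):
    """Makespan of a permutation flow shop.

    perm: job visiting order; proc[j][m]: processing time of job j on
    machine m.  C[m] holds the completion time of the latest job on
    machine m; a job starts on a machine only after it has finished the
    previous machine AND the machine has finished the previous job.
    """
    machines = len(proc[0])
    C = [0] * machines
    for j in perm:
        C[0] += proc[j][0]
        for m in range(1, machines):
            C[m] = max(C[m], C[m - 1]) + proc[j][m]
    return C[-1]
```

A swarm or GA then just permutes `perm` and keeps the ordering with the smallest returned value.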

  16. Minimal unitary representation of D(2,1;λ) and its SU(2) deformations and d=1, N=4 superconformal models

    International Nuclear Information System (INIS)

    Govil, Karan; Gunaydin, Murat

    2013-01-01

    Quantization of the geometric quasiconformal realizations of noncompact groups and supergroups leads directly to their minimal unitary representations (minreps). Using quasiconformal methods, massless unitary supermultiplets of the superconformal groups SU(2,2|N) and OSp(8⁎|2n) in four and six dimensions were constructed as minreps, together with their U(1) and SU(2) deformations, respectively. In this paper we extend these results to SU(2) deformations of the minrep of the N=4 superconformal algebra D(2,1;λ) in one dimension. We find that SU(2) deformations can be achieved using n pairs of bosons and m pairs of fermions simultaneously. The generators of deformed minimal representations of D(2,1;λ) commute with the generators of a dual superalgebra OSp(2n⁎|2m) realized in terms of these bosons and fermions. We show that there exists a precise mapping between symmetry generators of N=4 superconformal models in harmonic superspace studied recently and minimal unitary supermultiplets of D(2,1;λ) deformed by a pair of bosons. This can be understood as a particular case of a general mapping between the spectra of quantum mechanical quaternionic Kähler sigma models with eight supersymmetries and minreps of their isometry groups, which descends from the precise mapping established between 4d, N=2 sigma models coupled to supergravity and minreps of their isometry groups.

  17. L1 French learning of L2 Spanish past tenses: L1 transfer versus aspect and interface issues

    Directory of Open Access Journals (Sweden)

    José Amenós Pons

    2017-09-01

    This paper examines the process of acquiring L2s that are closely related to the L1 through data on how adult French speakers learning L2 Spanish in a formal setting develop knowledge and use of past tenses in this L2. We consider the role of transfer and simplification in acquiring mental representations of the L2 grammar, specifically in the area of tense and aspect, and how learners deal with integrating grammatically encoded, lexical and discursive information, including mismatching feature combinations leading to particular inferential effects on interpretation. Data is presented on the Spanish past tenses (simple and compound past, pluperfect, imperfect and progressive forms) from two tasks, an oral production film-retell and a multiple-choice interpretation task, completed by learners at A2, B1, B2 and C1 CEFR levels (N = 20-24 per level). L1 influence is progressively attenuated as proficiency increases. Difficulties were not always due to negative L1 transfer, but related also to grammar-discourse interface issues when integrating linguistic and pragmatic information in the interpretation process. This has clear implications for the teaching of closely related languages: instruction should not only focus on crosslinguistic contrasts, but also prioritize uses requiring complex interface integration, which are harder to process.

  18. Evidence from adult L1 Afrikaans L2 French

    African Journals Online (AJOL)

    The results of this study show that a large number of the L2 learners had indeed acquired ... position in V2-languages (such as German) and in third position in non-V2 ... L1, allows construction types x and y but he will have no problem acquiring .... Modern Foreign Languages at Stellenbosch University at the time of testing.

  19. L1 norm constrained migration of blended data with the FISTA algorithm

    International Nuclear Information System (INIS)

    Lu, Xinting; Han, Liguo; Yu, Jianglong; Chen, Xue

    2015-01-01

    Blended acquisition significantly improves seismic acquisition efficiency. However, direct imaging of blended data is not satisfactory due to crosstalk contamination. Assuming that the distribution of subsurface reflectivity is sparse, in this paper we formulate the seismic imaging problem for blended data as a basis pursuit denoising (BPDN) problem. Based on compressed sensing, we propose an L1 norm constrained migration method for the direct imaging of blended data. The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), which is stable and computationally efficient, is implemented in our method. Numerical tests on theoretical models show that the crosstalk introduced by blended sources is effectively attenuated and the migration quality is greatly improved.
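    FISTA solves exactly this kind of L1-regularized least-squares problem by adding Nesterov-style momentum to a soft-thresholded gradient step. A minimal dense-matrix sketch for min 0.5||Ax-b||² + λ||x||₁ (illustrative, not the paper's migration operator; L must upper-bound the largest eigenvalue of AᵀA and is taken as given here):

```python
def fista(A, b, lam, L, iters=100):
    """FISTA for 0.5*||Ax - b||^2 + lam*||x||_1 (sketch).

    Gradient step of size 1/L on the momentum point y, followed by
    soft-thresholding with threshold lam/L, with the classic t-sequence
    driving the momentum combination.
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    y, t = list(x), 1.0
    for _ in range(iters):
        r = [sum(A[i][j] * y[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        z = [yj - gj / L for yj, gj in zip(y, g)]
        x_new = [max(abs(v) - lam / L, 0.0) * (1.0 if v >= 0 else -1.0)
                 for v in z]
        t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        y = [xn + ((t - 1.0) / t_new) * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x
```

In the migration setting, the matrix-vector products with A and Aᵀ would be replaced by the blended forward-modeling operator and its adjoint.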

  20. L1/2 regularization based numerical method for effective reconstruction of bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xueli, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn; Yang, Defu; Zhang, Qitan; Liang, Jimin, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn [School of Life Science and Technology, Xidian University, Xi'an 710071 (China); Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education (China)

    2014-05-14

    Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in the area. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so that the inverse problem cannot be solved directly. In this study, an l1/2 regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was cast as an l1/2 regularization problem, and the weighted interior-point algorithm (WIPA) was then applied to solve it by transforming it into the solution of a series of l1 regularizers. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.
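The "series of l1 regularizers" strategy can be illustrated with a common reweighting scheme: each outer pass solves a weighted l1 problem whose weights come from the previous iterate. This is a generic iteratively-reweighted sketch, not the WIPA algorithm itself; the ISTA inner solver and the weight formula are assumptions of this illustration:

```python
import numpy as np

def ista_weighted(A, b, weights, lam, n_iter=200):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam * sum_i w_i |x_i|."""
    L = np.linalg.norm(A, 2) ** 2
    tau = lam * weights / L                  # per-coefficient thresholds
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - b) / L        # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
        # (tau fixed within one inner solve)
        tau = lam * weights / L
    return x

def l_half_reweighted(A, b, lam, outer=5, eps=1e-3):
    """Approximate l_{1/2} regularization by a sequence of weighted l1 problems."""
    weights = np.ones(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        x = ista_weighted(A, b, weights, lam)
        # derivative of |x|^{1/2} suggests weights ~ 1/(2*sqrt(|x|)); eps avoids division by zero
        weights = 1.0 / (2.0 * np.sqrt(np.abs(x)) + eps)
    return x
```

The reweighting concentrates the penalty on small coefficients, which is what gives l1/2-type regularizers their stronger sparsity-promotion compared to plain l1.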

  1. SU-E-T-271: Direct Measurement of Tenth Value Layer Thicknesses for High Density Concretes with a Clinical Machine

    Energy Technology Data Exchange (ETDEWEB)

    Tanny, S; Parsai, E [University of Toledo Medical Center, Toledo, OH (United States); Harrell, D; Noller, J [Shielding Construction Solutions, Inc, Tucson, AZ (United States)]; Chopra, M [Universal Minerals International, Inc, Tucson, AZ (United States)]

    2015-06-15

    Purpose: Use of high density concrete for radiation shielding is increasing, trading cost for the space savings associated with the reduced tenth value layer (TVL). Precise information on the attenuation properties of high-density concretes is not readily present in the literature. A simple approximation is to scale the TVLs from NCRP 151 according to the relative increase in density. Here we present measured TVLs for heavy concretes of various densities using a built-in shielding test port. Methods: Concrete densities tested range from 2.35 g cc⁻¹ (147 pcf) to 5.6 g cc⁻¹ (350 pcf). Measurements were taken using 6MV, 6FFF, and 10FFF on a Varian TrueBeam linear accelerator. Field sizes of 4x4, 9x9 and 30x30 cm² were measured. A PTW 31013 Farmer chamber with a buildup cap was positioned 5.5 m from isocenter along the beam CAX. Concrete thicknesses were incremented in 5 cm intervals. Comparison TVLs were determined by scaling the NCRP 151 TVLs by the density ratio between the sample and standard density. Results: The trend from the first to equilibrium TVL was an increase in thickness, compared with MC modeling, which predicted a decrease. Measured TVLs for 6 MV were reduced by as much as 8.9 cm for TVL1 and 3.4 cm for TVLE compared to values scaled from NCRP 151. There was a 1–3 mm difference in TVL between measurements done at 4x4 versus 30x30 cm². TVL1 for 6FFF was 1.1 cm smaller than TVL1 for 6MV, but TVLE was consistent to within 4 mm. TVL1 and TVLE for 10FFF were reduced by 8.8 and 3.7 cm from scaled NCRP values, respectively. Conclusions: We have measured the TVL thicknesses for various concretes. Simple density scaling of the values in NCRP 151 is a conservatively safe approximation, but actual TVLs may be reduced enough to eliminate some of the expense of installation. Daniel Harrell and Jim Noller are employees of Shielding Construction Solutions, Inc, the shielding construction company that built
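The density-scaling approximation used for the comparison TVLs is a one-line computation: if attenuation per unit mass thickness is assumed unchanged, the TVL shrinks in proportion to the density ratio. The 34 cm equilibrium TVL below is an illustrative figure for a 6 MV beam in standard 2.35 g cc⁻¹ concrete, not a value taken from this measurement:

```python
def scaled_tvl(tvl_std_cm, rho_std, rho_new):
    """Scale a TVL measured at standard density to a new density.

    Assumes attenuation per unit mass thickness is unchanged, so the
    TVL scales with the inverse of the density ratio (the NCRP 151
    scaling approximation discussed in the abstract).
    """
    return tvl_std_cm * (rho_std / rho_new)

# Illustrative: ~34 cm equilibrium TVL at 2.35 g/cc scaled to 5.6 g/cc heavy concrete
print(round(scaled_tvl(34.0, 2.35, 5.6), 1))   # prints 14.3
```

The abstract's point is that this approximation is conservative: the measured TVLs in heavy concrete came out several centimetres smaller than the scaled values.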

  2. SU-E-T-271: Direct Measurement of Tenth Value Layer Thicknesses for High Density Concretes with a Clinical Machine

    International Nuclear Information System (INIS)

    Tanny, S; Parsai, E; Harrell, D; Noller, J; Chopra, M

    2015-01-01

    Purpose: Use of high density concrete for radiation shielding is increasing, trading cost for the space savings associated with the reduced tenth value layer (TVL). Precise information on the attenuation properties of high-density concretes is not readily present in the literature. A simple approximation is to scale the TVLs from NCRP 151 according to the relative increase in density. Here we present measured TVLs for heavy concretes of various densities using a built-in shielding test port. Methods: Concrete densities tested range from 2.35 g cc⁻¹ (147 pcf) to 5.6 g cc⁻¹ (350 pcf). Measurements were taken using 6MV, 6FFF, and 10FFF on a Varian TrueBeam linear accelerator. Field sizes of 4x4, 9x9 and 30x30 cm² were measured. A PTW 31013 Farmer chamber with a buildup cap was positioned 5.5 m from isocenter along the beam CAX. Concrete thicknesses were incremented in 5 cm intervals. Comparison TVLs were determined by scaling the NCRP 151 TVLs by the density ratio between the sample and standard density. Results: The trend from the first to equilibrium TVL was an increase in thickness, compared with MC modeling, which predicted a decrease. Measured TVLs for 6 MV were reduced by as much as 8.9 cm for TVL1 and 3.4 cm for TVLE compared to values scaled from NCRP 151. There was a 1–3 mm difference in TVL between measurements done at 4x4 versus 30x30 cm². TVL1 for 6FFF was 1.1 cm smaller than TVL1 for 6MV, but TVLE was consistent to within 4 mm. TVL1 and TVLE for 10FFF were reduced by 8.8 and 3.7 cm from scaled NCRP values, respectively. Conclusions: We have measured the TVL thicknesses for various concretes. Simple density scaling of the values in NCRP 151 is a conservatively safe approximation, but actual TVLs may be reduced enough to eliminate some of the expense of installation. Daniel Harrell and Jim Noller are employees of Shielding Construction Solutions, Inc, the shielding construction company that built the vault discussed in this abstract. Manjit Chopra is

  3. Genetic algorithm based optimization of the process parameters for gas metal arc welding of AISI 904 L stainless steel

    International Nuclear Information System (INIS)

    Sathiya, P.; Ajith, P. M.; Soundararajan, R.

    2013-01-01

    The present study is focused on welding of super austenitic stainless steel sheet using the gas metal arc welding process with AISI 904 L super austenitic stainless steel solid wire of 1.2 mm diameter. The experiments are carried out based on the Box–Behnken design technique. The ranges of the input parameters (gas flow rate, voltage, travel speed and wire feed rate) are selected based on the filler wire thickness and base material thickness, and the corresponding output variables, bead width (BW), bead height (BH) and depth of penetration (DP), are measured using optical microscopy. Based on the experimental data, mathematical models are developed by regression analysis using Design Expert 7.1 software. An attempt is made to minimize the bead width and bead height and maximize the depth of penetration using a genetic algorithm.
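The genetic-algorithm step can be sketched generically; the objective f below stands in for the regression models of bead geometry, and the operator choices (elitist selection, blend crossover, Gaussian mutation) are assumptions of this sketch, not the study's implementation:

```python
import random

def genetic_minimize(f, bounds, pop_size=40, generations=60, seed=1):
    """Minimal real-coded GA: elitist selection, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=f)
        elite = scored[: pop_size // 5]           # keep the best 20 %
        children = list(elite)                    # elitism: best survive unchanged
        while len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]            # blend crossover
            child = [min(max(c + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
                     for c, (lo, hi) in zip(child, bounds)]          # mutate and clamp
            children.append(child)
        pop = children
    return min(pop, key=f)
```

In the study's setting, f would combine the fitted regression models so that smaller bead width/height and larger penetration give a better fitness value.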

  4. Genetic algorithm based optimization of the process parameters for gas metal arc welding of AISI 904 L stainless steel

    Energy Technology Data Exchange (ETDEWEB)

    Sathiya, P. [National Institute of Technology Tiruchirappalli (India); Ajith, P. M. [Department of Mechanical Engineering Rajiv Gandhi Institute of Technology, Kottayam (India); Soundararajan, R. [Sri Krishna College of Engineering and Technology, Coimbatore (India)

    2013-08-15

    The present study is focused on welding of super austenitic stainless steel sheet using the gas metal arc welding process with AISI 904 L super austenitic stainless steel solid wire of 1.2 mm diameter. The experiments are carried out based on the Box–Behnken design technique. The ranges of the input parameters (gas flow rate, voltage, travel speed and wire feed rate) are selected based on the filler wire thickness and base material thickness, and the corresponding output variables, bead width (BW), bead height (BH) and depth of penetration (DP), are measured using optical microscopy. Based on the experimental data, mathematical models are developed by regression analysis using Design Expert 7.1 software. An attempt is made to minimize the bead width and bead height and maximize the depth of penetration using a genetic algorithm.

  5. Convergence Performance of Adaptive Algorithms of L-Filters

    Directory of Open Access Journals (Sweden)

    Robert Hudec

    2003-01-01

    Full Text Available This paper deals with determining the convergence parameters of the adaptive algorithms used in adaptive L-filter design. The stability of the adaptation process, the convergence rate (or adaptation time), and the behaviour of the convergence curve are among the basic properties of adaptive algorithms. L-filters with a variety of adaptive algorithms were used to determine them. Establishing the convergence performance of adaptive filters is important mainly for hardware applications, where real-time filtering, or adaptation of the filter coefficients from a limited amount of input data, is required.

  6. A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan

    Science.gov (United States)

    Rameshkumar, K.; Rajendran, C.

    2018-02-01

    In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder has been utilized to generate active schedules. The discrete PSO is tested using well-known benchmark problems available in the literature. The solutions produced by the proposed algorithm are compared with the best known solutions published in the literature, and also with a hybrid particle swarm algorithm and a variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good-quality solutions for the various benchmark job-shop scheduling problems.

  7. SU-E-T-273: Radiation Shielding for a Fixed Horizontal-Beam Linac in a Shipping Container and a Conventional Treatment Vault

    International Nuclear Information System (INIS)

    Hsieh, M; Balter, P; Beadle, B; Chi, P; Stingo, F; Court, L

    2014-01-01

    Purpose: A fixed horizontal-beam linac, where the patient is treated in a seated position, could lower the overall costs of the treatment unit and room shielding substantially. This design also allows the treatment room and control area to be contained within a reduced space, such as a shipping container. The main application is the introduction of low-cost, high-quality radiation therapy to low- and middle-income regions. Here we consider shielding for upright treatments with a fixed-6MV-beam linac in a shipping container and a conventional treatment vault. Methods: Shielding calculations were done for two treatment room layouts using calculation methods in NCRP Report 151: (1) a shipping container (6m × 2.4m with the remaining space occupied by the console area), and (2) the treatment vault in NCRP 151 (7.8m by 5.4m by 3.4m). The shipping container has a fixed gantry that points in one direction at all times. For the treatment vault, various beam directions were evaluated. Results: The shipping container requires a primary barrier of 168cm concrete (4.5 TVL), surrounded by a secondary barrier of 3.6 TVL. The other walls require between 2.8–3.3 TVL. Multiple shielding calculations were done along the side wall. The results show that patient scatter increases in the forward direction and decreases dramatically in the backward direction. Leakage scatter also varies along the wall, depending largely on the distance between the gantry and the wall. For the treatment room, fixed-beam requires a slightly thicker primary barrier than the conventional linac (0.6 TVL), although this barrier is only needed in the center of one wall. The secondary barrier is different only by 0–0.2 TVL. Conclusion: This work shows that (1) the shipping container option is achievable, using indigenous materials for shielding and (2) upright treatments can be performed in a conventional treatment room with minimal additional shielding. Varian Medical Systems
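The barrier thicknesses quoted in TVL units follow the NCRP 151 formulation: n = log₁₀(1/B) tenth-value layers are needed to reach a transmission goal B, with the first layer at TVL₁ and the remainder at the equilibrium TVLₑ. A short sketch (the 37/33 cm TVLs below are illustrative ordinary-concrete values, not figures from this abstract):

```python
import math

def barrier_thickness_cm(transmission_goal, tvl1_cm, tvle_cm):
    """Barrier thickness from the NCRP 151 two-TVL formulation.

    n = log10(1/B) tenth-value layers reach transmission B; the first
    layer uses TVL1 and any remainder the equilibrium TVLe.
    """
    n = math.log10(1.0 / transmission_goal)
    if n <= 1.0:
        return n * tvl1_cm
    return tvl1_cm + (n - 1.0) * tvle_cm

# Illustrative: a 4.5-TVL requirement with TVL1 = 37 cm, TVLe = 33 cm
print(round(barrier_thickness_cm(10 ** -4.5, 37.0, 33.0), 1))   # prints 152.5
```

The "4.5 TVL" primary barrier in the abstract is exactly such an n, with the concrete TVLs of the chosen material converting it into centimetres.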

  8. SU-E-T-273: Radiation Shielding for a Fixed Horizontal-Beam Linac in a Shipping Container and a Conventional Treatment Vault

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, M; Balter, P; Beadle, B; Chi, P; Stingo, F; Court, L [The University of Texas MD Anderson Cancer Center, Houston, TX (United States)

    2014-06-01

    Purpose: A fixed horizontal-beam linac, where the patient is treated in a seated position, could lower the overall costs of the treatment unit and room shielding substantially. This design also allows the treatment room and control area to be contained within a reduced space, such as a shipping container. The main application is the introduction of low-cost, high-quality radiation therapy to low- and middle-income regions. Here we consider shielding for upright treatments with a fixed-6MV-beam linac in a shipping container and a conventional treatment vault. Methods: Shielding calculations were done for two treatment room layouts using calculation methods in NCRP Report 151: (1) a shipping container (6m × 2.4m with the remaining space occupied by the console area), and (2) the treatment vault in NCRP 151 (7.8m by 5.4m by 3.4m). The shipping container has a fixed gantry that points in one direction at all times. For the treatment vault, various beam directions were evaluated. Results: The shipping container requires a primary barrier of 168cm concrete (4.5 TVL), surrounded by a secondary barrier of 3.6 TVL. The other walls require between 2.8–3.3 TVL. Multiple shielding calculations were done along the side wall. The results show that patient scatter increases in the forward direction and decreases dramatically in the backward direction. Leakage scatter also varies along the wall, depending largely on the distance between the gantry and the wall. For the treatment room, fixed-beam requires a slightly thicker primary barrier than the conventional linac (0.6 TVL), although this barrier is only needed in the center of one wall. The secondary barrier is different only by 0–0.2 TVL. Conclusion: This work shows that (1) the shipping container option is achievable, using indigenous materials for shielding and (2) upright treatments can be performed in a conventional treatment room with minimal additional shielding. Varian Medical Systems.

  9. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    Science.gov (United States)

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
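Property (1), a nonnegative PRP β, is the defining feature of PRP+-type directions: the β formula is truncated at zero so the method falls back to steepest descent when the PRP value turns negative. A minimal sketch (the constant step length here is an assumption for illustration; the published algorithms use their own line-search-free step rules):

```python
import numpy as np

def prp_plus_cg(grad, x0, n_iter=100, tol=1e-8, alpha=1e-2):
    """Conjugate-gradient iteration with the nonnegative PRP beta (beta_k >= 0)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        x = x + alpha * d                    # fixed step, for illustration only
        g_new = grad(x)
        # PRP beta, truncated at zero: guarantees beta_k >= 0
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        g = g_new
    return x
```

On an isotropic quadratic the truncation makes β vanish and the iteration reduces to gradient descent; on anisotropic problems the conjugate term accelerates convergence.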

  10. Validation of SMOS L1C and L2 Products and Important Parameters of the Retrieval Algorithm in the Skjern River Catchment, Western Denmark

    DEFF Research Database (Denmark)

    Bircher, Simone; Skou, Niels; Kerr, Yann H.

    2013-01-01

    L-band Microwave Emission of the Biosphere (L-MEB) model with initial guesses on the two parameters (derived from ECMWF products and ECOCLIMAP Leaf Area Index, respectively) and other auxiliary input. This paper presents the validation work carried out in the Skjern River Catchment, Denmark. L1C/L2 data

  11. Genetic algorithm for lattice gauge theory on SU(2) and U(1) on 4 dimensional lattice, how to hitchhike to thermal equilibrium state

    International Nuclear Information System (INIS)

    Yamaguchi, A.; Sugamoto, A.

    2000-01-01

    Applying a Genetic Algorithm to Lattice Gauge Theory is found to be an effective method for minimizing the action of the gauge field on a lattice. In 4 dimensions, the critical point and the Wilson loop behaviour of SU(2) lattice gauge theory, as well as the phase transition of the U(1) theory, have been studied. A proper coding method has been developed in order to avoid the increase of necessary memory and the calculational overload for the Genetic Algorithm. How hitchhikers toward equilibrium appear against kidnappers is clarified

  12. A Simulated Annealing-Based Heuristic Algorithm for Job Shop Scheduling to Minimize Lateness

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2013-04-01

    Full Text Available A decomposition-based optimization algorithm is proposed for solving large job shop scheduling problems with the objective of minimizing the maximum lateness. First, we use the constraint propagation theory to derive the orientation of a portion of disjunctive arcs. Then we use a simulated annealing algorithm to find a decomposition policy which satisfies the maximum number of oriented disjunctive arcs. Subsequently, each subproblem (corresponding to a subset of operations as determined by the decomposition policy is successively solved with a simulated annealing algorithm, which leads to a feasible solution to the original job shop scheduling problem. Computational experiments are carried out for adapted benchmark problems, and the results show the proposed algorithm is effective and efficient in terms of solution quality and time performance.
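The simulated-annealing component can be illustrated on a single-machine stand-in for the decomposition subproblems: the state is a job order, the neighborhood is a pairwise swap, and the objective is the maximum lateness. The instance and cooling parameters below are hypothetical, not the paper's job-shop setup:

```python
import math
import random

def max_lateness(seq, proc, due):
    """Maximum lateness of a job sequence on a single machine."""
    t, worst = 0, float("-inf")
    for j in seq:
        t += proc[j]
        worst = max(worst, t - due[j])
    return worst

def anneal(proc, due, T0=10.0, cooling=0.95, steps=2000, seed=0):
    """Simulated annealing over job orders: swap neighborhood, geometric cooling."""
    rng = random.Random(seed)
    seq = list(range(len(proc)))
    best = seq[:]
    T = T0
    for _ in range(steps):
        i, j = rng.sample(range(len(seq)), 2)
        cand = seq[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = max_lateness(cand, proc, due) - max_lateness(seq, proc, due)
        # accept improvements always, deteriorations with Boltzmann probability
        if delta <= 0 or rng.random() < math.exp(-delta / T):
            seq = cand
        if max_lateness(seq, proc, due) < max_lateness(best, proc, due):
            best = seq[:]
        T = max(T * cooling, 1e-6)
    return best
```

On a single machine the earliest-due-date order is optimal for maximum lateness, which gives a handy sanity check for the annealer.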

  13. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    Directory of Open Access Journals (Sweden)

    Gonglin Yuan

    Full Text Available Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.

  14. L1 effects on the processing of grammatical gender in L2

    NARCIS (Netherlands)

    Sabourin, L; Foster-Cohen, S.; Nizegorodcew, A.

    2001-01-01

    This paper explores L1 effects on the L2 off-line processing of Dutch (grammatical gender) agreement The L2 participants had either German, English or a Romance language as their L1. Non-gender agreement (finiteness and agreement) was tested to ascertain the level of proficiency of the participants

  15. An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.

    Science.gov (United States)

    Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim

    2015-10-01

    In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integrating half-Fourier reconstruction into the iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint shows superior performance both in the reconstruction of image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are reported. Its seamless combination with parallel imaging and compressed sensing enables the use of greater acceleration in 3D MR imaging.
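The practical difference between the L1- and L2-norm phase constraints comes down to their proximal operators: soft thresholding zeroes small phase residuals exactly while leaving large outliers nearly intact, whereas the quadratic penalty shrinks every residual uniformly. A minimal sketch of the two operators (generic, not the paper's reconstruction code):

```python
import numpy as np

def prox_l1(z, tau):
    """Soft thresholding: proximal map of tau * ||z||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def prox_l2sq(z, tau):
    """Uniform shrinkage: proximal map of (tau/2) * ||z||_2^2."""
    return z / (1.0 + tau)
```

Applied to a phase-error residual, the L1 map leaves a single large estimation error mostly untouched while suppressing the noise floor, which is consistent with the robustness against phase estimation errors reported above.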

  16. Methods of X-ray CT image reconstruction from few projections

    International Nuclear Information System (INIS)

    Wang, H.

    2011-01-01

    To improve the safety (low dose) and the productivity (fast acquisition) of an X-ray CT system, we want to reconstruct a high-quality image from a small number of projections. Classical reconstruction algorithms generally fail here, since the reconstruction procedure is unstable and suffers from artifacts. A new approach based on the recently developed 'Compressed Sensing' (CS) theory assumes that the unknown image is in some sense 'sparse' or 'compressible', and formulates the reconstruction as a nonlinear optimization problem (TV/l1 minimization) that enhances this sparsity. Using the pixel (or voxel in 3D) as the basis, applying the CS framework in CT usually requires a 'sparsifying' transform, combined with the 'X-ray projector' which acts on the pixel image. In this thesis, we have adapted a 'CT-friendly' radial basis of the Gaussian family, called 'blob', to the CS-CT framework. The blob has better space-frequency localization properties than the pixel, and many operations, such as the X-ray transform, can be evaluated analytically and are highly parallelizable (on GPU platforms). Compared to the classical Kaiser-Bessel blob, the new basis has a multi-scale structure: an image is the sum of dilated and translated radial Mexican hat functions. Typical medical objects are compressible under this basis, so the sparse representation system used in ordinary CS algorithms is no longer needed. 2D simulations show that the existing TV and l1 algorithms are more efficient and the reconstructions have better visual quality than the equivalent approaches based on the pixel or wavelet basis. The new approach has also been validated on 2D experimental data, where we have observed that in general the number of projections can be reduced to about 50%, without compromising the image quality. (author)

  17. Orbiting Carbon Observatory-2 (OCO-2) cloud screening algorithms; validation against collocated MODIS and CALIOP data

    Science.gov (United States)

    Taylor, T. E.; O'Dell, C. W.; Frankenberg, C.; Partain, P.; Cronk, H. Q.; Savtchenko, A.; Nelson, R. R.; Rosenthal, E. J.; Chang, A. Y.; Fisher, B.; Osterman, G.; Pollock, R. H.; Crisp, D.; Eldering, A.; Gunson, M. R.

    2015-12-01

    The objective of the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) mission is to retrieve the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared. These estimates can be biased by clouds and aerosols within the instrument's field of view (FOV). Screening of the most contaminated soundings minimizes unnecessary calls to the computationally expensive Level 2 (L2) XCO2 retrieval algorithm. Hence, robust cloud screening methods have been an important focus of the OCO-2 algorithm development team. Two distinct, computationally inexpensive cloud screening algorithms have been developed for this application. The A-Band Preprocessor (ABP) retrieves the surface pressure using measurements in the 0.76 μm O2 A-band, neglecting scattering by clouds and aerosols, which introduce photon path-length (PPL) differences that can cause large deviations between the expected and retrieved surface pressure. The Iterative Maximum A-Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) Preprocessor (IDP) retrieves independent estimates of the CO2 and H2O column abundances using observations taken at 1.61 μm (weak CO2 band) and 2.06 μm (strong CO2 band), while neglecting atmospheric scattering. The CO2 and H2O column abundances retrieved in these two spectral regions differ significantly in the presence of cloud and scattering aerosols. The combination of these two algorithms, which key off of different features in the spectra, provides the basis for cloud screening of the OCO-2 data set. To validate the OCO-2 cloud screening approach, collocated measurements from NASA's Moderate Resolution Imaging Spectrometer (MODIS), aboard the Aqua platform, were compared to results from the two OCO-2 cloud screening algorithms. With tuning to allow throughputs of ≃ 30 %, agreement between the OCO-2 and MODIS cloud screening methods is found to be

  18. Synthesis, crystal structure and properties of [Co(L)2](ClO4)2 (L = 1,3-bis(1H-benzimidazol-2-yl)-2-oxapropane)

    Directory of Open Access Journals (Sweden)

    Tavman Aydin

    2015-01-01

    Full Text Available The reaction of 1,3-bis(1H-benzimidazol-2-yl)-2-oxapropane (L) with Co(ClO4)2·6H2O in absolute ethanol produces the di[1,3-bis(1H-benzimidazol-2-yl)-2-oxapropane-κ2N,N′]cobalt(II) diperchlorate chelate complex ([Co(L)2](ClO4)2, 1). Complex 1 was characterized by elemental analysis, magnetic moment, molar conductivity, thermogravimetric analysis, FT-IR, UV-visible and mass spectrometry, and its solid-state structure was determined by single-crystal X-ray diffraction. According to the thermogravimetric and elemental analysis data, there is no coordinated or uncoordinated water in 1. The molar conductivity shows that complex 1 has a 1:2 M:L ionic character. In the complex, the distances between the cobalt and the ethereal oxygen atoms (Co1–O2: 2.805(3); Co2–O1: 2.752(2) Å) indicate semi-coordination bonding, and the Co(II) ion is six-coordinated with an N4O2 ligand set, resulting in a distorted octahedron.

  19. Do L1 Reading Achievement and L1 Print Exposure Contribute to the Prediction of L2 Proficiency?

    Science.gov (United States)

    Sparks, Richard L.; Patton, Jon; Ganschow, Leonore; Humbach, Nancy

    2012-01-01

    The study examined whether individual differences in high school first language (L1) reading achievement and print exposure would account for unique variance in second language (L2) written (word decoding, spelling, writing, reading comprehension) and oral (listening/speaking) proficiency after adjusting for the effects of early L1 literacy and…

  20. A Scheduling Algorithm for Minimizing the Packet Error Probability in Clusterized TDMA Networks

    Directory of Open Access Journals (Sweden)

    Arash T. Toyserkani

    2009-01-01

    Full Text Available We consider clustered wireless networks, where transceivers in a cluster use a time-slotted mechanism (TDMA) to access a wireless channel that is shared among several clusters. An approximate expression for the packet-loss probability is derived for networks with one or more mutually interfering clusters in Rayleigh fading environments, and the approximation is shown to be good for relevant scenarios. We then present a scheduling algorithm, based on Lagrangian duality, that exploits the derived packet-loss model in an attempt to minimize the average packet-loss probability in the network. Computer simulations of the proposed scheduling algorithm show that a significant increase in network throughput can be achieved compared to uncoordinated scheduling. Empirical trials also indicate that the proposed optimization algorithm almost always converges to an optimal schedule with a reasonable number of iterations. Thus, the proposed algorithm can also be used for benchmarking suboptimal scheduling algorithms.

  1. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-10-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full attention condition. Attention manipulation could reduce priming magnitude in both experiments in L2. Moreover, L2 word retrieval increases the reaction times and reduces accuracy on the simultaneous secondary task to protect its own accuracy and speed.

  2. GLI ERRORI DI ITALIANO L1 ED L2: INTERFERENZA E APPRENDIMENTO

    Directory of Open Access Journals (Sweden)

    Rosaria Solarino

    2011-02-01

    Full Text Available Can errors in Italian today be approached from a perspective that benefits teachers of both L1 and L2 Italian? We believe so: glottodidactic research seems by now to have prepared a common ground for the two learning situations, clearing the field of old prejudices and distinctions that appear outdated. Through the juxtaposition of concepts such as "spoken language / written language", "language errors / speech errors", "spontaneous learning / guided learning", "L1 Italian / L2 Italian" and "learning errors / interference errors", different criteria are indicated for interpreting errors and for evaluating them in relation to their causes, the communicative situations, the contexts, and the learner's stage of development in acquiring the language.

  3. Orbiting Carbon Observatory-2 (OCO-2) cloud screening algorithms: validation against collocated MODIS and CALIOP data

    Science.gov (United States)

    Taylor, Thomas E.; O'Dell, Christopher W.; Frankenberg, Christian; Partain, Philip T.; Cronk, Heather Q.; Savtchenko, Andrey; Nelson, Robert R.; Rosenthal, Emily J.; Chang, Albert Y.; Fisher, Brenden; Osterman, Gregory B.; Pollock, Randy H.; Crisp, David; Eldering, Annmarie; Gunson, Michael R.

    2016-03-01

    The objective of the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) mission is to retrieve the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared. These estimates can be biased by clouds and aerosols, i.e., contamination, within the instrument's field of view. Screening of the most contaminated soundings minimizes unnecessary calls to the computationally expensive Level 2 (L2) XCO2 retrieval algorithm. Hence, robust cloud screening methods have been an important focus of the OCO-2 algorithm development team. Two distinct, computationally inexpensive cloud screening algorithms have been developed for this application. The A-Band Preprocessor (ABP) retrieves the surface pressure using measurements in the 0.76 µm O2 A band, neglecting scattering by clouds and aerosols, which introduce photon path-length differences that can cause large deviations between the expected and retrieved surface pressure. The Iterative Maximum A Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) Preprocessor (IDP) retrieves independent estimates of the CO2 and H2O column abundances using observations taken at 1.61 µm (weak CO2 band) and 2.06 µm (strong CO2 band), while neglecting atmospheric scattering. The CO2 and H2O column abundances retrieved in these two spectral regions differ significantly in the presence of cloud and scattering aerosols. The combination of these two algorithms, which are sensitive to different features in the spectra, provides the basis for cloud screening of the OCO-2 data set. To validate the OCO-2 cloud screening approach, collocated measurements from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS), aboard the Aqua platform, were compared to results from the two OCO-2 cloud screening algorithms. With tuning of algorithmic threshold parameters that allows for processing of ≃ 20-25 % of all OCO-2 soundings
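
    The two-screen logic described above can be sketched as a pair of simple threshold tests (an illustrative sketch, not the operational OCO-2 code; the threshold values and function names are assumptions):

```python
# Illustrative sketch of the two-screen cloud filter (not the operational
# OCO-2 code): all threshold values and function names are assumptions.

def abp_cloud_screen(p_retrieved_hpa, p_prior_hpa, threshold_hpa=25.0):
    """ABP-style test: a clear-sky surface-pressure retrieval should sit
    close to the meteorological prior; cloud/aerosol scattering lengthens
    the photon path and pushes the two apart."""
    return abs(p_retrieved_hpa - p_prior_hpa) <= threshold_hpa

def idp_cloud_screen(co2_weak, co2_strong, max_ratio_dev=0.04):
    """IDP-style test: CO2 columns retrieved from the weak and strong
    bands should agree for clear scenes."""
    return abs(co2_weak / co2_strong - 1.0) <= max_ratio_dev

def is_clear(p_ret, p_prior, co2_weak, co2_strong):
    # Only soundings passing both screens go on to the expensive L2 retrieval.
    return abp_cloud_screen(p_ret, p_prior) and idp_cloud_screen(co2_weak, co2_strong)

print(is_clear(1005.0, 1000.0, 400.0, 402.0))  # True
```

    Because the two screens respond to different spectral features, requiring both to pass is stricter than either test alone, which mirrors the combination strategy described in the abstract.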

  4. L1 Korean and L1 Mandarin L2 English Learners' Acquisition of the Count/Mass Distinction in English

    Science.gov (United States)

    Choi, Sea Hee; Ionin, Tania; Zhu, Yeqiu

    2018-01-01

    This study investigates the second language (L2) acquisition of the English count/mass distinction by speakers of Korean and Mandarin Chinese, with a focus on the semantics of atomicity. It is hypothesized that L1-Korean and L1-Mandarin L2-English learners are influenced by atomicity in the use of the count/mass morphosyntax in English. This…

  5. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    Directory of Open Access Journals (Sweden)

    Yu-Fei Gao

    2017-04-01

    Full Text Available This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

  6. Characterization Results for the L(2, 1, 1-Labeling Problem on Trees

    Directory of Open Access Journals (Sweden)

    Zhang Xiaoling

    2017-08-01

    Full Text Available An L(2, 1, 1)-labeling of a graph G is an assignment of non-negative integers (labels) to the vertices of G such that adjacent vertices receive labels with difference at least 2, and vertices at distance 2 or 3 receive distinct labels. The span of such a labeling is the difference between the maximum and minimum labels used, and the minimum span over all L(2, 1, 1)-labelings of G is called the L(2, 1, 1)-labeling number of G, denoted by λ2,1,1(G). It was shown by King, Ras and Zhou in [The L(h, 1, 1)-labelling problem for trees, European J. Combin. 31 (2010) 1295–1306] that every tree T has Δ2(T) − 1 ≤ λ2,1,1(T) ≤ Δ2(T), where Δ2(T) = max{d(u) + d(v) : uv ∈ E(T)}. They conjectured that almost all trees have an L(2, 1, 1)-labeling number attaining the lower bound. This paper provides some sufficient conditions for λ2,1,1(T) = Δ2(T). Furthermore, we show that the sufficient conditions we provide are also necessary for trees with diameter at most 6.
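
    As a concrete illustration of the bound above, the quantity Δ2(T) can be computed directly from a tree's edge list (a hypothetical helper for illustration, not code from the paper):

```python
from collections import defaultdict

def delta2(edges):
    """Compute Δ2(T) = max over edges uv of d(u) + d(v) for a tree given
    as an edge list (illustrative helper, not from the paper)."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return max(deg[u] + deg[v] for u, v in edges)

# A star K(1,3): center 0 joined to leaves 1, 2, 3.
star = [(0, 1), (0, 2), (0, 3)]
print(delta2(star))  # 4
```

    For the star, every edge joins the degree-3 center to a degree-1 leaf, so Δ2 = 4, and the cited result pins λ2,1,1 of the star to either 3 or 4.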

  7. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    Science.gov (United States)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

    Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression, which is obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been applied to seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution is applied using sparse deconvolution (the MM algorithm) and the Smoothed One-Over-Two (SOOT) algorithm in a chain. The MM algorithm is based on the minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values when applied to real data, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it needs to be implemented on post-stack or pre-stack seismic data of regions with complex structure.
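
    The l1/l2 flavor of cost function mentioned above can be illustrated with a smoothed sparsity ratio of the kind SOOT-type methods minimize (a hedged sketch; the smoothing constants alpha and beta are assumptions, not values from the paper):

```python
import math

def smoothed_l1_over_l2(x, alpha=1e-3, beta=1e-3):
    """Smooth surrogate of the nonconvex l1/l2 sparsity measure minimized
    by SOOT-type methods (alpha and beta are assumed smoothing constants,
    not values from the paper)."""
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    return math.log((l1 + alpha) / (l2 + beta))

sparse = [0.0, 0.0, 3.0, 0.0]   # spiky, reflectivity-like series
dense = [1.5, 1.5, 1.5, 1.5]    # same l2 energy, spread out
print(smoothed_l1_over_l2(sparse) < smoothed_l1_over_l2(dense))  # True
```

    Spiky, reflectivity-like series score lower than dense ones of equal energy, which is why minimizing such a cost promotes the sparse solutions sought in spiking deconvolution.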

  8. PRMT1-mediated arginine methylation controls ATXN2L localization

    Energy Technology Data Exchange (ETDEWEB)

    Kaehler, Christian; Guenther, Anika; Uhlich, Anja; Krobitsch, Sylvia, E-mail: krobitsc@molgen.mpg.de

    2015-05-15

    Arginine methylation is a posttranslational modification that is of importance in diverse cellular processes. Recent proteomic mass spectrometry studies reported arginine methylation of ataxin-2-like (ATXN2L), the paralog of ataxin-2, a protein that is implicated in the neurodegenerative disorder spinocerebellar ataxia type 2. Here, we investigated the methylation state of ATXN2L and its significance for ATXN2L localization. We first confirmed that ATXN2L is asymmetrically dimethylated in vivo, and observed that the nuclear localization of ATXN2L is altered under methylation inhibition. We further discovered that ATXN2L associates with the protein arginine-N-methyltransferase 1 (PRMT1). Finally, we showed that neither mutation of the arginine–glycine-rich motifs of ATXN2L nor methylation inhibition alters ATXN2L localization to stress granules, suggesting that methylation of ATXN2L is probably not mandatory for its recruitment to stress granules. - Highlights: • ATXN2L is asymmetrically dimethylated in vivo. • ATXN2L interacts with PRMT1 under normal and stress conditions. • PRMT1-mediated dimethylation of ATXN2L controls its nuclear localization. • ATXN2L localization to stress granules appears independent of its methylation state.

  9. An improved algorithm for searching all minimal cuts in modified networks

    International Nuclear Information System (INIS)

    Yeh, W.-C.

    2008-01-01

    A modified network is an updated network after inserting a branch string (a special path) between two nodes in the original network. Modifications are common in network expansion or reinforcement evaluation and planning. The problem of searching for all minimal cuts (MCs) in a modified network is discussed and solved in this study. The existing best-known methods for solving this problem either required extensive comparison and verification or failed to solve some special but important cases. Therefore, a more efficient, intuitive and generalized method for searching for all MCs without an extensive comparison procedure is proposed. In this study, we first develop an intuitive algorithm, based upon reforming all MCs of the original network, to search for all MCs in a modified network. Next, the correctness of the proposed algorithm is analyzed and proven. The computational complexity of the proposed algorithm is analyzed and compared with that of the existing best-known methods. Finally, two examples illustrate how all MCs are generated in a modified network using the information of all of the MCs in the corresponding original network.

  10. L1 and L2 Strategy Use in Reading Comprehension of Chinese EFL Readers

    Science.gov (United States)

    Tsai, Yea-Ru; Ernst, Cheryl; Talley, Paul C.

    2010-01-01

    This study revealed the relationship between L1 (Mandarin Chinese) and L2 (English) strategy use in L2 reading comprehension by focusing on the correlation of L1 reading ability, L2 proficiency and employed reading strategies. The participants, 222 undergraduates learning English as a foreign language (EFL), were classified into skilled and…

  11. Asymmetries Between L1 and L2: A Challenge in the Teaching of "Language and Translation"

    Directory of Open Access Journals (Sweden)

    Laura Salmon

    2004-12-01

    Full Text Available Asymmetries Between L1 and L2: a Challenge in the Teaching of "Language and Translation". Language teaching is deeply connected to the cognitive and brain sciences. The attribution of meaning to linguistic messages depends on the interaction between the communicant's brain and the external world. The interaction is based on the cognitive and emotional experience shared by an L-community. Competence in L2 is acquired through the development of an internal database of contextualised "units of living speech" (Lurija 1976). L2 teaching is particularly successful when the L2 units are inscribed in memory by means of the functionally equivalent L1 units. Selection of the corresponding L1/L2 units is governed by the principle of markedness: each unit of L1 has a functional equivalent in one and only one unit of the L2. Because of interlingual asymmetries (a set of Italian/Russian examples is given), functional equivalence differs from morphosyntactic and lexical equivalence. Competence in ascribing a degree of markedness to each linguistic unit requires the regular implicit acquisition of the L2 intonational and prosodic system; processes of conscious metalinguistic reflection are instead a "hindrance" to the procedural acting of a "living speaker".

  12. Main: 1L6H [RPSD Archive]

    Lifescience Database Archive (English)

    Full Text Available Molecule: Non-Specific Lipid Transfer Protein; Chain: A; Synonym: Ltp2; Keywords: Lipid Transport. Authors: D.Samuel, P.-C.Lyu, D.Samu... Cross-references: SWS: P83210 | PDB: 1L6H; NMR; A=1-69 | Gramene: P83210 | InterPro: IPR003612 (AAI) | Pfam: PF00234 (Tryp..._alpha_amyl) | SMART: SM00499 (AAI). NMR, minimized average structure. Length: 69 AA; molecular weight: 7009 Da. Sequence (partial): AGCNAGQLTVCTGAI

  13. L-Serine overproduction with minimization of by-product synthesis by engineered Corynebacterium glutamicum.

    Science.gov (United States)

    Zhu, Qinjian; Zhang, Xiaomei; Luo, Yuchang; Guo, Wen; Xu, Guoqiang; Shi, Jinsong; Xu, Zhenghong

    2015-02-01

    The direct fermentative production of L-serine by Corynebacterium glutamicum from sugars is attractive. However, superfluous by-product accumulation and low L-serine productivity limit its industrial production on a large scale. This study aimed to investigate metabolic and bioprocess engineering strategies towards eliminating by-products as well as increasing L-serine productivity. Deletion of alaT and avtA, encoding the transaminases, and introduction of an attenuated mutant of acetohydroxyacid synthase (AHAS) increased both the L-serine production level (26.23 g/L) and its productivity (0.27 g/L/h). Compared to the parent strain, accumulation of the by-products L-alanine and L-valine in the resulting strain was reduced by 87 % (from 9.80 to 1.23 g/L) and 60 % (from 6.54 to 2.63 g/L), respectively. These modifications decreased the metabolic flux towards the branched-chain amino acids (BCAAs) and shifted it towards L-serine production. Meanwhile, it was found that corn steep liquor (CSL) could stimulate cell growth and increase the sucrose consumption rate as well as L-serine productivity. With the addition of 2 g/L CSL, the resulting strain showed a significant improvement in the sucrose consumption rate (72 %) and the L-serine productivity (67 %). In fed-batch fermentation, 42.62 g/L of L-serine accumulation was achieved with a productivity of 0.44 g/L/h and a yield of 0.21 g/g sucrose, which was the highest production of L-serine from sugars reported to date. The results demonstrated that combined metabolic and bioprocess engineering strategies could minimize by-product accumulation and improve L-serine productivity.

  14. The Influence of Schema and Cultural Difference on L1 and L2 Reading

    Science.gov (United States)

    Yang, Shi-sheng

    2010-01-01

    Reading in L1 shares numerous basic elements with reading in L2, and the processes also differ greatly. Intriguing questions involve whether there are two parallel cognitive processes at work, or whether there are processing strategies that accommodate both L1 and L2. This paper examines how reading in L1 is different from and similar to reading…

  15. Scattering matrices for Φ1,2 perturbed conformal minimal models in absence of kink states

    International Nuclear Information System (INIS)

    Koubek, A.; Martins, M.J.; Mussardo, G.

    1991-05-01

    We determine the spectrum and the factorizable S-matrices of the massive excitations of the nonunitary minimal models M2,2n+1 perturbed by the operator Φ1,2. These models present no kinks as asymptotic states, as follows from the reduction of the Zhiber-Mikhailov-Shabat model with respect to the quantum group SL(2)q found by Smirnov. We also give the whole set of S-matrices of the nonunitary minimal model M2,9 perturbed by the operator Φ1,4, which is related to a RSOS reduction for the Φ1,2 operator of the unitary model M8,9. The thermodynamic Bethe ansatz and the truncated conformal space approach are applied to these scattering theories in order to support their interpretation. (orig.)

  16. Deterministic and stochastic algorithms for resolving the flow fields in ducts and networks using energy minimization

    Science.gov (United States)

    Sochi, Taha

    2016-09-01

    Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
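
    The energy-minimization idea can be illustrated on the simplest possible network, two parallel tubes sharing a fixed total flow (a toy sketch, not the paper's solver; the resistance values and the tiny derivative-free minimizer below are assumptions):

```python
# Toy illustration of the energy-minimization principle (not the paper's
# solver): split a total flow Q between two parallel Newtonian tubes with
# hydraulic resistances R1 and R2 so that the viscous dissipation
# R1*q1^2 + R2*q2^2 is minimal, subject to q1 + q2 = Q.
R1, R2, Q = 2.0, 3.0, 1.0

def dissipation(q1):
    return R1 * q1 ** 2 + R2 * (Q - q1) ** 2

def compass_search(f, x, step=0.25, tol=1e-8):
    """Tiny derivative-free minimizer (a 1-D stand-in for the Nelder-Mead
    style optimizers named in the abstract)."""
    while step > tol:
        best = x
        for cand in (x - step, x + step):
            if f(cand) < f(best):
                best = cand
        if best == x:
            step *= 0.5   # no improvement: refine the search scale
        else:
            x = best
    return x

q1_opt = compass_search(dissipation, 0.5)
# Agrees with the conservation-law result q1 = Q * R2 / (R1 + R2) = 0.6.
print(round(q1_opt, 3))  # 0.6
```

    The minimizer recovers the same flow split that a conservation-based analysis gives, echoing the abstract's finding that the two formulations agree.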

  17. Minimizing the symbol-error-rate for amplify-and-forward relaying systems using evolutionary algorithms

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2015-02-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system. The detector is designed to minimize the symbol-error-rate (SER) of the system. The SER surface is non-linear and may have multiple minima; therefore, designing an SER detector for cooperative communications becomes an optimization problem. Evolutionary algorithms have the capability to find the global minimum; therefore, evolutionary algorithms such as particle swarm optimization (PSO) and differential evolution (DE) are exploited to solve this optimization problem. The performance of the proposed detectors is compared with that of conventional detectors such as the maximum likelihood (ML) and minimum mean square error (MMSE) detectors. In the simulation results, it can be observed that the SER performance of the proposed detectors is less than 2 dB away from that of the ML detector. A significant improvement in SER performance is also observed when comparing with the MMSE detector. The computational complexity of the proposed detector is much less than that of the ML and MMSE algorithms. Moreover, in contrast to the ML and MMSE detectors, the computational complexity of the proposed detectors increases linearly with respect to the number of relays.
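
    Differential evolution, one of the two evolutionary algorithms named above, is easy to sketch on a stand-in multimodal surface (an illustrative DE/rand/1/bin implementation; the Rastrigin objective replaces the actual SER surface, and all parameter values are assumptions):

```python
import math
import random

def differential_evolution(f, lo, hi, dim, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin sketch (illustrative, not the paper's
    detector): evolve a population toward the global minimum of a
    multimodal objective such as an SER surface."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # force at least one mutated gene
            trial = [
                min(max(pop[a][d] + F * (pop[b][d] - pop[c][d]), lo), hi)
                if (rng.random() < CR or d == jrand) else pop[i][d]
                for d in range(dim)
            ]
            fc = f(trial)
            if fc <= cost[i]:           # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
    k = min(range(pop_size), key=lambda i: cost[i])
    return pop[k], cost[k]

def rastrigin(x):
    # Standard multimodal test surface; global minimum 0 at the origin.
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

best, val = differential_evolution(rastrigin, -5.12, 5.12, dim=2)
```

    On this surface every local minimum away from the origin has cost at least 1, so a final cost below 1 indicates the search escaped the local basins, which is the property that motivates using DE on a multi-minima SER surface.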

  18. L(2,1)-labelling of Circular-arc Graph

    OpenAIRE

    Paul, Satyabrata; Pal, Madhumangal; Pal, Anita

    2014-01-01

    An L(2,1)-labelling of a graph $G=(V, E)$ is a function $f$ from the vertex set $V(G)$ to the set of non-negative integers such that adjacent vertices get numbers at least two apart, and vertices at distance two get distinct numbers. The L(2,1)-labelling number of $G$, denoted by $\lambda_{2,1}(G)$, is the minimum range of labels over all such labellings. In this article, it is shown that, for a circular-arc graph $G$, the upper bound of $\lambda_{2,1}(G)$ is $\Delta+3\omega$, ...
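
    For very small graphs the L(2,1)-labelling number can be found by exhaustive search, which makes the definition concrete (a hypothetical brute-force helper, exponential in the number of vertices and not from the paper):

```python
from itertools import product

def lambda21(n, edges):
    """Brute-force lambda_{2,1}(G) for a tiny graph on vertices 0..n-1
    (illustrative only: the search is exponential in n)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # unordered pairs of vertices at distance exactly two
    dist2 = {(u, v) for u in range(n) for w in adj[u] for v in adj[w]
             if u < v and v not in adj[u]}
    k = 0
    while True:
        for lab in product(range(k + 1), repeat=n):
            ok = all(abs(lab[u] - lab[v]) >= 2 for u, v in edges) and \
                 all(lab[u] != lab[v] for u, v in dist2)
            if ok:
                return k   # labels fit in {0, ..., k}, so the span is k
        k += 1

print(lambda21(3, [(0, 1), (1, 2)]))  # 3
```

    For the path on three vertices the search returns 3 (e.g. labels 2, 0, 3), matching the known value of $\lambda_{2,1}$ for that path.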

  19. Multiscale registration of medical images based on edge preserving scale space with application in image-guided radiation therapy

    Science.gov (United States)

    Li, Dengwang; Li, Hongsheng; Wan, Honglin; Chen, Jinhu; Gong, Guanzhong; Wang, Hongjun; Wang, Liming; Yin, Yong

    2012-08-01

    Mutual information (MI) is a well-accepted similarity measure for image registration in medical systems. However, MI-based registration faces the challenges of high computational complexity and a high likelihood of being trapped into local optima due to an absence of spatial information. In order to solve these problems, multi-scale frameworks can be used to accelerate registration and improve robustness. Traditional Gaussian pyramid representation is one such technique but it suffers from contour diffusion at coarse levels which may lead to unsatisfactory registration results. In this work, a new multi-scale registration framework called edge preserving multiscale registration (EPMR) was proposed based upon an edge preserving total variation L1 norm (TV-L1) scale space representation. TV-L1 scale space is constructed by selecting edges and contours of images according to their size rather than the intensity values of the image features. This ensures more meaningful spatial information with an EPMR framework for MI-based registration. Furthermore, we design an optimal estimation of the TV-L1 parameter in the EPMR framework by training and minimizing the transformation offset between the registered pairs for automated registration in medical systems. We validated our EPMR method on both simulated mono- and multi-modal medical datasets with ground truth and clinical studies from a combined positron emission tomography/computed tomography (PET/CT) scanner. We compared our registration framework with other traditional registration approaches. Our experimental results demonstrated that our method outperformed other methods in terms of the accuracy and robustness for medical images. EPMR can always achieve a small offset value, which is closer to the ground truth both for mono-modality and multi-modality, and the speed can be increased 5-8% for mono-modality and 10-14% for multi-modality registration under the same condition. 
Furthermore, clinical application by adaptive

  20. Multiscale registration of medical images based on edge preserving scale space with application in image-guided radiation therapy

    International Nuclear Information System (INIS)

    Li Dengwang; Wan Honglin; Li Hongsheng; Chen Jinhu; Gong Guanzhong; Yin Yong; Wang Hongjun; Wang Liming

    2012-01-01

    Mutual information (MI) is a well-accepted similarity measure for image registration in medical systems. However, MI-based registration faces the challenges of high computational complexity and a high likelihood of being trapped into local optima due to an absence of spatial information. In order to solve these problems, multi-scale frameworks can be used to accelerate registration and improve robustness. Traditional Gaussian pyramid representation is one such technique but it suffers from contour diffusion at coarse levels which may lead to unsatisfactory registration results. In this work, a new multi-scale registration framework called edge preserving multiscale registration (EPMR) was proposed based upon an edge preserving total variation L1 norm (TV-L1) scale space representation. TV-L1 scale space is constructed by selecting edges and contours of images according to their size rather than the intensity values of the image features. This ensures more meaningful spatial information with an EPMR framework for MI-based registration. Furthermore, we design an optimal estimation of the TV-L1 parameter in the EPMR framework by training and minimizing the transformation offset between the registered pairs for automated registration in medical systems. We validated our EPMR method on both simulated mono- and multi-modal medical datasets with ground truth and clinical studies from a combined positron emission tomography/computed tomography (PET/CT) scanner. We compared our registration framework with other traditional registration approaches. Our experimental results demonstrated that our method outperformed other methods in terms of the accuracy and robustness for medical images. EPMR can always achieve a small offset value, which is closer to the ground truth both for mono-modality and multi-modality, and the speed can be increased 5–8% for mono-modality and 10–14% for multi-modality registration under the same condition. Furthermore, clinical application by

  1. Metacognitive Online Reading Strategy Use: Readers' Perceptions in L1 and L2

    Science.gov (United States)

    Taki, Saeed

    2016-01-01

    This study aimed to explore whether first-language (L1) readers of different language backgrounds would employ similar metacognitive online reading strategies and whether reading online in a second language (L2) could be influenced by L1 reading strategies. To this end, 52 Canadian college students as English L1 readers and 38 Iranian university…

  2. TES Level 1 Algorithms: Interferogram Processing, Geolocation, Radiometric, and Spectral Calibration

    Science.gov (United States)

    Worden, Helen; Beer, Reinhard; Bowman, Kevin W.; Fisher, Brendan; Luo, Mingzhao; Rider, David; Sarkissian, Edwin; Tremblay, Denis; Zong, Jia

    2006-01-01

    The Tropospheric Emission Spectrometer (TES) on the Earth Observing System (EOS) Aura satellite measures the infrared radiance emitted by the Earth's surface and atmosphere using Fourier transform spectrometry. The measured interferograms are converted into geolocated, calibrated radiance spectra by the L1 (Level 1) processing, and are the inputs to L2 (Level 2) retrievals of atmospheric parameters, such as vertical profiles of trace gas abundance. We describe the algorithmic components of TES Level 1 processing, giving examples of the intermediate results and diagnostics that are necessary for creating TES L1 products. An assessment of noise-equivalent spectral radiance levels and current systematic errors is provided. As an initial validation of our spectral radiances, TES data are compared to the Atmospheric Infrared Sounder (AIRS) (on EOS Aqua), after accounting for spectral resolution differences by applying the AIRS spectral response function to the TES spectra. For the TES L1 nadir data products currently available, the agreement with AIRS is 1 K or better.
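
    The radiometric-calibration step mentioned above is, in generic Fourier-transform-spectrometer practice, a two-point calibration against views of cold space and an onboard blackbody (a simplified sketch under the assumption of negligible space radiance; this is not the exact TES L1B algorithm, and all names are assumptions):

```python
def two_point_calibration(s_scene, s_space, s_bb, planck_bb):
    """Generic two-point radiometric calibration for an FTS (illustrative
    sketch, not the exact TES algorithm): the cold-space and blackbody
    views fix the instrument gain and offset at each spectral sample,
    assuming the space-view radiance is negligible."""
    return [(sc - sp) / (bb - sp) * lbb
            for sc, sp, bb, lbb in zip(s_scene, s_space, s_bb, planck_bb)]

# A scene measured halfway between the two reference views maps to half
# the blackbody's Planck radiance.
cal = two_point_calibration([6.0], [2.0], [10.0], [8.0])
print(cal)  # [4.0]
```

    The same per-sample gain/offset idea underlies calibrated radiance products generally; the operational processing additionally handles complex spectra, phase correction, and instrument line-shape effects.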

  3. Neutrino masses in the minimal gauged (B -L ) supersymmetry

    Science.gov (United States)

    Yan, Yu-Li; Feng, Tai-Fu; Yang, Jin-Lei; Zhang, Hai-Bin; Zhao, Shu-Min; Zhu, Rong-Fei

    2018-03-01

    We present the radiative corrections to neutrino masses in a minimal supersymmetric extension of the standard model with local U(1)B-L symmetry. At tree level, three tiny active neutrinos and two nearly massless sterile neutrinos can be obtained through the seesaw mechanism. Considering the one-loop corrections to the neutrino masses, the numerical results indicate that the two sterile neutrinos obtain keV-scale masses and small active-sterile neutrino mixing angles. The lighter sterile neutrino is a very interesting dark matter candidate in cosmology. Meanwhile, the active neutrino mixing angles and mass squared differences agree with present experimental data.

  4. Discourse Connectives in L1 and L2 Argumentative Writing

    Science.gov (United States)

    Hu, Chunyu; Li, Yuanyuan

    2015-01-01

    Discourse connectives (DCs) are multi-functional devices used to connect discourse segments and fulfill interpersonal levels of discourse. This study investigates the use of selected 80 DCs within 11 categories in the argumentative essays produced by L1 and L2 university students. The analysis is based on the International Corpus Network of Asian…

  5. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    Science.gov (United States)

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  6. L2 Word Recognition: Influence of L1 Orthography on Multi-Syllabic Word Recognition

    Science.gov (United States)

    Hamada, Megumi

    2017-01-01

    L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on…

  7. L1-L2 Transfer in the Narrative Styles of Chinese EFL Learners' Written Personal Narratives

    Science.gov (United States)

    Su, I-Ru; Chou, Yi-Chun

    2016-01-01

    Most of the research on second language (L2) narratives has focused on whether or how L2 learners carry their L1 narrative styles into L2 narration; few studies have explored whether L2 learners' knowledge of the L2 also in turn affects their L1 narrative performance. The present study attempted to probe the issue of cultural transfer in narrative…

  8. Simulation of LOFT anticipated-transient experiments L6-1, L6-2, and L6-3 using TRAC-PF1/MOD1

    International Nuclear Information System (INIS)

    Sahota, M.S.

    1984-01-01

    Anticipated-transient experiments L6-1, L6-2, and L6-3, performed at the Loss-of-Fluid Test (LOFT) facility, are analyzed using the latest released version of the Transient Reactor Analysis Code (TRAC-PF1/MOD1). The results are used to assess TRAC-PF1/MOD1 trip and control capabilities, and predictions of thermal-hydraulic phenomena during slow transients. Test L6-1 simulated a loss-of-steam load in a large pressurized-water reactor (PWR), and was initiated by closing the main steam-flow control valve (MSFCV) at its maximum rate, which reduced the heat removal from the secondary-coolant system and increased the primary-coolant system pressure, initiating a reactor scram. Test L6-2 simulated a loss of primary-coolant flow in a large PWR, and was initiated by tripping the power to the primary-coolant pumps (PCPs), allowing the pumps to coast down. The reduced primary-coolant flow caused a reactor scram. Test L6-3 simulated an excessive-load-increase incident in a large PWR, and was initiated by opening the MSFCV at its maximum rate, which increased the heat removal from the secondary-coolant system and decreased the primary-coolant system pressure, initiating a reactor scram. The TRAC calculations accurately predict most test events. The test data and the calculated results for most parameters of interest also agree well.

  9. Vibrational spectroscopic and theoretical study of 3,5-dimethyl-1-thiocarboxamide pyrazole (L) and the complexes Co2L2Cl4, Cu2L2Cl4 and Cu2L2Br2

    International Nuclear Information System (INIS)

    Nemcsok, Denes; Kovacs, Attila; Szecsenyi, Katalin Meszaros; Leovac, Vukadin M.

    2006-01-01

    In the present paper we report a joint experimental and theoretical study of 3,5-dimethyl-1-thiocarboxamide pyrazole (L) and its complexes Co2L2Cl4, Cu2L2Cl4 and Cu2L2Br2. DFT computations were used to model the structural and bonding properties of the title compounds as well as to derive a reliable force field for the normal coordinate analysis of L. The computations indicated the importance of hydrogen bonding interactions in stabilising the global minimum structures on the potential energy surfaces. In contrast to the S-bridged binuclear Cu2L2Br2 complex found in the crystal, our computations predicted the formation of (CuLBr)2 dimers in the isolated state, stabilized by very strong (53 kJ/mol) N-H...Br hydrogen bonding interactions. On the basis of FT-IR and FT-Raman experiments and the DFT-derived scaled quantum mechanical force field, we carried out a complete normal coordinate analysis of L. The FT-IR spectra of the three complexes were interpreted using the present assignment of L, literature data and computed results.

  10. A new algorithm for optimum voltage and reactive power control for minimizing transmission lines losses

    International Nuclear Information System (INIS)

    Ghoudjehbaklou, H.; Danai, B.

    2001-01-01

    Reactive power dispatch for voltage profile modification has been of interest to power utilities. Usually, local bus voltages can be altered by changing generator voltages, reactive shunts, ULTC transformers and SVCs. Determination of optimum values for the control parameters, however, is not simple for modern power system networks, so heuristic and intelligent algorithms have to be sought. In this paper a new algorithm is proposed that is based on a variant of a genetic algorithm combined with simulated annealing updates. In this algorithm a fuzzy multi-objective approach is used for the fitness function of the genetic algorithm. This fuzzy multi-objective function can efficiently modify the voltage profile in order to minimize transmission line losses, thus reducing the operating costs. The reason for such a combination is to utilize the best characteristics of each method and overcome their deficiencies. The proposed algorithm is much faster than the classical genetic algorithm and can be easily integrated into existing power utilities' software. The proposed algorithm is tested on an actual system model of 1284 buses, 799 lines, 1175 fixed and ULTC transformers, 86 generators, 181 controllable shunts and 425 loads.
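
    The combination of a genetic algorithm with simulated-annealing updates can be sketched on a toy loss-minimization objective (an illustrative stand-in, not the paper's method: a convex quadratic replaces the fuzzy multi-objective fitness, and all parameter values are assumptions):

```python
import math
import random

def ga_sa_minimize(f, dim, pop_size=16, generations=150, t0=1.0,
                   cooling=0.97, seed=7):
    """Toy hybrid of a genetic algorithm with simulated-annealing updates
    (illustrative stand-in for the paper's scheme): offspring come from
    midpoint crossover plus Gaussian mutation, and a worse offspring may
    still replace its parent with Metropolis probability exp(-delta/T)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    t = t0
    for _ in range(generations):
        for i in range(pop_size):
            mate = pop[rng.randrange(pop_size)]
            child = [(a + b) / 2.0 + rng.gauss(0.0, 0.3)
                     for a, b in zip(pop[i], mate)]
            fc = f(child)
            delta = fc - cost[i]
            if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
                pop[i], cost[i] = child, fc
        t *= cooling  # annealing schedule: fewer uphill moves over time
    k = min(range(pop_size), key=lambda i: cost[i])
    return pop[k], cost[k]

# A convex quadratic "loss" stands in for the transmission-loss objective.
best, val = ga_sa_minimize(lambda x: sum(v * v for v in x), dim=3)
```

    The annealing acceptance lets early generations explore uphill moves while late generations behave like a greedy genetic search, which is the complementary behavior the combination is meant to exploit.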

  11. Pollution prevention/waste minimization program 1998 fiscal year work plan - WBS 1.11.2.1

    International Nuclear Information System (INIS)

    Howald, S.C.; Merry, D.S.

    1997-09-01

    Pollution Prevention/Waste Minimization (P2/WMin) is the Department of Energy's preferred approach to environmental management. The P2/WMin mission is to eliminate or minimize waste generation, pollutant releases to the environment, and use of toxic substances, and to conserve resources by implementing cost-effective pollution prevention technologies, practices, and policies.

  12. Homotopy Algorithm for Fixed Order Mixed H2/H(infinity) Design

    Science.gov (United States)

    Whorton, Mark; Buschek, Harald; Calise, Anthony J.

    1996-01-01

    Recent developments in the field of robust multivariable control have merged the theories of H-infinity and H-2 control. This mixed H-2/H-infinity compensator formulation allows design for nominal performance by H-2 norm minimization while guaranteeing robust stability to unstructured uncertainties by constraining the H-infinity norm. A key difficulty associated with mixed H-2/H-infinity compensation is compensator synthesis. A homotopy algorithm is presented for synthesis of fixed order mixed H-2/H-infinity compensators. Numerical results are presented for a four disk flexible structure to evaluate the efficiency of the algorithm.

  13. L1/L2 Differences in the Acquisition of Form-Meaning Pairings in a Second Language

    Science.gov (United States)

    McManus, Kevin

    2015-01-01

    This paper examines the impact of L1/L2 form-meaning differences in the domain of aspect to investigate whether L2 learners are able to acquire properties of the L2 that are different from the L1. Oral data were collected from English- and German-speaking university learners of French L2 (n = 75) at two different levels of proficiency. The results…

  14. An Effective Hybrid Routing Algorithm in WSN: Ant Colony Optimization in combination with Hop Count Minimization

    Directory of Open Access Journals (Sweden)

    Ailian Jiang

    2018-03-01

    Full Text Available Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs. This paper investigates the existing ant colony optimization (ACO-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as energy constraint of sensor nodes, network load balancing and dynamic network topology. Then we propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm has unique superiority in terms of searching for the optimal path, balancing the network load and the network topology maintenance. The WSN model and the proposed algorithm have been implemented using C++. Extensive simulation experimental results have shown that our algorithm outperforms several other WSN routing algorithms on such aspects that include the rate of convergence, the success rate in searching for global optimal solution, and the network lifetime.
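The hybrid this record describes, ant colony optimization guided by a precomputed minimum-hop field and residual node energy, can be illustrated with a small sketch. The desirability formula and all parameters below are hypothetical choices of mine, not the paper's:

```python
import random
from collections import deque

def min_hops(adj, sink):
    # BFS hop counts from every node to the sink (graph assumed undirected).
    hops = {sink: 0}
    q = deque([sink])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                q.append(v)
    return hops

def aco_route(adj, energy, src, sink, ants=30, iters=20, seed=1):
    rng = random.Random(seed)
    hops = min_hops(adj, sink)
    tau = {(u, v): 1.0 for u in adj for v in adj[u]}  # pheromone trails
    best = None
    for _ in range(iters):
        for _ in range(ants):
            path, u, seen = [src], src, {src}
            while u != sink:
                cand = [v for v in adj[u] if v not in seen]
                if not cand:                 # dead end: abandon this ant
                    path = None
                    break
                # Desirability mixes pheromone, residual energy and hop count.
                w = [tau[(u, v)] * energy[v] / (1 + hops[v]) for v in cand]
                u = rng.choices(cand, weights=w)[0]
                seen.add(u)
                path.append(u)
            if path and (best is None or len(path) < len(best)):
                best = path
            if path:
                for a, b in zip(path, path[1:]):
                    tau[(a, b)] += 1.0 / len(path)   # reinforce short paths
        for k in tau:
            tau[k] *= 0.9                            # evaporation
    return best
```

On a toy five-node network this converges to the shortest energy-feasible route; in a real WSN the `energy` values would decay as nodes relay traffic, which is what produces the load balancing the abstract refers to.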

  15. Comparing L1 and L2 Texts and Writers in First-Year Composition

    Science.gov (United States)

    Eckstein, Grant; Ferris, Dana

    2018-01-01

    Scholars have at various points discussed the needs of second language (L2) writers enrolled in "mainstream" composition courses where they are mixed with native (L1) English speakers. Other researchers have investigated the experiences of L2 writers in mainstream classes and the perceptions of their instructors about their abilities and…

  16. Circulating cell-derived microparticles in patients with minimally symptomatic obstructive sleep apnoea.

    Science.gov (United States)

    Ayers, L; Ferry, B; Craig, S; Nicoll, D; Stradling, J R; Kohler, M

    2009-03-01

    Moderate-severe obstructive sleep apnoea (OSA) has been associated with several pro-atherogenic mechanisms and increased cardiovascular risk, but it is not known if minimally symptomatic OSA has similar effects. Circulating cell-derived microparticles have been shown to have pro-inflammatory, pro-coagulant and endothelial function-impairing effects, as well as to predict subclinical atherosclerosis and cardiovascular risk. In 57 patients with minimally symptomatic OSA, and 15 closely matched control subjects without OSA, AnnexinV-positive, platelet-, leukocyte- and endothelial cell-derived microparticles were measured by flow cytometry. In patients with OSA, median (interquartile range) levels of AnnexinV-positive microparticles were significantly elevated compared with control subjects: 2,586 (1,566-3,964) μL⁻¹ versus 1,206 (474-2,501) μL⁻¹, respectively. Levels of platelet-derived and leukocyte-derived microparticles were also significantly higher in patients with OSA (2,267 (1,102-3,592) μL⁻¹ and 20 (14-31) μL⁻¹, respectively) compared with control subjects (925 (328-2,068) μL⁻¹ and 15 (5-23) μL⁻¹, respectively). Endothelial cell-derived microparticle levels were similar in patients with OSA compared with control subjects (13 (8-25) μL⁻¹ versus 11 (6-17) μL⁻¹). In patients with minimally symptomatic obstructive sleep apnoea, levels of AnnexinV-positive, platelet- and leukocyte-derived microparticles are elevated when compared with closely matched control subjects without obstructive sleep apnoea. These findings suggest that these patients may be at increased cardiovascular risk, despite being minimally symptomatic.

  17. Design Considerations and Validation of Tenth Value Layer Used for a Medical Linear Accelerator Bunker Using High Density Concrete

    International Nuclear Information System (INIS)

    Peet, Deborah; Horton, Patrick; Jones, Matthew; Ramsdale, Malcolm

    2006-01-01

    A bunker for the containment and medical use of 10 MV and 6 MV X-rays from a linear accelerator was designed to be added on to four existing bunkers. Space was limited and the walls of the bunker were built using Magnadense, a high density aggregate mined in Sweden and imported into the UK by Minelco Minerals Ltd. The density was specified by the user to be a minimum of 3800 kg/m³. This reduced the thickness of primary and secondary shielding over that required using standard concrete. Standard concrete (density 2350 kg/m³) was used for the roof of the bunker. No published data for the tenth value layer (T.V.L.) of the high density concrete were available, and values of T.V.L. were derived from those for standard concrete using the ratio of densities. Calculations of wall thickness along established principles using normal assumptions and dose constraints resulted in a design with minimum primary wall barriers of 1500 mm and secondary barriers of between 800 mm and 1000 mm of high density concrete. Following construction, measurements were made of the dose rates outside the shielding, thereby allowing estimates of the T.V.L. of the material for 6 and 10 MV X-rays. The instantaneous dose rates outside the primary barrier walls were calculated to be less than 6 × 10⁻⁶ Sv/hr but on measurement were found to be more than a factor of 4 times lower than this. Calculations were reviewed and the T.V.L. was found to be 12% greater than that required to achieve the measured dose rate. On the roof, the instantaneous dose rate at the primary barrier was measured to be within 3% of that predicted using the published values of T.V.L. for standard concrete. Sample cubes of standard and high density concrete poured during construction showed that the density of the standard concrete in the roof was close to that used in the design, whereas the physical density of the Magnadense concrete was on average 5% higher than that specified. In conclusion, values of T.V.L. for the high density

  18. Object Tracking via 2DPCA and l2-Regularization

    Directory of Open Access Journals (Sweden)

    Haijun Wang

    2016-01-01

    Full Text Available We present a fast and robust object tracking algorithm using 2DPCA and l2-regularization in a Bayesian inference framework. Firstly, we model the challenging appearance of the tracked object using 2DPCA bases, which exploit the strength of subspace representation. Secondly, we adopt l2-regularization to solve the proposed representation model and remove the trivial templates from the sparse tracking method, which yields faster tracking. Finally, we present a novel likelihood function that considers the reconstruction error, which is derived from the orthogonal left-projection matrix and the orthogonal right-projection matrix. Experimental results on several challenging image sequences demonstrate that the proposed method achieves favorable performance against state-of-the-art tracking algorithms.
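The speed advantage of an l2 penalty over an l1 penalty in this kind of template coding comes from the fact that ridge regression has a closed-form solution, x = (AᵀA + λI)⁻¹Aᵀy, so no iterative sparse solver is needed. A minimal sketch (names and λ are illustrative, not taken from the paper):

```python
import numpy as np

def ridge_code(A, y, lam=0.1):
    # Closed-form l2-regularized coding of observation y over template basis A:
    # minimizes 1/2||Ax - y||^2 + (lam/2)||x||^2.
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ y)

def reconstruction_error(A, y, lam=0.1):
    # Residual norm used as a (negative log-)likelihood surrogate in tracking.
    x = ridge_code(A, y, lam)
    return float(np.linalg.norm(y - A @ x))
```

In a tracker, `reconstruction_error` would score each candidate patch against the learned subspace; the candidate with the smallest residual is kept.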

  19. Approximate L0 constrained Non-negative Matrix and Tensor Factorization

    DEFF Research Database (Denmark)

    Mørup, Morten; Madsen, Kristoffer Hougaard; Hansen, Lars Kai

    2008-01-01

    Non-negative matrix factorization (NMF), i.e. V = WH where V, W and H are all non-negative, has become a widely used blind source separation technique due to its part based representation. The NMF decomposition is not in general unique, and a part based representation is not guaranteed. However...... constraint. In general, solving for a given L0 norm is an NP hard problem, thus convex relaxation to regularization by the L1 norm is often considered, i.e., minimizing (1/2)||V - WH||² + λ||H||₁. An open problem is to control the degree of sparsity imposed. We here demonstrate that a full regularization......, the L1 regularization strength λ that best approximates a given L0 can be directly accessed and in effect used to control the sparsity of H. The MATLAB code for the NLARS algorithm is available for download....
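The l1-regularized NMF objective quoted above, minimizing (1/2)||V - WH||² + λ||H||₁, can also be attacked with the classic Lee-Seung multiplicative updates, with the l1 term folded into the denominator of the H update. This is a generic illustration of the objective, not the paper's NLARS algorithm:

```python
import numpy as np

def sparse_nmf(V, rank, lam=0.1, iters=200, seed=0):
    # Multiplicative-update NMF with an l1 penalty on H:
    # minimizes (1/2)||V - WH||_F^2 + lam*||H||_1 over W, H >= 0.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, (n, rank))
    H = rng.uniform(0.1, 1.0, (rank, m))
    eps = 1e-9  # avoids division by zero
    for _ in range(iters):
        # The lam in the denominator shrinks H, promoting sparsity.
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because the updates multiply by non-negative ratios, W and H stay non-negative throughout; larger λ drives more entries of H toward zero.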

  20. Cognitive Style and Reading Comprehension in L1 and L2.

    Science.gov (United States)

    Vivaldo-Lima, Javier

    This paper presents the results of a research study carried out with Mexican college students to analyze the relationship between readers' cognitive styles (field dependent/independent) and their performance at different levels of written discourse processing in Spanish (L1) and English (L2). The sample for the study included 452 undergraduate…

  1. Characterization and immunogenicity of rLipL32/1-LipL21-OmpL1/2 fusion protein as a novel immunogen for a vaccine against Leptospirosis

    Directory of Open Access Journals (Sweden)

    Zhao Xin

    2015-01-01

    Full Text Available Vaccination is an effective strategy to prevent leptospirosis, a global zoonotic disease caused by infection with pathogenic Leptospira species. However, the currently used multiple-valence vaccine, which is prepared with whole cells of several Leptospira serovars, has major side effects, while its cross-immunogenicity among different Leptospira serovars is weak. LipL32, LipL21 and OmpL1 have been confirmed as surface-exposed antigens in all pathogenic Leptospira strains, but their immunoprotective efficiency needs to be improved. In the present study, we generated a fusion gene lipL32/1-lipL21-ompL1/2 using primer-linking PCR and an engineered E. coli strain to express the recombinant fusion protein rLipL32/1-LipL21-OmpL1/2 (rLLO). Subsequently, the expression conditions were optimized using a central composite design that increased the fusion protein yield 2.7-fold. Western blot assays confirmed that rLLO was recognized by anti-rLipL32/1, anti-rLipL21, and anti-rOmpL1/2 sera as well as 98.5% of the sera from leptospirosis patients. The microscopic agglutination test (MAT) demonstrated that rLLO antiserum had a stronger ability to agglutinate the strains of different Leptospira serovars than the rLipL32/1, rLipL21, and rOmpL1/2 antisera. More importantly, tests in hamsters showed that rLLO provided higher immunoprotective rates (91.7%) than rLipL32/1, rLipL21 and rOmpL1/2 (50.0-75.0%). All the data indicate that rLLO, a recombinant fusion protein incorporating three antigens, has increased antigenicity and immunoprotective effects, and so can be used as a novel immunogen to develop a universal genetically engineered vaccine against leptospirosis.

  2. VIPRAM_L1CMS: a 2-Tier 3D Architecture for Pattern Recognition for Track Finding

    Energy Technology Data Exchange (ETDEWEB)

    Hoff, J. R. [Fermilab]; Joshi, S. [Northwestern U.]; Liu [Fermilab]; Olsen, J. [Fermilab]; Shenai, A. [Fermilab]

    2017-06-15

    In HEP tracking trigger applications, flagging an individual detector hit is not important. Rather, the path of a charged particle through many detector layers is what must be found. Moreover, given the increased luminosity projected for future LHC experiments, this type of track finding will be required within the Level 1 Trigger system. This means that future LHC experiments require not just a chip capable of high-speed track finding but also one with a high-speed readout architecture. VIPRAM_L1CMS is a 2-Tier Vertically Integrated chip designed to fulfill these requirements. It is a complete pipelined Pattern Recognition Associative Memory (PRAM) architecture including pattern recognition, result sparsification, and readout for Level 1 trigger applications in CMS, with 15-bit wide detector addresses and eight detector layers included in the track finding. Pattern recognition is based on classic Content Addressable Memories with a Current Race Scheme to reduce timing complexity and a 4-bit Selective Precharge to minimize power consumption. VIPRAM_L1CMS uses a pipelined set of priority-encoded binary readout structures to sparsify and read out active road flags at frequencies of at least 100 MHz. VIPRAM_L1CMS is designed to work directly with the Pulsar2b Architecture.

  3. The Status of the Auxiliary "Do" in L1 and L2 English Negative Clauses

    Science.gov (United States)

    Perales, Susana

    2010-01-01

    This paper addresses the issue of whether negative sentences containing auxiliary "do" in L1 and L2 English share the same underlying syntactic representation. To this end, I compare the negative sentences produced by 77 bilingual (Spanish/Basque) L2 learners of English with the corresponding data available for L1 acquirers reported on in Schutze…

  4. Pragmatic assessment of schizophrenic bilinguals' L1 and L2 use ...

    African Journals Online (AJOL)

    This paper reports on a study investigating the pragmatic skills and deficits of schizophrenic bilinguals in their spontaneous first language (L1) and second language (L2) speech. Smit (2009) (see also Smit et al., this volume) argues that the locus of deficits in schizophrenic speech is semantics and suggests that a next step ...

  5. Iterative CT reconstruction via minimizing adaptively reweighted total variation.

    Science.gov (United States)

    Zhu, Lei; Niu, Tianye; Petrongolo, Michael

    2014-01-01

    Iterative reconstruction via total variation (TV) minimization has demonstrated great successes in accurate CT imaging from under-sampled projections. When projections are further reduced, over-smoothing artifacts appear in the current reconstruction especially around the structure boundaries. We propose a practical algorithm to improve TV-minimization based CT reconstruction on very few projection data. Based on the theory of compressed sensing, the L-0 norm approach is more desirable to further reduce the projection views. To overcome the computational difficulty of the non-convex optimization of the L-0 norm, we implement an adaptive weighting scheme to approximate the solution via a series of TV minimizations for practical use in CT reconstruction. The weight on TV is initialized as uniform ones, and is automatically changed based on the gradient of the reconstructed image from the previous iteration. The iteration stops when a small difference between the weighted TV values is observed on two consecutive reconstructed images. We evaluate the proposed algorithm on both a digital phantom and a physical phantom. Using 20 equiangular projections, our method reduces reconstruction errors in the conventional TV minimization by a factor of more than 5, with improved spatial resolution. By adaptively reweighting TV in iterative CT reconstruction, we successfully further reduce the projection number for the same or better image quality.
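The reweighting loop described above, uniform weights first, then weights reset from the gradient of the previous reconstruction so that edges are penalized less, can be illustrated in 1-D. The sketch below is my simplification of the idea (an IRLS-style sequence of weighted quadratic smoothing problems approximating weighted TV), not the authors' 2-D CT code:

```python
import numpy as np

def weighted_quad_denoise(y, w, lam):
    # Exactly minimizes 1/2||x - y||^2 + (lam/2) * sum_i w_i (x_{i+1} - x_i)^2.
    n = len(y)
    D = np.diff(np.eye(n), axis=0)            # (n-1, n) finite-difference matrix
    A = np.eye(n) + lam * D.T @ (w[:, None] * D)
    return np.linalg.solve(A, y)

def reweighted_tv(y, lam=1.0, outer=6, eps=1e-3):
    # Adaptively reweighted smoothing: uniform weights initially, then each
    # weight is reset from the previous iterate's gradient, so flat regions
    # get heavy smoothing while edges are left almost untouched.
    w = np.ones(len(y) - 1)
    x = y.copy()
    for _ in range(outer):
        x = weighted_quad_denoise(y, w, lam)
        w = 1.0 / (np.abs(np.diff(x)) + eps)  # small gradient -> large weight
    return x
```

On a noisy step signal this recovers nearly piecewise-constant output while preserving the jump, which is the behaviour the reweighting is designed to buy over plain TV.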

  6. Multi-objective optimization design of air distribution of grate cooler by entropy generation minimization and genetic algorithm

    International Nuclear Information System (INIS)

    Shao, Wei; Cui, Zheng; Cheng, Lin

    2016-01-01

    Highlights: • A multi-objective optimization model of the air distribution of a grate cooler by genetic algorithm is proposed. • The Pareto front is obtained and validated by comparison with operating data. • Optimal schemes are compared and selected according to the engineering background. • Total power consumption after optimization decreases by 61.1%. • The clinker layer on three grate plates is thinner. - Abstract: The cooling air distributions of a grate cooler exert a great influence on the clinker cooling efficiency and the power consumption of the cooling fans. A multi-objective optimization model of the air distributions of a grate cooler, using a cross-flow heat exchanger analogy, is proposed in this paper. Firstly, thermodynamic and flow models of the clinker cooling process are derived. Then, based on entropy generation minimization analysis, modified entropy generation numbers caused by heat transfer and pressure drop are chosen as objective functions, which are optimized by a genetic algorithm. The design variables are the superficial velocities of the air chambers and the thicknesses of the clinker layers on the different grate plates. A set of Pareto optimal solutions, in which the two objectives are optimized simultaneously, is achieved. Scattered distributions of the design variables, resulting from the conflict between the two objectives, are brought out. The final optimal air distribution and clinker layer thicknesses are selected from the Pareto optimal solutions based on minimization of the power consumption of the cooling fans and validated by measurements. Compared with the actual operating scheme, the total air volume of the optimized scheme decreases by 2.4%, the total power consumption of the cooling fans decreases by 61.1% and the outlet temperature of the clinker decreases by 122.9 °C, which shows a remarkable energy-saving effect.

  7. Sterile neutrino in a minimal three-generation see-saw model

    Indian Academy of Sciences (India)

    Sterile neutrino in a minimal three-generation see-saw model. Table 1 lists the relevant right-handed fermion and scalar fields and their transformation properties under SU(2)L × U(1)I3R × U(1)B−L and under SU(2)L × U(1)Y, where the hypercharge is defined as Y = I3R + (B−L)/2.

  8. Lunar and Lagrangian Point L1 L2 CubeSat Communication and Navigation Considerations

    Science.gov (United States)

    Schaire, Scott; Wong, Yen F.; Altunc, Serhat; Bussey, George; Shelton, Marta; Folta, Dave; Gramling, Cheryl; Celeste, Peter; Anderson, Mile; Perrotto, Trish

    2017-01-01

    CubeSats have grown in sophistication to the point that relatively low-cost mission solutions could be undertaken for planetary exploration. There are unique considerations for lunar and L1/L2 CubeSat communication and navigation compared with low earth orbit CubeSats. This paper explores those considerations as they relate to the Lunar IceCube Mission. The Lunar IceCube is a CubeSat mission led by Morehead State University with participation from NASA Goddard Space Flight Center, Jet Propulsion Laboratory, the Busek Company and Vermont Tech. It will search for surface water ice and other resources from a high inclination lunar orbit. Lunar IceCube is one of a select group of CubeSats designed to explore beyond low-earth orbit that will fly on NASA’s Space Launch System (SLS) as secondary payloads for Exploration Mission (EM) 1. Lunar IceCube and the EM-1 CubeSats will lay the groundwork for future lunar and L1/L2 CubeSat missions. This paper discusses communication and navigation needs for the Lunar IceCube mission and navigation and radiation tolerance requirements related to lunar and L1/L2 orbits. Potential CubeSat radios and antennas for such missions are investigated and compared. Ground station coverage, link analysis, and ground station solutions are also discussed. This paper will describe modifications in process for the Morehead ground station, as well as further enhancements of the Morehead ground station and NASA Near Earth Network (NEN) that are being considered. The potential NEN enhancements include upgrading current NEN Cortex receiver with Forward Error Correction (FEC) Turbo Code, providing X-band uplink capability, and adding ranging options. The benefits of ground station enhancements for CubeSats flown on NASA Exploration Missions (EM) are presented. This paper also describes how the NEN may support lunar and L1/L2 CubeSats without any enhancements. In addition, NEN is studying other initiatives to better support the CubeSat community

  9. MINIMIZACIÓN DE UNA FUNCIÓN DE ORDEN P MEDIANTE UN ALGORITMO GENÉTICO // MINIMIZING A FUNCTION OF ORDER P USING A GENETIC ALGORITHM

    Directory of Open Access Journals (Sweden)

    Rómulo Castillo Cárdenas

    2013-06-01

    Full Text Available In this work we consider the OVO (order value optimization) problem. The problem we address is to minimize f by means of a genetic algorithm that, by its very nature, has the advantage over existing continuous optimization methods of finding global minimizers. We illustrate the application of this algorithm on the examples considered, showing its effectiveness in solving them.

  10. Biodegradable films of starch/PVOH/alginate in packaging systems for minimally processed lettuce (Lactuca sativa L.

    Directory of Open Access Journals (Sweden)

    Renata Paula Herrera Brandelero

    Full Text Available ABSTRACT Biodegradable packaging may replace non-biodegradable materials when the shelf life of the packaged product is relatively short, as in minimally processed foods. The objective of this work was to evaluate the efficiency of biodegradable films comprising starch/polyvinyl alcohol (PVOH)/alginate with the addition of 0 or 0.5% of essential oil of copaiba (EOCP) or lemongrass (EOLM), compared to polyvinyl chloride (PVC) films, in the storage of minimally processed lettuce. Lettuce samples cut into 1-cm strips were placed in polypropylene trays wrapped with biodegradable films and stored at 6 ± 2 °C for 8 days. PVC films were used as controls. The biofilms presented 11.43-8.11 MPa resistance and 11.3-13.22% elongation, with a water vapor permeability (WVP) of 0.5-4.04 × 10⁻¹² g·s⁻¹·Pa⁻¹·m⁻¹; thus, the films' properties were considered suitable for the application. The lettuce stored in PVC presented lower total soluble solids (TSS), less luminosity (L), a higher intensity of yellow color (b), and eight times less mass loss than that stored in biodegradable films. Multivariate analysis showed that the lettuce lost quality after 2 days of storage in PVC films, a different result from the other treatments. Lettuce stored in biodegradable films for 2 and 4 days showed a greater similarity with newly harvested lettuce (time zero). The films with or without the addition of essential oil showed similar characteristics. Biodegradable films were considered viable for the storage of minimally processed lettuce.

  11. Methods of X-ray CT image reconstruction from few projections; Methodes de reconstruction d'images a partir d'un faible nombre de projections en tomographie par rayons X

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H.

    2011-10-24

    To improve the safety (low dose) and the productivity (fast acquisition) of an X-ray CT system, we want to reconstruct a high quality image from a small number of projections. The classical reconstruction algorithms generally fail since the reconstruction procedure is unstable and suffers from artifacts. A new approach based on the recently developed 'Compressed Sensing' (CS) theory assumes that the unknown image is in some sense 'sparse' or 'compressible', and the reconstruction is formulated as a nonlinear optimization problem (TV/l1 minimization) that enhances this sparsity. Using the pixel (or voxel in 3D) as basis, to apply the CS framework in CT one usually needs a 'sparsifying' transform, and combines it with the 'X-ray projector' which applies on the pixel image. In this thesis, we have adapted a 'CT-friendly' radial basis of Gaussian family called 'blob' to the CS-CT framework. The blob has better space-frequency localization properties than the pixel, and many operations, such as the X-ray transform, can be evaluated analytically and are highly parallelizable (on GPU platforms). Compared to the classical Kaiser-Bessel blob, the new basis has a multi-scale structure: an image is the sum of dilated and translated radial Mexican hat functions. Typical medical objects are compressible under this basis, so the sparse representation system used in ordinary CS algorithms is no longer needed. 2D simulations show that the existing TV and l1 algorithms are more efficient and the reconstructions have better visual quality than the equivalent approaches based on the pixel or wavelet basis. The new approach has also been validated on 2D experimental data, where we observed that in general the number of projections can be reduced to about 50% without compromising the image quality. (author)
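At the heart of the CS formulation above is an l1-penalized least-squares problem, min (1/2)||Ax − y||² + λ||x||₁. A generic way to solve it, independent of the blob basis discussed in the thesis, is iterative soft-thresholding (ISTA); the sketch below is a minimal illustration on a random sensing matrix, not the thesis' CT solver:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    # Iterative soft-thresholding for min_x (1/2)||Ax - y||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step on the data term
        # Soft-thresholding (proximal step for the l1 penalty).
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x
```

With a Gaussian sensing matrix and a sufficiently sparse signal, this recovers the signal from far fewer measurements than unknowns, which is the effect the thesis exploits to cut the number of CT projections.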

  12. The Design of Morphological/Linguistic Data in L1 and L2 ...

    African Journals Online (AJOL)

    learners' MA of an L1 or L2 can be measured (and, as an extension to this, that users' and .... and they could activate these strategies and skills when confronted with new .... cussion of a wide range of morphological phenomena in English.

  13. Oestrogen directly inhibits the cardiovascular L-type Ca2+ channel Cav1.2

    International Nuclear Information System (INIS)

    Ullrich, Nina D.; Koschak, Alexandra; MacLeod, Kenneth T.

    2007-01-01

    Oestrogen can modify the contractile function of vascular smooth muscle and cardiomyocytes. The negative inotropic actions of oestrogen on the heart and coronary vasculature appear to be mediated by L-type Ca2+ channel (Cav1.2) inhibition, but the underlying mechanisms remain elusive. We tested the hypothesis that oestrogen directly inhibits the cardiovascular L-type Ca2+ current, ICaL. The effect of oestrogen on ICaL was measured in Cav1.2-transfected HEK-293 cells using the whole-cell patch-clamp technique. The current revealed typical activation and inactivation profiles of nifedipine- and cadmium-sensitive ICaL. Oestrogen (50 μM) rapidly reduced ICaL by 50% and shifted voltage-dependent activation and availability to more negative potentials. Furthermore, oestrogen blocked the Ca2+ channel in a rate-dependent way, exhibiting higher efficiency of block at higher stimulation frequencies. Our data suggest that oestrogen inhibits ICaL through direct interaction of the steroid with the channel protein

  14. New approach to high energy SU(2)L × U(1) radiative corrections

    International Nuclear Information System (INIS)

    Ward, B.F.L.

    1988-07-01

    We present a new approach to SU(2)L × U(1) radiative corrections at high energies. Our approach is based on the infrared summation methods of Yennie, Frautschi and Suura, taken together with the Weinberg-'t Hooft renormalization group equation. Specific processes which have been realized via explicit Monte Carlo algorithms are e+e− → f f̄′ + n(γ), f = μ, τ, d, s, u, c, b or t, and e+e− → e+e− + n(γ), where n(γ) denotes multiple photon emission on an event-by-event basis. Exemplary Monte Carlo data are presented. 16 refs., 4 figs

  15. Gold nanoparticles functionalized with a fragment of the neural cell adhesion molecule L1 stimulate L1-mediated functions

    Science.gov (United States)

    Schulz, Florian; Lutz, David; Rusche, Norman; Bastús, Neus G.; Stieben, Martin; Höltig, Michael; Grüner, Florian; Weller, Horst; Schachner, Melitta; Vossmeyer, Tobias; Loers, Gabriele

    2013-10-01

    The neural cell adhesion molecule L1 is involved in nervous system development and promotes regeneration in animal models of acute and chronic injury of the adult nervous system. To translate these conducive functions into therapeutic approaches, a 22-mer peptide that encompasses a minimal and functional L1 sequence of the third fibronectin type III domain of murine L1 was identified and conjugated to gold nanoparticles (AuNPs) to obtain constructs that interact homophilically with the extracellular domain of L1 and trigger the cognate beneficial L1-mediated functions. Covalent conjugation was achieved by reacting mixtures of two cysteine-terminated forms of this L1 peptide and thiolated poly(ethylene) glycol (PEG) ligands (~2.1 kDa) with citrate stabilized AuNPs of two different sizes (~14 and 40 nm in diameter). By varying the ratio of the L1 peptide-PEG mixtures, an optimized layer composition was achieved that resulted in the expected homophilic interaction of the AuNPs. These AuNPs were stable as tested over a time period of 30 days in artificial cerebrospinal fluid and interacted with the extracellular domain of L1 on neurons and Schwann cells, as could be shown by using cells from wild-type and L1-deficient mice. In vitro, the L1-derivatized particles promoted neurite outgrowth and survival of neurons from the central and peripheral nervous system and stimulated Schwann cell process formation and proliferation. These observations raise the hope that, in combination with other therapeutic approaches, L1 peptide-functionalized AuNPs may become a useful tool to ameliorate the deficits resulting from acute and chronic injuries of the mammalian nervous system.

  16. Design of 2-D Recursive Filters Using Self-adaptive Mutation Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Lianghong Wu

    2011-08-01

    Full Text Available This paper investigates a novel approach to the design of two-dimensional recursive digital filters using the differential evolution (DE) algorithm. The design task is reformulated as a constrained minimization problem and is solved by a Self-adaptive Mutation DE algorithm (SAMDE), which adopts an adaptive mutation operator that combines the advantages of the DE/rand/1/bin strategy and the DE/best/2/bin strategy. As a result, its convergence performance is improved greatly. Numerical experiment results confirm this conclusion. The proposed SAMDE approach is applied to a numerical test example and is compared with previous design methods. The computational experiments show that the SAMDE approach obtains better results than previous design methods.
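The adaptive mutation idea, blending the explorative DE/rand/1 donor with the exploitative DE/best/2 donor, can be sketched as below. The linear blending schedule and all parameter values are hypothetical choices of mine for illustration, not the paper's SAMDE settings:

```python
import random

def samde(f, dims, bounds, pop_size=30, gens=100, F=0.5, CR=0.9, seed=2):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dims)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for g in range(gens):
        lam = g / gens                    # 0 -> pure rand/1, 1 -> pure best/2
        best = pop[min(range(pop_size), key=lambda i: fit[i])]
        for i in range(pop_size):
            r1, r2, r3, r4, r5 = rng.sample(
                [j for j in range(pop_size) if j != i], 5)
            mutant = [
                # Convex blend of the DE/rand/1 and DE/best/2 donor vectors.
                (1 - lam) * (pop[r1][d] + F * (pop[r2][d] - pop[r3][d]))
                + lam * (best[d] + F * (pop[r2][d] - pop[r3][d])
                         + F * (pop[r4][d] - pop[r5][d]))
                for d in range(dims)
            ]
            # Binomial crossover, then greedy one-to-one selection.
            trial = [mutant[d] if rng.random() < CR else pop[i][d]
                     for d in range(dims)]
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    j = min(range(pop_size), key=lambda i: fit[i])
    return pop[j], fit[j]
```

Early generations behave like DE/rand/1/bin (broad search); late generations behave like DE/best/2/bin (refinement around the incumbent), which is the exploration-to-exploitation shift the abstract credits for the improved convergence.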

  17. Measurement of the proton structure function F{sub L}(x,Q{sup 2}) with the H1 detector at HERA

    Energy Technology Data Exchange (ETDEWEB)

    Piec, Sebastian

    2010-12-15

    A measurement of the inclusive cross section for the deep-inelastic scattering of positrons on protons at low four-momentum transfer squared Q{sup 2} is presented. The measurement is used for the extraction of the longitudinal proton structure function F{sub L}. The analysis is based on data collected by the H1 experiment during special low-energy runs in the year 2007. The direct technique of F{sub L} determination, based on the extraction of the reduced DIS cross sections at three different centre-of-mass energies, is used. For the purpose of the analysis a dedicated electron finder has been developed and integrated with the standard H1 reconstruction software H1REC. The algorithm employs information from two independent tracking detectors, the Backward Silicon Tracker and the Central Jet Chamber. The performance of the finder is studied. The thesis presents the cross section and F{sub L} measurements in the range 2.5 GeV{sup 2}{<=}Q{sup 2}{<=}25 GeV{sup 2}. (orig.)
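
The direct technique rests on the standard relation sigma_r = F2 - (y^2/Y+) * FL with Y+ = 1 + (1-y)^2: at fixed x and Q^2, measurements at several beam energies give several inelasticities y, and a straight-line fit of sigma_r against y^2/Y+ yields F2 as the intercept and -FL as the slope. A minimal sketch on synthetic numbers (all values invented):

```python
import numpy as np

def extract_fl(y, sigma_r):
    """Fit sigma_r = F2 - (y^2 / Y+) * FL with Y+ = 1 + (1 - y)^2,
    using points at fixed x, Q^2 but different y.  Returns (F2, FL)."""
    rho = y**2 / (1.0 + (1.0 - y)**2)          # kinematic factor y^2 / Y+
    slope, intercept = np.polyfit(rho, sigma_r, 1)
    return intercept, -slope                    # F2 = intercept, FL = -slope

# synthetic check: three beam energies -> three y values at one (x, Q^2) point
y = np.array([0.2, 0.5, 0.8])
f2_true, fl_true = 1.2, 0.3                     # invented values
sigma = f2_true - y**2 / (1 + (1 - y)**2) * fl_true
f2, fl = extract_fl(y, sigma)
```

A real extraction would of course propagate statistical and correlated systematic uncertainties through the fit.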

  18. Preliminary evaluation of an algorithm to minimize the power error selection of an aspheric intraocular lens by optimizing the estimation of the corneal power and the effective lens position

    Directory of Open Access Journals (Sweden)

    David P. Piñero

    2016-06-01

    Full Text Available AIM: To evaluate the refractive predictability achieved with an aspheric intraocular lens (IOL) and to develop a preliminary optimized algorithm for the calculation of its power (PIOL). METHODS: This study included 65 eyes implanted with the aspheric IOL LENTIS L-313 (Oculentis GmbH) that were divided into 2 groups: 12 eyes (8 patients) with PIOL≥23.0 D (group A), and 53 eyes (35 patients) with PIOL<23.0 D (group B). An adjusted IOL power (PIOLadj) was calculated considering a variable refractive index for corneal power estimation, the refractive outcome obtained, and an adjusted effective lens position (ELPadj) according to age and anatomical factors. RESULTS: Postoperative spherical equivalent ranged from -0.75 to +0.75 D and from -1.38 to +0.75 D in groups A and B, respectively. No statistically significant differences were found in groups A (P=0.64) and B (P=0.82) between PIOLadj and the IOL power implanted (PIOLReal). The Bland and Altman analysis showed ranges of agreement between PIOLadj and PIOLReal of +1.11 to -0.96 D and +1.14 to -1.18 D in groups A and B, respectively. Clinically and statistically significant differences were found between PIOLadj and the PIOL obtained with the Hoffer Q and Holladay I formulas. CONCLUSION: The refractive predictability of cataract surgery with implantation of an aspheric IOL can be optimized using paraxial optics combined with linear algorithms to minimize the error associated with the estimation of corneal power and ELP.

  19. Search for Minimal Standard Model and Minimal Supersymmetric Model Higgs Bosons in e+ e- Collisions with the OPAL detector at LEP

    International Nuclear Information System (INIS)

    Ganel, Ofer

    1993-06-01

    When the LEP machine was turned on in August 1989, a new era opened. For the first time, direct, model-independent searches for the Higgs boson could be carried out. The Minimal Standard Model Higgs boson is expected to be produced in e + e - collisions via the process e + e - -> H o Z o . The Minimal Supersymmetric Model Higgs bosons are expected to be produced in the analogous e + e - -> h o Z o process or in pairs via the process e + e - -> h o A o . In this thesis we describe the search for Higgs bosons within the framework of the Minimal Standard Model and the Minimal Supersymmetric Model, using the data accumulated by the OPAL detector at LEP in the 1989, 1990, 1991 and part of the 1992 running periods at and around the Z o pole. A Minimal Supersymmetric Model Higgs boson generator is described, as well as its use in several different searches. As a result of this work, the Minimal Standard Model Higgs boson mass is bounded from below by 54.2 GeV/c 2 at 95% C.L. This is, at present, the highest such bound. A novel method of overcoming the m τ and m s dependence of Minimal Supersymmetric Higgs boson production and decay introduced by one-loop radiative corrections is used to obtain model-independent exclusions. The thesis also describes an algorithm for offline identification of calorimeter noise in the OPAL detector. (author)

  20. An Adaptive Pruning Algorithm for the Discrete L-Curve Criterion

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Toke Koldborg; Rodriguez, Giuseppe

    2004-01-01

    SVD or regularizing CG iterations). Our algorithm needs no pre-defined parameters, and in order to capture the global features of the curve in an adaptive fashion, we use a sequence of pruned L-curves that correspond to considering the curves at different scales. We compare our new algorithm...

  1. Inducible targeting of CNS astrocytes in Aldh1l1-CreERT2 BAC transgenic mice [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Jan Winchenbach

    2016-12-01

    Full Text Available Background: Studying astrocytes in higher brain functions has been hampered by the lack of genetic tools for the efficient expression of inducible Cre recombinase throughout the CNS, including the neocortex. Methods: Therefore, we generated BAC transgenic mice, in which CreERT2 is expressed under control of the Aldh1l1 regulatory region. Results: When crossbred to Cre reporter mice, adult Aldh1l1-CreERT2 mice show efficient gene targeting in astrocytes. No such Cre-mediated recombination was detectable in CNS neurons, oligodendrocytes, and microglia. As expected, Aldh1l1-CreERT2 expression was evident in several peripheral organs, including liver and kidney. Conclusions: Taken together, Aldh1l1-CreERT2 mice are a useful tool for studying astrocytes in neurovascular coupling, brain metabolism, synaptic plasticity and other aspects of neuron-glia interactions.

  2. Data of evolutionary structure change: 1AIFL-2AI0L [Confc[Archive

    Lifescience Database Archive (English)

    Full Text Available /ss_2> 0 n> 1AIF L...n> 1AIFLn> VSSSI----SSSNL 2AI0 Ln> 2AI0L...1AIFL-2AI0L 1AIF 2AI0 L L DIQLTQSPAFMAASPGEKVTITCSVSSSI----SSSNLH...line> SER CA 273 SER CA 260 ASN CA 337 LEU CA 408

  3. A Heuristic Scheduling Algorithm for Minimizing Makespan and Idle Time in a Nagare Cell

    Directory of Open Access Journals (Sweden)

    M. Muthukumaran

    2012-01-01

    Full Text Available Adopting a focused factory is a powerful approach for today's manufacturing enterprises. This paper introduces a basic manufacturing concept for a struggling manufacturer with limited conventional resources, providing an alternative solution to cell scheduling by implementing the technique of the Nagare cell. The Nagare cell is a Japanese concept with more objectives than the cellular manufacturing system. It is a combination of manual and semiautomatic machine layouts arranged as cells, which gives maximum output flexibility for all kinds of low-to-medium- and medium-to-high-volume production. The solution adopted is to create a dedicated group of conventional machines, all but one of which are already available on the shop floor. This paper focuses on the step-by-step development of a heuristic scheduling algorithm. The algorithm first calculates the summation of the processing times of all products on each machine and then sorts these sums by the shortest processing time rule to obtain the assignment schedule. Based on the assignment schedule, the Nagare cell layout is arranged for processing the products. In addition, the algorithm provides steps to determine the product ready time, machine idle time, and product idle time. The Gantt chart, the experimental analysis, and comparative results are illustrated with five (1×8 to 5×8) scheduling problems. Finally, the objective of minimizing makespan and idle time with greater customer satisfaction is studied.
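
A minimal sketch of the sorting step described above, under the assumption that machines are ordered by ascending total processing time and products then flow through them in that order (the paper's actual Nagare cell layout rules are richer than this):

```python
import numpy as np

def spt_schedule(p):
    """p[i, j] = processing time of product i on machine j.
    Order machines by ascending total processing time (shortest
    processing time rule on the column sums), then compute flow-line
    completion times with the usual recurrence
    C[i, j] = max(C[i-1, j], C[i, j-1]) + p[i, j]."""
    order = np.argsort(p.sum(axis=0), kind="stable")   # machine order
    q = p[:, order]
    n, m = q.shape
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            C[i, j] = max(C[i - 1, j] if i else 0.0,
                          C[i, j - 1] if j else 0.0) + q[i, j]
    makespan = C[-1, -1]
    machine_idle = makespan - q.sum(axis=0)  # idle (incl. trailing) per machine
    return order, makespan, machine_idle
```

For a 2x2 instance p = [[2, 1], [1, 2]] this gives a makespan of 5 with 2 units of idle time on each machine.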

  4. Effect of L1-ORF2 on senescence of GES-1 cells and its molecular mechanisms

    Directory of Open Access Journals (Sweden)

    Ying-nan LI

    2016-06-01

    Full Text Available Objective  To investigate the effect of the long interspersed nuclear element 1 open reading frame 2 (L1-ORF2) gene on the senescence of GES-1 cells and its mechanism of molecular regulation. Methods  Culture in high-glucose medium was used to construct a stable model of senescent GES-1 cells. An L1-ORF2 siRNA vector was constructed and then transfected into normal and senescent GES-1 cells with liposome transfection reagents for transient expression. Forty-eight hours after transfection, cell growth curves were drawn to show the speed of cell proliferation, flow cytometry was used to analyze the cell cycle, β-galactosidase staining to detect cell aging, and Western blotting to detect the expressions of L1-ORF2, P53 and P21 proteins. Results  The senescent GES-1 cell model and the L1-ORF2 siRNA vector were constructed. Compared with the negative control group, L1-ORF2 expression decreased in normal and senescent GES-1 cells transfected with the L1-ORF2 siRNA vector. There was faster proliferation (P<0.05), a lower ratio of β-galactosidase staining (56% vs 69%, P<0.05) and a lower G0/G1 phase fraction (34.2% vs 39.3%, P<0.05) in senescent GES-1 cells transfected with the L1-ORF2 siRNA vector than in those transfected with the negative control vector, while there was no obvious difference between normal GES-1 cells transfected with the L1-ORF2 siRNA vector and the negative control vector (P>0.05). P53 protein was expressed only in senescent GES-1 cells, while P21 protein was expressed in both normal and senescent GES-1 cells, with a higher expression level in the latter (P<0.05). The GES-1 cells transfected with the L1-ORF2 siRNA vector showed lower expressions of P53 and P21 proteins than those transfected with the negative control vector (P<0.05). Conclusions  The L1-ORF2 siRNA vector could down-regulate the expression of L1-ORF2 protein in normal and senescent GES-1 cells and promote the proliferation of senescent GES-1 cells. P21 and P53 proteins participate in the process of L1-ORF2 regulating

  5. Reading in L2 (English) and L1 (Persian): An Investigation into Reverse Transfer of Reading Strategies

    Science.gov (United States)

    Talebi, Seyed Hassan

    2012-01-01

    This study investigates the effect of reading strategies instruction in L2 (English) on raising reading strategies awareness and use and reading ability of Iranian EFL learners in L2 (English) and L1 (Persian) as a result of transfer of reading strategies from L2 to L1. To this purpose, 120 students of intermediate and advanced English proficiency…

  6. Requirements for an "ideal" bilingual L1L2 translation-oriented ...

    African Journals Online (AJOL)

    The major aim of this article is to outline the requirements for an "ideal" bilingual L1L2 dictionary of the general vocabulary specifically designed for the purposes of professional translation. The article challenges three commonly accepted beliefs: (a) a bilingual dictionary equals a translation dictionary; (b) a bilingual ...

  7. 2-Phase NSGA II: An Optimized Reward and Risk Measurements Algorithm in Portfolio Optimization

    Directory of Open Access Journals (Sweden)

    Seyedeh Elham Eftekharian

    2017-11-01

    Full Text Available Portfolio optimization is a serious challenge for financial engineering and has drawn special attention among investors. It has two objectives: to maximize the reward, calculated by expected return, and to minimize the risk, with variance considered as the risk measure. There are many real-world constraints, such as the cardinality constraint, that ultimately lead to a non-convex search space. As a consequence, parametric quadratic programming cannot be applied, and it becomes essential to apply a multi-objective evolutionary algorithm (MOEA). In this paper, a new efficient multi-objective portfolio optimization algorithm called the 2-phase NSGA II algorithm is developed, and its results are compared with the NSGA II algorithm. It was found that 2-phase NSGA II significantly outperformed the NSGA II algorithm.
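
To make the two-objective setting concrete, here is the core filter inside any NSGA-II-style method: Pareto dominance over (expected return, risk) pairs and the first non-dominated front. This is a generic sketch of the dominance relation, not the paper's 2-phase algorithm:

```python
def dominates(a, b):
    """a, b = (expected_return, risk).  a dominates b when it is no worse
    in both objectives (return maximized, risk minimized) and strictly
    better in at least one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

def pareto_front(portfolios):
    """First non-dominated front: portfolios no other portfolio dominates."""
    return [p for p in portfolios
            if not any(dominates(q, p) for q in portfolios if q is not p)]
```

NSGA-II repeatedly peels off such fronts to rank a population before selection and crowding-distance sorting.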

  8. How does language distance between L1 and L2 affect the L2 brain network? An fMRI study of Korean-Chinese-English trilinguals.

    Science.gov (United States)

    Kim, Say Young; Qi, Ting; Feng, Xiaoxia; Ding, Guosheng; Liu, Li; Cao, Fan

    2016-04-01

    The present study tested the hypothesis that language distance between first language (L1) and second language (L2) influences the assimilation and accommodation pattern in Korean-Chinese-English trilinguals. The distance between English and Korean is smaller than that between Chinese and Korean in terms of orthographic transparency, because both English and Korean are alphabetic, whereas Chinese is logographic. During fMRI, Korean trilingual participants performed a visual rhyming judgment task in three languages (Korean: KK, Chinese: KC, English: KE). Two L1 control groups were native Chinese and English speakers performing the task in their native languages (CC and EE, respectively). The general pattern of brain activation of KC was more similar to that of CC than KK, suggesting accommodation. Higher accuracy in KC was associated with decreased activation in regions of the KK network, suggesting reduced assimilation. In contrast, the brain activation of KE was more similar to that of KK than EE, suggesting assimilation. Higher accuracy in KE was associated with decreased activation in regions of the EE network, suggesting reduced accommodation. Finally, an ROI analysis on the left middle frontal gyrus revealed greater activation for KC than for KE, suggesting its selective involvement in the L2 with more arbitrary mapping between orthography and phonology (i.e., Chinese). Taken together, the brain network involved in L2 reading is similar to the L1 network when L2 and L1 are similar in orthographic transparency, while significant accommodation is expected when L2 is more opaque than L1. Copyright © 2015. Published by Elsevier Inc.

  9. Stable 1-Norm Error Minimization Based Linear Predictors for Speech Modeling

    DEFF Research Database (Denmark)

    Giacobello, Daniele; Christensen, Mads Græsbøll; Jensen, Tobias Lindstrøm

    2014-01-01

    In linear prediction of speech, the 1-norm error minimization criterion has been shown to provide a valid alternative to the 2-norm minimization criterion. However, unlike 2-norm minimization, 1-norm minimization does not guarantee the stability of the corresponding all-pole filter and can generate...... saturations when this is used to synthesize speech. In this paper, we introduce two new methods to obtain intrinsically stable predictors with the 1-norm minimization. The first method is based on constraining the roots of the predictor to lie within the unit circle by reducing the numerical range...... based linear prediction for modeling and coding of speech....
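
As a sketch of 1-norm linear prediction, the snippet below approximates the 1-norm criterion by iteratively reweighted least squares (a common surrogate; the paper's own stabilizing constructions, such as constraining the roots, are not reproduced here) and then checks all-pole filter stability via the predictor-polynomial roots:

```python
import numpy as np

def lp_coeffs_1norm(x, order, iters=30, eps=1e-8):
    """Predictor a minimizing (approximately) sum |x[n] - sum_k a_k x[n-k]|
    via iteratively reweighted least squares."""
    n = len(x)
    # row for sample n has columns x[n-1], ..., x[n-order]
    A = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    b = x[order:]
    a = np.linalg.lstsq(A, b, rcond=None)[0]          # 2-norm start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(b - A @ a), eps)  # reweight toward L1
        Aw = A * w[:, None]
        a = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return a

def is_stable(a):
    """1/(1 - sum_k a_k z^-k) is stable iff all roots of
    z^p - a_1 z^(p-1) - ... - a_p lie inside the unit circle."""
    return bool(np.all(np.abs(np.roots(np.concatenate(([1.0], -a)))) < 1.0))
```

On an exact AR(1) signal x[n] = 0.5 x[n-1] this recovers a = [0.5], which is stable; the point of the paper is that on real speech the unconstrained 1-norm solution need not be.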

  10. Formula I(1 and I(2: Race Tracks for Likelihood Maximization Algorithms of I(1 and I(2 Cointegrated VAR Models

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2017-11-01

    Full Text Available This paper provides some test cases, called circuits, for the evaluation of Gaussian likelihood maximization algorithms of the cointegrated vector autoregressive model. Both I(1 and I(2 models are considered. The performance of algorithms is compared first in terms of effectiveness, defined as the ability to find the overall maximum. The next step is to compare their efficiency and reliability across experiments. The aim of the paper is to commence a collective learning project by the profession on the actual properties of algorithms for cointegrated vector autoregressive model estimation, in order to improve their quality and, as a consequence, also the reliability of empirical research.

  11. Model-based minimization algorithm of a supercritical helium loop consumption subject to operational constraints

    Science.gov (United States)

    Bonne, F.; Bonnay, P.; Girard, A.; Hoa, C.; Lacroix, B.; Le Coz, Q.; Nicollet, S.; Poncet, J.-M.; Zani, L.

    2017-12-01

    Supercritical helium loops at 4.2 K are the baseline cooling strategy for the superconducting magnets of tokamaks (JT-60SA, ITER, DEMO, etc.). These loops work with cryogenic circulators that force a supercritical helium flow through the superconducting magnets so that the temperature stays within the working range all along their length. This paper shows that a supercritical helium loop associated with a saturated liquid helium bath can satisfy temperature constraints in different ways (playing on the bath temperature and on the supercritical flow), but that only one is optimal from an energy point of view (every watt consumed at 4.2 K consumes at least 220 W of electrical power). To find the optimal operational conditions, an algorithm capable of minimizing an objective function (energy consumption at 5 bar, 5 K) subject to constraints has been written. This algorithm works with a supercritical loop model realized with the Simcryogenics [2] library. This article describes the model used and the results of the constrained optimization. It will be seen that changes in the operating point of the magnet temperature (e.g. in case of a change in the plasma configuration) involve large changes in the optimal operating point of the cryodistribution. Recommendations will be made to ensure that the energy consumption is kept as low as possible despite the changing operating point. This work is partially supported by the EUROfusion Consortium through the Euratom Research and Training Programme 2014-2018 under Grant 633053.

  12. The impact of language co-activation on L1 and L2 speech fluency.

    Science.gov (United States)

    Bergmann, Christopher; Sprenger, Simone A; Schmid, Monika S

    2015-10-01

    Fluent speech depends on the availability of well-established linguistic knowledge and routines for speech planning and articulation. A lack of speech fluency in late second-language (L2) learners may point to a deficiency of these representations, due to incomplete acquisition. Experiments on bilingual language processing have shown, however, that there are strong reasons to believe that multilingual speakers experience co-activation of the languages they speak. We have studied to what degree language co-activation affects fluency in the speech of bilinguals, comparing a monolingual German control group with two bilingual groups: 1) first-language (L1) attriters, who had fully acquired German before emigrating to an L2 English environment, and 2) immersed L2 learners of German (L1: English). We have analysed the temporal fluency and the incidence of disfluency markers (pauses, repetitions and self-corrections) in spontaneous film retellings. Our findings show that learners speak more slowly than controls and attriters. Also, on each count, the speech of at least one of the bilingual groups contains more disfluency markers than the retellings of the control group. Generally speaking, both bilingual groups-learners and attriters-are equally (dis)fluent and significantly more disfluent than the monolingual speakers. Given that the L1 attriters are unaffected by incomplete acquisition, we interpret these findings as evidence for language competition during speech production. Copyright © 2015. Published by Elsevier B.V.

  13. L1 influence in the L2 acquisition of isiXhosa verb placement by ...

    African Journals Online (AJOL)

    1. Introduction. In second language (L2) acquisition research conducted within ..... versus Afrikaans-speaking beginner learners of isiXhosa, with regard to verb ..... Language contact within one home is also illustrated by the case of the four L1 ...

  14. Low-dose dual-energy cone-beam CT using a total-variation minimization algorithm

    International Nuclear Information System (INIS)

    Min, Jong Hwan

    2011-02-01

    Dual-energy cone-beam CT is an important imaging modality in diagnostic applications, and may also find use in other applications such as therapeutic image guidance. Despite its clinical value, the relatively high radiation dose of a dual-energy scan may pose a challenge to its wide use. In this work, we investigated a low-dose, pre-reconstruction type of dual-energy cone-beam CT (CBCT) using a total-variation minimization algorithm for image reconstruction. An empirical dual-energy calibration method was used to prepare material-specific projection data. Raw data at high and low tube voltages are converted into a set of basis functions which can be linearly combined to produce material-specific data using the coefficients obtained through the calibration process. From many fewer views than are conventionally used, material-specific images are reconstructed by use of the total-variation minimization algorithm. An experimental study was performed to demonstrate the feasibility of the proposed method using a micro-CT system. We reconstructed images of the phantoms from only 90 projections acquired at tube voltages of 40 kVp and 90 kVp each. Aluminum-only and acryl-only images were successfully decomposed. We evaluated the quality of the reconstructed images by use of contrast-to-noise ratio and detectability. A low-dose dual-energy CBCT can thus be realized via the proposed method by greatly reducing the number of projections.
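
The TV-minimization ingredient can be sketched in isolation: an isotropic TV norm and one descent step on it. Practical few-view solvers (e.g. ASD-POCS-style schemes) alternate such TV descent with projections enforcing consistency with the measured data, which is omitted here; the step size and the numerical gradient are simplifications for the sketch:

```python
import numpy as np

def tv_norm(img, eps=1e-8):
    """Isotropic total variation with a small smoothing epsilon."""
    gx = np.diff(img, axis=0, append=img[-1:, :])   # forward differences,
    gy = np.diff(img, axis=1, append=img[:, -1:])   # replicated boundary
    return float(np.sum(np.sqrt(gx ** 2 + gy ** 2 + eps)))

def tv_descent_step(img, step=0.05, h=1e-5):
    """One normalized gradient-descent step on the (smoothed) TV term,
    with the gradient taken by central finite differences."""
    grad = np.zeros_like(img)
    for idx in np.ndindex(img.shape):
        e = np.zeros_like(img)
        e[idx] = h
        grad[idx] = (tv_norm(img + e) - tv_norm(img - e)) / (2 * h)
    norm = np.linalg.norm(grad)
    return img - step * grad / norm if norm > 0 else img
```

Each step reduces the image's total variation, which is what suppresses the streak artifacts that otherwise dominate 90-view reconstructions.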

  15. Rendezvous missions with minimoons from L1

    Science.gov (United States)

    Chyba, M.; Haberkorn, T.; Patterson, G.

    2014-07-01

    We propose to present asteroid capture missions with the so-called minimoons. Minimoons are small asteroids that are temporarily captured objects on orbits in the Earth-Moon system. It has been suggested that, despite their small capture probability, at any time there are one or two meter-diameter minimoons, and progressively greater numbers at smaller diameters. The minimoons' orbits differ significantly from elliptical orbits, which renders a rendezvous mission more challenging; however, they offer many advantages for such missions that overcome this fact. First, they are already on geocentric orbits, which results in short-duration missions with low Delta-v; this translates into cost efficiency and low-risk targets. Second, besides their close proximity to Earth, an advantage is their small size, since it provides us with the luxury to retrieve the entire asteroid and not only a sample of material. Accessing the interior structure of a near-Earth satellite in its morphological context is crucial to an in-depth analysis of the structure of the asteroid. Historically, 2006 RH120 is the only minimoon that has been detected, but work is ongoing to determine which modifications to current observation facilities are necessary to provide detection capabilities. In the event that detection is successful, an efficient algorithm to produce a space mission to rendezvous with the detected minimoon is highly desirable to take advantage of this opportunity. This is the main focus of our work. For the design of the mission we propose the following. The spacecraft is first placed in hibernation on a Lissajous orbit around the libration point L1 of the Earth-Moon system. We focus on eight-shaped Lissajous orbits to take advantage of the stability properties of their invariant manifolds for our transfers, since the cost to minimize is the spacecraft fuel consumption. Once a minimoon has been detected we must choose a point on its orbit to rendezvous (in position and velocities

  16. L2 Word Recognition: Influence of L1 Orthography on Multi-syllabic Word Recognition.

    Science.gov (United States)

    Hamada, Megumi

    2017-10-01

    L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on the position of an embedded word. The participants were Arabic ESL learners, Chinese ESL learners, and native speakers of English. The task was a word search task, in which the participants identified a target word embedded in a pseudoword at the initial, middle, or final position. The search accuracy and speed indicated that all groups showed a strong preference for the initial position. The accuracy data further indicated group differences. The Arabic group showed higher accuracy in the final than middle, while the Chinese group showed the opposite and the native speakers showed no difference between the two positions. The findings suggest that L2 multi-syllabic word recognition involves unique processes.

  17. Numerical simulations of the O(3) and CP1 models using the Langevin equations and the Metropolis algorithm

    International Nuclear Information System (INIS)

    Abdalla, E.; Carneiro, C.E.I.

    1988-12-01

    The O(3) model, the pure CP 1 model and the CP 1 model minimally coupled to fermions are numerically simulated. The equivalence between the O(3) model and the bound state of the pure CP 1 model is investigated. It is shown that: the relations g O(3) = 2 g CP 1 and E O(3) = 2 E CP 1 + 2 for the coupling constants and energies hold beyond the classical level; the mass gap as a function of the coupling is the same for both models. The mass gap for the CP 1 model minimally coupled to fermions is also calculated. The calculations are performed using different techniques. The proposal by Namiki and collaborators to enforce constraints on Langevin equations and Parisi's technique to calculate correlation functions via Langevin equations are tested. The results are compared with those obtained using the multi-hit Metropolis algorithm. (author) [pt

  18. Effect of modified atmosphere applied to minimally processed radicchio (Cichorium intybus L.) submitted to different sanitizing treatments

    Directory of Open Access Journals (Sweden)

    Giuliana de Moura Pereira

    2014-09-01

    Full Text Available The stability of minimally processed radicchio (Cichorium intybus L.) was evaluated under modified atmosphere (2% O2, 5% CO2, and 93% N2) on 3, 5, 7 and 10 days of storage at 5°C. The samples were hygienized in sodium hypochlorite or hydrogen peroxide solutions to identify the most effective sanitizing solution for removing microorganisms. Microbiological analysis was conducted to identify the presence of coliforms at 35°C and 45°C, mesophilic microorganisms, and yeasts and molds. Physicochemical analyses of mass loss, pH, soluble solids, and total acidity were conducted. The color measurements were performed using a Portable Colorimeter model CR-400. The antioxidant activity was determined by the 2,2-diphenyl-1-picrylhydrazyl and 2,2-azino-bis-3-ethylbenzothiazoline-6-sulfonic acid methods. The sensory evaluation was carried out using a hedonic scale to test the overall acceptance of the samples during storage. The sodium hypochlorite (150 mg.L-1) solution provided greater safety to the final product. The values of pH ranged from 6.17 to 6.25, total acidity from 0.405 to 0.435%, soluble solids from 0.5 to 0.6 °Brix, mass loss from 1.7 to 7.2%, and chlorophyll from 1.068 to 0.854 mg/100g. The antioxidant activity of radicchio did not show significant changes during the first 3 days of storage. The overall acceptance of the sample stored in the sealed package without modified atmosphere was 70%, while the fresh sample obtained 77% approval. Although the samples packaged under modified atmosphere had a higher acceptance score, the samples in sealed packages had satisfactory results during the nine days of storage. The use of modified atmosphere, combined with cooling and good manufacturing practices, was sufficient to prolong the life of minimally processed radicchio, Folha Larga cultivar, for up to ten days of storage.

  19. Minimization and parameter estimation for seminorm regularization models with I-divergence constraints

    International Nuclear Information System (INIS)

    Teuber, T; Steidl, G; Chan, R H

    2013-01-01

    In this paper, we analyze the minimization of seminorms ‖L · ‖ on R n under the constraint of a bounded I-divergence D(b, H · ) for rather general linear operators H and L. The I-divergence is also known as Kullback–Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data but also in the case of multiplicative Gamma noise. Often H represents, e.g., a linear blur operator and L is some discrete derivative or frame analysis operator. A central part of this paper consists in proving relations between the parameters of I-divergence constrained and penalized problems. To solve the I-divergence constrained problem, we consider various first-order primal–dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximation problems is an I-divergence constrained least-squares problem which can be solved based on Morozov’s discrepancy principle by a Newton method. We prove that these algorithms produce not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the corresponding penalized problem has the same solution. Furthermore, we derive a rule for automatically setting the constraint parameter for data corrupted by multiplicative Gamma noise. The performance of the various algorithms is finally demonstrated for different image restoration tasks both for images corrupted by Poisson noise and multiplicative Gamma noise. (paper)

  20. Mixed Total Variation and L1 Regularization Method for Optical Tomography Based on Radiative Transfer Equation

    Directory of Open Access Journals (Sweden)

    Jinping Tang

    2017-01-01

    Full Text Available Optical tomography is an emerging and important molecular imaging modality. The aim of optical tomography is to reconstruct the optical properties of human tissues. In this paper, we focus on reconstructing the absorption coefficient based on the radiative transfer equation (RTE). It is an ill-posed parameter identification problem. Regularization methods have been broadly applied to reconstruct the optical coefficients, such as the total variation (TV) regularization and the L1 regularization. In order to better reconstruct the piecewise constant and sparse coefficient distributions, the TV and L1 norms are combined as the regularization. The forward problem is discretized with the discontinuous Galerkin method in the spatial space and the finite element method in the angular space. The minimization problem is solved by a Jacobian-based Levenberg-Marquardt type method which is equipped with a split Bregman algorithm for the L1 regularization. We use the adjoint method to compute the Jacobian matrix, which dramatically improves the computational efficiency. By comparing with other image reconstruction methods based on TV and L1 regularizations, the simulation results show the validity and efficiency of the proposed method.
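
The workhorse inside a split Bregman treatment of an L1 term is the soft-thresholding (shrinkage) operator, which solves the per-iteration subproblem in closed form:

```python
import numpy as np

def shrink(x, lam):
    """Soft-thresholding: the closed-form minimizer of
    lam * |d| + 0.5 * (d - x)^2, applied elementwise."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

Each split Bregman iteration alternates a smooth least-squares-type update with this cheap shrinkage on the auxiliary variable, which is what makes the combined TV + L1 regularization tractable.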

  1. Alternative sanitization methods for minimally processed lettuce in comparison to sodium hypochlorite.

    Science.gov (United States)

    Bachelli, Mara Lígia Biazotto; Amaral, Rívia Darla Álvares; Benedetti, Benedito Carlos

    2013-01-01

    Lettuce is a leafy vegetable widely used in industry for minimally processed products, for which the sanitization step is the crucial moment for ensuring a safe food for consumption. Chlorinated compounds, mainly sodium hypochlorite, are the most used in Brazil, but the formation of trihalomethanes from this sanitizer is a drawback. Hence, the search for alternative methods to sodium hypochlorite has been emerging as a matter of great interest. The suitability of chlorine dioxide (60 mg L(-1)/10 min), peracetic acid (100 mg L(-1)/15 min) and ozonated water (1.2 mg L(-1)/1 min) as alternative sanitizers to sodium hypochlorite (150 mg L(-1) free chlorine/15 min) was evaluated. Minimally processed lettuce washed with tap water for 1 min was used as a control. Microbiological analyses were performed in triplicate, before and after sanitization, and at 3, 6, 9 and 12 days of storage at 2 ± 1 °C with the product packaged in LDPE bags of 60 μm. Total coliforms, Escherichia coli, Salmonella spp., psychrotrophic and mesophilic bacteria, and yeasts and molds were evaluated. All samples of minimally processed lettuce showed absence of E. coli and Salmonella spp. The treatments with chlorine dioxide, peracetic acid and ozonated water promoted reductions of 2.5, 1.1 and 0.7 log cycles, respectively, in the microbial load of the minimally processed product and can be used as substitutes for sodium hypochlorite. These alternative compounds promoted a shelf-life of six days for minimally processed lettuce, while the shelf-life with sodium hypochlorite was 12 days.
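
A "log cycle" reduction is simply the base-10 logarithm of the fold decrease in microbial count, so the 2.5-log reduction reported for chlorine dioxide corresponds to roughly a 316-fold drop:

```python
import math

def log_reduction(n_before, n_after):
    """Log-cycle reduction of a microbial count: log10(N0 / N)."""
    return math.log10(n_before / n_after)

# a 2.5-log reduction is a factor of 10**2.5, about 316-fold
factor = 10 ** 2.5
```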

  2. Tydskrif vir letterkunde - Vol 53, No 1 (2016)

    African Journals Online (AJOL)

    Framing homosexual identities in Cameroonian literature. Frieda Ekotto, 128-137. http://dx.doi.org/10.4314/tvl.v53i1.8 ...

  3. L2-L1 Translation Priming Effects in a Lexical Decision Task: Evidence From Low Proficient Korean-English Bilinguals

    Directory of Open Access Journals (Sweden)

    Yoonhyoung Lee

    2018-03-01

    Full Text Available One of the key issues in bilingual lexical representation is whether L1 processing is facilitated by L2 words. In this study, we conducted two experiments using the masked priming paradigm to examine how L2-L1 translation priming effects emerge when unbalanced, low-proficiency Korean-English bilinguals performed a lexical decision task. In Experiment 1, we used a 150 ms SOA (a 50 ms prime duration followed by a blank interval of 100 ms) and found a significant L2-L1 translation priming effect. In contrast, in Experiment 2, we used a 60 ms SOA (a 50 ms prime duration followed by a blank interval of 10 ms) and found a null effect of L2-L1 translation priming. This finding is the first demonstration of a significant L2-L1 translation priming effect with unbalanced Korean-English bilinguals. Implications of this work are discussed with regard to bilingual word recognition models.

  4. Electrical sounding data inversion with minimum L-1 norm; Inversao de dados de sondagem eletrica minimizando a norma L1

    Energy Technology Data Exchange (ETDEWEB)

    Marinho, Jose Marcio Lins [Bahia Univ., Salvador, BA (Brazil). Inst. de Geociencias. Programa de Pesquisa e Pos-graduacao em Geociencias]|[Ceara Univ., Fortaleza, CE (Brazil). Dept. de Geologia; Lima, Olivar Antonio Lima de [Bahia Univ., Salvador, BA (Brazil). Inst. de Geociencias. Programa de Pesquisa e Pos-graduacao em Geociencias

    1995-12-31

    The steepest descent and damped least-squares methods are the most used for inverting electrical sounding data. Normally, such inversions are made in the apparent resistivity domain using the L2 norm. For one-dimensional earth models we present a new inversion scheme based on minimization of the L1 norm applied in the logarithmic domain. In the steepest descent case, implementing the L1 norm consists essentially in computing partial derivatives of the objective function with respect to the model parameters. In the second case, referred to here as damped least absolute deviation, the implementation is done via an iteratively reweighted least-squares procedure. Several inversions were done for the Schlumberger array using both theoretical and field data. The results are very promising and attest to the robustness of the L1 norm. A comparative performance study of the two norms is in progress using more than 140 Schlumberger soundings obtained in a groundwater exploration program in the area of Itarema-Acarau, Ceara State, Brazil. (author). 9 refs., 3 figs
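The iteratively reweighted least-squares idea behind the damped least absolute deviation can be sketched generically as follows; this is not the authors' resistivity code, and the weight floor `eps` and iteration count are assumptions:

```python
import numpy as np

def irls_l1(G, d, n_iter=50, eps=1e-6):
    """Fit m minimizing ||G m - d||_1 via iteratively reweighted least
    squares: weight each row by 1/|residual| and re-solve."""
    m = np.linalg.lstsq(G, d, rcond=None)[0]      # L2 starting model
    for _ in range(n_iter):
        r = G @ m - d
        w = 1.0 / np.maximum(np.abs(r), eps)      # L1 reweighting
        m = np.linalg.solve(G.T @ (w[:, None] * G), G.T @ (w * d))
    return m
```

Because each row's weight is the reciprocal of its current absolute residual, gross outliers are progressively down-weighted, which is exactly the robustness that the L1 norm buys over L2.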

  5. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.

    Science.gov (United States)

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n^2). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
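As a hedged sketch of the refinement phase only (the paper's backward-depth coarse partition is omitted, and the dictionary encoding of the DFA is an assumption), a Moore-style pass that regroups states by a hashable transition signature looks like this:

```python
def minimize_dfa(states, alphabet, delta, accepting):
    """Moore-style minimization: start from the accepting/non-accepting
    partition, then repeatedly regroup states by a hashable signature of
    (own block, block reached on each symbol) until stable.
    Returns a dict mapping each state to its block id."""
    block = {s: s in accepting for s in states}
    while True:
        # signature = own block plus the blocks reached on each symbol
        sig = {s: (block[s],) + tuple(block[delta[s][a]] for a in alphabet)
               for s in states}
        groups = {}
        for s in states:
            groups.setdefault(sig[s], []).append(s)
        new_block = {}
        for i, members in enumerate(groups.values()):
            for s in members:
                new_block[s] = i
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block              # no block split: partition stable
        block = new_block
```

On a four-state DFA counting 'a's modulo 4 with accepting states {0, 2}, this collapses states 0/2 and 1/3 into two blocks.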

  6. Conflict Detection Algorithm to Minimize Locking for MPI-IO Atomicity

    Science.gov (United States)

    Sehrish, Saba; Wang, Jun; Thakur, Rajeev

    Many scientific applications require high-performance concurrent I/O accesses to a file by multiple processes. Those applications rely indirectly on atomic I/O capabilities in order to perform updates to structured datasets, such as those stored in HDF5 format files. Current support for atomicity in MPI-IO is provided by locking around the operations, imposing lock overhead in all situations, even though in many cases these operations are non-overlapping in the file. We propose to isolate non-overlapping accesses from overlapping ones in independent I/O cases, allowing the non-overlapping ones to proceed without imposing lock overhead. To enable this, we have implemented an efficient conflict detection algorithm in MPI-IO using MPI file views and datatypes. We show that our conflict detection scheme incurs minimal overhead on I/O operations, making it an effective mechanism for avoiding locks when they are not needed.
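Conceptually, the conflict detection reduces to finding which processes' flattened byte ranges overlap. A hedged, simplified stand-in (the `(rank, offset, length)` tuples are an assumed flattening of the real MPI file views and datatypes) using a sort-and-sweep pass:

```python
def conflicted_ranks(accesses):
    """accesses: iterable of (rank, offset, length) byte-range requests.
    Sort by offset, sweep while tracking the interval reaching farthest
    right; any request starting before that point overlaps it.
    Returns the set of ranks whose requests overlap some other request."""
    ivals = sorted((off, off + length, rank) for rank, off, length in accesses)
    out = set()
    max_end, owner = float("-inf"), None
    for start, end, rank in ivals:
        if start < max_end:        # overlaps the farthest-reaching interval
            out.add(rank)
            out.add(owner)
        if end > max_end:
            max_end, owner = end, rank
    return out
```

Ranks absent from the returned set access disjoint file regions and could proceed without acquiring locks.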

  7. A Hybrid Genetic Algorithm to Minimize Total Tardiness for Unrelated Parallel Machine Scheduling with Precedence Constraints

    Directory of Open Access Journals (Sweden)

    Chunfeng Liu

    2013-01-01

    Full Text Available The paper presents a novel hybrid genetic algorithm (HGA) for a deterministic scheduling problem in which multiple jobs with arbitrary precedence constraints are processed on multiple unrelated parallel machines. The objective is to minimize total tardiness, since delays of the jobs may lead to penalty costs or cancellation of orders by clients in many situations. A priority rule-based heuristic algorithm, which schedules a prior job on a prior machine according to the priority rule at each iteration, is suggested and embedded in the HGA to produce initial feasible schedules that can be improved in further stages. Computational experiments show that the proposed HGA performs well with respect to accuracy and efficiency of solutions for small-sized problems and obtains better results than a conventional genetic algorithm within the same runtime for large-sized problems.

  8. Genetic variability in L1 and L2 genes of HPV-16 and HPV-58 in Southwest China.

    Directory of Open Access Journals (Sweden)

    Yaofei Yue

    Full Text Available HPV accounts for most of the incidence of cervical cancer. Approximately 90% of anal cancers and a smaller subset (<50%) of other cancers (oropharyngeal, penile, vaginal, vulvar) are also attributed to HPV. The L1 protein comprising HPV vaccine formulations elicits high-titre neutralizing antibodies and confers type-restricted protection. The L2 protein is a promising candidate for a broadly protective HPV vaccine. In our previous study, we found that the most prevalent high-risk HPV infectious serotypes among women of Southwest China were HPV-16 and HPV-58. To explore gene polymorphisms and intratypic variations of HPV-16 and HPV-58 L1/L2 genes originating in Southwest China, HPV-16 (L1: n = 31, L2: n = 28) and HPV-58 (L1: n = 21, L2: n = 21) L1/L2 genes were sequenced and compared to others described and submitted to GenBank. Phylogenetic trees were then constructed by the Neighbor-Joining and Kimura 2-parameter methods (MEGA software), followed by an analysis of the diversity of secondary structure. Selection pressures acting on the L1/L2 genes were then estimated with PAML software. Twenty-nine single nucleotide changes were observed in HPV-16 L1 sequences, with 16/29 non-synonymous mutations and 13/29 synonymous mutations (six in alpha helices and two in beta turns). Seventeen single nucleotide changes were observed in HPV-16 L2 sequences, with 8/17 non-synonymous mutations (one in a beta turn) and 9/17 synonymous mutations. Twenty-four single nucleotide changes were observed in HPV-58 L1 sequences, with 10/24 non-synonymous mutations and 14/24 synonymous mutations (eight in alpha helices and four in beta turns). Seven single nucleotide changes were observed in HPV-58 L2 sequences, with 4/7 non-synonymous mutations and 3/7 synonymous mutations. Selective pressure analysis showed that most of these mutations were under positive selection. This study may help understand the intrinsic geographical relatedness and biological differences of HPV-16/HPV-58 and

  9. Approximated Function Based Spectral Gradient Algorithm for Sparse Signal Recovery

    Directory of Open Access Journals (Sweden)

    Weifeng Wang

    2014-02-01

    Full Text Available Numerical algorithms for l0-norm regularized non-smooth, non-convex minimization problems have recently become a topic of great interest in signal processing, compressive sensing, statistics, and machine learning. Nevertheless, the l0-norm makes the problem combinatorial and generally computationally intractable. In this paper, we construct a new surrogate function to approximate the l0-norm regularization, and thereby make the discrete optimization problem continuous and smooth. We then use the well-known spectral gradient algorithm to solve the resulting smooth optimization problem. Experiments are provided that illustrate that this method is very promising.
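A common smooth surrogate, used here purely as an illustration (the paper's exact surrogate may differ), replaces each l0 indicator with 1 - exp(-x_i^2/(2·sigma^2)), which tends to the l0 count as sigma shrinks; `lam`, `sigma`, and the step clipping are assumed values. A spectral (Barzilai-Borwein) gradient sketch under those assumptions:

```python
import numpy as np

def smoothed_l0_recover(A, b, lam=0.01, sigma=0.1, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + lam * sum(1 - exp(-x_i^2/(2 sigma^2)))
    with a Barzilai-Borwein (spectral) step length."""
    def grad(x):
        data = A.T @ (A @ x - b)
        pen = lam * (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2))
        return data + pen

    x = np.zeros(A.shape[1])
    g = grad(x)
    step = 1e-2
    for _ in range(n_iter):
        x_new = x - step * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        # BB1 step, clipped for safety on the nonconvex penalty
        step = float(np.clip((s @ s) / max(s @ y, 1e-12), 1e-4, 1.0))
        x, g = x_new, g_new
    return x
```

The BB step approximates inverse curvature from successive iterates, which is what makes the spectral gradient method attractive for this smoothed problem: no line search and no Hessian.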

  10. Feasibility of minimally invasive radical prostatectomy in prostate cancer patients with high prostate-specific antigen. Feasibility and 1-year outcomes

    International Nuclear Information System (INIS)

    Do, M.; Ragavan, N.; Dietel, A.; Liatsikos, E.; Stolzenburg, J.U.; Anderson, C.; McNeill, A.

    2012-01-01

    Urologists are cautious about offering minimally invasive radical prostatectomy to prostate cancer patients with high prostate-specific antigen (who are therefore anticipated to have locally advanced or metastatic disease) because of concerns regarding lack of complete cure after minimally invasive radical prostatectomy and worsening of continence if adjuvant radiotherapy is used. A retrospective review of our institutional database was carried out to identify patients with prostate-specific antigen (PSA) ≥20 ng/mL who underwent minimally invasive radical prostatectomy between January 2002 and October 2010. Intraoperative, pathological, functional and short-term oncological outcomes were assessed. Overall, 233 patients met the study criteria and were included in the analysis. The median prostate-specific antigen and prostate size were 28.5 ng/mL and 47 mL, respectively. Intraoperative complications were rectal injury (0.86%) and blood transfusion (1.7%). Early postoperative complications included prolonged (>6 days) catheterization (9.4%), hematoma (4.7%), deep venous thrombosis (0.86%) and lymphocele (5.1%). Late postoperative complications included cerebrovascular accident (0.4%) and anastomotic stricture (0.8%). Pathology revealed poorly differentiated cancer in 48.9%, pT3/pT4 disease in 55.8%, positive margins in 28.3% and lymph node disease in 20.2% of the cases. Adverse pathological findings were more frequent in patients with prostate-specific antigen >40 ng/mL and/or in those with locally advanced disease (pT3/pT4). In 62.2% of the cases, adjuvant radiotherapy was used. At 1-year follow-up, 80% of patients showed no evidence of biochemical recurrence and 98.8% of them had good recovery of continence. Minimally invasive radical prostatectomy might represent a reasonable option in prostate cancer patients with high prostate-specific antigen as part of a multimodality treatment approach. (author)

  11. Spectral L2/L1 norm: A new perspective for spectral kurtosis for characterizing non-stationary signals

    Science.gov (United States)

    Wang, Dong

    2018-05-01

    Thanks to the great efforts made by Antoni (2006), spectral kurtosis has been recognized as a milestone for characterizing non-stationary signals, especially bearing fault signals. The main idea of spectral kurtosis is to use the fourth standardized moment, namely kurtosis, as a function of spectral frequency so as to indicate how repetitive transients caused by a bearing defect vary with frequency. Moreover, spectral kurtosis is defined based on an analytic bearing fault signal constructed from either a complex filter or Hilbert transform. On the other hand, another attractive work was reported by Borghesani et al. (2014) to mathematically reveal the relationship between the kurtosis of an analytical bearing fault signal and the square of the squared envelope spectrum of the analytical bearing fault signal for explaining spectral correlation for quantification of bearing fault signals. More interestingly, it was discovered that the sum of peaks at cyclic frequencies in the square of the squared envelope spectrum corresponds to the raw 4th order moment. Inspired by the aforementioned works, in this paper, we mathematically show that: (1) spectral kurtosis can be decomposed into squared envelope and squared L2/L1 norm so that spectral kurtosis can be explained as spectral squared L2/L1 norm; (2) spectral L2/L1 norm is formally defined for characterizing bearing fault signals and its two geometrical explanations are made; (3) spectral L2/L1 norm is proportional to the square root of the sum of peaks at cyclic frequencies in the square of the squared envelope spectrum; (4) some extensions of spectral L2/L1 norm for characterizing bearing fault signals are pointed out.
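As a hedged numerical illustration of the L2/L1 idea (a whole-signal ratio, not the paper's frequency-resolved estimator), the sqrt(N)-scaled L2/L1 ratio of the squared envelope is about 1 for a stationary tone and grows with the repetitiveness of transients; the FFT-based analytic signal below is the standard Hilbert construction:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert construction of the analytic signal:
    zero negative frequencies, double positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def l2_l1_ratio(x):
    """sqrt(N)-scaled L2/L1 ratio of the squared envelope of x:
    ~1 for a stationary tone, large for repetitive transients."""
    se = np.abs(analytic_signal(x)) ** 2
    n = len(se)
    return np.sqrt(n) * np.linalg.norm(se, 2) / np.linalg.norm(se, 1)
```

A pure sinusoid has a flat envelope, so the ratio stays near its lower bound of 1; an impulse train concentrates the squared envelope in a few samples and pushes the ratio up, mirroring how spectral kurtosis flags impulsive bearing-fault content.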

  12. A novel PKD2L1 C-terminal domain critical for trimerization and channel function.

    Science.gov (United States)

    Zheng, Wang; Hussein, Shaimaa; Yang, JungWoo; Huang, Jun; Zhang, Fan; Hernandez-Anzaldo, Samuel; Fernandez-Patron, Carlos; Cao, Ying; Zeng, Hongbo; Tang, Jingfeng; Chen, Xing-Zhen

    2015-03-30

    As a transient receptor potential (TRP) superfamily member, polycystic kidney disease 2-like-1 (PKD2L1), also called TRPP3, has a membrane topology similar to that of voltage-gated cation channels. PKD2L1 is involved in hedgehog signaling, intestinal development, and sour tasting. PKD2L1 and PKD1L3 form heterotetramers with 3:1 stoichiometry. The C-terminal coiled-coil-2 (CC2) domain (G699-W743) of PKD2L1 was reported to be important for its trimerization, but independent studies showed that CC2 does not affect PKD2L1 channel function. It thus remains unclear how PKD2L1 proteins oligomerize into a functional channel. Using SDS-PAGE, blue native PAGE and mutagenesis, we here identified a novel C-terminal domain, called C1 (K575-T622), involved in stronger homotrimerization than the non-overlapping CC2, and found that the PKD2L1 N-terminus is critical for dimerization. By electrophysiology and Xenopus oocyte expression, we found that C1, but not CC2, is critical for PKD2L1 channel function. Our co-immunoprecipitation and dynamic light scattering experiments further supported the involvement of C1 in trimerization. Further, C1 acted as a blocking peptide that inhibits PKD2L1 trimerization as well as PKD2L1 and PKD2L1/PKD1L3 channel function. Thus, our study identified C1 as the first PKD2L1 domain essential for both PKD2L1 trimerization and channel function, and suggests that PKD2L1 and PKD2L1/PKD1L3 channels share the PKD2L1 trimerization process.

  13. Neural overlap of L1 and L2 semantic representations in speech: A decoding approach.

    Science.gov (United States)

    Van de Putte, Eowyn; De Baene, Wouter; Brass, Marcel; Duyck, Wouter

    2017-11-15

    Although research has now converged towards a consensus that both languages of a bilingual are represented in at least partly shared systems for language comprehension, it remains unclear whether both languages are represented in the same neural populations for production. We investigated the neural overlap between L1 and L2 semantic representations of translation equivalents using a production task in which the participants had to name pictures in L1 and L2. Using a decoding approach, we tested whether brain activity during the production of individual nouns in one language allowed predicting the production of the same concepts in the other language. Because both languages only share the underlying semantic representation (sensory and lexical overlap was maximally avoided), this would offer very strong evidence for neural overlap in semantic representations of bilinguals. Based on the brain activation for the individual concepts in one language in the bilateral occipito-temporal cortex and the inferior and the middle temporal gyrus, we could accurately predict the equivalent individual concepts in the other language. This indicates that these regions share semantic representations across L1 and L2 word production. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Synthesis and characterization of iron(III), manganese(II), cobalt(II), nickel(II), copper(II) and zinc(II) complexes of salicylidene-N-anilinoacetohydrazone (H2L1) and 2-hydroxy-1-naphthylidene-N-anilinoacetohydrazone (H2L2).

    Science.gov (United States)

    AbouEl-Enein, S A; El-Saied, F A; Kasher, T I; El-Wardany, A H

    2007-07-01

    Salicylidene-N-anilinoacetohydrazone (H(2)L(1)) and 2-hydroxy-1-naphthylidene-N-anilinoacetohydrazone (H(2)L(2)) and their iron(III), manganese(II), cobalt(II), nickel(II), copper(II) and zinc(II) complexes have been synthesized and characterized by IR, electronic spectra, molar conductivities, magnetic susceptibilities and ESR. Mononuclear complexes are formed with molar ratios of 1:1, 1:2 and 1:3 (M:L). The IR studies reveal various modes of chelation. The electronic absorption spectra and magnetic susceptibility measurements show that the iron(III), nickel(II) and cobalt(II) complexes of H(2)L(1) have octahedral geometry, while the cobalt(II) complexes of H(2)L(2) were isolated with a tetrahedral structure. The copper(II) complexes have square-planar stereochemistry. The ESR parameters of the copper(II) complexes at room temperature were calculated. The g values for the copper(II) complexes indicate that the Cu-O and Cu-N bonds are of high covalency.

  15. On the effects of quantization on mismatched pulse compression filters designed using L-p norm minimization techniques

    CSIR Research Space (South Africa)

    Cilliers, Jacques E

    2007-10-01

    Full Text Available In [1] the authors introduced a technique for generating mismatched pulse compression filters for linear frequency chirp signals. The technique minimizes the sum of the pulse compression sidelobes in an L-p norm sense. It was shown that extremely...

  16. Data of evolutionary structure change: 1AIFA-2AI0L [Confc[Archive

    Lifescience Database Archive (English)

    Full Text Available Structural alignment record for PDB chains 1AIF:A and 2AI0:L (aligned sequence fragment: DIQLTQSPAFMAASPGEKVTITCSVSSSI----SSSNLH...).

  17. Alternative sanitization methods for minimally processed lettuce in comparison to sodium hypochlorite

    Directory of Open Access Journals (Sweden)

    Mara Lígia Biazotto Bachelli

    2013-09-01

    Full Text Available Lettuce is a leafy vegetable widely used in industry for minimally processed products, in which the sanitization step is the crucial moment for ensuring a food that is safe for consumption. Chlorinated compounds, mainly sodium hypochlorite, are the most used in Brazil, but the formation of trihalomethanes from this sanitizer is a drawback. The search for alternatives to sodium hypochlorite has therefore emerged as a matter of great interest. The suitability of chlorine dioxide (60 mg L-1/10 min), peracetic acid (100 mg L-1/15 min) and ozonated water (1.2 mg L-1/1 min) as alternative sanitizers to sodium hypochlorite (150 mg L-1 free chlorine/15 min) was evaluated. Minimally processed lettuce washed with tap water for 1 min was used as a control. Microbiological analyses were performed in triplicate, before and after sanitization, and at 3, 6, 9 and 12 days of storage at 2 ± 1 ºC with the product packaged in LDPE bags of 60 µm. Total coliforms, Escherichia coli, Salmonella spp., psychrotrophic and mesophilic bacteria, and yeasts and molds were evaluated. All samples of minimally processed lettuce showed absence of E. coli and Salmonella spp. The chlorine dioxide, peracetic acid and ozonated water treatments reduced the microbial load of the minimally processed product by 2.5, 1.1 and 0.7 log cycles, respectively, and can be used as substitutes for sodium hypochlorite. These alternative compounds gave minimally processed lettuce a shelf-life of six days, while the shelf-life with sodium hypochlorite was 12 days.

  18. Isothermal (vapour + liquid) equilibrium for the binary {1,1,2,2-tetrafluoroethane (R134) + propane (R290)} and {1,1,2,2-tetrafluoroethane (R134) + isobutane (R600a)} systems

    Energy Technology Data Exchange (ETDEWEB)

    Dong Xueqiang [Key Laboratory of Cryogenics, Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, P.O. Box 2711, Beijing 100190 (China); Graduate University of Chinese Academy of Sciences, Beijing 100039 (China); Gong Maoqiong, E-mail: gongmq@mail.ipc.ac.c [Key Laboratory of Cryogenics, Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, P.O. Box 2711, Beijing 100190 (China); Liu Junsheng [Key Laboratory of Cryogenics, Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, P.O. Box 2711, Beijing 100190 (China); Graduate University of Chinese Academy of Sciences, Beijing 100039 (China); Wu Jianfeng, E-mail: jfwu@mail.ipc.ac.c [Key Laboratory of Cryogenics, Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, P.O. Box 2711, Beijing 100190 (China)

    2010-09-15

    (Vapour + liquid) equilibrium (VLE) data for the binary systems {1,1,2,2-tetrafluoroethane (R134) + propane (R290)} and {1,1,2,2-tetrafluoroethane (R134) + isobutane (R600a)} were measured with a recirculation method at temperatures ranging from (263.15 to 278.15) K and (268.15 to 288.15) K, respectively. All of the data were correlated by the Peng-Robinson (PR) equation of state (EoS) with the Huron-Vidal (HV) mixing rules utilizing the non-random two-liquid (NRTL) activity coefficient model. Good agreement can be found between the experimental data and the correlated results. Azeotropic behaviour can be found at the measured temperature ranges for these two mixtures.

  19. Hybrid genetic algorithm for minimizing non productive machining ...

    African Journals Online (AJOL)

    Minimization of the non-productive time of the tool during machining for 2.5D milling significantly reduces the machining cost. The tool is retracted and repositioned several times in multi-pocket jobs during rough machining, which consumes 15 to 30% of the total machining time depending on the complexity of the job. The automatic ...

  20. Flexible Job-Shop Scheduling with Dual-Resource Constraints to Minimize Tardiness Using Genetic Algorithm

    Science.gov (United States)

    Paksi, A. B. N.; Ma'ruf, A.

    2016-02-01

    In general, both machines and human resources are needed for processing a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with production technology development, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility arises from the capability of machines to offer more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a dual-resource constrained shop is categorized as an NP-hard problem that needs long computational time. A meta-heuristic approach based on a genetic algorithm is used due to its practical implementation in industry. The developed genetic algorithm uses an indirect chromosome representation and a procedure to transform a chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used, with tardiness minimization as the objective function. The algorithm showed a 25.6% reduction in tardiness, equal to 43.5 hours.
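The ingredients described (an indirect permutation chromosome, a decoder that builds the schedule, tournament selection, elitism, crossover, swap mutation) can be sketched as a much-simplified GA. This is not the authors' implementation: the worker (dual-resource) constraint is dropped, only unrelated machines are modelled, and the instance encoding, operator rates, and population sizes are assumptions:

```python
import random

def decode(perm, proc, due):
    """List-scheduling decoder: take jobs in chromosome (priority) order,
    place each on the machine that completes it earliest, and return the
    total tardiness of the resulting schedule."""
    m = len(proc[0])
    free = [0.0] * m                       # machine ready times
    tardiness = 0.0
    for j in perm:
        k = min(range(m), key=lambda i: free[i] + proc[j][i])
        free[k] += proc[j][k]
        tardiness += max(0.0, free[k] - due[j])
    return tardiness

def ga_schedule(proc, due, pop=40, gens=200, seed=1):
    """Tiny GA: permutation chromosomes, tournament selection, elitism,
    an order-crossover variant, and swap mutation."""
    rng = random.Random(seed)
    n = len(proc)
    def fit(p):
        return decode(p, proc, due)
    P = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fit)
        nxt = P[:2]                        # elitism: keep the two best
        while len(nxt) < pop:
            a = min(rng.sample(P, 3), key=fit)   # tournament selection
            b = min(rng.sample(P, 3), key=fit)
            c1, c2 = sorted(rng.sample(range(n), 2))
            mid = a[c1:c2]                 # crossover: slice of a, rest from b
            child = mid + [g for g in b if g not in mid]
            if rng.random() < 0.2:         # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        P = nxt
    best = min(P, key=fit)
    return best, fit(best)
```

The indirect representation keeps every chromosome feasible: the permutation only fixes priorities, and the decoder turns it into a concrete machine assignment, which is the same division of labour the abstract describes.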

  1. Synergies of carvacrol and 1,8-cineole to inhibit bacteria associated with minimally processed vegetables.

    Science.gov (United States)

    de Sousa, Jossana Pereira; de Azerêdo, Geíza Alves; de Araújo Torres, Rayanne; da Silva Vasconcelos, Margarida Angélica; da Conceição, Maria Lúcia; de Souza, Evandro Leite

    2012-03-15

    This study assessed the occurrence of an enhanced inhibitory effect of the combined application of carvacrol and 1,8-cineole against bacteria associated with minimally processed vegetables, using determination of the Fractional Inhibitory Concentration (FIC) index, time-kill assays in vegetable broth, and application in vegetable matrices. Their effects, individually and in combination, on the sensory characteristics of the vegetables were also determined. Carvacrol and 1,8-cineole displayed Minimum Inhibitory Concentrations (MIC) in the ranges of 0.6-2.5 and 5-20 μL/mL, respectively, against the organisms studied. FIC indices of the combined application of the compounds were 0.25 against Listeria monocytogenes, Aeromonas hydrophila and Pseudomonas fluorescens, suggesting a synergic interaction. Application of carvacrol and 1,8-cineole alone (MIC) or in a mixture (1/8 MIC + 1/8 MIC or 1/4 MIC + 1/4 MIC) caused a significant decrease in bacterial counts both in vegetable broth and in experimentally inoculated fresh-cut vegetables. A similar efficacy was observed in the reduction of naturally occurring microorganisms in vegetables. Sensory evaluation revealed that the scores of most evaluated attributes fell between "like slightly" and "neither like nor dislike." The combination of carvacrol and 1,8-cineole at sub-inhibitory concentrations could constitute an interesting approach to sanitizing minimally processed vegetables. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. $L_{0}$ Gradient Projection.

    Science.gov (United States)

    Ono, Shunsuke

    2017-04-01

    Minimizing the L0 gradient, the number of non-zero gradients of an image, together with a quadratic data-fidelity to an input image has been recognized as a powerful edge-preserving filtering method. However, L0 gradient minimization has an inherent difficulty: a user-given parameter controlling the degree of flatness does not have a physical meaning, since the parameter merely balances the relative importance of the L0 gradient term against the quadratic data-fidelity term. As a result, setting the parameter is troublesome in L0 gradient minimization. To circumvent this difficulty, we propose a new edge-preserving filtering method with a novel use of the L0 gradient. Our method is formulated as the minimization of the quadratic data-fidelity subject to the hard constraint that the L0 gradient is less than a user-given parameter α. This strategy is much more intuitive than L0 gradient minimization because the parameter α has a clear meaning: the L0 gradient value of the output image itself, so that one can directly impose a desired degree of flatness via α. We also provide an efficient algorithm based on the so-called alternating direction method of multipliers for computing an approximate solution of the nonconvex problem, in which we decompose it into two subproblems and derive closed-form solutions to them. The advantages of our method are demonstrated through extensive experiments.
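The nonconvex subproblem created by the hard constraint has a closed form: project the gradient field onto the set with at most α non-zeros by keeping the α largest-magnitude entries. A hedged stand-alone sketch of just that projection (the full filter also needs the image gradient operator and the quadratic data-fidelity step, which are omitted here):

```python
import numpy as np

def project_l0_gradient(g, alpha):
    """Project onto {g : ||g||_0 <= alpha} by keeping the alpha
    largest-magnitude entries and zeroing the rest (closed form)."""
    g = np.asarray(g, dtype=float)
    if alpha <= 0:
        return np.zeros_like(g)
    if np.count_nonzero(g) <= alpha:
        return g.copy()                 # already inside the constraint set
    out = np.zeros_like(g)
    flat = g.ravel()
    keep = np.argsort(np.abs(flat))[-alpha:]
    out.ravel()[keep] = flat[keep]
    return out
```

Because α is exactly the number of surviving gradient entries, this projection is what gives the constraint its direct, interpretable control over the output's flatness.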

  3. Combination of minimal processing and irradiation to improve the microbiological safety of lettuce (Lactuca sativa, L.)

    International Nuclear Information System (INIS)

    Goularte, L.; Martins, C.G.; Morales-Aizpurua, I.C.; Destro, M.T.; Franco, B.D.G.M.; Vizeu, D.M.; Hutzler, B.W.; Landgraf, M.

    2004-01-01

    The feasibility of gamma radiation in combination with minimal processing (MP) to reduce the number of Salmonella spp. and Escherichia coli O157:H7 in shredded iceberg lettuce (Lactuca sativa, L.) was studied in order to increase the safety of the product. The reduction of the microbial population during processing, the D10-values for Salmonella spp. and E. coli O157:H7 inoculated on shredded iceberg lettuce, as well as the sensory evaluation of the irradiated product were evaluated. Immersion in chlorine (200 ppm) reduced coliform and aerobic mesophilic microorganisms by 0.9 and 2.7 log, respectively. D-values varied from 0.16 to 0.23 kGy for Salmonella spp. and from 0.11 to 0.12 kGy for E. coli O157:H7. Minimally processed iceberg lettuce exposed to 0.9 kGy did not show any change in sensory attributes. However, the texture of the vegetable was affected by exposure to 1.1 kGy. Exposure of MP iceberg lettuce to 0.7 kGy reduced the population of Salmonella spp. by 4.0 log and E. coli by 6.8 log without impairing the sensory attributes. The combination of minimal processing and gamma radiation to improve the safety of iceberg lettuce is feasible if good hygiene practices begin at the farm stage

  4. Combination of minimal processing and irradiation to improve the microbiological safety of lettuce (Lactuca sativa, L.)

    Energy Technology Data Exchange (ETDEWEB)

    Goularte, L.; Martins, C.G.; Morales-Aizpurua, I.C.; Destro, M.T.; Franco, B.D.G.M.; Vizeu, D.M.; Hutzler, B.W.; Landgraf, M. E-mail: landgraf@usp.br

    2004-10-01

    The feasibility of gamma radiation in combination with minimal processing (MP) to reduce the number of Salmonella spp. and Escherichia coli O157:H7 in shredded iceberg lettuce (Lactuca sativa, L.) was studied in order to increase the safety of the product. The reduction of the microbial population during processing, the D10-values for Salmonella spp. and E. coli O157:H7 inoculated on shredded iceberg lettuce, as well as the sensory evaluation of the irradiated product were evaluated. Immersion in chlorine (200 ppm) reduced coliform and aerobic mesophilic microorganisms by 0.9 and 2.7 log, respectively. D-values varied from 0.16 to 0.23 kGy for Salmonella spp. and from 0.11 to 0.12 kGy for E. coli O157:H7. Minimally processed iceberg lettuce exposed to 0.9 kGy did not show any change in sensory attributes. However, the texture of the vegetable was affected by exposure to 1.1 kGy. Exposure of MP iceberg lettuce to 0.7 kGy reduced the population of Salmonella spp. by 4.0 log and E. coli by 6.8 log without impairing the sensory attributes. The combination of minimal processing and gamma radiation to improve the safety of iceberg lettuce is feasible if good hygiene practices begin at the farm stage.

  5. Combination of minimal processing and irradiation to improve the microbiological safety of lettuce ( Lactuca sativa, L.)

    Science.gov (United States)

    Goularte, L.; Martins, C. G.; Morales-Aizpurúa, I. C.; Destro, M. T.; Franco, B. D. G. M.; Vizeu, D. M.; Hutzler, B. W.; Landgraf, M.

    2004-09-01

    The feasibility of gamma radiation in combination with minimal processing (MP) to reduce the number of Salmonella spp. and Escherichia coli O157:H7 in shredded iceberg lettuce ( Lactuca sativa, L.) was studied in order to increase the safety of the product. The reduction of the microbial population during processing, the D10-values for Salmonella spp. and E. coli O157:H7 inoculated on shredded iceberg lettuce, as well as the sensory evaluation of the irradiated product were evaluated. Immersion in chlorine (200 ppm) reduced coliform and aerobic mesophilic microorganisms by 0.9 and 2.7 log, respectively. D-values varied from 0.16 to 0.23 kGy for Salmonella spp. and from 0.11 to 0.12 kGy for E. coli O157:H7. Minimally processed iceberg lettuce exposed to 0.9 kGy did not show any change in sensory attributes. However, the texture of the vegetable was affected by exposure to 1.1 kGy. Exposure of MP iceberg lettuce to 0.7 kGy reduced the population of Salmonella spp. by 4.0 log and E. coli by 6.8 log without impairing the sensory attributes. The combination of minimal processing and gamma radiation to improve the safety of iceberg lettuce is feasible if good hygiene practices begin at the farm stage.

  6. Approximate k-NN delta test minimization method using genetic algorithms: Application to time series

    CERN Document Server

    Mateo, F; Gadea, Rafael; Sovilj, Dusan

    2010-01-01

    In many real-world problems, the existence of irrelevant input variables (features) hinders the predictive quality of the models used to estimate the output variables. In particular, time series prediction often involves building large regressors of artificial variables that can contain irrelevant or misleading information. Many techniques have arisen to confront the problem of accurate variable selection, including both local and global search strategies. This paper presents a method based on genetic algorithms that aims to find a globally optimal set of input variables minimizing the Delta Test criterion. The execution speed has been enhanced by substituting the exact nearest neighbor computation with its approximate version. The problems of scaling and projection of variables have been addressed. The developed method works in conjunction with MATLAB's Genetic Algorithm and Direct Search Toolbox. The goodness of the proposed methodology has been evaluated on several popular time series examples, and also ...
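The Delta Test criterion that the GA minimizes can be sketched directly (exact 1-NN for clarity, whereas the paper substitutes an approximate k-NN search; the GA wrapper itself is omitted):

```python
import numpy as np

def delta_test(X, y):
    """Delta Test noise-variance estimate: half the mean squared output
    difference between each point and its nearest neighbour in input space.
    (Exact 1-NN for clarity; the paper uses an approximate version.)"""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                         # exclude self-matches
    nn = d2.argmin(axis=1)                               # index of nearest neighbour
    return 0.5 * np.mean((y[nn] - y) ** 2)

# Points pair up ((0, 0.01) and (1.0, 1.01)) with identical outputs,
# so the estimated noise variance is zero
X = np.array([[0.0], [0.01], [1.0], [1.01]])
y = np.array([0.0, 0.0, 1.0, 1.0])
print(delta_test(X, y))  # 0.0
```

A variable subset that keeps nearest neighbours predictive of the output yields a low Delta Test value, which is what the GA search exploits.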

  7. Mapping the EORTC QLQ-C30 onto the EQ-5D-3L: assessing the external validity of existing mapping algorithms.

    Science.gov (United States)

    Doble, Brett; Lorgelly, Paula

    2016-04-01

    To determine the external validity of existing mapping algorithms for predicting EQ-5D-3L utility values from EORTC QLQ-C30 responses and to establish their generalizability in different types of cancer. A main analysis (pooled) sample of 3560 observations (1727 patients) and two disease severity patient samples (496 and 93 patients) with repeated observations over time from Cancer 2015 were used to validate the existing algorithms. Errors were calculated between observed and predicted EQ-5D-3L utility values using a single pooled sample and ten pooled tumour type-specific samples. Predictive accuracy was assessed using mean absolute error (MAE) and standardized root-mean-squared error (RMSE). The association between observed and predicted EQ-5D utility values and other covariates across the distribution was tested using quantile regression. Quality-adjusted life years (QALYs) were calculated using observed and predicted values to test responsiveness. Ten 'preferred' mapping algorithms were identified. Two algorithms, estimated via response mapping and via ordinary least-squares regression using dummy variables, performed well on a number of validation criteria, including accurate prediction of the best and worst QLQ-C30 health states, predicted values within the EQ-5D tariff range, relatively small MAEs and RMSEs, and minimal differences between estimated QALYs. Comparison of predictive accuracy across the ten tumour type-specific samples highlighted that the algorithms are relatively insensitive to grouping by tumour type and are affected more by differences in disease severity. Two of the 'preferred' mapping algorithms yield more accurate predictions, but limitations exist. We recommend extensive scenario analyses if mapped utilities are used in cost-utility analyses.
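The two headline accuracy metrics can be sketched with hypothetical utility values (the paper reports a standardized RMSE; plain RMSE is shown here, and the numbers below are made up for illustration):

```python
import math

def mae(obs, pred):
    """Mean absolute error between observed and mapped utility values."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root-mean-squared error; the paper reports a standardized variant."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

# Hypothetical observed EQ-5D-3L utilities vs. values mapped from QLQ-C30
observed  = [0.85, 0.60, 0.73, -0.05]
predicted = [0.80, 0.65, 0.70, 0.10]
print(round(mae(observed, predicted), 3))   # 0.07
print(round(rmse(observed, predicted), 3))  # 0.084
```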

  8. L2 Acquisition of Prosodic Properties of Speech Rhythm: Evidence from L1 Mandarin and German Learners of English

    Science.gov (United States)

    Li, Aike; Post, Brechtje

    2014-01-01

    This study examines the development of speech rhythm in second language (L2) learners of typologically different first languages (L1s) at different levels of proficiency. An empirical investigation of durational variation in L2 English productions by L1 Mandarin learners and L1 German learners compared to native control values in English and the…

  9. Sweet Spot Control of 1:2 Array Antenna using A Modified Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Kyo-Hwan HYUN

    2007-10-01

    Full Text Available This paper presents a novel scheme that quickly searches for the sweet spot of 1:2 array antennas and locks on to it for high-speed millimeter-wavelength transmissions when communications to another antenna array are disconnected. The proposed method utilizes a modified genetic algorithm, which selects a superior initial group through preprocessing in order to avoid the local-optimum problem of the genetic algorithm. TDD (Time Division Duplex) is utilized as the transfer method and data controller for the antenna. Once the initial communication for the specified number of individuals is completed, no further antenna data are transmitted until each station processes the GA to produce the next generation. After reproduction, the individuals of the next generation become the data, and communication between the stations resumes. Simulation results for 1:1 and 1:2 array antennas and experimental results for a 1:1 array antenna confirm the efficiency of the proposed method. Gene lengths of 8 bits, 16 bits, and a 16-bit split gene were tested; the 16-bit split gene performs similarly to the 16-bit gene, whereas the antenna gene is 8 bits.
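The "superior initial group" preprocessing step can be sketched as follows (the fitness function below is a made-up stand-in for the measured antenna response; the paper's fitness comes from hardware measurements):

```python
import random

def preprocessed_population(fitness, pop_size, oversample=10, n_bits=8):
    """'Superior initial group' preprocessing: draw many random candidate genes,
    evaluate them once, and seed the GA with only the fittest pop_size of them,
    reducing the risk of converging to a local optimum."""
    candidates = [random.getrandbits(n_bits) for _ in range(pop_size * oversample)]
    return sorted(candidates, key=fitness, reverse=True)[:pop_size]

# Toy fitness peaked at gene value 200 (stand-in for measured signal strength)
fitness = lambda g: -abs(g - 200)
random.seed(1)
pop = preprocessed_population(fitness, pop_size=8)
# The seeded population is ordered best-first
print(all(abs(g - 200) <= abs(h - 200) for g, h in zip(pop, pop[1:])))  # True
```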

  10. Tydskrif vir letterkunde - Vol 41, No 2 (2004)

    African Journals Online (AJOL)

    Masculinity and Nationalism in East African Hip-Hop Music · Ewan Mwangi, 5-20. http://dx.doi.org/10.4314/tvl.v41i2.29671 ...

  11. J(l)-unitary factorization and the Schur algorithm for Nevanlinna functions in an indefinite setting

    NARCIS (Netherlands)

    Alpay, D.; Dijksma, A.; Langer, H.

    2006-01-01

    We introduce a Schur transformation for generalized Nevanlinna functions and show that it can be used in obtaining the unique minimal factorization of a class of rational J(l)-unitary 2 x 2 matrix functions into elementary factors from the same class. (c) 2006 Elsevier Inc. All rights reserved.

  12. Inducible targeting of CNS astrocytes in Aldh1l1-CreERT2 BAC transgenic mice.

    Science.gov (United States)

    Winchenbach, Jan; Düking, Tim; Berghoff, Stefan A; Stumpf, Sina K; Hülsmann, Swen; Nave, Klaus-Armin; Saher, Gesine

    2016-01-01

    Background: Studying astrocytes in higher brain functions has been hampered by the lack of genetic tools for the efficient expression of inducible Cre recombinase throughout the CNS, including the neocortex. Methods: Therefore, we generated BAC transgenic mice, in which CreERT2 is expressed under control of the Aldh1l1 regulatory region. Results: When crossbred to Cre reporter mice, adult Aldh1l1-CreERT2 mice show efficient gene targeting in astrocytes. No such Cre-mediated recombination was detectable in CNS neurons, oligodendrocytes, and microglia. As expected, Aldh1l1-CreERT2 expression was evident in several peripheral organs, including liver and kidney. Conclusions: Taken together, Aldh1l1-CreERT2 mice are a useful tool for studying astrocytes in neurovascular coupling, brain metabolism, synaptic plasticity and other aspects of neuron-glia interactions.

  13. Deceased-Donor Apolipoprotein L1 Renal-Risk Variants Have Minimal Effects on Liver Transplant Outcomes.

    Directory of Open Access Journals (Sweden)

    Casey R Dorr

    Full Text Available Apolipoprotein L1 gene (APOL1) G1 and G2 renal-risk variants, common in populations with recent African ancestry, are strongly associated with non-diabetic nephropathy, end-stage kidney disease, and shorter allograft survival in deceased-donor kidneys (autosomal recessive inheritance). Circulating APOL1 protein is synthesized primarily in the liver, and hydrodynamic gene delivery of APOL1 G1 and G2 risk variants has caused hepatic necrosis in a murine model. To evaluate the impact of these variants in liver transplantation, this multicenter study investigated the association of APOL1 G1 and G2 alleles in deceased African American liver donors with allograft survival. Transplant recipients were followed for liver allograft survival using data from the Scientific Registry of Transplant Recipients. Of the 639 liver donors evaluated, 247 had no APOL1 risk allele, 300 had 1 risk allele, and 92 had 2 risk alleles. Graft failure assessed at 15 days, 6 months, 1 year and total was not significantly associated with donor APOL1 genotype (p-values = 0.25, 0.19, 0.67 and 0.89, respectively). In contrast to kidney transplantation, deceased-donor APOL1 G1 and G2 risk variants do not significantly impact outcomes in liver transplantation.

  14. Morphological Family Size effects in L1 and L2 processing: An electrophysiological study

    NARCIS (Netherlands)

    Mulder, K.; Schreuder, R.; Dijkstra, A.F.J.

    2013-01-01

    The present study examined Morphological Family Size effects in first and second language processing. Items with a high or low Dutch (L1) Family Size were contrasted in four experiments involving Dutch–English bilinguals. In two experiments, reaction times (RTs) were collected in English (L2) and

  15. Input frequencies in processing of verbal morphology in L1 and L2: Evidence from Russian

    Directory of Open Access Journals (Sweden)

    Tatiana Chernigovskaya

    2011-02-01

    Full Text Available In this study we take a usage-based perspective on the analysis of data from the acquisition of verbal morphology by Norwegian adult learners of L2 Russian, as compared to children acquiring Russian as an L1. According to usage-based theories, language learning is input-driven, and the frequency of occurrence of grammatical structures and lexical items in the input plays a key role in this process. We have analysed to what extent the acquisition and processing of Russian verbal morphology by children and adult L2 learners depends on input factors, in particular on type and token frequencies. Our analysis of the L2 input based on the written material used in instruction shows a different distribution of frequencies compared to the target language at large. The results of the tests that elicited present-tense forms of verbs belonging to four different inflectional classes (-AJ-, -A-, -I-, and -OVA-) have demonstrated that for both Russian children and L2 learners type frequency appears to be an important factor, influencing both correct stem recognition and generalisations. The results have also demonstrated token frequency effects. For L2 learners we also observed effects of formal instruction and greater reliance on morphological cues. In spite of the fact that L2 learners did not completely match any of the child groups, there are many similarities between L1 and L2 morphological processing, the main one being the role of frequency.

  16. VizieR Online Data Catalog: L1157-B1 DCN (2-1) and H13CN (2-1) datacubes (Busquet+,

    Science.gov (United States)

    Busquet, G.; Fontani, F.; Viti, S.; Codella, C.; Lefloch, B.; Benedettini, M.; Ceccarellli, C.

    2017-06-01

    IRAM NOEMA observations of DCN(2-1) and H13CN(2-1) toward the brightest bow-shock B1 of the L1157 molecular outflow. All data cubes are provided in FITS format, smoothed to a velocity resolution of 0.5 km/s. (2 data files).

  17. An Enhanced Discrete Artificial Bee Colony Algorithm to Minimize the Total Flow Time in Permutation Flow Shop Scheduling with Limited Buffers

    Directory of Open Access Journals (Sweden)

    Guanlong Deng

    2016-01-01

    Full Text Available This paper presents an enhanced discrete artificial bee colony algorithm for minimizing the total flow time in the flow shop scheduling problem with limited buffer capacity. First, the solution in the algorithm is represented as a discrete job permutation that converts directly to an active schedule. Then, we present a simple and effective scheme called best insertion for the employed and onlooker bees and introduce a combined local search exploring both insertion and swap neighborhoods. To validate the performance of the presented algorithm, a computational campaign is carried out on the Taillard benchmark instances, and the computations and comparisons show that the proposed algorithm is not only capable of solving the benchmark set better than the existing discrete differential evolution algorithm and iterated greedy algorithm, but also capable of performing better than two recently proposed discrete artificial bee colony algorithms.
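The total-flow-time objective and the "best insertion" scheme mentioned above can be sketched as follows (unlimited buffers for brevity; the paper's limited-buffer recursion and the full bee-colony loop are omitted):

```python
def total_flow_time(perm, p):
    """Total flow time of a permutation flow shop (unlimited buffers for
    brevity; the paper handles limited buffers).
    p[j][m] = processing time of job j on machine m."""
    m = len(p[0])
    c = [0] * m                     # completion times on each machine so far
    total = 0
    for j in perm:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
        total += c[-1]              # flow time of job j (zero release dates)
    return total

def best_insertion(perm, job, p):
    """Try the job in every position and keep the best: the 'best insertion' scheme."""
    rest = [j for j in perm if j != job]
    candidates = [rest[:i] + [job] + rest[i:] for i in range(len(rest) + 1)]
    return min(candidates, key=lambda s: total_flow_time(s, p))

p = [[3, 2], [1, 4], [2, 2]]        # 3 jobs, 2 machines (toy instance)
print(total_flow_time([0, 1, 2], p))      # 25
print(best_insertion([0, 1, 2], 0, p))    # [1, 0, 2]
```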

  18. Sensitivity to TOP2 targeting chemotherapeutics is regulated by Oct1 and FILIP1L.

    Directory of Open Access Journals (Sweden)

    Huarui Lu

    Full Text Available Topoisomerase II (TOP2) targeting drugs like doxorubicin and etoposide are frontline chemotherapeutics for a wide variety of solid and hematological malignancies, including breast and ovarian adenocarcinomas, lung cancers, soft tissue sarcomas, leukemias and lymphomas. These agents cause a block in DNA replication leading to a pronounced DNA damage response and initiation of apoptotic programs. Resistance to these agents is common, however, and elucidation of the mechanisms causing resistance to therapy could shed light on strategies to reduce the frequency of ineffective treatments. To explore these mechanisms, we utilized an unbiased shRNA screen to identify genes that regulate cell death in response to doxorubicin treatment. We identified the Filamin A interacting protein 1-like (FILIP1L) gene as a crucial mediator of apoptosis triggered by doxorubicin. FILIP1L shares significant similarity with bacterial SbcC, an ATPase involved in DNA repair. FILIP1L was originally described as DOC1, or "down-regulated in ovarian cancer", and has since been shown to be downregulated in a wide variety of human tumors. FILIP1L levels increase markedly through transcriptional mechanisms following treatment with doxorubicin and other TOP2 poisons, including etoposide and mitoxantrone, but not by the TOP2 catalytic inhibitors merbarone or dexrazoxane (ICRF187), or by UV irradiation. This induction requires the action of the OCT1 transcription factor, which relocalizes to the FILIP1L promoter and facilitates its expression following doxorubicin treatment. Our findings suggest that the FILIP1L expression status in tumors may influence the response to anti-TOP2 chemotherapeutics.

  19. Targeting immune co-stimulatory effects of PD-L1 and PD-L2 might represent an effective therapeutic strategy in stroke

    Directory of Open Access Journals (Sweden)

    Sheetal eBodhankar

    2014-08-01

    Full Text Available Stroke outcome is worsened by the infiltration of inflammatory immune cells into ischemic brains. Our recent study demonstrated that PD-L1- and, to a lesser extent, PD-L2-deficient mice had smaller brain infarcts and fewer brain-infiltrating cells vs. WT mice, suggesting a pathogenic role for PD-ligands in experimental stroke. We sought to ascertain the PD-L1- and PD-L2-expressing cell types that affect T-cell activation post-stroke, in the context of other known co-stimulatory molecules. Thus, cells from male WT and PD-L-deficient mice undergoing 60 min of middle cerebral artery occlusion (MCAO) followed by 96 h of reperfusion were treated with neutralizing antibodies to study co-stimulatory and co-inhibitory interactions between CD80, CTLA-4, PD-1 and PD-Ls that regulate CD8+ and CD4+ T-cell activation. We found that antibody neutralization of PD-1 and CTLA-4 signaling post-MCAO resulted in higher proliferation in WT CD8+ and CD4+ T-cells, confirming an inhibitory role of PD-1 and CTLA-4 on T-cell activation. Also, CD80/CD28 interactions played a prominent regulatory role for the CD8+ T-cells, and the PD-1/PD-L2 interactions were dominant in controlling the CD4+ T-cell responses in WT mice after stroke. A suppressive phenotype in PD-L1-deficient mice was attributed to CD80/CTLA-4 and PD-1/PD-L2 interactions. PD-L2 was crucial in modulating CD4+ T-cell responses, whereas PD-L1 regulated both CD8+ and CD4+ T-cells. To establish the contribution of PD-L1 and PD-L2 on regulatory B-cells (Bregs), infarct volumes were evaluated in male PD-L1- and PD-L2-deficient mice receiving IL-10+ B-cells 4 h post-MCAO. PD-L2- but not PD-L1-deficient recipients of IL-10+ B-cells had markedly reduced infarct volumes, indicating a regulatory role of PD-L2 on Bregs. These results imply that PD-L1 and PD-L2 differentially control induction of T- and Breg-cell responses after MCAO, thus suggesting that selective targeting of PD-L1 and PD-L2 might represent a valuable therapeutic

  20. Cryo-EM structure of the polycystic kidney disease-like channel PKD2L1.

    Science.gov (United States)

    Su, Qiang; Hu, Feizhuo; Liu, Yuxia; Ge, Xiaofei; Mei, Changlin; Yu, Shengqiang; Shen, Aiwen; Zhou, Qiang; Yan, Chuangye; Lei, Jianlin; Zhang, Yanqing; Liu, Xiaodong; Wang, Tingliang

    2018-03-22

    PKD2L1, also termed TRPP3 from the TRPP subfamily (polycystic TRP channels), is involved in the sour sensation and other pH-dependent processes. PKD2L1 is believed to be a nonselective cation channel that can be regulated by voltage, protons, and calcium. Despite its considerable importance, the molecular mechanisms underlying PKD2L1 regulations are largely unknown. Here, we determine the PKD2L1 atomic structure at 3.38 Å resolution by cryo-electron microscopy, whereby side chains of nearly all residues are assigned. Unlike its ortholog PKD2, the pore helix (PH) and transmembrane segment 6 (S6) of PKD2L1, which are involved in upper and lower-gate opening, adopt an open conformation. Structural comparisons of PKD2L1 with a PKD2-based homologous model indicate that the pore domain dilation is coupled to conformational changes of voltage-sensing domains (VSDs) via a series of π-π interactions, suggesting a potential PKD2L1 gating mechanism.

  1. The minimal SUSY B−L model: simultaneous Wilson lines and string thresholds

    Energy Technology Data Exchange (ETDEWEB)

    Deen, Rehan; Ovrut, Burt A. [Department of Physics, University of Pennsylvania,209 South 33rd Street, Philadelphia, PA 19104-6396 (United States); Purves, Austin [Department of Physics, University of Pennsylvania,209 South 33rd Street, Philadelphia, PA 19104-6396 (United States); Department of Physics, Manhattanville College,2900 Purchase Street, Purchase, NY 10577 (United States)

    2016-07-08

    In previous work, we presented a statistical scan over the soft supersymmetry breaking parameters of the minimal SUSY B−L model. For specificity of calculation, unification of the gauge parameters was enforced by allowing the two ℤ{sub 3}×ℤ{sub 3} Wilson lines to have mass scales separated by approximately an order of magnitude. This introduced an additional “left-right” sector below the unification scale. In this paper, for three important reasons, we modify our previous analysis by demanding that the mass scales of the two Wilson lines be simultaneous and equal to an “average unification” mass 〈M{sub U}〉. The present analysis is 1) more “natural” than the previous calculations, which were only valid in a very specific region of the Calabi-Yau moduli space, 2) the theory is conceptually simpler in that the left-right sector has been removed and 3) in the present analysis the lack of gauge unification is due to threshold effects — particularly heavy string thresholds, which we calculate statistically in detail. As in our previous work, the theory is renormalization group evolved from 〈M{sub U}〉 to the electroweak scale — being subjected, sequentially, to the requirement of radiative B−L and electroweak symmetry breaking, the present experimental lower bounds on the B−L vector boson and sparticle masses, as well as the lightest neutral Higgs mass of ∼125 GeV. The subspace of soft supersymmetry breaking masses that satisfies all such constraints is presented and shown to be substantial.

  2. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00372074; The ATLAS collaboration; Sotiropoulou, Calliope Louisa; Annovi, Alberto; Kordas, Kostantinos

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade that has the goal to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to a 100 kHz event rate with a very small latency, on the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avail...
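A software analogue of what such a clustering stage does can be sketched in a few lines (8-connected components over fired pixels with centroid calculation; the real FTK firmware works on streamed detector data and is not reproduced here):

```python
from collections import deque

def cluster_pixels(hits):
    """Group fired pixels into 8-connected clusters and compute each cluster's
    centroid, as a software sketch of a 2D pixel clustering stage."""
    hits = set(hits)
    clusters = []
    while hits:
        seed = hits.pop()
        comp, queue = [seed], deque([seed])
        while queue:                           # breadth-first flood fill
            x, y = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    n = (x + dx, y + dy)
                    if n in hits:
                        hits.remove(n)
                        comp.append(n)
                        queue.append(n)
        cx = sum(x for x, _ in comp) / len(comp)
        cy = sum(y for _, y in comp) / len(comp)
        clusters.append(((cx, cy), comp))
    return clusters

# Two clusters: three touching pixels near the origin, one isolated pixel
hits = [(0, 0), (0, 1), (1, 1), (5, 5)]
for (cx, cy), comp in cluster_pixels(hits):
    print((round(cx, 3), round(cy, 3)), len(comp))
```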

  3. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    Gkaitatzis, Stamatios; The ATLAS collaboration

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade that has the goal to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to a 100 kHz event rate with a very small latency, on the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avai...

  4. An information geometric approach to least squares minimization

    Science.gov (United States)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
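The damped step that the text describes geometrically can be sketched as a minimal Levenberg-Marquardt loop with a simple accept/reject schedule for the damping parameter (the authors' geodesic-motion variant is not shown; the exponential-fit example is a generic illustration):

```python
import numpy as np

def levenberg_marquardt(residual, jac, theta, lam=1e-3, iters=50):
    """Minimal Levenberg-Marquardt sketch: damped Gauss-Newton steps
    (J^T J + lam*I) delta = -J^T r, with lam adapted on success/failure."""
    for _ in range(iters):
        r, J = residual(theta), jac(theta)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(theta)), -J.T @ r)
        if np.sum(residual(theta + step) ** 2) < np.sum(r ** 2):
            theta, lam = theta + step, lam * 0.5   # accept: trust the model more
        else:
            lam *= 2.0                             # reject: damp harder
    return theta

# Fit y = a*exp(b*x) to noiseless data generated with a = 2, b = -1
x = np.linspace(0, 2, 20)
y = 2.0 * np.exp(-1.0 * x)
residual = lambda th: th[0] * np.exp(th[1] * x) - y
jac = lambda th: np.stack([np.exp(th[1] * x), th[0] * x * np.exp(th[1] * x)], axis=1)
print(levenberg_marquardt(residual, jac, np.array([1.0, 0.0])))  # converges to ~[2, -1]
```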

  5. Turbulence, raindrops and the l{sup 1/2} number density law

    Energy Technology Data Exchange (ETDEWEB)

    Lovejoy, S [Department of Physics, McGill University, 3600 University street, Montreal, Quebec, H3A 2T8 (Canada); Schertzer, D [Universite Paris-Est, ENPC/CEREVE, 77455 Marne-la-Vallee Cedex 2 (France)], E-mail: lovejoy@physics.mcgill.ca

    2008-07-15

    Using a unique data set of three-dimensional drop positions and masses (the HYDROP experiment), we show that the distribution of liquid water in rain displays a sharp transition between large scales which follow a passive scalar-like Corrsin-Obukhov (k{sup -5/3}) spectrum and a small-scale statistically homogeneous white noise regime. We argue that the transition scale l{sub c} is the critical scale where the mean Stokes number (= drop inertial time/turbulent eddy time) St{sub l} is unity. For five storms, we found l{sub c} in the range 45-75 cm with the corresponding dissipation-scale Stokes number St{sub {eta}} in the range 200-300. Since the mean interdrop distance was significantly smaller ({approx} 10 cm) than l{sub c} we infer that rain consists of 'patches' whose mean liquid water content is determined by turbulence with each patch being statistically homogeneous. For l>l{sub c}, we have St{sub l}<1 and due to the observed statistical homogeneity for l<l{sub c}, we argue that we can use Maxey's relations between drop and wind velocities at coarse grained resolution l{sub c}. From this, we derive equations for the number and mass densities (n and {rho}) and their variance fluxes ({psi} and {chi}). By showing that {chi} is dissipated at small scales (with l{sub {rho}}{sub ,diss}{approx}l{sub c}) and {psi} over a wide range, we conclude that {rho} should indeed follow Corrsin-Obukhov k{sup -5/3} spectra but that n should instead follow a k{sup -2} spectrum corresponding to fluctuations scaling as {delta}{rho}{approx}l{sup 1/3} and {delta}n{approx}l{sup 1/2}. While the Corrsin-Obukhov law has never been observed in rain before, its discovery is perhaps not surprising; in contrast the {delta}n{approx}l{sup 1/2} number density law is quite new. The key difference between the {delta}{rho}, {delta}n laws is the fact that the microphysics (coalescence, breakup) conserves drop mass, but not numbers of particles. This implies that the timescale for the transfer of the
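The two pairs of exponents quoted above are mutually consistent under the standard correspondence between a real-space fluctuation exponent H and a power-spectrum slope for scaling fields, E(k) ~ k^{-(1+2H)} (a sketch, not the paper's derivation):

```latex
\delta\rho \sim l^{H_\rho},\quad E_\rho(k)\sim k^{-(1+2H_\rho)} = k^{-5/3}
  \;\Longrightarrow\; H_\rho = \tfrac{1}{3}\ \text{(Corrsin--Obukhov)},
\qquad
\delta n \sim l^{H_n},\quad E_n(k)\sim k^{-(1+2H_n)} = k^{-2}
  \;\Longrightarrow\; H_n = \tfrac{1}{2}.
```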

  6. Improved L-BFGS diagonal preconditioners for a large-scale 4D-Var inversion system: application to CO2 flux constraints and analysis error calculation

    Science.gov (United States)

    Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng

    2013-04-01

    This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. Here we assess the performance of different preconditioners to estimate the inverse Hessian of a
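The role of the diagonal preconditioner is easiest to see in the L-BFGS two-loop recursion, where it enters as the initial inverse-Hessian approximation (a generic sketch; the paper's 4D-Var-specific preconditioners are not reproduced):

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list, h0_diag):
    """Two-loop L-BFGS recursion. The initial inverse-Hessian guess h0_diag is
    the 'diagonal preconditioner': a better h0_diag means fewer iterations."""
    q = grad.astype(float).copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((a, rho, s, y))
    r = h0_diag * q                                        # preconditioning step
    for a, rho, s, y in reversed(alphas):                  # oldest pair first
        b = rho * (y @ r)
        r += (a - b) * s
    return -r                                              # quasi-Newton descent direction

# Quadratic f(x) = 0.5*(x0^2 + 10*x1^2); with curvature pairs spanning both
# axes the recursion reproduces the exact Newton step.
g = np.array([3.0, 30.0])                                  # gradient at x = [3, 3]
s_list = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]      # past steps
y_list = [np.array([1.0, 0.0]), np.array([0.0, 10.0])]     # gradient differences
print(lbfgs_direction(g, s_list, y_list, np.ones(2)))      # [-3. -3.]
```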

  7. Efficient L1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient.

    Science.gov (United States)

    Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-09-15

    For the ill-posed fluorescent molecular tomography (FMT) inverse problem, the L1 regularization can protect the high-frequency information like edges while effectively reduce the image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with restarted strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
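A restarted nonlinear conjugate gradient iteration of the kind described can be sketched on a smoothed L1 surrogate (an illustration under stated assumptions, not the authors' FMT reconstruction code; the smoothing constant eps, the PR+ update and the restart rules are generic choices):

```python
import numpy as np

def ncg_l1(A, b, lam=0.1, eps=1e-8, iters=100, restart=10):
    """Restarted nonlinear conjugate gradient for the smoothed L1 problem
    min ||Ax - b||^2 + lam * sum(sqrt(x^2 + eps)). A sketch of the general
    technique, not the paper's exact FMT scheme."""
    f = lambda x: np.sum((A @ x - b) ** 2) + lam * np.sum(np.sqrt(x ** 2 + eps))
    g = lambda x: 2 * A.T @ (A @ x - b) + lam * x / np.sqrt(x ** 2 + eps)
    x = np.zeros(A.shape[1])
    grad = g(x)
    d = -grad
    for k in range(1, iters + 1):
        t, f0 = 1.0, f(x)
        while f(x + t * d) > f0 + 1e-4 * t * (grad @ d) and t > 1e-12:
            t *= 0.5                              # backtracking (Armijo) line search
        x = x + t * d
        new_grad = g(x)
        if k % restart == 0:
            beta = 0.0                            # periodic restart (steepest descent)
        else:
            beta = max(0.0, new_grad @ (new_grad - grad) / (grad @ grad))  # PR+
        d = -new_grad + beta * d
        if d @ new_grad >= 0:
            d = -new_grad                         # restart if not a descent direction
        grad = new_grad
    return x

# Toy problem on an identity system: the L1 term shrinks the solution toward 0
A, b = np.eye(4), np.array([1.0, 0.0, 0.0, 0.0])
print(np.round(ncg_l1(A, b), 3))  # approximately [0.95, 0, 0, 0]
```

For this separable toy problem the minimizer of (x-1)^2 + 0.1|x| is x = 0.95, which the iteration recovers.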

  8. A scattering-based over-land rainfall retrieval algorithm for South Korea using GCOM-W1/AMSR-2 data

    Science.gov (United States)

    Kwon, Young-Joo; Shin, Hayan; Ban, Hyunju; Lee, Yang-Won; Park, Kyung-Ae; Cho, Jaeil; Park, No-Wook; Hong, Sungwook

    2017-08-01

    Heavy summer rainfall is a primary natural disaster affecting lives and properties in the Korean Peninsula. This study presents a satellite-based rainfall rate retrieval algorithm for South Korea combining polarization-corrected temperature (PCT) and scattering index (SI) data from the 36.5 and 89.0 GHz channels of the Advanced Microwave Scanning Radiometer 2 (AMSR-2) onboard the Global Change Observation Mission (GCOM)-W1 satellite. The coefficients for the algorithm were obtained from spatial and temporal collocation data from the AMSR-2 and ground-based automatic weather station rain gauges from 1 July - 30 August during the years 2012-2015. There were time delays of about 25 minutes between the AMSR-2 observations and the ground rain-gauge measurements. A new linearly combined rainfall retrieval algorithm for the PCT and SI, focused on heavy rain, was validated using ground-based rainfall observations for South Korea from 1 July - 30 August, 2016. The validation showed that the presented PCT and SI methods give slightly improved results for rainfall > 5 mm h-1 compared to the current AMSR-2 level 2 data. The best bias and root mean square error (RMSE) for the PCT method at AMSR-2 36.5 GHz were 2.09 mm h-1 and 7.29 mm h-1, respectively, while the current official AMSR-2 rainfall rates show a larger bias and RMSE (4.80 mm h-1 and 9.35 mm h-1, respectively). This study provides a scattering-based over-land rainfall retrieval algorithm for South Korea, which is affected by stationary-front rain and typhoons, combining the advantages of the previous PCT and SI methods so that it can be applied to a variety of spaceborne passive microwave radiometers.
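The PCT quantity at the heart of the algorithm is conventionally a linear combination of the vertically and horizontally polarized brightness temperatures. A sketch (omega = 0.818 is the classic 85-GHz value used only as a placeholder, and the rain-rate coefficients below are made up, not the paper's fitted regression values):

```python
def pct(tb_v, tb_h, omega=0.818):
    """Polarization-corrected temperature: cancels the surface polarization
    signal so that depressions flag ice scattering aloft. omega = 0.818 is the
    classic 85-GHz value, used here only as a placeholder."""
    return (1 + omega) * tb_v - omega * tb_h

def rain_rate(tb_v, tb_h, a=30.0, b=-0.1):
    """Hypothetical linear PCT-to-rain-rate regression (a, b are made-up
    coefficients; the paper fits its own against rain-gauge data)."""
    return max(0.0, a + b * pct(tb_v, tb_h))

# Placeholder brightness temperatures in kelvin
print(round(pct(270.0, 260.0), 2))        # 278.18
print(round(rain_rate(270.0, 260.0), 3))  # 2.182
```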

  9. Design and Analysis of Optimization Algorithms to Minimize Cryptographic Processing in BGP Security Protocols.

    Science.gov (United States)

    Sriram, Vinay K; Montgomery, Doug

    2017-07-01

    The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
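The Cache Common Segments idea can be sketched as memoized per-segment verification (the expensive_verify body is a placeholder stand-in, not real BGPSEC cryptography, and the segment encoding is invented for illustration):

```python
import hashlib

class SegmentVerifier:
    """Sketch of the 'Cache Common Segments' idea: signature checks for AS-path
    segments shared by many BGPSEC updates are performed once and memoized."""
    def __init__(self):
        self.cache = {}
        self.crypto_ops = 0

    def expensive_verify(self, segment: bytes) -> bool:
        self.crypto_ops += 1                                # count real signature checks
        return hashlib.sha256(segment).digest()[0] != 0xFF  # placeholder "verification"

    def verify_update(self, segments):
        ok = True
        for seg in segments:
            if seg not in self.cache:                       # only new segments cost crypto
                self.cache[seg] = self.expensive_verify(seg)
            ok = ok and self.cache[seg]
        return ok

v = SegmentVerifier()
v.verify_update([b"AS65001->AS65002", b"AS65002->AS65003"])
v.verify_update([b"AS65001->AS65002", b"AS65002->AS65004"])  # shares one segment
print(v.crypto_ops)  # 3, not 4: the shared segment was verified only once
```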

  10. L1 track finding for a time multiplexed trigger

    Energy Technology Data Exchange (ETDEWEB)

    Cieri, D., E-mail: davide.cieri@bristol.ac.uk [University of Bristol, Bristol (United Kingdom); Rutherford Appleton Laboratory, Didcot (United Kingdom); Brooke, J.; Grimes, M. [University of Bristol, Bristol (United Kingdom); Newbold, D. [University of Bristol, Bristol (United Kingdom); Rutherford Appleton Laboratory, Didcot (United Kingdom); Harder, K.; Shepherd-Themistocleous, C.; Tomalin, I. [Rutherford Appleton Laboratory, Didcot (United Kingdom); Vichoudis, P. [CERN, Geneva (Switzerland); Reid, I. [Brunel University, London (United Kingdom); Iles, G.; Hall, G.; James, T.; Pesaresi, M.; Rose, A.; Tapper, A.; Uchida, K. [Imperial College, London (United Kingdom)

    2016-07-11

    At the HL-LHC, proton bunches will cross each other every 25 ns, producing an average of 140 pp-collisions per bunch crossing. To operate in such an environment, the CMS experiment will need a L1 hardware trigger able to identify interesting events within a latency of 12.5 μs. The future L1 trigger will also make use of data coming from the silicon tracker to control the trigger rate. The architecture that will be used in the future to process tracker data is still under discussion. One interesting proposal makes use of the Time Multiplexed Trigger concept, already implemented in the CMS calorimeter trigger for the Phase I trigger upgrade. The proposed track finding algorithm is based on the Hough Transform method. The algorithm has been tested using simulated pp-collision data. Results show a very good tracking efficiency. The algorithm will be demonstrated in hardware in the coming months using the MP7, which is a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s.
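    The Hough Transform at the core of the proposed track finder votes each hit into a discretized parameter space, so hits lying on a common track accumulate in a single bin. A toy pure-Python version for straight lines in (θ, ρ) space is shown below; the real CMS implementation works in detector-specific track coordinates on an FPGA, so this is only an illustration of the principle:

    ```python
    # Toy Hough-transform line finder: each point votes for every (theta, rho)
    # pair it could lie on; collinear points pile their votes into one bin.
    import math
    from collections import Counter

    def hough_accumulate(points, theta_step_deg=1):
        acc = Counter()
        for x, y in points:
            for t in range(0, 180, theta_step_deg):
                theta = math.radians(t)
                rho = round(x * math.cos(theta) + y * math.sin(theta))
                acc[(t, rho)] += 1
        return acc

    # Five hits on the horizontal line y = 3 (theta = 90 deg, rho = 3):
    hits = [(0, 3), (1, 3), (2, 3), (3, 3), (4, 3)]
    acc = hough_accumulate(hits)
    peak_votes = max(acc.values())
    ```

    All five hits land in the bin (90°, 3), up to the discretization of neighbouring bins; finding peaks in the accumulator is what the hardware does in parallel.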

  11. L1 Track Finding for a Time Multiplexed Trigger

    CERN Document Server

    AUTHOR|(CDS)2090481; Grimes, M.; Newbold, D.; Harder, K.; Shepherd-Themistocleous, C.; Tomalin, I.; Vichoudis, P.; Reid, I.; Iles, G.; Hall, G.; James, T.; Pesaresi, M.; Rose, A.; Tapper, A.; Uchida, K.

    2016-01-01

    At the HL-LHC, proton bunches will cross each other every 25 ns, producing an average of 140 pp-collisions per bunch crossing. To operate in such an environment, the CMS experiment will need a L1 hardware trigger able to identify interesting events within a latency of 12.5 μs. The future L1 trigger will also make use of data coming from the silicon tracker to control the trigger rate. The architecture that will be used in the future to process tracker data is still under discussion. One interesting proposal makes use of the Time Multiplexed Trigger concept, already implemented in the CMS calorimeter trigger for the Phase I trigger upgrade. The proposed track finding algorithm is based on the Hough Transform method. The algorithm has been tested using simulated pp-collision data. Results show a very good tracking efficiency. The algorithm will be demonstrated in hardware in the coming months using the MP7, which is a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s.

  12. Surpassing the Theoretical 1-Norm Phase Transition in Compressive Sensing by Tuning the Smoothed L0 Algorithm

    DEFF Research Database (Denmark)

    Oxvig, Christian Schou; Pedersen, Patrick Steffen; Arildsen, Thomas

    2013-01-01

    Reconstruction of an undersampled signal is at the root of compressive sensing: when is an algorithm capable of reconstructing the signal? What quality is achievable? And how much time does reconstruction require? We have considered the worst-case performance of the smoothed ℓ0 norm reconstruction algorithm in a noiseless setup. Through an empirical tuning of its parameters, we have improved the phase transition (capabilities) of the algorithm for fixed quality and required time. In this paper, we present simulation results that show a phase transition surpassing that of the theoretical ℓ1 approach: the proposed modified algorithm obtains the 1-norm phase transition with greatly reduced required computation time.
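    The smoothed ℓ0 approach replaces the ℓ0 count with a Gaussian surrogate whose width σ is gradually decreased, alternating gradient steps with a projection back onto the measurement constraint. A toy pure-Python run on a 1×2 underdetermined system is sketched below; the step size and σ schedule are illustrative, not the tuned values from the paper:

    ```python
    # Smoothed-l0 (SL0) toy: y = Ax with A = [2, 1], y = 2; the sparsest
    # solution is x = [1, 0], while the minimum-l2-norm start is [0.8, 0.4].
    import math

    A = [2.0, 1.0]
    y = 2.0
    AAt = sum(a * a for a in A)            # A A^T (a scalar here)

    x = [a * y / AAt for a in A]           # minimum-l2-norm starting point

    sigma = 1.0
    while sigma > 0.05:
        for _ in range(3):                 # a few gradient steps per sigma
            delta = [xi * math.exp(-xi * xi / (2 * sigma * sigma)) for xi in x]
            x = [xi - 2.0 * d for xi, d in zip(x, delta)]
            # Project back onto the constraint Ax = y.
            r = sum(ai * xi for ai, xi in zip(A, x)) - y
            x = [xi - ai * r / AAt for ai, xi in zip(A, x)]
        sigma *= 0.5
    ```

    As σ shrinks the surrogate approaches the ℓ0 count and the iterate drifts from the dense minimum-norm solution toward the sparse one [1, 0], which is the behaviour whose phase transition the paper tunes.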

  13. Comparing Hypertext Reading in L1 and L2: The Case of Filipino Adults

    Science.gov (United States)

    Gruspe, Michael Angelo M.; Marinas, Christian Joshua L.; Villasin, Marren Nicole F.; Villanueva, Ariel Josephe Therese R.; Vizconde, Camilla J.

    2015-01-01

    This research probed into the reading experiences of adult readers in their first language (L1) and second language (L2). Qualitative in nature, the investigation focused on twelve (12) adult readers, six (6) males and six (6) females, whose first language is Filipino. Data were gathered through interviews and focus-group discussions. Based on…

  14. The Surface Extraction from TIN based Search-space Minimization (SETSM) algorithm

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2017-07-01

    Digital Elevation Models (DEMs) provide critical information for a wide range of scientific, navigational and engineering activities. Submeter resolution, stereoscopic satellite imagery with high geometric and radiometric quality, and wide spatial coverage are becoming increasingly accessible for generating stereo-photogrammetric DEMs. However, low contrast and repeatedly-textured surfaces, such as snow and glacial ice at high latitudes, and mountainous terrains challenge existing stereo-photogrammetric DEM generation techniques, particularly without a priori information such as existing seed DEMs or the manual setting of terrain-specific parameters. To utilize these data for fully-automatic DEM extraction at a large scale, we developed the Surface Extraction from TIN-based Search-space Minimization (SETSM) algorithm. SETSM is fully automatic (i.e. no search parameter settings are needed) and uses only the sensor model Rational Polynomial Coefficients (RPCs). SETSM adopts a hierarchical, combined image- and object-space matching strategy utilizing weighted normalized cross-correlation with both original distorted and geometrically corrected images for overcoming ambiguities caused by foreshortening and occlusions. In addition, SETSM optimally minimizes search-spaces to extract optimal matches over problematic terrains by iteratively updating object surfaces within a Triangulated Irregular Network, and utilizes geometrically constrained blunder and outlier detection in object space. We prove the ability of SETSM to mitigate typical stereo-photogrammetric matching problems over a range of challenging terrains. SETSM is the primary DEM generation software for the US National Science Foundation's ArcticDEM project.
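    SETSM's matching cost is built on normalized cross-correlation. The scalar NCC between two flattened patches can be sketched as follows; SETSM actually uses a weighted NCC over image pyramids and both distorted and corrected images, so this shows only the basic metric:

    ```python
    # Normalized cross-correlation of two equal-length patches: +1 for patches
    # related by a positive affine intensity change, -1 for inverted contrast.
    import math

    def ncc(patch_a, patch_b):
        n = len(patch_a)
        mean_a = sum(patch_a) / n
        mean_b = sum(patch_b) / n
        da = [a - mean_a for a in patch_a]
        db = [b - mean_b for b in patch_b]
        num = sum(x * y for x, y in zip(da, db))
        den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
        return num / den if den else 0.0
    ```

    Because the means are subtracted and the result is normalized, NCC is invariant to brightness and contrast differences between the two images, which is why it is a common choice for cross-image matching.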

  15. Validation of ATLAS L1 Topological Triggers

    CERN Document Server

    Praderio, Marco

    2017-01-01

    The Topological trigger (L1Topo) is a new component of the ATLAS L1 (Level-1) trigger. Its purpose is that of reducing the otherwise too high rate of data collection from the LHC by rejecting those events considered “uninteresting” (meaning that they have already been studied). This event rate reduction is achieved by applying topological requirements to the physical objects present in each event. It is very important to make sure that this trigger does not reject any “interesting” event. Therefore we need to verify its correct functioning. The goal of this summer student project is to study the response of two L1Topo algorithms (concerning ∆R and invariant mass). To do so I will compare the trigger decisions produced by the L1Topo hardware with the ones produced by the “official” L1Topo simulation. This way I will be able to identify events that could be incorrectly rejected. Simultaneously I will produce an emulation of these triggers that will help me understand the cause of disagreements bet...

  16. An optimal algorithm for preemptive on-line scheduling

    NARCIS (Netherlands)

    Chen, B.; Vliet, van A.; Woeginger, G.J.

    1995-01-01

    We investigate the problem of on-line scheduling jobs on m identical parallel machines where preemption is allowed. The goal is to minimize the makespan. We derive an approximation algorithm with worst-case guarantee m^m/(m^m - (m - 1)^m) for every m ≥ 2, which increasingly tends to e/(e - 1) ≈ 1.58 as m → ∞.
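    The stated guarantee is easy to check numerically; the snippet below evaluates m^m / (m^m - (m - 1)^m) for several machine counts and compares it with the limit e/(e - 1):

    ```python
    # Worst-case guarantee of the preemptive on-line algorithm as a function
    # of the number of machines m; it increases toward e/(e-1) ~ 1.5820.
    import math

    def ratio(m):
        return m**m / (m**m - (m - 1)**m)

    limit = math.e / (math.e - 1)
    values = [ratio(m) for m in (2, 3, 5, 10, 100)]
    ```

    For m = 2 the bound is 4/3, and the sequence climbs monotonically toward the limit, matching the abstract's claim.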

  17. Order Batching in Warehouses by Minimizing Total Tardiness: A Hybrid Approach of Weighted Association Rule Mining and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Amir Hossein Azadnia

    2013-01-01

    Full Text Available One of the cost-intensive issues in managing warehouses is the order picking problem, which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed in order to minimize traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness, which is neglected in most of the related scientific papers. Consequently, we propose a novel solution approach to minimize tardiness which consists of four phases. First of all, weighted association rule mining has been used to calculate associations between orders with respect to their due date. Next, a batching model based on binary integer programming has been formulated to maximize the associations between orders within each batch. Subsequently, in the order picking phase, a Genetic Algorithm integrated with the Traveling Salesman Problem is used to identify the most suitable travel path. Finally, the Genetic Algorithm has been applied for sequencing the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach.
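    For intuition about the final sequencing phase, the total tardiness of a batch sequence can be computed directly, and for a tiny instance the optimal order found by brute force. The processing times and due dates below are invented; the paper uses a Genetic Algorithm precisely because enumerating all orderings does not scale:

    ```python
    # Total tardiness of a batch sequence: run batches back to back and sum
    # how late each one finishes relative to its due date.
    from itertools import permutations

    # batch name -> (processing time, due date); numbers are illustrative
    batches = {"B1": (4, 5), "B2": (2, 3), "B3": (6, 16)}

    def total_tardiness(order):
        elapsed, tardy = 0, 0
        for name in order:
            proc, due = batches[name]
            elapsed += proc
            tardy += max(0, elapsed - due)
        return tardy

    best = min(permutations(batches), key=total_tardiness)
    ```

    Here the earliest-due-date order (B2, B1, B3) is optimal with total tardiness 1, while the naive order (B1, B2, B3) incurs 3; a GA searches this permutation space when the instance is too large to enumerate.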

  18. Quantitative and Qualitative Aspects of L1 (Swedish) and L2 (English) Idiom Comprehension

    Science.gov (United States)

    Karlsson, Monica

    2013-01-01

    In the present investigation, 15 first term university students were faced with 80 context-based idioms in English (L2) and Swedish (L1) respectively, 30 of which were in the source domain of animals, commonly used in both languages, and asked to explain their meaning. The idioms were of varying frequency and transparency. Three main research…

  19. Controller synthesis for L2 behaviors using rational kernel representations

    NARCIS (Netherlands)

    Mutsaers, M.E.C.; Weiland, S.

    2008-01-01

    This paper considers the controller synthesis problem for the class of linear time-invariant L2 behaviors. We introduce classes of LTI L2 systems whose behavior can be represented as the kernel of a rational operator. Given a plant and a controlled system in this class, an algorithm is developed

  20. Synthesis, structure, luminescent, and magnetic properties of carbonato-bridged Zn(II)2Ln(III)2 complexes [(μ4-CO3)2{Zn(II)L(n)Ln(III)(NO3)}2] (Ln(III) = Gd(III), Tb(III), Dy(III); L(1) = N,N'-bis(3-methoxy-2-oxybenzylidene)-1,3-propanediaminato, L(2) = N,N'-bis(3-ethoxy-2-oxybenzylidene)-1,3-propanediaminato).

    Science.gov (United States)

    Ehama, Kiyomi; Ohmichi, Yusuke; Sakamoto, Soichiro; Fujinami, Takeshi; Matsumoto, Naohide; Mochida, Naotaka; Ishida, Takayuki; Sunatsuki, Yukinari; Tsuchimoto, Masanobu; Re, Nazzareno

    2013-11-04

    Carbonato-bridged Zn(II)2Ln(III)2 complexes [(μ4-CO3)2{Zn(II)L(n)Ln(III)(NO3)}2]·solvent were synthesized through atmospheric CO2 fixation reaction of [Zn(II)L(n)(H2O)2]·xH2O, Ln(III)(NO3)3·6H2O, and triethylamine, where Ln(III) = Gd(III), Tb(III), Dy(III); L(1) = N,N'-bis(3-methoxy-2-oxybenzylidene)-1,3-propanediaminato, L(2) = N,N'-bis(3-ethoxy-2-oxybenzylidene)-1,3-propanediaminato. Each Zn(II)2Ln(III)2 structure possessing an inversion center can be described as two di-μ-phenoxo-bridged {Zn(II)L(n)Ln(III)(NO3)} binuclear units bridged by two carbonato CO3(2-) ions. The Zn(II) ion has square pyramidal coordination geometry with N2O2 donor atoms of L(n) and one oxygen atom of a bridging carbonato ion at the axial site. Ln(III) ion is coordinated by nine oxygen atoms consisting of four from the deprotonated Schiff-base L(n), two from a chelating nitrate, and three from two carbonate groups. The temperature-dependent magnetic susceptibilities in the range 1.9-300 K, field-dependent magnetization from 0 to 5 T at 1.9 K, and alternating current magnetic susceptibilities under the direct current bias fields of 0 and 1000 Oe were measured. The magnetic properties of the Zn(II)2Ln(III)2 complexes are analyzed on the basis of the dicarbonato-bridged binuclear Ln(III)-Ln(III) structure, as the Zn(II) ion with d(10) electronic configuration is diamagnetic. ZnGd1 (L(1)) and ZnGd2 (L(2)) show a ferromagnetic Gd(III)-Gd(III) interaction with J(Gd-Gd) = +0.042 and +0.028 cm(-1), respectively, on the basis of the Hamiltonian H = -2J(Gd-Gd)ŜGd1·ŜGd2. The magnetic data of the Zn(II)2Ln(III)2 complexes (Ln(III) = Tb(III), Dy(III)) were analyzed by a spin Hamiltonian including the crystal field effect on the Ln(III) ions and the Ln(III)-Ln(III) magnetic interaction. The Stark splitting of the ground state was so evaluated, and the energy pattern indicates a strong easy axis (Ising type) anisotropy. Luminescence spectra of Zn(II)2Tb(III)2 complexes were observed, while those

  1. Parity violations in electron-nucleon scattering and the SU(2)sub(L)xSU(2)sub(R)xU(1)sub(L+R) electroweak symmetry

    International Nuclear Information System (INIS)

    Rajpoot, S.

    1981-07-01

    The SU(2)sub(L) x SU(2)sub(R) x U(1)sub(L+R) model of electroweak interactions is described with the most general gauge couplings gsub(L), gsub(R) and gsub(L+R). The case in which neutrino neutral current interactions are identical to the standard SU(2)sub(L) x U(1)sub(L+R) model is discussed in detail. It is shown that with the weak angle lying in the experimental range sin 2 thetaSUB(w) = 0.23 ± 0.015 and 1 < gsub(L) 2 /gsub(R) 2 < 3 it is possible to explain the amount of parity violation observed at SLAC and at the same time predict values of the ''weak charge'' in bismuth to lie in the range admitted by the controversial data from different experiments. (author)

  2. Path covering number and L(2,1)-labeling number of graphs

    OpenAIRE

    Lu, Changhong; Zhou, Qing

    2012-01-01

    A {\\it path covering} of a graph $G$ is a set of vertex disjoint paths of $G$ containing all the vertices of $G$. The {\\it path covering number} of $G$, denoted by $P(G)$, is the minimum number of paths in a path covering of $G$. An {\\sl $k$-L(2,1)-labeling} of a graph $G$ is a mapping $f$ from $V(G)$ to the set ${0,1,...,k}$ such that $|f(u)-f(v)|\\ge 2$ if $d_G(u,v)=1$ and $|f(u)-f(v)|\\ge 1$ if $d_G(u,v)=2$. The {\\sl L(2,1)-labeling number $\\lambda (G)$} of $G$ is the smallest number $k$ suc...

  3. An advanced algorithm for deformation estimation in non-urban areas

    Science.gov (United States)

    Goel, Kanika; Adam, Nico

    2012-09-01

    This paper presents an advanced differential SAR interferometry stacking algorithm for high resolution deformation monitoring in non-urban areas with a focus on distributed scatterers (DSs). Techniques such as the Small Baseline Subset Algorithm (SBAS) have been proposed for processing DSs. SBAS makes use of small baseline differential interferogram subsets. Singular value decomposition (SVD), i.e. L2 norm minimization, is applied to link independent subsets separated by large baselines. However, the interferograms used in SBAS are multilooked using a rectangular window to reduce phase noise caused for instance by temporal decorrelation, resulting in a loss of resolution and the superposition of topography and deformation signals from different objects. Moreover, these have to be individually phase unwrapped and this can be especially difficult in natural terrains. An improved deformation estimation technique is presented here which exploits high resolution SAR data and is suitable for rural areas. The implemented method makes use of small baseline differential interferograms and incorporates an object adaptive spatial phase filtering and residual topography removal for an accurate phase and coherence estimation, while preserving the high resolution provided by modern satellites. This is followed by retrieval of deformation via the SBAS approach, wherein the phase inversion is performed using an L1 norm minimization which is more robust to the typical phase unwrapping errors encountered in non-urban areas. Meter-resolution TerraSAR-X data of an underground gas storage reservoir in Germany is used for demonstrating the effectiveness of this newly developed technique in rural areas.
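    The robustness argument for the L1-norm inversion can be illustrated with iteratively reweighted least squares (IRLS) on a toy one-parameter model b = a·x, where a single outlier (standing in for a phase unwrapping error) drags the L2 estimate but barely moves the L1 one. The actual SBAS inversion solves a multi-parameter linear system; this is only a sketch of why the L1 norm helps:

    ```python
    # L2 vs (IRLS-approximated) L1 estimation of x in b ~ a*x with one outlier.
    a = [1.0, 1.0, 1.0, 1.0, 1.0]
    b = [2.0, 2.1, 1.9, 2.0, 8.0]      # last observation is an outlier

    # L2 (ordinary least squares) estimate:
    x_l2 = sum(ai * bi for ai, bi in zip(a, b)) / sum(ai * ai for ai in a)

    # IRLS approximation of the L1 estimate: reweight by 1/|residual|.
    x = x_l2
    for _ in range(50):
        w = [1.0 / max(abs(bi - ai * x), 1e-8) for ai, bi in zip(a, b)]
        num = sum(wi * ai * bi for wi, ai, bi in zip(w, a, b))
        den = sum(wi * ai * ai for wi, ai in zip(w, a))
        x = num / den
    x_l1 = x
    ```

    The L2 estimate is pulled to 3.2 by the outlier, while the IRLS/L1 estimate settles near the median-like value 2.0, mirroring why an L1-norm phase inversion tolerates isolated unwrapping errors better than SVD.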

  4. Tydskrif vir letterkunde - Vol 52, No 2 (2015)

    African Journals Online (AJOL)

    Negotiating growth in turbulent scapes: Violence, secrecy and growth in Goretti Kyomuhendo's Secrets No More. O Okuyade, 117-137. http://dx.doi.org/10.4314/tvl.v52i2.9 ...

  5. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    International Nuclear Information System (INIS)

    Laurence, T.; Chromy, B.

    2010-01-01

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm; it is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, as it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE
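    The core idea, replacing the least-squares measure inside a Levenberg-Marquardt-style damped Newton step with the Poisson negative log-likelihood, can be sketched for a one-parameter model. The decay curve and count values below are invented for illustration; the paper's algorithm handles full multi-parameter lifetime fits:

    ```python
    # Fit the amplitude of a fixed-lifetime decay to Poisson counts by
    # minimizing the Poisson negative log-likelihood with an L-M-style
    # damped Newton step (accept step -> relax damping, reject -> damp more).
    import math

    t = [0.0, 1.0, 2.0, 3.0]           # bin times (illustrative)
    tau = 2.0                           # fixed lifetime
    counts = [100, 61, 37, 22]          # observed Poisson counts
    decay = [math.exp(-ti / tau) for ti in t]

    def neg_log_like(amp):
        # Poisson NLL up to constants: sum(model - k * ln(model)).
        return sum(amp * d - k * math.log(amp * d) for d, k in zip(decay, counts))

    amp, damping = 50.0, 1e-3
    for _ in range(100):
        grad = sum(decay) - sum(counts) / amp
        hess = sum(counts) / (amp * amp)
        step = -grad / (hess * (1.0 + damping))
        if neg_log_like(amp + step) < neg_log_like(amp):
            amp += step                 # accept: relax damping
            damping *= 0.5
        else:
            damping *= 10.0             # reject: damp more strongly
    ```

    For this one-parameter model the Poisson MLE has the closed form sum(counts)/sum(decay), so the iteration can be checked against it; in the multi-parameter case only the iterative scheme is available.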

  6. BALL - biochemical algorithms library 1.3

    Directory of Open Access Journals (Sweden)

    Stöckel Daniel

    2010-10-01

    Full Text Available Abstract Background The Biochemical Algorithms Library (BALL) is a comprehensive rapid application development framework for structural bioinformatics. It provides an extensive C++ class library of data structures and algorithms for molecular modeling and structural bioinformatics. Using BALL as a programming toolbox not only greatly reduces application development times but also helps in ensuring stability and correctness by avoiding the error-prone reimplementation of complex algorithms, replacing them with calls into a library that has been well-tested by a large number of developers. In the ten years since its original publication, BALL has seen a substantial increase in functionality and numerous other improvements. Results Here, we discuss BALL's current functionality and highlight the key additions and improvements: support for additional file formats, molecular edit-functionality, new molecular mechanics force fields, novel energy minimization techniques, docking algorithms, and support for cheminformatics. Conclusions BALL is available for all major operating systems, including Linux, Windows, and MacOS X. It is available free of charge under the Lesser GNU Public License (LGPL). Parts of the code are distributed under the GNU Public License (GPL). BALL is available as source code and binary packages from the project web site at http://www.ball-project.org. Recently, it has been accepted into the Debian project; integration into further distributions is currently pursued.

  7. Production of infectious chimeric hepatitis C virus genotype 2b harboring minimal regions of JFH-1.

    Science.gov (United States)

    Murayama, Asako; Kato, Takanobu; Akazawa, Daisuke; Sugiyama, Nao; Date, Tomoko; Masaki, Takahiro; Nakamoto, Shingo; Tanaka, Yasuhito; Mizokami, Masashi; Yokosuka, Osamu; Nomoto, Akio; Wakita, Takaji

    2012-02-01

    To establish a cell culture system for chimeric hepatitis C virus (HCV) genotype 2b, we prepared a chimeric construct harboring the 5' untranslated region (UTR) to the E2 region of the MA strain (genotype 2b) and the region of p7 to the 3' UTR of the JFH-1 strain (genotype 2a). This chimeric RNA (MA/JFH-1.1) replicated and produced infectious virus in Huh7.5.1 cells. Replacement of the 5' UTR of this chimera with that from JFH-1 (MA/JFH-1.2) enhanced virus production, but infectivity remained low. In a long-term follow-up study, we identified a cell culture-adaptive mutation in the core region (R167G) and found that it enhanced virus assembly. We previously reported that the NS3 helicase (N3H) and the region of NS5B to 3' X (N5BX) of JFH-1 enabled replication of the J6CF strain (genotype 2a), which could not replicate in cells. To reduce JFH-1 content in MA/JFH-1.2, we produced a chimeric viral genome for MA harboring the N3H and N5BX regions of JFH-1, combined with a JFH-1 5' UTR replacement and the R167G mutation (MA/N3H+N5BX-JFH1/R167G). This chimeric RNA replicated efficiently, but virus production was low. After the introduction of four additional cell culture-adaptive mutations, MA/N3H+N5BX-JFH1/5am produced infectious virus efficiently. Using this chimeric virus harboring minimal regions of JFH-1, we analyzed interferon sensitivity and found that this chimeric virus was more sensitive to interferon than JFH-1 and another chimeric virus containing more regions from JFH-1 (MA/JFH-1.2/R167G). In conclusion, we established an HCV genotype 2b cell culture system using a chimeric genome harboring minimal regions of JFH-1. This cell culture system may be useful for characterizing genotype 2b viruses and developing antiviral strategies.

  8. Evolution of online algorithms in ATLAS and CMS in Run2

    CERN Document Server

    Tomei, Thiago R. F. P.

    2017-01-01

    The Large Hadron Collider has entered a new era in Run 2, with centre-of-mass energy of 13 TeV and instantaneous luminosity reaching L_inst = 1.4×10^34 cm^-2 s^-1 for pp collisions. In order to cope with those harsher conditions, the ATLAS and CMS collaborations have improved their online selection infrastructure to keep a high efficiency for important physics processes - like W, Z and Higgs bosons in their leptonic and diphoton modes - whilst keeping the size of the data stream compatible with the bandwidth and disk resources available. In this note, we describe some of the trigger improvements implemented for Run 2, including algorithms for the selection of electrons, photons, muons and hadronic final states.

  9. Image segmentation algorithm based on T-junctions cues

    Science.gov (United States)

    Qian, Yanyu; Cao, Fengyun; Wang, Lu; Yang, Xuejie

    2016-03-01

    To improve on the over-segmentation and over-merging phenomena of single image segmentation algorithms, a novel approach combining a graph-based algorithm with T-junction cues is proposed in this paper. First, smoothing by L0 gradient minimization is applied to the target image to eliminate artifacts caused by noise and texture detail. Then, an initial over-segmentation of the smoothed image is produced using the graph-based algorithm. Finally, the final result is obtained via a region fusion strategy guided by T-junction cues. Experimental results on a variety of images verify the new approach's efficiency in eliminating artifacts caused by noise; segmentation accuracy and time complexity have been significantly improved.
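    The graph-based initial segmentation step can be caricatured as a union-find merge over edges sorted by weight. Real graph-based segmenters such as Felzenszwalb-Huttenlocher use an adaptive merging criterion rather than the fixed threshold used in this simplification:

    ```python
    # Bare-bones graph-based region merging: union pixels whose connecting
    # edge weight (intensity difference) falls below a threshold.
    def segment(n_nodes, edges, threshold):
        parent = list(range(n_nodes))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path compression
                i = parent[i]
            return i

        for w, u, v in sorted(edges):
            if w <= threshold:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
        return [find(i) for i in range(n_nodes)]

    # 1-D "image" [10, 11, 12, 90, 91]: the strong edge between pixels 2 and 3
    # should split it into two regions.
    vals = [10, 11, 12, 90, 91]
    edges = [(abs(vals[i] - vals[i + 1]), i, i + 1) for i in range(len(vals) - 1)]
    labels = segment(len(vals), edges, threshold=5)
    ```

    Over-segmentation in this caricature corresponds to a threshold set too low; the paper's region fusion phase then merges such fragments using T-junction cues.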

  10. Cyclophilins facilitate dissociation of the human papillomavirus type 16 capsid protein L1 from the L2/DNA complex following virus entry.

    Science.gov (United States)

    Bienkowska-Haba, Malgorzata; Williams, Carlyn; Kim, Seong Man; Garcea, Robert L; Sapp, Martin

    2012-09-01

    Human papillomaviruses (HPV) are composed of the major and minor capsid proteins, L1 and L2, that encapsidate a chromatinized, circular double-stranded DNA genome. At the outset of infection, the interaction of HPV type 16 (HPV16) (pseudo)virions with heparan sulfate proteoglycans triggers a conformational change in L2 that is facilitated by the host cell chaperone cyclophilin B (CyPB). This conformational change results in exposure of the L2 N terminus, which is required for infectious internalization. Following internalization, L2 facilitates egress of the viral genome from acidified endosomes, and the L2/DNA complex accumulates at PML nuclear bodies. We recently described a mutant virus that bypasses the requirement for cell surface CyPB but remains sensitive to cyclosporine for infection, indicating an additional role for CyP following endocytic uptake of virions. We now report that the L1 protein dissociates from the L2/DNA complex following infectious internalization. Inhibition and small interfering RNA (siRNA)-mediated knockdown of CyPs blocked dissociation of L1 from the L2/DNA complex. In vitro, purified CyPs facilitated the dissociation of L1 pentamers from recombinant HPV11 L1/L2 complexes in a pH-dependent manner. Furthermore, CyPs released L1 capsomeres from partially disassembled HPV16 pseudovirions at slightly acidic pH. Taken together, these data suggest that CyPs mediate the dissociation of HPV L1 and L2 capsid proteins following acidification of endocytic vesicles.

  11. Massive Corrections to Entanglement in Minimal E8 Toda Field Theory

    Directory of Open Access Journals (Sweden)

    Olalla A. Castro-Alvaredo

    2017-02-01

    Full Text Available In this letter we study the exponentially decaying corrections to saturation of the second Rényi entropy of one interval of length L in minimal E8 Toda field theory. It has been known for some time that the entanglement entropy of a massive quantum field theory in 1+1 dimensions saturates to a constant value for m1 L >> 1, where m1 is the mass of the lightest particle in the spectrum. Subsequently, results by Cardy, Castro-Alvaredo and Doyon have shown that there are exponentially decaying corrections to this behaviour which are characterised by Bessel functions with arguments proportional to m1 L. For the von Neumann entropy the leading correction to saturation takes the precise universal form -K0(2 m1 L)/8, whereas for the Rényi entropies leading corrections proportional to K0(m1 L) are expected. Recent numerical work by Pálmai for the second Rényi entropy of minimal E8 Toda has found next-to-leading order corrections decaying as exp(-2 m1 L) rather than the expected exp(-m1 L). In this paper we investigate the origin of this result and show that it is incorrect. An exact form factor computation of correlators of branch point twist fields reveals that the leading corrections are proportional to K0(m1 L) as expected.

  12. Analyzing the Pattern of L1 Sounds on L2 Sounds Produced by Javanese Students of Stkip PGRI Jombang

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds by Javanese students. Focusing on the consonant sounds, the data were collected by recording students' pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The result showed that the patterning of L1 sounds happened on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. Those patterning cases emerged mostly due to differences in the existence of consonant sounds and rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2

  13. ANALYZING THE PATTERN OF L1 SOUNDS ON L2 SOUNDS PRODUCED BY JAVANESE STUDENTS OF STKIP PGRI JOMBANG

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds by Javanese students. Focusing on the consonant sounds, the data were collected by recording students' pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The result showed that the patterning of L1 sounds happened on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. Those patterning cases emerged mostly due to differences in the existence of consonant sounds and rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  14. Verbal Inflectional Morphology in L1 and L2 Spanish: A Frequency Effects Study Examining Storage versus Composition.

    Science.gov (United States)

    Bowden, Harriet Wood; Gelfand, Matthew P; Sanz, Cristina; Ullman, Michael T

    2010-02-17

    This study examines the storage vs. composition of Spanish inflected verbal forms in L1 and L2 speakers of Spanish. L2 participants were selected to have mid-to-advanced proficiency, high classroom experience, and low immersion experience, typical of medium-to-advanced foreign language learners. Participants were shown the infinitival forms of verbs from either Class I (the default class, which takes new verbs) or Classes II and III (non-default classes), and were asked to produce either first-person singular present-tense or imperfect forms, in separate tasks. In the present tense, the L1 speakers showed inflected-form frequency effects (i.e., higher frequency forms were produced faster, which is taken as a reflection of storage) for stem-changing (irregular) verb-forms from both Class I (e.g., pensar-pienso) and Classes II and III (e.g., perder-pierdo), as well as for non-stem-changing (regular) forms in Classes II/III (e.g., vender-vendo), in which the regular transformation does not appear to constitute a default. In contrast, Class I regulars (e.g., pescar-pesco), whose non-stem-changing transformation constitutes a default (e.g., it is applied to new verbs), showed no frequency effects. L2 speakers showed frequency effects for all four conditions (Classes I and II/III, regulars and irregulars). In the imperfect tense, the L1 speakers showed frequency effects for Class II/III (-ía-suffixed) but not Class I (-aba-suffixed) forms, even though both involve non-stem-change (regular) default transformations. The L2 speakers showed frequency effects for both types of forms. The pattern of results was not explained by a wide range of potentially confounding experimental and statistical factors, and does not appear to be compatible with single-mechanism models, which argue that all linguistic forms are learned and processed in associative memory. The findings are consistent with a dual-system view in which both verb class and regularity influence the storage vs

  15. hnRNP L regulates differences in expression of mouse integrin alpha2beta1.

    Science.gov (United States)

    Cheli, Yann; Kunicki, Thomas J

    2006-06-01

    There is a 2-fold variation in platelet integrin alpha2beta1 levels among inbred mouse strains. Decreased alpha2beta1 in 4 strains carrying Itga2 haplotype 2 results from decreased affinity of heterogeneous nuclear ribonucleoprotein L (hnRNP L) for a 6 CA repeat sequence (CA6) within intron 1. Seven strains bearing haplotype 1 and a 21 CA repeat sequence at this position (CA21) express twice the level of platelet alpha2beta1 and exhibit an equivalent gain of platelet function in vitro. By UV crosslinking and immunoprecipitation, hnRNP L binds more avidly to CA21, relative to CA6. By cell-free, in vitro mRNA splicing, decreased binding of hnRNP L results in decreased splicing efficiency and an increased proportion of alternatively spliced product. The splicing enhancer activity of CA21 in vivo is abolished by prior treatment with hnRNP L-specific siRNA. Thus, decreased surface alpha2beta1 results from decreased Itga2 pre-mRNA splicing regulated by hnRNP L and depends on CA repeat length at a specific site in intron 1.

  16. 40 CFR 721.4575 - L-aspartic acid, N,N′- [(1E) - 1,2 - ethenediylbis[(3-sulfo-4, 1-phenylene)imino [6-(phenylamino...

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false L-aspartic acid, N,N′- [(1E) - 1,2... Substances § 721.4575 L-aspartic acid, N,N′- [(1E) - 1,2 - ethenediylbis[(3-sulfo-4, 1-phenylene)imino [6... uses subject to reporting. (1) The chemical substance identified as l-aspartic acid, N,N′- [(1E) - 1,2...

  17. The Role of L1 Conceptual and Linguistic Knowledge and Frequency in the Acquisition of L2 Metaphorical Expressions

    Science.gov (United States)

    Türker, Ebru

    2016-01-01

    This study investigates how figurative language is processed by learners of a second language (L2). With an experiment testing L2 comprehension of figurative expressions in three categories, each combining shared and unshared first language (L1) and L2 lexical representations and conceptual representations in a different way, the study…

  18. ProxImaL: efficient image optimization using proximal algorithms

    KAUST Repository

    Heide, Felix

    2016-07-11

    Computational photography systems are becoming increasingly diverse, while computational resources (for example, on mobile platforms) are rapidly increasing. As diverse as these camera systems may be, slightly different variants of the underlying image processing tasks, such as demosaicking, deconvolution, denoising, inpainting, image fusion, and alignment, are shared between all of these systems. Formal optimization methods have recently been demonstrated to achieve state-of-the-art quality for many of these applications. Unfortunately, different combinations of natural image priors and optimization algorithms may be optimal for different problems, and implementing and testing each combination is currently a time-consuming and error-prone process. ProxImaL is a domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety of linear and nonlinear image formation models and cost functions, advanced image priors, and noise models. The compiler intelligently chooses the best way to translate a problem formulation and choice of optimization algorithm into an efficient solver implementation. In applications to the image processing pipeline, deconvolution in the presence of Poisson-distributed shot noise, and burst denoising, we show that a few lines of ProxImaL code can generate highly efficient solvers that achieve state-of-the-art results. We also show applications to the nonlinear and nonconvex problem of phase retrieval.
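
    The building blocks ProxImaL composes are proximal operators. As a minimal, self-contained illustration (plain NumPy, not ProxImaL code; the function names are ours), two operators with simple closed forms are the ℓ1 prox (soft thresholding) and the prox of a quadratic data term:

```python
import numpy as np

# prox_{t*f}(v) = argmin_x f(x) + ||x - v||^2 / (2t)

def prox_l1(v, t):
    """Soft thresholding: proximal operator of f(x) = ||x||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sumsq(v, t, b):
    """Closed-form prox of f(x) = 0.5 * ||x - b||^2 (quadratic data term)."""
    return (v + t * b) / (1.0 + t)
```

    A proximal solver alternates such operators; the compiler's job in ProxImaL is to pick and fuse them automatically.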

  19. Tydskrif vir letterkunde - Vol 49, No 2 (2012)

    African Journals Online (AJOL)

    Hugo Claus as a plural: a plea for a scholarly documentary variant edition of Claus's collected poetry (original title in Dutch). Y T'Sjoen, 150-159. http://dx.doi.org/10.4314/tvl.v49i2.11 ...

  20. New Developments in the SCIAMACHY L2 Ground Processor

    Science.gov (United States)

    Gretschany, Sergei; Lichtenberg, Günter; Meringer, Markus; Theys, Nicolas; Lerot, Christophe; Liebing, Patricia; Noel, Stefan; Dehn, Angelika; Fehr, Thorsten

    2016-04-01

    SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric ChartographY) aboard ESA's environmental satellite ENVISAT observed the Earth's atmosphere in limb, nadir, and solar/lunar occultation geometries covering the UV-Visible to NIR spectral range. It is a joint project of Germany, the Netherlands and Belgium and was launched in February 2002. SCIAMACHY doubled its originally planned in-orbit lifetime of five years before the communication to ENVISAT was severed in April 2012, and the mission entered its post-operational phase. In order to preserve the best quality of the outstanding data recorded by SCIAMACHY, data processors are still being updated. This presentation will highlight three new developments that are currently being incorporated into the forthcoming Version 7 of ESA's operational Level 2 processor: 1. Tropospheric BrO, a new retrieval based on the scientific algorithm of Theys et al. (2011). This algorithm was originally developed for the GOME-2 sensor and later adapted for SCIAMACHY. The main principle of the new algorithm is to utilize BrO total columns (already an operational product) and split them into stratospheric (VCDstrat) and tropospheric (VCDtrop) fractions. BrO VCDstrat is determined from a climatological approach, driven by SCIAMACHY O3 and NO2 observations. VCDtrop is then determined simply as a difference: VCDtrop = VCDtotal - VCDstrat. 2. Improved cloud flagging using limb measurements (Liebing, 2015). Limb cloud flags are already part of the SCIAMACHY L2 product. They are currently calculated employing the scientific algorithm developed by Eichmann et al. (2015). Clouds are categorized into four types: water, ice, polar stratospheric and noctilucent clouds. High atmospheric aerosol loadings, however, often lead to spurious cloud flags when aerosols are misidentified as clouds. The new algorithm will better discriminate between aerosols and clouds. It will also have a higher sensitivity with respect to thin clouds. 3. A new

  1. Minimal Left-Right Symmetric Dark Matter.

    Science.gov (United States)

    Heeck, Julian; Patra, Sudhanwa

    2015-09-18

    We show that left-right symmetric models can easily accommodate stable TeV-scale dark matter particles without the need for an ad hoc stabilizing symmetry. The stability of a newly introduced multiplet either arises accidentally as in the minimal dark matter framework or comes courtesy of the remaining unbroken Z_{2} subgroup of B-L. Only one new parameter is introduced: the mass of the new multiplet. As minimal examples, we study left-right fermion triplets and quintuplets and show that they can form viable two-component dark matter. This approach is, in particular, valid for SU(2)×SU(2)×U(1) models that explain the recent diboson excess at ATLAS in terms of a new charged gauge boson of mass 2 TeV.

  2. Minimal lethal concentration of hygromycin B in the callus induction and shoot multiplication process of Digitalis purpurea L.

    Directory of Open Access Journals (Sweden)

    Elizabeth Kairúz Hernández-Díaz

    2013-01-01

    The plants of the genus Digitalis are characterized by the production of cardenolides, drugs widely used worldwide in the treatment of heart failure. In previous research, a transformation protocol was developed from leaf discs of Digitalis purpurea L., using geneticin as the selection marker. However, some escapes occurred during the selection process, so it is necessary to develop a more efficient selection scheme using another selective agent. Therefore, the aim of the present research was to determine the minimal lethal concentration of hygromycin B during callus induction and shoot multiplication of D. purpurea. For callus induction, five concentrations of hygromycin B (3, 6, 9, 12, 15 mg l-1) were studied for 28 days. In addition, the effect of four concentrations of hygromycin B (25, 50, 75, 100 mg l-1) on shoot multiplication was studied for 30 days. The minimal lethal concentration for callus formation was 12 mg l-1. For shoot multiplication, 100% mortality was observed at 75 mg l-1 after 30 days. The proposed selection scheme is recommended for future work on genetic transformation in this species. Keywords: cardenolides, genetic transformation, hpt, selection

  3. L2TTMON Monitoring Program for L2 Topological Trigger in H1 Experiment - User's Guide

    International Nuclear Information System (INIS)

    Banas, E.; Ducorps, A.

    1999-01-01

    Monitoring software for the L2 Topological Trigger in the H1 experiment consists of two parts running on two different computers. The hardware read-out and data processing are done on a fast FIC 8234 computer running the OS9 real-time operating system. A Macintosh Quadra is used as a graphical user interface for accessing the OS9 trigger monitoring software. The communication between the two computers is based on a parallel connection between the Macintosh and the VME crate where the FIC computer is placed. A specially designed client-server protocol is used for communication between the two nodes. The general scheme of monitoring for the L2 Topological Trigger and a detailed description of the use of the monitoring software on both nodes are given in this guide. (author)

  4. Search for standard and non-minimal Higgs boson at LEP2 with the L3 detector

    International Nuclear Information System (INIS)

    Teyssier, Daniel

    2002-01-01

    This thesis, carried out within the Higgs working group of the L3 collaboration, concerns the search for a Higgs signature at center-of-mass energies between 192 and 209 GeV, one of the main goals of LEP2. It contributes to the analyses looking for the Standard Model Higgs boson, especially in the so-called 'two jets plus missing energy' channel. The final state of this channel, denoted Hνν̄, is characterized by the production of a pair of b quarks from the decay of the Higgs particle, and a neutrino pair from that of the Z particle, for the dominant Higgs-strahlung process. The lower observed mass limit, obtained with the Hνν̄ channel alone, is 96 GeV at the 95% confidence level. In addition, searches for neutral scalar particle production are presented in the context of general two-Higgs-doublet models of type II, by means of a so-called 'flavor independent' analysis. Searches for 'invisible' Higgs bosons are presented as well, with the Z boson decaying into a pair of fermions while the Higgs boson decays into undetectable particles. These results constrain the parameters of minimal non-universal supersymmetric models (without gaugino mass unification). (author)

  5. Philips LTC 2009/51

    CERN Multimedia

    1999-01-01

    It was part of a range of high-performance monitors (computer screens) that were associated with other units such as Philips video surveillance systems, cameras or transmission and control equipment. Included in this range of Philips monitors are the LTC 2009 (like this one), LTC 2012, LTC 2017 and LTC 2020 Series monochrome monitors. They offer high-performance images with a resolution of 900 TVL (TV lines), or 700 TVL in the case of the LTC 2017 monitor, making them ideal for remote viewing and video applications. The monitor housing consists of a robust rectangular metal case which minimizes interference from external signals and allows “stacking” of monitors when used in large numbers.

  6. Orthogonal Multiwavelet Frames in L2(Rd)

    Directory of Open Access Journals (Sweden)

    Liu Zhanwei

    2012-01-01

    We characterize orthogonal frames and orthogonal multiwavelet frames in L2(Rd) with matrix dilations of the form (Df)(x) = |det A|^(1/2) f(Ax), where A is an arbitrary expanding d×d matrix with integer coefficients. First, from two arbitrary multiwavelet frames, we give a simple construction of a pair of orthogonal multiwavelet frames. Then, by using the unitary extension principle, we present an algorithm for the construction of arbitrarily many orthogonal multiwavelet tight frames. Finally, we give a general construction algorithm for orthogonal multiwavelet tight frames from a scaling function.

  7. L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-01-01

    The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…

  8. The Effects of Giving and Receiving Marginal L1 Glosses on L2 Vocabulary Learning by Upper Secondary Learners

    Science.gov (United States)

    Samian, Hosein Vafadar; Foo, Thomas Chow Voon; Mohebbi, Hassan

    2016-01-01

    This paper reports the findings of a study that investigated the effect of giving and receiving marginal L1 glosses on L2 vocabulary learning. To that end, forty-nine Iranian learners of English were assigned to three different experimental conditions including marginal L1 glosses Giver (n = 17), marginal L1 glosses Receiver (n = 17), and no…

  9. emMAW: computing minimal absent words in external memory.

    Science.gov (United States)

    Héliou, Alice; Pissis, Solon P; Puglisi, Simon J

    2017-09-01

    The biological significance of minimal absent words has been investigated in genomes of organisms from all domains of life. For instance, three minimal absent words of the human genome were found in Ebola virus genomes. There exists an O(n)-time and O(n)-space algorithm for computing all minimal absent words of a sequence of length n over a fixed-size alphabet, based on suffix arrays. A standard implementation of this algorithm, when applied to a large sequence of length n, requires more than 20n bytes of RAM. Such memory requirements are a significant hurdle to the computation of minimal absent words in large datasets. We present emMAW, the first external-memory algorithm for computing minimal absent words. A free open-source implementation of our algorithm is made available at https://github.com/solonas13/maw (free software under the terms of the GNU GPL). This allows for computation of minimal absent words on far bigger data sets than was previously possible. Our implementation requires less than 3 h on a standard workstation to process the full human genome when as little as 1 GB of RAM is made available. We stress that our implementation, despite making use of external memory, is fast; indeed, even on relatively smaller datasets when enough RAM is available to hold all necessary data structures, it is less than two times slower than state-of-the-art internal-memory implementations. Supplementary data are available at Bioinformatics online.
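
    For small inputs, the underlying definition (a word is a minimal absent word if it does not occur in the sequence but its longest proper prefix and longest proper suffix both do) can be checked with a naive sketch. The paper's algorithm is suffix-array based and vastly more efficient; this illustration is ours:

```python
def minimal_absent_words(s, alphabet=None):
    """Naive enumeration of minimal absent words of s (illustrative only)."""
    if alphabet is None:
        alphabet = sorted(set(s))
    # all factors (substrings) of s, plus the empty word
    factors = {s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)}
    factors.add("")
    maws = set()
    for u in factors:            # u = longest proper prefix of a candidate
        for a in alphabet:
            w = u + a
            # w is a MAW iff w is absent while both u and w[1:] occur
            if w not in factors and w[1:] in factors:
                maws.add(w)
    return sorted(maws)
```

    For example, minimal_absent_words("abaab") returns ["aaa", "aaba", "bab", "bb"].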

  10. Informational Packaging, Level of Formality, and the Use of Circumstance Adverbials in L1 and L2 Student Academic Presentations

    Science.gov (United States)

    Zareva, Alla

    2009-01-01

    The analysis of circumstance adverbials in this paper was based on L1 and L2 corpora of student presentations, each consisting of approximately 30,000 words. The overall goal of the investigation was to identify specific functions L1 and L2 college students attributed to circumstance adverbials (the most frequently used adverbial class in…

 11. SO(2l + 1) ⊃ ? ⊃ SO_L(3) in group chains for L-S coupling

    International Nuclear Information System (INIS)

    Wu, Z.Y.; Sun, C.P.; Zhang, L.; Li, B.F.

    1986-01-01

    Racah pointed out in his 1949 article that there exists a proper subgroup of SO(7) which properly contains SO_L(3) in group chains for L-S coupling. This paper investigates whether such a proper subgroup exists for SO(2l + 1) containing SO_L(3) for an arbitrary l, and concludes that this subgroup exists only for the case in which l is equal to 3. 4 references

  12. An l1-TV Algorithm for Deconvolution with Salt and Pepper Noise

    Science.gov (United States)

    2009-04-01

    An ℓ1-TV Algorithm for Deconvolution with Salt and Pepper Noise. Brendt Wohlberg, T-7 Mathematical Modeling and Analysis, Los Alamos National Laboratory. ... salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention. We consider ...
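
    The ℓ1-TV functional the title refers to has the standard form below (a sketch from the general ℓ1-TV literature; the symbols λ, K, b are our notational assumptions, not necessarily the paper's):

```latex
\min_{u}\; \lambda \,\| K u - b \|_{1} \;+\; \sum_{i} \left\| (\nabla u)_{i} \right\|_{2}
```

    where u is the restored image, K the blur (convolution) operator, b the observed image, and λ balances the ℓ1 data fidelity term (robust to salt-and-pepper outliers) against the total variation penalty; taking K = I recovers ℓ1-TV denoising.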

  13. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    Science.gov (United States)

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1). This paper develops multiple sub-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm, and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we conduct applications to sparse image recovery and obtain good results in comparison with related work.
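
    SAITA itself is not reproduced here; as a hedged sketch of the general idea of (re)weighting an Lp regularizer, the classic iteratively reweighted least squares (IRLS) surrogate for Lp minimization can be written as follows (function name and parameter choices are ours):

```python
import numpy as np

def irls_lp(A, b, p=0.5, lam=1e-2, iters=50, eps=1e-8):
    """IRLS surrogate for min_x ||Ax - b||^2 + lam * ||x||_p^p, 0 < p < 1.

    The non-convex penalty is majorized by a weighted quadratic
    sum_i w_i x_i^2 with w_i = (x_i^2 + eps)^(p/2 - 1), refreshed each sweep.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # least-squares start
    for _ in range(iters):
        w = (x**2 + eps) ** (p / 2 - 1)           # weights from current iterate
        H = A.T @ A + (lam * p / 2) * np.diag(w)  # normal equations of surrogate
        x = np.linalg.solve(H, A.T @ b)
    return x
```

    Large weights on near-zero coordinates push them to exactly (numerically) zero, which is the sparsity-promoting effect the Lp penalty is chosen for.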

  14. Measurement of neutralizing serum antibodies of patients vaccinated with human papillomavirus L1 or L2-based immunogens using furin-cleaved HPV Pseudovirions.

    Directory of Open Access Journals (Sweden)

    Joshua W Wang

    Antibodies specific for neutralizing epitopes in either Human papillomavirus (HPV) capsid protein L1 or L2 can mediate protection from viral challenge, and thus their accurate and sensitive measurement at high throughput is likely informative for monitoring response to prophylactic vaccination. Here we compare measurement of L1- and L2-specific neutralizing antibodies in human sera using the standard Pseudovirion-Based Neutralization Assay (L1-PBNA) with the newer Furin-Cleaved Pseudovirion-Based Neutralization Assay (FC-PBNA), a modification of the L1-PBNA intended to improve sensitivity towards L2-specific neutralizing antibodies without compromising assay of L1-specific responses. For detection of L1-specific neutralizing antibodies in human sera, the FC-PBNA and L1-PBNA assays showed similar sensitivity and a high level of correlation using WHO standard sera (n = 2) and sera from patients vaccinated with Gardasil (n = 30) or an experimental human papillomavirus type 16 (HPV16) L1 VLP vaccine (n = 70). The detection of L1-specific cross-neutralizing antibodies in these sera using pseudovirions of types phylogenetically related to those targeted by the L1 virus-like particle (VLP) vaccines was also consistent between the two assays. However, for sera from patients (n = 17) vaccinated with an L2-based immunogen (TA-CIN), the FC-PBNA was more sensitive than the L1-PBNA in detecting L2-specific neutralizing antibodies. Further, the neutralizing antibody titers measured with the FC-PBNA correlated with those determined with the L2-PBNA, another modification of the L1-PBNA that spatio-temporally separates primary and secondary receptor engagement, as well as with the protective titers measured using passive transfer studies in the murine genital-challenge model. In sum, the FC-PBNA provided sensitive measurement for both L1 VLP- and L2-specific neutralizing antibody in human sera. Vaccination with TA-CIN elicits weak cross-protective antibody in a

  15. Dialogic spaces of knowledge construction in research article Conclusion sections written by English L1, English L2 and Spanish L1 writers

    Directory of Open Access Journals (Sweden)

    Elena Sheldon

    2018-04-01

    While vast research efforts have been directed to the identification of moves and their constituent steps in research articles (RA), less attention has been paid to the social negotiation of knowledge, in particular in the Conclusion section of RAs. In this paper, I examine the Conclusion sections of RAs in English and Spanish, including RA Conclusions written in English by Spanish-background speakers in the field of applied linguistics. This study brings together two complementary frameworks, genre-based knowledge and evaluative stance, drawing on Swales's (1990, 2004) move analysis framework and on the engagement system in Martin and White's (2005) Appraisal framework. The results indicate that the English L1 group negotiates a consistent space for readers to approve or disapprove the writers' propositions. However, the Spanish L1 group aligns with readers, using a limited space through contracting resources, which may be because this group addresses a smaller audience in comparison to the English L1 group, which addresses an international readership. On the other hand, the English L2 group tends to move towards English international rhetorical practice, but without fully abandoning their SpL1. These results contribute to gaining a better understanding of how successful scholarly writing in English is achieved, and offer important insights for teaching multilingual researchers.

  16. On SW-minimal models and N=1 supersymmetric quantum Toda-field theories

    International Nuclear Information System (INIS)

    Mallwitz, S.

    1994-04-01

    Integrable N=1 supersymmetric Toda-field theories are determined by a contragredient simple Super-Lie-Algebra (SSLA) with purely fermionic lowering and raising operators. For the SSLA's Osp(3/2) and D(2/1;α) we construct explicitly the higher-spin conserved currents and obtain free field representations of the super W-algebras SW(3/2,2) and SW(3/2,3/2,2). In constructing the corresponding series of minimal models using covariant vertex operators, we find a necessary restriction on the Cartan matrix of the SSLA, also for the general case. Within this framework, this restriction requires at least one non-vanishing element on the diagonal of the Cartan matrix. This condition is without parallel in bosonic conformal field theory. As a consequence only two series of SSLA's yield minimal models, namely Osp(2n/2n-1) and Osp(2n/2n+1). Subsequently some general aspects of degenerate representations of SW-algebras, notably the fusion rules, are investigated. As an application we discuss minimal models of SW(3/2,2), which were constructed with independent methods, in this framework. Covariant formulation is used throughout this paper. (orig.)

  17. hnRNP L regulates differences in expression of mouse integrin α2β1

    Science.gov (United States)

    Cheli, Yann; Kunicki, Thomas J.

    2006-01-01

    There is a 2-fold variation in platelet integrin α2β1 levels among inbred mouse strains. Decreased α2β1 in 4 strains carrying Itga2 haplotype 2 results from decreased affinity of heterogeneous nuclear ribonucleoprotein L (hnRNP L) for a 6 CA repeat sequence (CA6) within intron 1. Seven strains bearing haplotype 1 and a 21 CA repeat sequence at this position (CA21) express twice the level of platelet α2β1 and exhibit an equivalent gain of platelet function in vitro. By UV crosslinking and immunoprecipitation, hnRNP L binds more avidly to CA21, relative to CA6. By cell-free, in vitro mRNA splicing, decreased binding of hnRNP L results in decreased splicing efficiency and an increased proportion of alternatively spliced product. The splicing enhancer activity of CA21 in vivo is abolished by prior treatment with hnRNP L-specific siRNA. Thus, decreased surface α2β1 results from decreased Itga2 pre-mRNA splicing regulated by hnRNP L and depends on CA repeat length at a specific site in intron 1. PMID:16455949

  18. Plasma chitinase 3-like 1 is persistently elevated during first month after minimally invasive colorectal cancer resection

    Institute of Scientific and Technical Information of China (English)

    HMC Shantha Kumara; David Gaita; Hiromichi Miyagaki; Xiaohong Yan; Sonali AC Hearth; Linda Njoh; Vesna Cekic; Richard L Whelan

    2016-01-01

    AIM: To assess blood chitinase 3-like 1 (CHi3L1) levels for 2 mo after minimally invasive colorectal resection (MICR) for colorectal cancer (CRC). METHODS: CRC patients in an Institutional Review Board-approved data/plasma bank who underwent elective MICR, and for whom preoperative (PreOp), early postoperative (PostOp), and one or more late PostOp samples [postoperative day (POD) 7-27] were available, were included. Plasma CHi3L1 levels (ng/mL) were determined in duplicate by enzyme-linked immunosorbent assay. RESULTS: PreOp and PostOp plasma samples were available for 80 MICR cancer patients. The median PreOp CHi3L1 level was 56.8 (CI: 41.9-78.6) ng/mL (n = 80). Significantly elevated (P < 0.001) median plasma levels (ng/mL) over PreOp levels were detected on POD 1 (667.7, CI: 495.7-771.7; n = 79), POD 3 (132.6, CI: 95.5-173.7; n = 76), POD 7-13 (96.4, CI: 67.7-136.9; n = 62), POD 14-20 (101.4, CI: 80.7-287.4; n = 22), and POD 21-27 (98.1, CI: 66.8-137.4; n = 20, P = 0.001). No significant difference in plasma levels was noted on POD 27-41. CONCLUSION: Plasma CHi3L1 levels were significantly elevated for one month after MICR. Persistently elevated plasma CHi3L1 may support the growth of residual tumor and metastasis.

  19. The PD1:PD-L1/2 Pathway from Discovery to Clinical Implementation.

    Science.gov (United States)

    Bardhan, Kankana; Anagnostou, Theodora; Boussiotis, Vassiliki A

    2016-01-01

    The immune system maintains a critically organized network to defend against foreign particles, while simultaneously evading self-reactivity. T lymphocytes function as effectors and play an important regulatory role in orchestrating immune signals. Although the central tolerance mechanism results in the removal of most autoreactive T cells during thymic selection, a fraction of self-reactive lymphocytes escapes to the periphery and poses a threat of autoimmunity. The immune system evolved various mechanisms to constrain such autoreactive T cells and maintain peripheral tolerance, including T cell anergy, deletion, and suppression by regulatory T cells (TRegs). These effects are regulated by a complex network of stimulatory and inhibitory receptors expressed on T cells and their ligands, which deliver cell-to-cell signals that dictate the outcome of T cell encounters with cognate antigens. Among the inhibitory immune mediators, the pathway consisting of the programmed cell death 1 (PD-1) receptor (CD279) and its ligands PD-L1 (B7-H1, CD274) and PD-L2 (B7-DC, CD273) plays an important role in the induction and maintenance of peripheral tolerance and in the maintenance of the stability and integrity of T cells. However, the PD-1:PD-L1/L2 pathway also mediates potent inhibitory signals that hinder the proliferation and function of T effector cells and have inimical effects on antiviral and antitumor immunity. Therapeutic targeting of this pathway has resulted in successful enhancement of T cell immunity against viral pathogens and tumors. Here, we will provide a brief overview of the properties of the components of the PD-1 pathway, the signaling events regulated by PD-1 engagement, and their consequences for the function of T effector cells.

  20. The PD1: PD-L1/2 pathway from discovery to clinical implementation

    Directory of Open Access Journals (Sweden)

    Kankana Bardhan

    2016-12-01

    The immune system has the difficult challenge of discerning and defending against a diversity of microbial pathogens, while simultaneously avoiding self-reactivity. T lymphocytes function as effectors and regulators of the immune system. While the central tolerance mechanism results in deletion of the majority of self-reactive T lymphocytes during thymic selection, a fraction of self-reactive lymphocytes escapes to the periphery and retains the potential to inflict destructive autoimmune pathology. The immune system evolved various mechanisms to restrain such auto-reactive T cells and maintain peripheral tolerance, including T cell anergy, deletion, and suppression by regulatory T cells (TRegs). These effects are regulated by a complex network of stimulatory and inhibitory receptors expressed on T cells and their ligands, which deliver cell-to-cell signals that dictate the outcome of T cell encounters with cognate antigens. Among the inhibitory immune mediators, the pathway consisting of the programmed cell death 1 (PD-1) receptor (CD279) and its ligands PD-L1 (B7-H1, CD274) and PD-L2 (B7-DC, CD273) plays a vital role in the induction and maintenance of peripheral tolerance and in the maintenance of T cell homeostasis. In contrast to its beneficial role in self-tolerance, the PD-1:PD-L1/L2 pathway mediates potent inhibitory signals that prevent the expansion and function of T effector cells and have detrimental effects on antiviral and antitumor immunity. Therapeutic targeting of this pathway has resulted in successful enhancement of T cell immunity against viral pathogens and tumors. Here, we will provide a brief overview of the properties of the components of the PD-1 pathway, the signaling events that are regulated by PD-1 triggering, and their consequences for the function of T effector cells.

  1. Scheduling algorithms for saving energy and balancing load

    Energy Technology Data Exchange (ETDEWEB)

    Antoniadis, Antonios

    2012-08-03

    In this thesis we study problems of scheduling tasks in computing environments. We consider both the modern objective function of minimizing energy consumption, and the classical objective of balancing load across machines. We first investigate offline deadline-based scheduling in the setting of a single variable-speed processor that is equipped with a sleep state. The objective is that of minimizing the total energy consumption. Apart from settling the complexity of the problem by showing its NP-hardness, we provide a lower bound of 2 for general convex power functions, and a particular natural class of schedules called s_crit-schedules. We also present an algorithmic framework for designing good approximation algorithms. For general convex power functions our framework improves the best known approximation factor from 2 to 4/3. This factor can be reduced even further to 137/117 for a specific well-motivated class of power functions. Furthermore, we give tight bounds to show that our framework returns optimal s_crit-schedules for the two aforementioned power-function classes. We then focus on the multiprocessor setting where each processor has the ability to vary its speed. Job migration is allowed, and we again consider classical deadline-based scheduling with the objective of energy minimization. We first study the offline problem and show that optimal schedules can be computed efficiently in polynomial time for any convex and non-decreasing power function. Our algorithm relies on repeated maximum flow computations. Regarding the online problem and power functions P(s) = s^α, where s is the processor speed and α > 1 a constant, we extend the two well-known single-processor algorithms Optimal Available and Average Rate. We prove that Optimal Available is α^α-competitive as in the single-processor case. For Average Rate we show a competitive factor of (2α)^α/2 + 1, i.e., compared to the single
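
    A toy calculation (generic speed-scaling reasoning, not taken from the thesis) shows why convexity of the power function matters: for P(s) = s^α with α > 1, running at a uniform speed completes the same work with less energy than any bursty schedule, by Jensen's inequality.

```python
def energy(speeds, alpha=3.0, dt=1.0):
    """Energy of a piecewise-constant speed schedule: sum of P(s)*dt, P(s) = s**alpha."""
    return sum(s ** alpha * dt for s in speeds)

bursty = [3.0, 1.0, 3.0, 1.0]      # 8 units of work in 4 time slots...
uniform = [2.0, 2.0, 2.0, 2.0]     # ...versus the same work at constant speed
```

    Here energy(uniform) = 32 while energy(bursty) = 56, so smoothing speeds saves energy; a sleep state changes this trade-off, which is what makes the problem studied above non-trivial.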

  2. Critical contrastive rhetoric: The influence of L2 letter writing instruction on L1 letter writing

    Directory of Open Access Journals (Sweden)

    Mehrnoosh Fakharzadeh

    2014-08-01

    Full Text Available The present study employed critical contrastive rhetoric to investigate the L2-to-L1 transfer of organizational pattern and directness level of speech acts in business complaint letters. By examining the L1 complaint letters of 30 tourism university students in two phases of study, pre- and post-instruction in English complaint letter writing, the study revealed that the rhetorical organization of Persian letters is in a state of hybridity. The post-instruction comparison of letters, however, showed a tendency toward applying English conventions in both the organization and the directness level of the complaint speech act in the L1 complaint letters. The results also revealed that after instruction an expert in the field of tourism viewed some letters as inappropriate in terms of politeness, which is reflected through some lexical items.

  3. A fast sparse reconstruction algorithm for electrical tomography

    International Nuclear Information System (INIS)

    Zhao, Jia; Xu, Yanbin; Tan, Chao; Dong, Feng

    2014-01-01

    Electrical tomography (ET) has been widely investigated due to its advantages of being non-radiative, low-cost and high-speed. However, image reconstruction in ET is a nonlinear and ill-posed inverse problem, and the imaging results are easily affected by measurement noise. A sparse reconstruction algorithm based on L1 regularization is robust to noise and consequently provides high-quality reconstructed images. In this paper, the sparse reconstruction by separable approximation algorithm (SpaRSA) is extended to solve the ET inverse problem. The algorithm is competitive with the fastest state-of-the-art algorithms in solving the standard L2-L1 problem. However, it is computationally expensive when the dimension of the matrix is large. To further improve the calculation speed of solving inverse problems, a projection method based on the Krylov subspace is employed and combined with the SpaRSA algorithm. The proposed algorithm is tested with image reconstruction of electrical resistance tomography (ERT). Both simulation and experimental results demonstrate that the proposed method can reduce the computational time and improve the noise robustness of the image reconstruction. (paper)
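
The standard L2-L1 problem that SpaRSA targets, min_x 0.5*||Ax - b||^2 + lam*||x||_1, can be illustrated with a minimal iterative soft-thresholding (ISTA) sketch. SpaRSA itself adds Barzilai-Borwein step-size selection and continuation, which are omitted here; the matrix and parameters below are made up for the example.

```python
# Minimal iterative soft-thresholding (ISTA) sketch for the standard
# L2-L1 problem  min_x 0.5*||A x - b||^2 + lam*||x||_1 .
# SpaRSA refines this with adaptive (Barzilai-Borwein) step sizes; here a
# fixed step below 1/||A^T A|| is used.

def soft(v, t):
    """Soft-thresholding: the prox operator of t*|.|"""
    return [max(abs(x) - t, 0.0) * (1 if x > 0 else -1) for x in v]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def ista(A, b, lam=0.1, step=0.1, iters=500):
    n = len(A[0])
    x = [0.0] * n
    At = list(map(list, zip(*A)))          # A transpose
    for _ in range(iters):
        r = [ri - bi for ri, bi in zip(matvec(A, x), b)]   # residual A x - b
        g = matvec(At, r)                                  # gradient A^T r
        x = soft([xi - step * gi for xi, gi in zip(x, g)], step * lam)
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 0.0, 1.0]
x = ista(A, b, lam=0.5)
print(x)   # sparse solution, approximately [0.75, 0.0]
```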

  4. Multi-instantons in R4 and Minimal Surfaces in R2,1

    International Nuclear Information System (INIS)

    Tekin, Bayram

    2000-01-01

    It is known that self-duality equations for multi-instantons on a line in four dimensions are equivalent to minimal surface equations in three dimensional Minkowski space. We extend this equivalence beyond the equations of motion and show that topological number, instanton moduli space and anti-self-dual solutions have representations in terms of minimal surfaces. The issue of topological charge is quite subtle because the surfaces that appear are non-compact. This minimal surface/instanton correspondence allows us to define a metric on the configuration space of the gauge fields. We obtain the minimal surface representation of an instanton with arbitrary charge. The trivial vacuum and the BPST instanton as minimal surfaces are worked out in detail. BPS monopoles and the geodesics are also discussed. (author)

  5. A unified framework for penalized statistical muon tomography reconstruction with edge preservation priors of l{sub p} norm type

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Baihui [Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Zhao, Ziran, E-mail: zhaozr@mail.tsinghua.edu.cn [Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Wang, Xuewu; Wu, Dufan; Zeng, Zhi; Zeng, Ming; Wang, Yi [Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Cheng, Jianping, E-mail: chengjp@mail.tsinghua.edu.cn [Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China)

    2016-01-11

    The Tsinghua University MUon Tomography facilitY (TUMUTY) has been built and is used to reconstruct special objects with complex structure. Since a fine image is required, the conventional Maximum Likelihood Scattering and Displacement (MLSD) algorithm is employed. However, due to the statistical characteristics of muon tomography and the incompleteness of the data, the reconstruction is often unstable and accompanied by severe noise. In this paper, we propose a Maximum a Posteriori (MAP) algorithm for muon tomography regularization, where an edge-preserving prior on the scattering density image is introduced into the objective function. The prior takes the l{sub p} norm (p>0) of the image gradient magnitude, where p=1 and p=2 correspond to the well-known total-variation (TV) and Gaussian priors, respectively. The optimization transfer principle is utilized to minimize the objective function in a unified framework. At each iteration the problem is transferred to solving a cubic equation through paraboloidal surrogating. To validate the method, the French Test Object (FTO) is imaged by both numerical simulation and TUMUTY. The proposed algorithm is used for the reconstruction, and different norms are studied in detail, including l{sub 2}, l{sub 1}, l{sub 0.5}, and an l{sub 2–0.5} mixture norm. Compared with the MLSD method, MAP achieves better image quality in both structure preservation and noise reduction. Furthermore, compared with previous work in which only a one-dimensional image was acquired, we achieve relatively clear three-dimensional images of the FTO, in which the inner air hole and the tungsten shell are visible.
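
The effect of the l_p exponent on edge preservation can be seen from the penalty alone, R_p(x) = sum_i |x[i+1] - x[i]|^p. A toy 1-D comparison (illustrative signals, not tomography data) shows that a quadratic prior (p = 2) favors a smooth ramp, TV (p = 1) is indifferent, and p < 1 favors the sharp edge:

```python
# Sketch of the edge-preserving l_p gradient penalty used as a prior:
# R_p(x) = sum_i |x[i+1] - x[i]|**p .  p = 2 is a Gaussian (quadratic) prior,
# p = 1 is total variation; p < 1 penalizes large jumps even less, which is
# why small p preserves edges at the cost of a harder optimization.

def lp_gradient_penalty(x, p):
    return sum(abs(b - a) ** p for a, b in zip(x, x[1:]))

step = [0.0, 0.0, 1.0, 1.0]          # one sharp edge
ramp = [0.0, 1/3, 2/3, 1.0]          # same endpoints, smooth ramp

for p in (2.0, 1.0, 0.5):
    print(p, lp_gradient_penalty(step, p), lp_gradient_penalty(ramp, p))
# p=2.0: ramp cheaper than edge (smoothing); p=1.0: equal; p=0.5: edge cheaper
```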

  6. Minimally invasive myotomy for the treatment of esophageal achalasia: evolution of the surgical procedure and the therapeutic algorithm.

    Science.gov (United States)

    Bresadola, Vittorio; Feo, Carlo V

    2012-04-01

    Achalasia is a rare disease of the esophagus, characterized by the absence of peristalsis in the esophageal body and incomplete relaxation of the lower esophageal sphincter, which may be hypertensive. The cause of this disease is unknown; therefore, the aim of therapy is to improve esophageal emptying by eliminating the outflow resistance caused by the lower esophageal sphincter. This goal can be accomplished either by pneumatic dilatation or surgical myotomy, which are the only long-term effective therapies for achalasia. Historically, pneumatic dilatation was preferred over surgical myotomy because of the morbidity associated with a thoracotomy or a laparotomy. However, with the development of minimally invasive techniques, the surgical approach has gained widespread acceptance among patients and gastroenterologists and, consequently, the role of surgery has changed. The aim of this study was to review the changes that occurred in the surgical treatment of achalasia over the last 2 decades; specifically, the development of minimally invasive techniques with the evolution from a thoracoscopic approach without an antireflux procedure to a laparoscopic myotomy with a partial fundoplication, the changes in the length of the myotomy, and the modification of the therapeutic algorithm.

  7. R-134a (1,1,1,2-Tetrafluoroethane) Inhalation Induced Reactive Airways Dysfunction Syndrome.

    Science.gov (United States)

    Doshi, Viral; Kham, Nang; Kulkarni, Shreedhar; Kapitan, Kent; Henkle, Joseph; White, Peter

    2016-01-01

    R-134a (1,1,1,2-tetrafluoroethane) is widely used as a refrigerant and as an aerosol propellant. Inhalation of R-134a can lead to asphyxia, transient confusion, and cardiac arrhythmias. We report a case of reactive airways dysfunction syndrome secondary to R-134a inhalation. A 60-year-old nonsmoking man without a history of lung disease was exposed to an air conditioner refrigerant spill while performing repairs beneath a school bus. Afterward, he experienced worsening shortness of breath with minimal exertion, a productive cough, and wheezing. He was also hypoxic. He was admitted to the hospital for further evaluation. Spirometry showed airflow obstruction with an FEV1 1.97 L (45% predicted). His respiratory status improved with bronchodilators and oral steroids. A repeat spirometry 2 weeks later showed improvement with an FEV1 2.5 L (60% predicted). Six months after the incident, his symptoms had improved, but he was still having shortness of breath on exertion and occasional cough.

  8. Prolonged exercise in type 1 diabetes: performance of a customizable algorithm to estimate the carbohydrate supplements to minimize glycemic imbalances.

    Science.gov (United States)

    Francescato, Maria Pia; Stel, Giuliana; Stenner, Elisabetta; Geat, Mario

    2015-01-01

    Physical activity in patients with type 1 diabetes (T1DM) is hindered because of the high risk of glycemic imbalances. A recently proposed algorithm (named Ecres) estimates well enough the supplemental carbohydrates for exercises lasting one hour, but its performance for prolonged exercise requires validation. Nine T1DM patients (5M/4F; 35-65 years; HbA1c 54 ± 13 mmol · mol(-1)) performed, under free-life conditions, a 3-h walk at 30% heart rate reserve while insulin concentrations, whole-body carbohydrate oxidation rates (determined by indirect calorimetry) and supplemental carbohydrates (93% sucrose), together with glycemia, were measured every 30 min. Data were subsequently compared with the corresponding values estimated by the algorithm. No significant difference was found between the estimated insulin concentrations and the laboratory-measured values (p = NS). The carbohydrate oxidation rate decreased significantly with time (from 0.84 ± 0.31 to 0.53 ± 0.24 g · min(-1); p < 0.001) and was estimated well enough by the algorithm (p = NS). Estimated carbohydrate requirements were practically equal to the corresponding measured values (p = NS), the difference between the two quantities amounting to -1.0 ± 6.1 g, independent of the elapsed exercise time (time effect, p = NS). Results confirm that Ecres provides a satisfactory estimate of the carbohydrates required to avoid glycemic imbalances during moderate intensity aerobic physical activity, opening the prospect of an intriguing method that could liberate patients from the fear of exercise-induced hypoglycemia.

  9. Prolonged exercise in type 1 diabetes: performance of a customizable algorithm to estimate the carbohydrate supplements to minimize glycemic imbalances.

    Directory of Open Access Journals (Sweden)

    Maria Pia Francescato

    Full Text Available Physical activity in patients with type 1 diabetes (T1DM) is hindered because of the high risk of glycemic imbalances. A recently proposed algorithm (named Ecres) estimates well enough the supplemental carbohydrates for exercises lasting one hour, but its performance for prolonged exercise requires validation. Nine T1DM patients (5M/4F; 35-65 years; HbA1c 54 ± 13 mmol · mol(-1)) performed, under free-life conditions, a 3-h walk at 30% heart rate reserve while insulin concentrations, whole-body carbohydrate oxidation rates (determined by indirect calorimetry) and supplemental carbohydrates (93% sucrose), together with glycemia, were measured every 30 min. Data were subsequently compared with the corresponding values estimated by the algorithm. No significant difference was found between the estimated insulin concentrations and the laboratory-measured values (p = NS). The carbohydrate oxidation rate decreased significantly with time (from 0.84 ± 0.31 to 0.53 ± 0.24 g · min(-1); p < 0.001) and was estimated well enough by the algorithm (p = NS). Estimated carbohydrate requirements were practically equal to the corresponding measured values (p = NS), the difference between the two quantities amounting to -1.0 ± 6.1 g, independent of the elapsed exercise time (time effect, p = NS). Results confirm that Ecres provides a satisfactory estimate of the carbohydrates required to avoid glycemic imbalances during moderate intensity aerobic physical activity, opening the prospect of an intriguing method that could liberate patients from the fear of exercise-induced hypoglycemia.

  10. An analytical study of composite laminate lay-up using search algorithms for maximization of flexural stiffness and minimization of springback angle

    Science.gov (United States)

    Singh, Ranjan Kumar; Rinawa, Moti Lal

    2018-04-01

    The residual stresses arising in fiber-reinforced laminates during their curing in closed molds lead to changes in the composites after their removal from the molds and cooling. One of these dimensional changes of angle sections is called springback. Parameters such as lay-up, stacking sequence, material system, cure temperature, and thickness play an important role in it. In the present work, the lay-up and stacking sequence are optimized for maximization of flexural stiffness and minimization of springback angle. Search algorithms are employed to obtain the best sequence through a repair strategy such as swap. A new search algorithm, termed the lay-up search algorithm (LSA), is also proposed, which is an extension of the permutation search algorithm (PSA). The efficacy of the PSA and LSA is tested on laminates with a range of lay-ups. A computer code implementing the above schemes is developed in MATLAB. Strategies for multi-objective optimization using search algorithms are also suggested and tested.
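
A swap-based permutation search of the kind described can be sketched as follows. The stiffness function here is a deliberately crude surrogate for the laminate D11 bending term (each ply's angle-dependent stiffness weighted by the cube of its distance from the midplane), not the model used in the paper, and springback is omitted.

```python
# Sketch of a swap-based permutation search over a stacking sequence.
# Swapping two plies preserves the multiset of angles, so no further repair
# is needed. The objective is a simplified D11-style bending surrogate.

import itertools, math

def d11_surrogate(layup, ply_t=1.0):
    n = len(layup)
    total = 0.0
    for k, theta in enumerate(layup):
        z0 = -n * ply_t / 2 + k * ply_t          # bottom of ply k
        z1 = z0 + ply_t
        q = math.cos(math.radians(theta)) ** 4   # crude stiffness along x
        total += q * (z1 ** 3 - z0 ** 3) / 3     # classical z^3 weighting
    return total

def swap_search(layup):
    """Greedy improvement by pairwise swaps until no swap helps."""
    best = list(layup)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(best)), 2):
            cand = list(best)
            cand[i], cand[j] = cand[j], cand[i]
            if d11_surrogate(cand) > d11_surrogate(best):
                best, improved = cand, True
    return best

layup = [90, 0, 45, 0, 45, 90]     # fixed multiset of ply angles
print(swap_search(layup))          # stiff 0-degree plies migrate outward
```

By the rearrangement inequality, any swap-local optimum of this surrogate pairs the stiffest angles with the outermost (largest |z|^3) positions, which matches the engineering intuition that outer 0-degree plies maximize bending stiffness.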

  11. ZFP36L1 and ZFP36L2 inhibit cell proliferation in a cyclin D-dependent and p53-independent manner

    OpenAIRE

    Suk, Fat-Moon; Chang, Chi-Ching; Lin, Ren-Jye; Lin, Shyr-Yi; Liu, Shih-Chen; Jau, Chia-Feng; Liang, Yu-Chih

    2018-01-01

    ZFP36 family members include ZFP36, ZFP36L1, and ZFP36L2, which belong to CCCH-type zinc finger proteins with two tandem zinc finger (TZF) regions. Whether ZFP36L1 and ZFP36L2 have antiproliferative activities similar to that of ZFP36 is unclear. In this study, when ZFP36L1 or ZFP36L2 was overexpressed in T-REx-293 cells, cell proliferation was dramatically inhibited and the cell cycle was arrested at the G1 phase. The levels of cell-cycle-related proteins, including cyclin B, cyclin D, cycli...

  12. An Interface Tracking Algorithm for the Porous Medium Equation.

    Science.gov (United States)

    1983-03-01

    DiBenedetto, E., et al. Mathematics Research Center, University of Wisconsin-Madison, Technical Summary Report, March 1983.

  13. A speedup technique for (l, d-motif finding algorithms

    Directory of Open Access Journals (Sweden)

    Dinh Hieu

    2011-03-01

    Full Text Available Abstract Background The discovery of patterns in DNA, RNA, and protein sequences has led to the solution of many vital biological problems. For instance, the identification of patterns in nucleic acid sequences has resulted in the determination of open reading frames, identification of promoter elements of genes, identification of intron/exon splicing sites, identification of shRNAs, location of RNA degradation signals, identification of alternative splicing sites, etc. In protein sequences, patterns have proven to be extremely helpful in domain identification, location of protease cleavage sites, identification of signal peptides, protein interactions, determination of protein degradation elements, identification of protein trafficking elements, etc. Motifs are important patterns that are helpful in finding transcriptional regulatory elements, transcription factor binding sites, functional genomics, drug design, etc. As a result, numerous papers have been written to solve the motif search problem. Results Three versions of the motif search problem have been proposed in the literature: Simple Motif Search (SMS, (l, d-motif search (or Planted Motif Search (PMS, and Edit-distance-based Motif Search (EMS. In this paper we focus on PMS. Two kinds of algorithms can be found in the literature for solving the PMS problem: exact and approximate. An exact algorithm always identifies the motifs, whereas an approximate algorithm may fail to identify some or all of the motifs. The exact version of the PMS problem has been shown to be NP-hard. Exact algorithms proposed in the literature for PMS take time that is exponential in some of the underlying parameters. In this paper we propose a generic technique that can be used to speed up PMS algorithms. Conclusions We present a speedup technique that can be used on any PMS algorithm. We have tested our speedup technique on a number of algorithms. 
These experimental results show that our speedup technique is indeed very
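
For reference, the exact (l, d) planted motif search that such speedup techniques accelerate can be stated as a brute-force sketch: enumerate the d-neighborhoods of all l-mers of one sequence and keep candidates that occur, within Hamming distance d, in every sequence. The sequences below are illustrative.

```python
# Brute-force exact solver sketch for the (l, d) planted motif problem.
# Exponential in l and d, which is exactly what speedup techniques target.

from itertools import combinations, product

def neighbors(kmer, d, alphabet="ACGT"):
    """All strings within Hamming distance d of kmer."""
    out = {kmer}
    for positions in combinations(range(len(kmer)), d):
        for subs in product(alphabet, repeat=d):
            s = list(kmer)
            for p, c in zip(positions, subs):
                s[p] = c
            out.add("".join(s))
    return out

def within_d(seq, kmer, d):
    """Does seq contain an l-mer within Hamming distance d of kmer?"""
    l = len(kmer)
    return any(sum(a != b for a, b in zip(seq[i:i + l], kmer)) <= d
               for i in range(len(seq) - l + 1))

def planted_motifs(seqs, l, d):
    cands = set()
    first = seqs[0]
    for i in range(len(first) - l + 1):
        cands |= neighbors(first[i:i + l], d)
    return sorted(m for m in cands if all(within_d(s, m, d) for s in seqs))

seqs = ["ACGTTGCA", "CCACGTTG", "ACGATGCA"]
print(planted_motifs(seqs, l=4, d=1))   # includes the planted motif "ACGT"
```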

  14. Radiation processing of minimally processed vegetables and aromatic plants

    International Nuclear Information System (INIS)

    Trigo, M.J.; Sousa, M.B.; Sapata, M.M.; Ferreira, A.; Curado, T.; Andrada, L.; Botelho, M.L.; Veloso, M.G.

    2009-01-01

    Vegetables are an essential part of people's diet all around the world. Due to cultivation techniques and post-harvest handling, these products may contain a high microbial load that can cause food-borne outbreaks. The irradiation of minimally processed vegetables is an efficient way to reduce the level of microorganisms and to inhibit parasites, helping a safe global trade. Evaluation of the effects of irradiation was carried out on minimally processed vegetables such as coriander (Coriandrum sativum L.), mint (Mentha spicata L.), parsley (Petroselinum crispum Mill. (A.W. Hill)), lettuce (Lactuca sativa L.) and watercress (Nasturtium officinale L.). The inactivation level of the natural microbiota and the D10 values of Escherichia coli O157:H7 and Listeria innocua in these products were determined. The physical-chemical and sensory characteristics before and after irradiation at applied doses from 0.5 up to 2.0 kGy were also evaluated. No differences were observed in the overall sensory and physical properties after irradiation up to 1 kGy, while a decrease of the natural microbiota was noticed (≥2 log). Based on the determined D10 values, the dose necessary to inactivate 10^5 E. coli and L. innocua was between 0.70 and 1.55 kGy. The shelf life of coriander, mint and lettuce irradiated at 0.5 kGy increased by 2, 3 and 4 days, respectively, compared with non-irradiated samples.

  15. Radiation processing of minimally processed vegetables and aromatic plants

    Science.gov (United States)

    Trigo, M. J.; Sousa, M. B.; Sapata, M. M.; Ferreira, A.; Curado, T.; Andrada, L.; Botelho, M. L.; Veloso, M. G.

    2009-07-01

    Vegetables are an essential part of people's diet all around the world. Due to cultivation techniques and post-harvest handling, these products may contain a high microbial load that can cause food-borne outbreaks. The irradiation of minimally processed vegetables is an efficient way to reduce the level of microorganisms and to inhibit parasites, helping a safe global trade. Evaluation of the effects of irradiation was carried out on minimally processed vegetables such as coriander (Coriandrum sativum L.), mint (Mentha spicata L.), parsley (Petroselinum crispum Mill. (A.W. Hill)), lettuce (Lactuca sativa L.) and watercress (Nasturtium officinale L.). The inactivation level of the natural microbiota and the D10 values of Escherichia coli O157:H7 and Listeria innocua in these products were determined. The physical-chemical and sensory characteristics before and after irradiation at applied doses from 0.5 up to 2.0 kGy were also evaluated. No differences were observed in the overall sensory and physical properties after irradiation up to 1 kGy, while a decrease of the natural microbiota was noticed (⩾2 log). Based on the determined D10 values, the dose necessary to inactivate 10^5 E. coli and L. innocua was between 0.70 and 1.55 kGy. The shelf life of coriander, mint and lettuce irradiated at 0.5 kGy increased by 2, 3 and 4 days, respectively, compared with non-irradiated samples.
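
The reported 5-log inactivation doses follow directly from the definition of the D10 value (dose per decade of microbial reduction): dose = n × D10 for an n-log reduction. The D10 figures below are back-calculated from the reported 0.70 and 1.55 kGy doses, for illustration only.

```python
# Dose for an n-log (10**n-fold) microbial reduction from the D10 value.
# D10 inputs are back-calculated from the reported 5-log doses, illustrative
# rather than measured values.

def dose_for_log_reduction(d10_kgy, n_logs):
    return d10_kgy * n_logs

print(dose_for_log_reduction(0.14, 5))   # 0.70 kGy, lower end of the range
print(dose_for_log_reduction(0.31, 5))   # 1.55 kGy, upper end of the range
```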

  16. A New Adaptive Self-Tuning Fourier Coefficients Algorithm for Periodic Torque Ripple Minimization in Permanent Magnet Synchronous Motors (PMSM

    Directory of Open Access Journals (Sweden)

    Gilberto Herrera-Ruíz

    2013-03-01

    Full Text Available A New Adaptive Self-Tuning Fourier Coefficients Algorithm for Periodic Torque Ripple Minimization in Permanent Magnet Synchronous Motors (PMSM). Torque ripple occurs in Permanent Magnet Synchronous Motors (PMSMs) due to the non-sinusoidal flux density distribution around the air-gap and variable magnetic reluctance of the air-gap due to the stator slots distribution. These torque ripples change periodically with rotor position and are apparent as speed variations, which degrade the PMSM drive performance, particularly at low speeds, because of low inertial filtering. In this paper, a new self-tuning algorithm is developed for determining the Fourier Series Controller coefficients with the aim of reducing the torque ripple in a PMSM, thus allowing for a smoother operation. This algorithm adjusts the controller parameters based on the component’s harmonic distortion in time domain of the compensation signal. Experimental evaluation is performed on a DSP-controlled PMSM evaluation platform. Test results obtained validate the effectiveness of the proposed self-tuning algorithm, with the Fourier series expansion scheme, in reducing the torque ripple.

  17. A new adaptive self-tuning Fourier coefficients algorithm for periodic torque ripple minimization in permanent magnet synchronous motors (PMSM).

    Science.gov (United States)

    Gómez-Espinosa, Alfonso; Hernández-Guzmán, Víctor M; Bandala-Sánchez, Manuel; Jiménez-Hernández, Hugo; Rivas-Araiza, Edgar A; Rodríguez-Reséndiz, Juvenal; Herrera-Ruíz, Gilberto

    2013-03-19

    A New Adaptive Self-Tuning Fourier Coefficients Algorithm for Periodic Torque Ripple Minimization in Permanent Magnet Synchronous Motors (PMSM). Torque ripple occurs in Permanent Magnet Synchronous Motors (PMSMs) due to the non-sinusoidal flux density distribution around the air-gap and variable magnetic reluctance of the air-gap due to the stator slots distribution. These torque ripples change periodically with rotor position and are apparent as speed variations, which degrade the PMSM drive performance, particularly at low speeds, because of low inertial filtering. In this paper, a new self-tuning algorithm is developed for determining the Fourier Series Controller coefficients with the aim of reducing the torque ripple in a PMSM, thus allowing for a smoother operation. This algorithm adjusts the controller parameters based on the component's harmonic distortion in time domain of the compensation signal. Experimental evaluation is performed on a DSP-controlled PMSM evaluation platform. Test results obtained validate the effectiveness of the proposed self-tuning algorithm, with the Fourier series expansion scheme, in reducing the torque ripple.
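
The self-tuning idea, adjusting the Fourier coefficients of a compensation signal until the position-periodic ripple is cancelled, can be sketched with an LMS-style update. The disturbance (a 6th-harmonic ripple), gains and sampling below are illustrative, not the paper's controller.

```python
# Sketch of adaptive Fourier-coefficient ripple compensation: the output
# u(theta) = sum_k a_k cos(k theta) + b_k sin(k theta) is tuned by an
# LMS-style update so that it cancels a position-periodic disturbance.

import math

def ripple(theta):
    """Illustrative 6th-harmonic torque ripple to be cancelled."""
    return 0.3 * math.cos(6 * theta) + 0.1 * math.sin(6 * theta)

def adapt(harmonics=(6,), gain=0.02, steps=20000, dt=1e-3, speed=50.0):
    a = {k: 0.0 for k in harmonics}
    b = {k: 0.0 for k in harmonics}
    theta, err = 0.0, 0.0
    for _ in range(steps):
        u = sum(a[k] * math.cos(k * theta) + b[k] * math.sin(k * theta)
                for k in harmonics)
        err = ripple(theta) - u            # residual ripple after compensation
        for k in harmonics:                # LMS update toward zero residual
            a[k] += gain * err * math.cos(k * theta)
            b[k] += gain * err * math.sin(k * theta)
        theta += speed * dt                # constant-speed rotor position
    return a, b, err

a, b, err = adapt()
print(a[6], b[6])    # converges near the true ripple coefficients (0.3, 0.1)
```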

  18. L1 and L2 sub-shell fluorescence yields for elements with 64 ≤ Z ≤ 70

    International Nuclear Information System (INIS)

    Kumar, Anil; Puri, Sanjiv

    2010-01-01

    The L1 and L2 sub-shell fluorescence yields have been deduced for elements with 64 ≤ Z ≤ 70 from the Lk (k = l, α, β1,4, β3,6, β2,15,9,10,7, γ1,5 and γ2,3,4) X-ray production cross sections measured at 22.6 keV incident photon energy using a spectrometer involving a disc-type radioisotope of Cd 109 as a photon source and a Peltier-cooled X-ray detector. The incident photon intensity, detector efficiency and geometrical factor have been determined from the K X-ray yields emitted from elemental targets with 20 ≤ Z ≤ 42 in the same geometrical setup and from knowledge of the K shell cross sections. The present deduced ω1(exp) values, for elements with 64 ≤ Z ≤ 70, are found to be in good agreement with those tabulated by Campbell (J.L. Campbell, Atom. Data Nucl. Data Tables 95 (2009) 115), whereas these are, on average, higher by 19% and 24% than those based on the Dirac-Hartree-Slater model (S. Puri et al., X-ray Spectrometry 22 (1993) 358) and the semi-empirical values compiled by Krause (M.O. Krause, J. Phys. Chem. Ref. Data 8 (1979) 307), respectively. The present deduced ω2(exp) values are found to be in good agreement with those based on the Dirac-Hartree-Slater model and are higher by up to ∼13% than the semi-empirical values for the elements under investigation.

  19. L 1 and L 2 sub-shell fluorescence yields for elements with 64 ⩽ Z ⩽ 70

    Science.gov (United States)

    Kumar, Anil; Puri, Sanjiv

    2010-05-01

    The L1 and L2 sub-shell fluorescence yields have been deduced for elements with 64 ⩽ Z ⩽ 70 from the Lk (k = l, α, β1,4, β3,6, β2,15,9,10,7, γ1,5 and γ2,3,4) X-ray production cross sections measured at 22.6 keV incident photon energy using a spectrometer involving a disc-type radioisotope of Cd 109 as a photon source and a Peltier-cooled X-ray detector. The incident photon intensity, detector efficiency and geometrical factor have been determined from the K X-ray yields emitted from elemental targets with 20 ⩽ Z ⩽ 42 in the same geometrical setup and from knowledge of the K shell cross sections. The present deduced ω1(exp) values, for elements with 64 ⩽ Z ⩽ 70, are found to be in good agreement with those tabulated by Campbell (J.L. Campbell, Atom. Data Nucl. Data Tables 95 (2009) 115), whereas these are, on average, higher by 19% and 24% than those based on the Dirac-Hartree-Slater model (S. Puri et al., X-ray Spectrometry 22 (1993) 358) and the semi-empirical values compiled by Krause (M.O. Krause, J. Phys. Chem. Ref. Data 8 (1979) 307), respectively. The present deduced ω2(exp) values are found to be in good agreement with those based on the Dirac-Hartree-Slater model and are higher by up to ˜13% than the semi-empirical values for the elements under investigation.

  20. Shelf-life extension of minimally processed and gamma irradiated red beet (Beta vulgaris ssp. vulgaris L.), Cv. early wonder

    International Nuclear Information System (INIS)

    Hernandes, Nilber Kenup; Vital, Helio de Carvalho; Coneglian, Regina Celi Cavestre

    2007-01-01

    This work investigated the effects of gamma irradiation on the shelf-life extension and safety of minimally processed red beet (Beta vulgaris ssp. vulgaris L.) by performing microbiological, chemical and sensory analyses. Red beets were harvested 73 days after transplanting and their tuberous parts were minimally processed and separated into two groups: control (non-irradiated) and irradiated (0.5, 1.0 and 1.5 kGy). Tests for Salmonella sp., total and fecal coliforms, and total counts of aerobic mesophilic and lactic-acid bacteria were performed during the 21-day storage at 8 deg C. They indicated that the samples irradiated with 1.0 and 1.5 kGy remained in good condition throughout storage, while the unirradiated samples did not last 7 days. Chemical analyses indicated that the concentrations of vitamins B1 and B2 were not affected by irradiation. In contrast, the amounts of fructose and glucose increased during storage while that of sucrose decreased. In addition, four series of sensory evaluations, including appearance and aroma, indicated that the samples irradiated with 1.0 and 1.5 kGy remained good for consumption for 20 days. It was therefore concluded that doses of 1.0 and 1.5 kGy produced the best conservation of the samples without harming the sensory characteristics or nutritional constituents tested. (author)

  1. Gauge coupling running in minimal SU(3) x SU(2) x U(1) superstring unification

    CERN Document Server

    Ibáñez, L E; Ross, Graham G

    1991-01-01

    We study the evolution of the gauge coupling constants in string unification schemes in which the light spectrum below the compactification scale is exactly that of the minimal supersymmetric standard model. In the absence of string threshold corrections the predicted values $\sin^2\theta_W = 0.218$ and $\alpha_s = 0.20$ are in gross conflict with experiment, but these corrections are generically important. One can express the string threshold corrections to $\sin^2\theta_W$ and $\alpha_s$ in terms of certain modular weights of quark, lepton and Higgs superfields as well as the moduli of the string model. We find that in order to get agreement with the experimental measurements within the context of this minimal scheme, certain constraints on the modular weights of the quark, lepton and Higgs superfields should be obeyed. Our analysis indicates that this minimal string unification

  2. The Aquarius Level 2 Algorithm

    Science.gov (United States)

    Meissner, T.; Wentz, F. J.; Hilburn, K. A.; Lagerloef, G. S.; Le Vine, D. M.

    2012-12-01

    The Aquarius L-band radiometer/scatterometer system is designed to provide monthly salinity maps at 150 km spatial scale to an accuracy of 0.2 psu. The sensor was launched on June 10, 2011, aboard the Argentine CONAE SAC-D spacecraft. The L-band radiometers and the scatterometer have been taking science data observations since August 25, 2011. This presentation discusses the current state of the Aquarius Level 2 processing algorithm, which transforms radiometer counts ultimately into sea surface salinity (SSS). We focus on several topics that we have investigated since launch. 1. Updated Pointing. A detailed check of the Aquarius pointing was performed, which consists of adjusting the two pointing angles, azimuth angle and off-nadir angle, for each horn. It was found that the necessary adjustments for all 3 horns can be explained by a single offset of the antenna pointing if we introduce a constant offset in the roll angle of -0.51 deg and in the pitch angle of +0.16 deg. 2. Antenna Patterns and Instrument Calibration. In March 2012 JPL produced a set of new antenna patterns using the GRASP software. Compared with the various pre-launch patterns, the new patterns lead to an increase in the spillover coefficient by about 1%. We discuss its impact on several components of the Level 2 processing: the antenna pattern correction (APC), the correction for intrusion of galactic and solar radiation that is reflected from the ocean surface into the Aquarius field of view, and the correction of contamination from land-surface radiation entering through the sidelobes. We show that the new antenna patterns result in a consistent calibration of all 3 Stokes parameters, which can be best demonstrated during spacecraft pitch maneuvers. 3. Cross Polarization Couplings of the 3rd Stokes Parameter. Using the APC values for the cross-polarization coupling of the 3rd Stokes parameter into the 1st and 2nd Stokes parameters leads to a spurious image of the 3rd Stokes
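
The quoted roll and pitch offsets can be illustrated by rotating a boresight unit vector. The axis conventions here (x forward, y to the right, z toward nadir) are assumptions for the sketch and are simplified relative to the real Aquarius pointing geometry.

```python
# Sketch of applying the reported attitude offsets (roll -0.51 deg,
# pitch +0.16 deg) to a nominal nadir-pointing boresight unit vector.
# Axis conventions are assumed, simplified from the real Aquarius geometry.

import math

def rot_x(v, deg):   # roll about the x axis
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    x, y, z = v
    return (x, c * y - s * z, s * y + c * z)

def rot_y(v, deg):   # pitch about the y axis
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    x, y, z = v
    return (c * x + s * z, y, -s * x + c * z)

boresight = (0.0, 0.0, 1.0)                  # nominal nadir direction
adjusted = rot_y(rot_x(boresight, -0.51), 0.16)
dot = sum(a * b for a, b in zip(boresight, adjusted))
offset_deg = math.degrees(math.acos(dot))
print(round(offset_deg, 3))   # total boresight shift, ~0.53 degrees
```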

  3. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others, new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry.
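
As an illustration of one topic from this list, here is Andrew's monotone-chain convex hull, a standard O(n log n) method; this is a textbook algorithm, not necessarily the variant developed in the thesis.

```python
# Andrew's monotone-chain convex hull: sort the points, then build the
# lower and upper hulls with a left-turn test.

def cross(o, a, b):
    """Z-component of (a - o) x (b - o); > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # lower hull, left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # upper hull, right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # counter-clockwise, endpoints once

pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (1, 0)]
print(convex_hull(pts))   # the four corners of the square
```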

  4. A low-resource quantum factoring algorithm

    NARCIS (Netherlands)

    Bernstein, D.J.; Biasse, J. F.; Mosca, M.; Lange, T.; Takagi, T.

    2017-01-01

    In this paper, we present a factoring algorithm that, assuming standard heuristics, uses just (log N)^(2/3 + o(1)) qubits to factor an integer N in time L^(q + o(1)), where L = exp((log N)^(1/3) (log log N)^(2/3)) and q = (8/3)^(1/3) ≈ 1.387. For comparison, the lowest asymptotic time complexity for known pre-quantum

  5. Mathematical Optimization Algorithm for Minimizing the Cost Function of GHG Emission in AS/RS Using Positive Selection Based Clonal Selection Principle

    Science.gov (United States)

    Mahalakshmi; Murugesan, R.

    2018-04-01

    This paper addresses the minimization of the total cost of Greenhouse Gas (GHG) emission in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost, and discount cost of GHG emission of the AS/RS. A two-stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, which shows that the proposed algorithm yields better results than the others.
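
    The two-stage scheme described above can be sketched in a few lines. This is an illustrative toy, not the paper's model: the cost function below is a hypothetical stand-in for the GHG tax/penalty/discount cost of an AS/RS schedule, and the population size, threshold, and mutation schedule are arbitrary choices.

```python
import random

def fitness(x):
    # Hypothetical stand-in for the paper's GHG cost model
    # (tax + penalty - discount costs of an AS/RS schedule).
    return sum((xi - 0.3) ** 2 for xi in x)

def psbcsp(dim=4, pop_size=30, threshold=2.0, generations=50, clones=5, seed=1):
    rng = random.Random(seed)
    # Stage 1: positive selection -- keep only random candidates whose
    # cost is below a threshold, shrinking the initial search space.
    pool = []
    while len(pool) < pop_size:
        cand = [rng.random() for _ in range(dim)]
        if fitness(cand) < threshold:
            pool.append(cand)
    # Stage 2: clonal selection -- clone the best candidates and apply
    # mutation whose amplitude shrinks with the parent's rank.
    for _ in range(generations):
        pool.sort(key=fitness)
        elite = pool[: pop_size // 3]
        next_pool = list(elite)
        for rank, parent in enumerate(elite):
            step = 0.1 / (rank + 1)   # better parents mutate less
            for _ in range(clones):
                child = [xi + rng.uniform(-step, step) for xi in parent]
                next_pool.append(child)
        pool = sorted(next_pool, key=fitness)[:pop_size]
    return pool[0]

best = psbcsp()
print(fitness(best))
```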

  6. Soil Moisture Active Passive Mission L4_C Data Product Assessment (Version 2 Validated Release)

    Science.gov (United States)

    Kimball, John S.; Jones, Lucas A.; Glassy, Joseph; Stavros, E. Natasha; Madani, Nima; Reichle, Rolf H.; Jackson, Thomas; Colliander, Andreas

    2016-01-01

    The SMAP satellite was successfully launched January 31, 2015, and began acquiring Earth observation data following in-orbit sensor calibration. Global data products derived from the SMAP L-band microwave measurements include Level 1 calibrated and geolocated radiometric brightness temperatures, Level 2/3 surface soil moisture and freeze/thaw geophysical retrievals mapped to a fixed Earth grid, and model-enhanced Level 4 data products for surface to root zone soil moisture and terrestrial carbon (CO2) fluxes. The post-launch SMAP mission CalVal Phase had two primary objectives for each science product team: 1) calibrate, verify, and improve the performance of the science algorithms, and 2) validate accuracies of the science data products as specified in the L1 science requirements. This report provides analysis and assessment of the SMAP Level 4 Carbon (L4_C) product pertaining to the validated release. The L4_C validated product release effectively replaces an earlier L4_C beta-product release (Kimball et al. 2015). The validated release described in this report incorporates a longer data record and benefits from algorithm and CalVal refinements acquired during the SMAP post-launch CalVal intensive period. The SMAP L4_C algorithms utilize a terrestrial carbon flux model informed by SMAP soil moisture inputs along with optical remote sensing (e.g. MODIS) vegetation indices and other ancillary biophysical data to estimate global daily net ecosystem CO2 exchange (NEE) and component carbon fluxes for vegetation gross primary production (GPP) and ecosystem respiration (Reco). Other L4_C product elements include surface (10 cm depth) soil organic carbon (SOC) stocks and associated environmental constraints to these processes, including soil moisture and landscape freeze/thaw (FT) controls on GPP and respiration (Kimball et al. 2012). The L4_C product encapsulates SMAP carbon cycle science objectives by: 1) providing a direct link between terrestrial carbon fluxes and

  7. Phonetic Encoding of Coda Voicing Contrast under Different Focus Conditions in L1 vs. L2 English.

    Science.gov (United States)

    Choi, Jiyoun; Kim, Sahayng; Cho, Taehong

    2016-01-01

    This study investigated how coda voicing contrast in English would be phonetically encoded in the temporal vs. spectral dimension of the preceding vowel (in vowel duration vs. F1/F2) by Korean L2 speakers of English, and how their L2 phonetic encoding pattern would be compared to that of native English speakers. Crucially, these questions were explored by taking into account the phonetics-prosody interface, testing effects of prominence by comparing target segments in three focus conditions (phonological focus, lexical focus, and no focus). Results showed that Korean speakers utilized the temporal dimension (vowel duration) to encode coda voicing contrast, but failed to use the spectral dimension (F1/F2), reflecting their native language experience-i.e., with a more sparsely populated vowel space in Korean, they are less sensitive to small changes in the spectral dimension, and hence fine-grained spectral cues in English are not readily accessible. Results also showed that along the temporal dimension, both the L1 and L2 speakers hyperarticulated coda voicing contrast under prominence (when phonologically or lexically focused), but hypoarticulated it in the non-prominent condition. This indicates that low-level phonetic realization and high-order information structure interact in a communicatively efficient way, regardless of the speakers' native language background. The Korean speakers, however, used the temporal phonetic space differently from the way the native speakers did, especially showing less reduction in the no focus condition. This was also attributable to their native language experience-i.e., the Korean speakers' use of temporal dimension is constrained in a way that is not detrimental to the preservation of coda voicing contrast, given that they failed to add additional cues along the spectral dimension. 
The results imply that the L2 phonetic system can be more fully illuminated through an investigation of the phonetics-prosody interface in connection

  8. Enrichment of variations in KIR3DL1/S1 and KIR2DL2/L3 among H1N1/09 ICU patients: an exploratory study.

    Directory of Open Access Journals (Sweden)

    David La

    Full Text Available BACKGROUND: Infection by the pandemic influenza A (H1N1/09) virus resulted in significant pathology among specific ethnic groups worldwide. Natural Killer (NK) cells are important in early innate immune responses to viral infections. Activation of NK cells depends, in part, on killer-cell immunoglobulin-like receptor (KIR) and HLA class I ligand interactions. To study factors involved in NK cell dysfunction in overactive immune responses to H1N1 infection, KIR3DL1/S1 and KIR2DL2/L3 allotypes and cognate HLA ligands of H1N1/09 intensive-care unit (ICU) patients were determined. METHODOLOGY AND FINDINGS: KIR3DL1/S1, KIR2DL2/L3, and HLA -B and -C of 51 H1N1/09 ICU patients and 105 H1N1-negative subjects (St. Theresa Point, Manitoba) were characterized. We detected an increase of 3DL1 ligand-negative pairs (3DL1/S1(+ Bw6(+ Bw4(-, and a lack of 2DL1 HLA-C2 ligands, among ICU patients. They were also significantly enriched for 2DL2/L3 ligand-positive pairs (PVA, P=0.024, Pc=0.047; Odds Ratio:2.563, CI95%:1.109-5.923, 3DL1*00101 (Ab>VA, PSTh, P=0.034, Pc=0.268, and 3DL1*029 (Ab>STh, P=0.039, Pc=0.301. Aboriginal patients ligand-positive for 3DL1/S1 and 2DL1 had the lowest probabilities of death (R(d (R(d=28%, compared to patients that were 3DL1/S1 ligand-negative (R(d=52% or carried 3DL1*029 (R(d=52%. Relative to Caucasoids (CA, two allotypes were enriched among non-aboriginal ICU patients (NAb: 3DL1*00401 (NAb>CA, P<0.001, Pc<0.001 and 3DL1*01502 (CA1/S1, 2DL2/L3, and 2DL1 had the lowest probabilities of death (R(d=36%, compared to subjects with 3DL1*01502 (R(d=48% and/or 3DL1*00401 (R(d=58%. CONCLUSIONS: Specific KIR3DL1/S1 allotypes, 3DL1/S1 and 2DL1 ligand-negative pairs, and 2DL2/L3 ligand-positive pairs were enriched among ICU patients. This suggests a possible association with NK cell dysfunction in patients with overactive immune responses to H1N1/09, leading to

  9. VES/TEM 1D joint inversion by using Controlled Random Search (CRS) algorithm

    Science.gov (United States)

    Bortolozo, Cassiano Antonio; Porsani, Jorge Luís; Santos, Fernando Acácio Monteiro dos; Almeida, Emerson Rodrigo

    2015-01-01

    Electrical (DC) and Transient Electromagnetic (TEM) soundings are used in a great number of environmental, hydrological, and mining exploration studies. Usually, data interpretation is accomplished by individual 1D models, often resulting in ambiguous models. This can be explained by the way the two different methodologies sample the medium beneath the surface. Vertical Electrical Sounding (VES) is good at marking resistive structures, while Transient Electromagnetic sounding (TEM) is very sensitive to conductive structures. Another difference is that VES is better at detecting shallow structures, while TEM soundings can reach deeper layers. A Matlab program for 1D joint inversion of VES and TEM soundings was developed, aiming to exploit the best of both methods. The program uses the CRS (Controlled Random Search) algorithm for both single and 1D joint inversions. Inversion programs usually use Marquardt-type algorithms, but for electrical and electromagnetic methods these algorithms may find a local minimum or fail to converge. Initially, the algorithm was tested with synthetic data, and then it was used to invert experimental data from two places in the Paraná sedimentary basin (the cities of Bebedouro and Pirassununga), both located in São Paulo State, Brazil. The geoelectric model obtained from 1D joint inversion of VES and TEM data is similar to the real geological condition, and ambiguities were minimized. Results with synthetic and real data show that 1D VES/TEM joint inversion better recovers the simulated models and shows great potential in geological studies, especially hydrogeological studies.
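
    The Controlled Random Search optimizer mentioned above (in the spirit of Price's CRS) is simple to sketch. The version below is a minimal illustration on a toy misfit, not the authors' Matlab implementation; in the real application the objective would be the joint VES/TEM data misfit of a layered resistivity model, and all sizes here are arbitrary.

```python
import random

def crs_minimize(f, bounds, n_pop=50, iters=2000, seed=0):
    """Derivative-free global search: keep a population of points,
    propose simplex-style reflection trials, and replace the worst
    point whenever a trial improves on it."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_pop)]
    vals = [f(p) for p in pop]
    for _ in range(iters):
        # pick dim+1 distinct points; reflect the last one through the
        # centroid of the first dim points
        idx = rng.sample(range(n_pop), dim + 1)
        centroid = [sum(pop[i][d] for i in idx[:-1]) / dim for d in range(dim)]
        last = pop[idx[-1]]
        trial = [2 * centroid[d] - last[d] for d in range(dim)]
        if any(not (lo <= t <= hi) for t, (lo, hi) in zip(trial, bounds)):
            continue                      # discard out-of-bounds trials
        ft = f(trial)
        worst = max(range(n_pop), key=lambda i: vals[i])
        if ft < vals[worst]:              # accept only improving trials
            pop[worst], vals[worst] = trial, ft
    best = min(range(n_pop), key=lambda i: vals[i])
    return pop[best], vals[best]

# toy "misfit" with minimum at (1, 2), standing in for a data misfit
x, fx = crs_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2,
                     [(-5, 5), (-5, 5)])
print(x, fx)
```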

  10. Structural Dynamics Investigation of Human Family 1 & 2 Cystatin-Cathepsin L1 Interaction: A Comparison of Binding Modes.

    Directory of Open Access Journals (Sweden)

    Suman Kumar Nandy

    Full Text Available The cystatin superfamily is a large group of evolutionarily related proteins involved in numerous physiological activities through their inhibitory activity towards cysteine proteases. Despite sharing the same cystatin fold, and inhibiting cysteine proteases through the same tripartite edge involving the highly conserved N-terminal region and the L1 and L2 loops, cystatins differ widely in their inhibitory affinity towards the C1 family of cysteine proteases, and the molecular details of these interactions are still elusive. In this study, the inhibitory interactions of human family 1 & 2 cystatins with cathepsin L1 are predicted, and their stability and viability are verified through protein docking & comparative molecular dynamics. An overall stabilization effect is observed in all cystatins on complex formation. The complexes are mostly dominated by van der Waals interactions, but the relative participation of the conserved regions varies extensively. While van der Waals contacts prevail in the L1 and L2 loops, the N-terminal segment chiefly acts as an electrostatic interaction site. In fact, the comparative dynamics study points towards the instrumental role of the L1 loop in directing the total interaction profile of the complex either towards electrostatic or van der Waals contacts. The key amino acid residues surfaced via interaction energy, hydrogen bonding, and solvent accessible surface area analysis for each cystatin-cathepsin L1 complex influence the mode of binding and thus control the diverse inhibitory affinity of cystatins towards cysteine proteases.

  11. A new warfarin dosing algorithm including VKORC1 3730 G > A polymorphism: comparison with results obtained by other published algorithms.

    Science.gov (United States)

    Cini, Michela; Legnani, Cristina; Cosmi, Benilde; Guazzaloca, Giuliana; Valdrè, Lelia; Frascaro, Mirella; Palareti, Gualtiero

    2012-08-01

    Warfarin dosing is affected by clinical and genetic variants, but the contribution of the genotype associated with warfarin resistance in pharmacogenetic algorithms has not yet been well assessed. We developed a new dosing algorithm including polymorphisms associated with both warfarin sensitivity and resistance in the Italian population, and its performance was compared with those of eight previously published algorithms. Clinical and genetic data (CYP2C9*2, CYP2C9*3, VKORC1 -1639 G > A, and VKORC1 3730 G > A) were used to elaborate the new algorithm. Derivation and validation groups comprised 55 (58.2% men, mean age 69 years) and 40 (57.5% men, mean age 70 years) patients, respectively, who were on stable anticoagulation therapy for at least 3 months with different oral anticoagulation therapy (OAT) indications. Performance of the new algorithm, evaluated with the mean absolute error (MAE), defined as the absolute value of the difference between observed daily maintenance dose and predicted daily dose, correlation with the observed dose, and R² value, was comparable with or slightly lower than that obtained using the other algorithms. The new algorithm could correctly assign 53.3%, 50.0%, and 57.1% of patients to the low (≤25 mg/week), intermediate (26-44 mg/week), and high (≥45 mg/week) dosing ranges, respectively. Our data showed a significant increase in predictive accuracy among patients requiring a high warfarin dose compared with the other algorithms (ranging from 0% to 28.6%). The algorithm including VKORC1 3730 G > A, associated with warfarin resistance, allowed a more accurate identification of resistant patients who require higher warfarin dosage.
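
    The evaluation metrics above are straightforward to reproduce. Only the MAE definition and the three weekly dosing bands come from the abstract; the dose values in the sketch below are hypothetical, purely for illustration.

```python
def weekly_range(dose_mg_per_week):
    """Dose bands from the study: low <=25, intermediate 26-44,
    high >=45 mg/week."""
    if dose_mg_per_week <= 25:
        return "low"
    if dose_mg_per_week < 45:
        return "intermediate"
    return "high"

def mae(observed, predicted):
    """Mean absolute error between observed and predicted daily doses."""
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

# hypothetical daily maintenance doses (mg/day), illustration only
obs = [5.0, 2.5, 7.5, 4.0]
pred = [4.5, 3.0, 6.0, 4.0]
print(mae(obs, pred))          # 0.625
print(weekly_range(5.0 * 7))   # a 35 mg/week patient -> "intermediate"
```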

  12. The Use of Paraphrase in Summary Writing: A Comparison of L1 and L2 Writers

    Science.gov (United States)

    Keck, Casey

    2006-01-01

    Paraphrasing is considered by many to be an important skill for academic writing, and some have argued that the teaching of paraphrasing might help students avoid copying from source texts. Few studies, however, have investigated the ways in which both L1 and L2 academic writers already use paraphrasing as a textual borrowing strategy when…

  13. Subband Adaptive Filtering with l1-Norm Constraint for Sparse System Identification

    Directory of Open Access Journals (Sweden)

    Young-Seok Choi

    2013-01-01

    Full Text Available This paper presents a new approach of the normalized subband adaptive filter (NSAF which directly exploits the sparsity condition of an underlying system for sparse system identification. The proposed NSAF integrates a weighted l1-norm constraint into the cost function of the NSAF algorithm. To get the optimum solution of the weighted l1-norm regularized cost function, a subgradient calculus is employed, resulting in a stochastic gradient based update recursion of the weighted l1-norm regularized NSAF. The choice of distinct weighted l1-norm regularization leads to two versions of the l1-norm regularized NSAF. Numerical results clearly indicate the superior convergence of the l1-norm regularized NSAFs over the classical NSAF especially when identifying a sparse system.
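
    The l1-regularized update can be illustrated with a fullband cousin of the NSAF: the sketch below adds the subgradient of a (uniformly weighted) l1 penalty, i.e. a zero-attracting term, to a standard NLMS recursion identifying a sparse system. The subband decomposition of the actual NSAF and the paper's distinct weighting schemes are omitted; the step size and regularization weight are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                        # filter length
w_true = np.zeros(N)          # sparse unknown system: only 3 active taps
w_true[[5, 20, 41]] = [1.0, -0.5, 0.8]

mu, rho = 0.5, 1e-4           # step size and l1 (zero-attracting) weight
w = np.zeros(N)
x_buf = np.zeros(N)
for _ in range(20000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()
    d = w_true @ x_buf + 1e-3 * rng.standard_normal()  # system output + noise
    e = d - w @ x_buf
    # NLMS correction plus the subgradient of the l1 penalty,
    # which shrinks inactive taps toward zero
    w += mu * e * x_buf / (x_buf @ x_buf + 1e-8) - rho * np.sign(w)

print(np.linalg.norm(w - w_true))   # small misalignment after adaptation
```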

  14. L1 Differences and L2 Similarities: Teaching Verb Tenses in English

    Science.gov (United States)

    Collins, Laura

    2007-01-01

    In making decisions regarding the focus for grammar teaching, ESL instructors may take into consideration errors that appear to result from the influence of their students' first language(s) (L1). There is also evidence from language acquisition research suggesting that for some grammatical features, learners of different L1 backgrounds may face…

  15. A novel high-frequency encoding algorithm for image compression

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
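
    Steps (1), (2), and (4) of the pipeline above are easy to sketch; the probability look-up table (3) and arithmetic coding (5) are the involved parts and are omitted here. The block below is a minimal illustration using `scipy.fft`, with an assumed 8x8 block size and 21 of 63 AC coefficients retained per block (in the spirit of the paper's 2/3 reduction), not the authors' exact scheme.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_blocks(img, block=8, keep=21):
    """Blockwise DCT, zero the high-frequency AC coefficients,
    and delta-code the DC components."""
    h, w = img.shape
    dcs, acs = [], []
    for r in range(0, h, block):
        for c in range(0, w, block):
            coef = dctn(img[r:r+block, c:c+block], norm="ortho")
            flat = coef.flatten()
            dcs.append(flat[0])
            ac = flat[1:].copy()
            ac[keep:] = 0.0                           # high-frequency minimization
            acs.append(ac)
    dc_delta = np.diff(np.array(dcs), prepend=0.0)    # differential DC coding
    return dc_delta, acs

def decompress_blocks(dc_delta, acs, shape, block=8):
    dcs = np.cumsum(dc_delta)                         # undo the delta coding
    out = np.zeros(shape)
    i = 0
    for r in range(0, shape[0], block):
        for c in range(0, shape[1], block):
            flat = np.concatenate(([dcs[i]], acs[i]))
            out[r:r+block, c:c+block] = idctn(flat.reshape(block, block),
                                              norm="ortho")
            i += 1
    return out

img = np.outer(np.linspace(0, 255, 64), np.ones(64))  # smooth test image
dc, ac = compress_blocks(img)
rec = decompress_blocks(dc, ac, img.shape)
rmse = np.sqrt(np.mean((img - rec) ** 2))
print(rmse)   # small for smooth content despite discarding 2/3 of the ACs
```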

  16. Loss-minimal Algorithmic Trading Based on Levy Processes

    Directory of Open Access Journals (Sweden)

    Farhad Kia

    2014-08-01

    Full Text Available In this paper we optimize portfolios assuming that the value of the portfolio follows a Lévy process. First we identify the parameters of the underlying Lévy process, and then portfolio optimization is performed by maximizing the probability of positive return. The method has been tested by extensive performance analysis on Forex and S&P 500 historical time series. The proposed trading algorithm has achieved a 4.9% yearly return on average without leverage, which demonstrates its applicability to algorithmic trading.

  17. Curcumin inhibits cholesterol uptake in Caco-2 cells by down-regulation of NPC1L1 expression

    Directory of Open Access Journals (Sweden)

    Duan Rui-Dong

    2010-04-01

    Full Text Available Abstract Background Curcumin is a polyphenol and one of the principal curcuminoids of the spice turmeric. Its antioxidant, anti-cancer and anti-inflammatory effects have been intensively studied. Previous in vivo studies showed that administration of curcumin also decreased cholesterol levels in the blood, and the effects were considered to be related to upregulation of the LDL receptor. However, since plasma cholesterol levels are also influenced by the uptake of cholesterol in the gut, which is mediated by a specific transporter, the Niemann-Pick C1-like 1 (NPC1L1) protein, the present study investigates whether curcumin affects cholesterol uptake in intestinal Caco-2 cells. Methods Caco-2 cells were cultured to confluence. Micelles composed of bile salt, monoolein, and 14C-cholesterol were prepared. We first incubated the cells with the micelles in the presence and absence of ezetimibe, the specific inhibitor of NPC1L1, to see whether the uptake of cholesterol in the cells was mediated by NPC1L1. We then pretreated the cells with curcumin at different concentrations for 24 h, followed by examination of the changes of cholesterol uptake in these curcumin-treated cells. Finally, we determined whether curcumin affects the expression of NPC1L1 by both Western blot analysis and qPCR quantification. Results We found that the uptake of radioactive cholesterol in Caco-2 cells was inhibited by ezetimibe in a dose-dependent manner. The results indicate that the uptake of cholesterol in this study was mediated by NPC1L1. We then pretreated the cells with 25-100 μM curcumin for 24 h and found that such a treatment dose-dependently inhibited cholesterol uptake, with 40% inhibition obtained by 100 μM curcumin. In addition, we found that the curcumin-induced inhibition of cholesterol uptake was associated with a significant decrease in the levels of NPC1L1 protein and NPC1L1 mRNA, as analyzed by Western blot and qPCR, respectively. Conclusion

  18. Curcumin inhibits cholesterol uptake in Caco-2 cells by down-regulation of NPC1L1 expression.

    Science.gov (United States)

    Feng, Dan; Ohlsson, Lena; Duan, Rui-Dong

    2010-04-19

    Curcumin is a polyphenol and one of the principal curcuminoids of the spice turmeric. Its antioxidant, anti-cancer and anti-inflammatory effects have been intensively studied. Previous in vivo studies showed that administration of curcumin also decreased cholesterol levels in the blood, and the effects were considered to be related to upregulation of the LDL receptor. However, since plasma cholesterol levels are also influenced by the uptake of cholesterol in the gut, which is mediated by a specific transporter, the Niemann-Pick C1-like 1 (NPC1L1) protein, the present study investigates whether curcumin affects cholesterol uptake in intestinal Caco-2 cells. Caco-2 cells were cultured to confluence. Micelles composed of bile salt, monoolein, and 14C-cholesterol were prepared. We first incubated the cells with the micelles in the presence and absence of ezetimibe, the specific inhibitor of NPC1L1, to see whether the uptake of cholesterol in the cells was mediated by NPC1L1. We then pretreated the cells with curcumin at different concentrations for 24 h, followed by examination of the changes of cholesterol uptake in these curcumin-treated cells. Finally, we determined whether curcumin affects the expression of NPC1L1 by both Western blot analysis and qPCR quantification. We found that the uptake of radioactive cholesterol in Caco-2 cells was inhibited by ezetimibe in a dose-dependent manner. The results indicate that the uptake of cholesterol in this study was mediated by NPC1L1. We then pretreated the cells with 25-100 μM curcumin for 24 h and found that such a treatment dose-dependently inhibited cholesterol uptake, with 40% inhibition obtained by 100 μM curcumin. In addition, we found that the curcumin-induced inhibition of cholesterol uptake was associated with a significant decrease in the levels of NPC1L1 protein and NPC1L1 mRNA, as analyzed by Western blot and qPCR, respectively. Curcumin inhibits cholesterol uptake through suppression of NPC1L1

  19. Acquiring native-like intonation in Dutch and Spanish : Comparing the L1 and L2 of native speakers and second language learners

    NARCIS (Netherlands)

    van Maastricht, L.J.; Swerts, M.G.J.; Krahmer, E.J.

    2013-01-01

    ACQUIRING NATIVE-LIKE INTONATION IN DUTCH AND SPANISH Comparing the L1 and L2 of native speakers and second language learners Introduction Learning more about the interaction between the native language (L1) and the target language (L2) has been the aim of many studies on second language acquisition

  20. La norma L1 como alternativa a la norma L2 en el ajuste de la regresión

    Directory of Open Access Journals (Sweden)

    Carlos N. Bouza

    2002-01-01

    Full Text Available In this paper we analyze the development and the concepts of the L1 and L2 norms and compare them with some examples. The L1 norm is optimal under the assumption that the errors follow a Laplace distribution. It was proposed long before least squares, but the computational convenience of the latter gave it primacy. The L1 norm is now gaining importance in economic applications, since problems in finance and time series generally violate the hypotheses used in the Gauss-Markov theorem. The L2 norm is the universally accepted method for fitting regressions. However, its optimality is only valid under a series of assumptions that in general do not hold (Gauss-Markov theorem). The L1 norm therefore appears as a better alternative to L2 in many applications, given its robustness to outlying observations.
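
    The abstract's central point, the robustness of the L1 fit to atypical observations compared with L2, can be demonstrated in a few lines. The data below and the use of Nelder-Mead to solve the L1 (least absolute deviations) problem are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, x.size)  # true line: y = 1 + 2x
y[45] += 30.0                                   # one gross outlier

X = np.column_stack([np.ones_like(x), x])

# L2 norm (ordinary least squares): closed-form solution
beta_l2, *_ = np.linalg.lstsq(X, y, rcond=None)

# L1 norm (least absolute deviations): minimize the sum of |residuals|
res = minimize(lambda b: np.abs(y - X @ b).sum(), x0=beta_l2,
               method="Nelder-Mead")
beta_l1 = res.x

print(beta_l2)   # slope pulled away from 2 by the outlier
print(beta_l1)   # stays close to the true (1.0, 2.0)
```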

  1. Rational approximations and quantum algorithms with postselection

    NARCIS (Netherlands)

    Mahadev, U.; de Wolf, R.

    2015-01-01

    We study the close connection between rational functions that approximate a given Boolean function, and quantum algorithms that compute the same function using post-selection. We show that the minimal degree of the former equals (up to a factor of 2) the minimal query complexity of the latter. We

  2. Banach spaces that realize minimal fillings

    International Nuclear Information System (INIS)

    Bednov, B. B.; Borodin, P. A.

    2014-01-01

    It is proved that a real Banach space realizes minimal fillings for all its finite subsets (a shortest network spanning a fixed finite subset always exists and has the minimum possible length) if and only if it is a predual of L1. The spaces L1 are characterized in terms of Steiner points (medians). Bibliography: 25 titles. (paper)

  3. Association of Affect with Vertical Position in L1 but not in L2 in Unbalanced Bilinguals

    Directory of Open Access Journals (Sweden)

    Degao eLi

    2015-05-01

    Full Text Available After judging the valence of positive (e.g., happy) and negative words (e.g., sad), the participants' response to the letter (q or p) was faster and slower, respectively, when the letter appeared at the upper end than at the lower end of the screen in Meier & Robinson's (2004) second experiment. To compare this metaphorical association of affect with vertical position in Chinese-English bilinguals' first language (L1) and second language (L2) (language), we conducted four experiments in an affective priming task. The targets were one set of positive or negative words (valence), which were shown vertically above or below the centre of the screen (position). The primes, presented at the centre of the screen, were affective words that were semantically related to the targets, affective words that were not semantically related to the targets, affective icon-pictures, and neutral strings in experiments 1, 2, 3, and 4, respectively. In judging the targets' valence, the participants showed different patterns of interactions between language, valence, and position in reaction times across the experiments. We concluded that the metaphorical association between affect and vertical position works in L1 but not in L2 for unbalanced bilinguals.

  4. An Approximate L p Difference Algorithm for Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Jessica H. Fong

    2001-12-01

    Full Text Available Several recent papers have shown how to approximate the difference ∑_i |a_i − b_i| or ∑_i |a_i − b_i|² between two functions, when the function values a_i and b_i are given in a data stream, and their order is chosen by an adversary. These algorithms use little space (much less than would be needed to store the entire stream) and little time to process each item in the stream. They approximate with small relative error. Using different techniques, we show how to approximate the L_p-difference ∑_i |a_i − b_i|^p for any rational-valued p ∈ (0,2], with comparable efficiency and error. We also show how to approximate ∑_i |a_i − b_i|^p for larger values of p, but with a worse error guarantee. Our results fill in gaps left by recent work, by providing an algorithm that is precisely tunable for the application at hand. These results can be used to assess the difference between two chronologically or physically separated massive data sets, making one quick pass over each data set, without buffering the data or requiring the data source to pause. For example, one can use our techniques to judge whether the traffic on two remote network routers is similar without requiring either router to transmit a copy of its traffic. A web search engine could use such algorithms to construct a library of small "sketches," one for each distinct page on the web; one can approximate the extent to which new web pages duplicate old ones by comparing the sketches of the web pages. Such techniques will become increasingly important as the enormous scale, distributional nature, and one-pass processing requirements of data sets become more commonplace.
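
    For the p = 1 case, a standard way to realize such a small-space approximation is a linear sketch built from 1-stable (Cauchy) projections: because the sketch is linear, each stream can be summarized independently and the L1 difference recovered from the difference of the sketches. The block below is a hedged illustration of that idea with dense vectors standing in for the streams, not the authors' algorithm; all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 10_000, 400             # vector length, sketch size (k << n)

# Cauchy (1-stable) projections. Each stream update (i, delta) would
# simply add delta * A[:, i] to that stream's sketch, so neither
# stream needs to be stored in full.
A = rng.standard_cauchy((k, n))

a = rng.normal(0, 1, n)
b = a + rng.normal(0, 0.5, n)

sketch_a, sketch_b = A @ a, A @ b
# By 1-stability, each entry of A @ (a - b) is Cauchy with scale
# ||a - b||_1, and the median of |Cauchy| equals its scale.
estimate = np.median(np.abs(sketch_a - sketch_b))
exact = np.abs(a - b).sum()
print(estimate / exact)        # close to 1 for moderate k
```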

  5. Combination therapy with 1,3-bis(2-chloroethyl)-1-nitrosourea and low dose rate radiation in the 9L rat brain tumor and spheroid models: implications for brain tumor brachytherapy

    International Nuclear Information System (INIS)

    Gutin, P.H.; Bernstein, M.; Sano, Y.; Deen, D.F.

    1984-01-01

    The effects of combination treatment with 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU) and low dose rate radiation were studied in the 9L rat brain tumor in vivo model and the 9L multicellular tumor spheroid model. F-344 rats bearing intracerebral 9L gliosarcomas were implanted with removable ¹²⁵I sources. Minimal (peripheral) tumor doses of 6387 rad produced an increased life-span (ILS) of 28% over that of control rats implanted with dummy sources, BCNU alone (13.3 mg/kg) produced an ILS of 67%, and combination treatment with BCNU and implanted ¹²⁵I sources produced an ILS of 167%. As measured by a colony-forming efficiency assay, the greatest cell kill in 9L spheroids occurred when BCNU was administered 24 hours before irradiation from a ¹³⁷Cs source at a low dose rate of 5 rad/minute. At a higher dose rate of 210 rad/minute, the time dependence of the effects of combination treatment was identical and therefore independent of dose rate.

  6. An algorithm for improving the quality of structural images of turbid media in endoscopic optical coherence tomography

    Science.gov (United States)

    Potlov, A. Yu.; Frolov, S. V.; Proskurin, S. G.

    2018-04-01

    An algorithm for reconstructing high-quality structural images in endoscopic optical coherence tomography (OCT) of biological tissue is described. The key features of the presented algorithm are: (1) raster scanning with averaging of adjacent A-scans and pixels; (2) minimization of the speckle level. The described algorithm can be used in gastroenterology, urology, gynecology, and otorhinolaryngology for diagnostics of mucous membranes and skin in vivo and in situ.
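
    The two steps named above can be sketched on synthetic data. The block below is an assumed, minimal interpretation — horizontal averaging of adjacent A-scans followed by a small median filter for speckle suppression — not the authors' actual algorithm; the Rayleigh speckle model and all window sizes are illustrative.

```python
import numpy as np

def enhance_bscan(bscan, n_avg=3, kernel=3):
    """Toy B-scan enhancement (rows = depth, columns = adjacent A-scans):
    (1) average each A-scan with its neighbours, then
    (2) suppress residual speckle with a small median filter."""
    h, w = bscan.shape
    pad = n_avg // 2
    padded = np.pad(bscan, ((0, 0), (pad, pad)), mode="edge")
    averaged = np.stack([padded[:, i:i + w] for i in range(n_avg)]).mean(axis=0)

    k = kernel // 2
    padded2 = np.pad(averaged, k, mode="edge")
    windows = np.stack([padded2[dr:dr + h, dc:dc + w]
                        for dr in range(kernel) for dc in range(kernel)])
    return np.median(windows, axis=0)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(1.0, 0.1, 100)[:, None], (1, 50))  # depth decay
noisy = clean * rng.rayleigh(1.0, clean.shape)   # multiplicative speckle
out = enhance_bscan(noisy)

ref = clean * np.sqrt(np.pi / 2)   # mean level of the speckled image
print(np.std(noisy - ref), np.std(out - ref))   # speckle noise drops markedly
```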

  7. Experiment prediction for LOFT nuclear experiments L5-1/L8-2

    International Nuclear Information System (INIS)

    Chen, T.H.; Modro, S.M.

    1982-01-01

    The LOFT Experiments L5-1 and L8-2 simulated intermediate break loss-of-coolant accidents with core uncovery. This paper compares the predictions with the measured data for these experiments. The RELAP5 code was used to perform best estimate double-blind and single-blind predictions. The double-blind calculations are performed prior to the experiment and use specified nominal initial and boundary conditions. The single-blind calculations are performed after the experiment and use measured initial and boundary conditions while maintaining all other parameters constant, including the code version. Comparisons of calculated results with experimental results are discussed; the possible causes of discrepancies are explored and explained. RELAP5 calculated system pressure, mass inventory, and fuel cladding temperature agree reasonably well with the experiment results, and only slight changes are noted between the double-blind and single-blind predictions

  8. Application of response surface methodology (RSM) and genetic algorithm in minimizing warpage on side arm

    Science.gov (United States)

    Raimee, N. A.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.

    2017-09-01

    The plastic injection moulding process produces large numbers of high-quality parts with great accuracy and speed. It is widely used for the production of plastic parts with various shapes and geometries. The side arm is one such product manufactured by injection moulding. However, there are difficulties in adjusting the parameter variables, which are mould temperature, melt temperature, packing pressure, packing time, and cooling time, as warpage occurs at the tip of the side arm. Therefore, the work reported herein is about minimizing warpage on the side arm product by optimizing the process parameters using Response Surface Methodology (RSM) together with an artificial intelligence (AI) method, the Genetic Algorithm (GA).
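
    A minimal GA of the kind used for such parameter optimization can be sketched as follows. The quadratic "response surface" below is a hypothetical stand-in for an RSM warpage model over normalized process parameters (a real study would fit its coefficients from the design-of-experiments runs), and all GA hyperparameters are arbitrary.

```python
import random

# Hypothetical response surface for warpage (mm) over five normalized
# parameters: [mould temp, melt temp, packing pressure, packing time,
# cooling time]. Illustration only, not fitted to any experiment.
OPT = [0.4, 0.6, 0.7, 0.5, 0.3]
def warpage(x):
    return 0.05 + sum((xi - oi) ** 2 for xi, oi in zip(x, OPT))

def ga_minimize(f, dim=5, pop=40, gens=60, pm=0.2, seed=3):
    rng = random.Random(seed)
    P = [[rng.random() for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        nxt = P[:4]                                # elitism: keep the 4 best
        while len(nxt) < pop:
            a, b = rng.sample(P[: pop // 2], 2)    # parents from the better half
            cut = rng.randrange(1, dim)
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < pm:                  # occasional Gaussian mutation
                j = rng.randrange(dim)
                child[j] = min(1.0, max(0.0, child[j] + rng.gauss(0, 0.1)))
            nxt.append(child)
        P = nxt
    return min(P, key=f)

best = ga_minimize(warpage)
print(warpage(best))   # approaches the model's 0.05 mm floor
```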

  9. Radiation processing of minimally processed vegetables and aromatic plants

    Energy Technology Data Exchange (ETDEWEB)

    Trigo, M.J. [Instituto Nacional dos Recursos Biologicos, L-INIA, Quinta do Marques, 2784-505 Oeiras (Portugal)], E-mail: mjptrigo@gmail.com; Sousa, M.B.; Sapata, M.M.; Ferreira, A.; Curado, T.; Andrada, L. [Instituto Nacional dos Recursos Biologicos, L-INIA, Quinta do Marques, 2784-505 Oeiras (Portugal); Botelho, M.L. [Instituto Tecnologico e Nuclear, E.N. 10, 2696 Sacavem (Portugal); Veloso, M.G. [Faculdade de Medicina Veterinaria de Lisboa, Av. da Universidade Tecnica, Alto da Ajuda, 1300-477 Lisboa (Portugal)

    2009-07-15

    Vegetables are an essential part of people's diet all around the world. Due to cultivation techniques and post-harvest handling, these products may carry a high microbial load that can cause foodborne outbreaks. The irradiation of minimally processed vegetables is an efficient way to reduce the level of microorganisms and to inhibit parasites, supporting safe global trade. The effects of irradiation were evaluated on minimally processed vegetables: coriander (Coriandrum sativum L.), mint (Mentha spicata L.), parsley (Petroselinum crispum Mill. (A.W. Hill)), lettuce (Lactuca sativa L.) and watercress (Nasturtium officinale L.). The inactivation level of the natural microbiota and the D10 values of Escherichia coli O157:H7 and Listeria innocua in these products were determined. The physical-chemical and sensorial characteristics before and after irradiation at applied doses ranging from 0.5 up to 2.0 kGy were also evaluated. No differences were found in the overall sensorial and physical properties after irradiation up to 1 kGy, while a decrease of the natural microbiota was noticed (≥2 log). Based on the determined D10 values, the amount of radiation necessary to kill 10^5 E. coli and L. innocua was between 0.70 and 1.55 kGy. The shelf life of coriander, mint and lettuce irradiated at 0.5 kGy increased by 2, 3 and 4 days, respectively, when compared with non-irradiated samples.

  10. An algorithm for identification and classification of individuals with type 1 and type 2 diabetes mellitus in a large primary care database

    Directory of Open Access Journals (Sweden)

    Sharma M

    2016-10-01

    Full Text Available Manuj Sharma,1 Irene Petersen,1,2 Irwin Nazareth,1 Sonia J Coton,1 1Department of Primary Care and Population Health, University College London, London, UK; 2Department of Clinical Epidemiology, Aarhus University, Aarhus, Denmark Background: Research into diabetes mellitus (DM) often requires a reproducible method for identifying and distinguishing individuals with type 1 DM (T1DM) and type 2 DM (T2DM). Objectives: To develop a method to identify individuals with T1DM and T2DM using UK primary care electronic health records. Methods: Using data from The Health Improvement Network primary care database, we developed a two-step algorithm. The first algorithm step identified individuals with potential T1DM or T2DM based on diagnostic records, treatment, and clinical test results. We excluded individuals with records for rarer DM subtypes only. For individuals to be considered diabetic, they needed to have at least two records indicative of DM, one of which was required to be a diagnostic record. We then classified individuals with T1DM and T2DM using the second algorithm step. A combination of diagnostic codes, medication prescribed, age at diagnosis, and whether the case was incident or prevalent was used in this process. We internally validated this classification algorithm through comparison against an independent clinical examination of The Health Improvement Network electronic health records for a random sample of 500 DM individuals. Results: Out of 9,161,866 individuals aged 0–99 years from 2000 to 2014, we classified 37,693 individuals with T1DM and 418,433 with T2DM, while 1,792 individuals remained unclassified. A small proportion were classified with some uncertainty (1,155 [3.1%] of all individuals with T1DM and 6,139 [1.5%] with T2DM) due to unclear health records. During validation, manual assignment of DM type based on clinical assessment of the entire electronic record and algorithmic assignment led to equivalent classification
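The two-step structure described in the Methods can be sketched schematically. The record fields, the codes (`t1dm_code`, `insulin`, ...), and the age threshold below are illustrative placeholders; the actual algorithm operates on THIN diagnostic codes, prescriptions, test results, and incident/prevalent status:

```python
# Schematic of the two-step algorithm, with hypothetical record fields.
def step1_has_diabetes(records):
    # Step 1: at least two records indicative of DM, at least one of
    # which must be a diagnostic record.
    dm = [r for r in records if r["kind"] in ("diagnosis", "treatment", "test")]
    return len(dm) >= 2 and any(r["kind"] == "diagnosis" for r in dm)

def step2_classify(records, age_at_diagnosis):
    # Step 2: combine diagnostic codes, medication, and age at diagnosis.
    # The codes and the age cut-off are illustrative placeholders.
    codes = {r.get("code") for r in records}
    insulin_only = "insulin" in codes and "oral_hypoglycaemic" not in codes
    if "t1dm_code" in codes or (insulin_only and age_at_diagnosis < 35):
        return "T1DM"
    if "t2dm_code" in codes or "oral_hypoglycaemic" in codes:
        return "T2DM"
    return "unclassified"

patient = [{"kind": "diagnosis", "code": "t2dm_code"},
           {"kind": "treatment", "code": "oral_hypoglycaemic"}]
dm_type = step2_classify(patient, 58) if step1_has_diabetes(patient) else None
```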

  11. l-2-Nitrimino-1,3-diazepane-4-carboxylic acid

    Directory of Open Access Journals (Sweden)

    Harutyun A. Karapetyan

    2008-05-01

    Full Text Available The cyclic form of l-nitroarginine, C6H10N4O4, crystallizes with two independent molecules in the asymmetric unit. According to the geometrical parameters, similar in both molecules, the structure corresponds to that of l-2-nitrimino-1,3-diazepane-4-carboxylic acid; there are, however, conformational differences between the independent molecules, one of them being close to a twisted chair while the other might be described as a rather flattened boat. All six active H atoms in the two molecules are involved in hydrogen bonds, two of which are intramolecular and four intermolecular, forming an infinite chain of molecules along the b axis.

  12. Semantic Categorization of Placement Verbs in L1 and L2 Danish and Spanish

    Science.gov (United States)

    Cadierno, Teresa; Ibarretxe-Antuñano, Iraide; Hijazo-Gascón, Alberto

    2016-01-01

    This study investigates semantic categorization of the meaning of placement verbs by Danish and Spanish native speakers and two groups of intermediate second language (L2) learners (Danish learners of L2 Spanish and Spanish learners of L2 Danish). Participants described 31 video clips picturing different types of placement events. Cluster analyses…

  13. Listeria monocytogenes serovar 4a is a possible evolutionary intermediate between L. monocytogenes serovars 1/2a and 4b and L. innocua.

    Science.gov (United States)

    Chen, Jianshun; Jiang, Lingli; Chen, Xueyan; Luo, Xiaokai; Chen, Yang; Yu, Ying; Tian, Guoming; Liu, Dongyou; Fang, Weihuan

    2009-03-01

    The genus Listeria consists of six closely related species and forms three phylogenetic groups: L. monocytogenes-L. innocua, L. ivanovii-L. seeligeri-L. welshimeri, and L. grayi. In this report, we attempted to examine the evolutionary relationship in the L. monocytogenes-L. innocua group by probing the nucleotide sequences of 23S rRNA and 16S rRNA, and the gene clusters lmo0029-lmo0042, ascB-dapE, rplS-infC, and prs-ldh in L. monocytogenes serovars 1/2a, 4a, and 4b, and L. innocua. Additionally, we assessed the status of the L. monocytogenes-specific inlA and inlB genes and 10 L. innocua-specific genes in these species/serovars, together with phenotypic characterization using in vivo and in vitro procedures. The results indicate that L. monocytogenes serovar 4a strains are genetically similar to L. innocua in the lmo0035-lmo0042, ascB-dapE, and rplS-infC regions and also possess the L. innocua-specific genes lin0372 and lin1073. Furthermore, both L. monocytogenes serovar 4a and L. innocua exhibit impaired intercellular spread ability and negligible pathogenicity in a mouse model. On the other hand, despite resembling L. monocytogenes serovars 1/2a and 4b in having a nearly identical virulence gene cluster, and inlA and inlB genes, these serovar 4a strains differ from serovars 1/2a and 4b by harboring notably altered actA and plcB genes, displaying strong phospholipase activity and subdued in vivo and in vitro virulence. Thus, by possessing many genes common to L. monocytogenes serovars 1/2a and 4b, and sharing many similar gene deletions with L. innocua, L. monocytogenes serovar 4a represents a possible evolutionary intermediate between L. monocytogenes serovars 1/2a and 4b and L. innocua.

  14. 10 K gate I(2)L and 1 K component analog compatible bipolar VLSI technology - HIT-2

    Science.gov (United States)

    Washio, K.; Watanabe, T.; Okabe, T.; Horie, N.

    1985-02-01

    An advanced analog/digital bipolar VLSI technology that combines on the same chip 2-ns 10 K I(2)L gates with 1 K analog devices is proposed. The new technology, called high-density integration technology-2, is based on a new structure concept that consists of three major techniques: shallow grooved-isolation, I(2)L active layer etching, and I(2)L current gain increase. I(2)L circuits with an 80-MHz maximum toggle frequency have been developed compatibly with n-p-n transistors having a BV(CE0) of more than 10 V and an f(T) of 5 GHz, and lateral p-n-p transistors having an f(T) of 150 MHz.

  15. Can the BMS Algorithm Decode Up to ⌊(dG - g - 1)/2⌋ Errors? Yes, but with Some Additional Remarks

    Science.gov (United States)

    Sakata, Shojiro; Fujisawa, Masaya

    It is a well-known fact [7], [9] that the BMS algorithm with majority voting can decode up to half the Feng-Rao designed distance dFR. Since dFR is not smaller than the Goppa designed distance dG, that algorithm can correct up to ⌊(dG - 1)/2⌋ errors. On the other hand, it has been considered to be evident that the original BMS algorithm (without voting) [1], [2] can correct up to ⌊(dG - g - 1)/2⌋ errors, similarly to the basic algorithm by Skorobogatov-Vladut. But is it true? In this short paper, we show that it is true, although we need a few remarks and some additional procedures for determining the Groebner basis of the error locator ideal exactly. In fact, as the basic algorithm gives a set of polynomials whose zero set contains the error locators as a subset, it cannot always give the exact error locators, unless the syndrome equation is solved to find the error values in addition.

  16. A reweighted ℓ1-minimization based compressed sensing for the spectral estimation of heart rate variability using the unevenly sampled data.

    Directory of Open Access Journals (Sweden)

    Szi-Wen Chen

    Full Text Available In this paper, a reweighted ℓ1-minimization based Compressed Sensing (CS) algorithm incorporating the Integral Pulse Frequency Modulation (IPFM) model for spectral estimation of HRV is introduced. Known as a novel sensing/sampling paradigm, the theory of CS asserts that certain signals considered sparse or compressible can be reconstructed from substantially fewer measurements than those required by traditional methods. Our study aims to employ a novel reweighted ℓ1-minimization CS method for deriving the spectrum of the modulating signal of the IPFM model from incomplete RR measurements for HRV assessments. To evaluate the performance of HRV spectral estimation, a quantitative measure, referred to as the Percent Error Power (PEP), that measures the percentage of difference between the true spectrum and the spectrum derived from the incomplete RR dataset, was used. We studied the performance of spectral reconstruction from incomplete simulated and real HRV signals by experimentally truncating a number of RR data accordingly in the top portion, in the bottom portion, and in a random order from the original RR column vector. As a result, for up to 20% data truncation/loss the proposed reweighted ℓ1-minimization CS method produced, on average, 2.34%, 2.27%, and 4.55% PEP in the top, bottom, and random data-truncation cases, respectively, on Autoregressive (AR) model derived simulated HRV signals. Similarly, for up to 20% data loss the proposed method produced 5.15%, 4.33%, and 0.39% PEP in the top, bottom, and random data-truncation cases, respectively, on a real HRV database drawn from PhysioNet. Moreover, results generated by a number of intensive numerical experiments all indicated that the reweighted ℓ1-minimization CS method always achieved the most accurate and high-fidelity HRV spectral estimates in every aspect, compared with the ℓ1-minimization based method and Lomb's method used for estimating the spectrum of HRV from
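The reweighted ℓ1 idea, alternating an ℓ1-regularized solve with a weight update w_i = 1/(|x_i| + ε), can be sketched on a toy sparse-recovery problem. This sketch uses ISTA as the inner solver and synthetic data; it illustrates the reweighting loop only, not the paper's IPFM/RR-interval spectral model:

```python
import numpy as np

def weighted_ista(A, b, w, lam=0.05, iters=500):
    # Proximal gradient (ISTA) for  min 0.5*||Ax - b||^2 + lam * sum_i w_i*|x_i|.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L                              # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # soft-threshold
    return x

def reweighted_l1(A, b, outer=5, eps=1e-2):
    # Outer loop: reweight by w_i = 1/(|x_i| + eps), which leaves large
    # (trusted) coefficients lightly penalized and drives small ones to zero.
    w = np.ones(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        x = weighted_ista(A, b, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # 40 "measurements", 100-dim signal
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -0.8, 0.6]             # 3-sparse ground truth
x_hat = reweighted_l1(A, A @ x_true)
```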

  17. A faster 1.375-approximation algorithm for sorting by transpositions.

    Science.gov (United States)

    Cunha, Luís Felipe I; Kowada, Luis Antonio B; Hausen, Rodrigo de A; de Figueiredo, Celina M H

    2015-11-01

    Sorting by Transpositions is an NP-hard problem for which several polynomial-time approximation algorithms have been developed. Hartman and Shamir (2006) developed a 1.5-approximation algorithm, whose running time was improved to O(n log n) by Feng and Zhu (2007) with a data structure they defined, the permutation tree. Elias and Hartman (2006) developed a 1.375-approximation O(n^2) algorithm, and Firoz et al. (2011) claimed an improvement to the running time, from O(n^2) to O(n log n), by using the permutation tree. We provide counter-examples to the correctness of Firoz et al.'s strategy, showing that it is not possible to reach a component by sufficient extensions using the method proposed by them. In addition, we propose a 1.375-approximation algorithm, modifying Elias and Hartman's approach with the use of permutation trees and achieving O(n log n) time.

  18. A convergent overlapping domain decomposition method for total variation minimization

    KAUST Repository

    Fornasier, Massimo

    2010-06-22

    In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.
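As a single-domain illustration of the functional being minimized, a discrepancy term plus a total variation term, plain gradient descent on a smoothed 1D TV objective can be sketched as follows. The overlapping domain decomposition that is the paper's actual contribution is not reproduced here, and the smoothing parameter `eps` is an assumption introduced to make the TV term differentiable:

```python
import numpy as np

def tv_denoise_1d(y, lam=0.2, eps=1e-2, step=0.05, iters=800):
    # Gradient descent on the smoothed objective
    #   F(x) = 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1] - x[i])^2 + eps),
    # where eps > 0 smooths the non-differentiable TV term.
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)   # derivative of the smoothed |d|
        g = x - y                      # gradient of the discrepancy term
        g[:-1] -= lam * w              # chain rule: each difference d[i]
        g[1:] += lam * w               # touches the two samples x[i], x[i+1]
        x = x - step * g
    return x

rng = np.random.default_rng(3)
clean = np.concatenate([np.zeros(25), np.ones(25)])   # piecewise-constant signal
noisy = clean + 0.1 * rng.standard_normal(50)
denoised = tv_denoise_1d(noisy)
```

TV regularization flattens the noise within each constant segment while largely preserving the jump, which is why a TV constraint is paired with a data-fidelity term for the restoration and inpainting problems discussed above.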

  19. PD-L2 Regulates B-1 Cell Antibody Production against Phosphorylcholine through an IL-5-Dependent Mechanism.

    Science.gov (United States)

    McKay, Jerome T; Haro, Marcela A; Daly, Christina A; Yammani, Rama D; Pang, Bing; Swords, W Edward; Haas, Karen M

    2017-09-15

    B-1 cells produce natural Abs which provide an integral first line of defense against pathogens while also performing important homeostatic housekeeping functions. In this study, we demonstrate that programmed cell death 1 ligand 2 (PD-L2) regulates the production of natural Abs against phosphorylcholine (PC). Naive PD-L2-deficient (PD-L2-/-) mice produced significantly more PC-reactive IgM and IgA. This afforded PD-L2-/- mice with selectively enhanced protection against PC-expressing nontypeable Haemophilus influenzae, but not PC-negative nontypeable Haemophilus influenzae, relative to wild-type mice. PD-L2-/- mice had significantly increased PC-specific CD138+ splenic plasmablasts bearing a B-1a phenotype, and produced PC-reactive Abs largely of the T15 Id. Importantly, PC-reactive B-1 cells expressed PD-L2, and irradiated chimeras demonstrated that B cell-intrinsic PD-L2 expression regulated PC-specific Ab production. In addition to increased PC-specific IgM, naive PD-L2-/- mice and irradiated chimeras reconstituted with PD-L2-/- B cells had significantly higher levels of IL-5, a potent stimulator of B-1 cell Ab production. PD-L2 mAb blockade of wild-type B-1 cells in culture significantly increased CD138 and Blimp1 expression and PC-specific IgM, but did not affect proliferation. PD-L2 mAb blockade significantly increased IL-5+ T cells in culture. Both IL-5 neutralization and STAT5 inhibition blunted the effects of PD-L2 mAb blockade on B-1 cells. Thus, B-1 cell-intrinsic PD-L2 expression inhibits IL-5 production by T cells and thereby limits natural Ab production by B-1 cells. These findings have broad implications for the development of therapeutic strategies aimed at altering natural Ab levels critical for protection against infectious disease, autoimmunity, allergy, cancer, and atherosclerosis. Copyright © 2017 by The American Association of Immunologists, Inc.

  20. Taxonomic structure of the yeasts and lactic acid bacteria microbiota of pineapple (Ananas comosus L. Merr.) and use of autochthonous starters for minimally processing.

    Science.gov (United States)

    Di Cagno, Raffaella; Cardinali, Gainluigi; Minervini, Giovanna; Antonielli, Livio; Rizzello, Carlo Giuseppe; Ricciuti, Patrizia; Gobbetti, Marco

    2010-05-01

    Pichia guilliermondii was the only identified yeast in pineapple fruits. Lactobacillus plantarum and Lactobacillus rossiae were the main identified species of lactic acid bacteria. Typing of lactic acid bacteria differentiated isolates depending on the layers. L. plantarum 1OR12 and L. rossiae 2MR10 were selected within the lactic acid bacteria isolates based on the kinetics of growth and acidification. Five technological options, including minimal processing, were considered for pineapple: heating at 72 degrees C for 15 s (HP); spontaneous fermentation without (FP) or followed by heating (FHP), and fermentation by selected autochthonous L. plantarum 1OR12 and L. rossiae 2MR10 without (SP) or preceded by heating (HSP). After 30 days of storage at 4 degrees C, HSP and SP had a number of lactic acid bacteria 1000 to 1,000,000 times higher than the other processed pineapples. The number of yeasts was the lowest in HSP and SP. The Community Level Catabolic Profiles of processed pineapples indirectly confirmed the capacity of autochthonous starters to dominate during fermentation. HSP and SP also showed the highest antioxidant activity and firmness, the better preservation of the natural colours and were preferred for odour and overall acceptability. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  1. Higgs phenomenology in the minimal SU(3)_L × U(1)_X model

    Science.gov (United States)

    Okada, Hiroshi; Okada, Nobuchika; Orikasa, Yuta; Yagyu, Kei

    2016-07-01

    We investigate the phenomenology of a model based on the SU(3)_c × SU(3)_L × U(1)_X gauge theory, the so-called 331 model. In particular, we focus on the Higgs sector of the model, which is composed of three SU(3)_L triplet Higgs fields and is the minimal form for realizing a phenomenologically acceptable scenario. After the spontaneous symmetry breaking SU(3)_L × U(1)_X → SU(2)_L × U(1)_Y, our Higgs sector effectively becomes one with two SU(2)_L doublet scalar fields, in which the first- and second-generation quarks couple to a different Higgs doublet from that which couples to the third-generation quarks. This structure causes flavor-changing neutral currents mediated by Higgs bosons at the tree level. By taking an alignment limit of the mass matrix for the CP-even Higgs bosons, which is naturally realized when the breaking scale of SU(3)_L × U(1)_X is much larger than that of SU(2)_L × U(1)_Y, we can avoid current constraints from flavor experiments such as B0-B̄0 mixing even for Higgs boson masses of O(100) GeV. In this allowed parameter space, we clarify that a characteristic deviation in the quark Yukawa couplings of the Standard Model-like Higgs boson is predicted, which has a different pattern from that seen in two Higgs doublet models with a softly broken Z2 symmetry. We also find that the flavor-violating decay modes of the extra Higgs bosons, e.g., H/A → tc and H± → ts, can be dominant, and they yield an important signature to distinguish our model from the two Higgs doublet models.

  2. Minimal canonical comprehensive Gröbner systems

    OpenAIRE

    Manubens, Montserrat; Montes, Antonio

    2009-01-01

    This is the continuation of Montes' paper "On the canonical discussion of polynomial systems with parameters". In this paper, we define the Minimal Canonical Comprehensive Gröbner System of a parametric ideal and establish under which hypotheses it exists and is computable. An algorithm to obtain a canonical description of the segments of the Minimal Canonical CGS is given, thus completing the whole MCCGS algorithm (implemented in Maple and Singular). We show its high utility for applications, suc...

  3. Real-time minimal-bit-error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
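For context, the fixed-decoding-delay constraint can be illustrated with a toy sliding-window Viterbi decoder for the standard rate-1/2, constraint-length-3 (7,5) convolutional code. This sketches the reference decoder the paper compares against, not the minimal-BER algorithm itself, which replaces the survivor-path decision with a per-bit posterior decision at delay Delta:

```python
# Toy fixed-delay Viterbi decoder, rate-1/2 code with generators (7,5) octal.
# Survivor paths are stored whole for clarity; a real-time decoder would keep
# only a traceback window of `delay` branches.
G = [0b111, 0b101]

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                      # [current, prev, prev-prev]
        out.extend(bin(reg & g).count("1") % 2 for g in G)
        state = reg >> 1
    return out

def viterbi_fixed_delay(received, delay=10):
    INF = float("inf")
    metrics = [0.0, INF, INF, INF]                  # start in the all-zero state
    paths = [[] for _ in range(4)]
    decoded = []
    for i in range(0, len(received), 2):
        r0, r1 = received[i], received[i + 1]
        new_metrics, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                c0 = bin(reg & G[0]).count("1") % 2
                c1 = bin(reg & G[1]).count("1") % 2
                m = metrics[s] + (c0 != r0) + (c1 != r1)   # Hamming branch metric
                ns = reg >> 1
                if m < new_metrics[ns]:
                    new_metrics[ns], new_paths[ns] = m, paths[s] + [b]
        metrics, paths = new_metrics, new_paths
        best = min(range(4), key=lambda s: metrics[s])
        if len(paths[best]) > delay:                # emit the bit `delay` branches back
            decoded.append(paths[best][len(decoded)])
    best = min(range(4), key=lambda s: metrics[s])
    decoded.extend(paths[best][len(decoded):])      # flush at end of block
    return decoded

message = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
received = encode(message)
received[5] ^= 1                                    # inject a single channel error
decoded = viterbi_fixed_delay(received)
```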

  4. Real-time minimal bit error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.

  5. Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation

    International Nuclear Information System (INIS)

    Takeda, Koujin; Kabashima, Yoshiyuki

    2013-01-01

    We propose a systematic method for constructing a sparse-data reconstruction algorithm in compressed sensing at a relatively low computational cost for a general observation matrix. It is known that the cost of ℓ1-norm minimization using a standard linear programming algorithm is O(N^3). We show that this cost can be reduced to O(N^2) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated from a theoretical argument. We also discuss the relation between the belief propagation-based reconstruction algorithm introduced in preceding works and our approach
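For reference, the O(N^3) baseline mentioned above, ℓ1-norm minimization via a standard linear program (basis pursuit), can be written down directly. This is the conventional LP route on synthetic data, not the paper's O(N^2) posterior-maximization algorithm:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit_lp(A, b):
    # Basis pursuit:  min ||x||_1  s.t.  Ax = b,  posed as an LP over the
    # stacked variables [x; t]:  min 1^T t  s.t.  Ax = b,  -t <= x <= t.
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_eq = np.hstack([A, np.zeros((m, n))])
    A_ub = np.vstack([np.hstack([ np.eye(n), -np.eye(n)]),   #  x - t <= 0
                      np.hstack([-np.eye(n), -np.eye(n)])])  # -x - t <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
    return res.x[:n]

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 30))      # 20 random measurements in dimension 30
x_true = np.zeros(30)
x_true[[4, 11]] = [1.5, -2.0]          # 2-sparse ground truth
x_hat = basis_pursuit_lp(A, A @ x_true)
```

The LP has 2N variables and O(N) constraints, which is where the O(N^3) per-solve cost of interior-point or simplex methods comes from.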

  6. L1-norm kernel discriminant analysis via Bayes error bound optimization for robust feature extraction.

    Science.gov (United States)

    Zheng, Wenming; Lin, Zhouchen; Wang, Haixian

    2014-04-01

    A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm, in which a novel surrogate convex function is introduced such that the optimization problem in each iteration simply amounts to solving a convex programming problem, to which a closed-form solution is guaranteed. Moreover, we generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the kernel trick, and thereby propose the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.

  7. Getting Things Done in the L1 and L2: Bilingual Immigrant Women's Use of Communication Strategies in Entrepreneurial Contexts

    Science.gov (United States)

    Collier, Shartriya

    2010-01-01

    The article examines the communication strategies of four bilingual, immigrant women entrepreneurs within the context of their businesses. The analysis revealed that L1 and L2 use is crucial to the business success of the participants. L1 conversations consisted of largely private speech and directives. The women positioned themselves as…

  8. Embryonic stem cell self-renewal pathways converge on the transcription factor Tfcp2l1

    Science.gov (United States)

    Ye, Shoudong; Li, Ping; Tong, Chang; Ying, Qi-Long

    2013-01-01

    Mouse embryonic stem cell (mESC) self-renewal can be maintained by activation of the leukaemia inhibitory factor (LIF)/signal transducer and activator of transcription 3 (Stat3) signalling pathway or dual inhibition (2i) of glycogen synthase kinase 3 (Gsk3) and mitogen-activated protein kinase kinase (MEK). Several downstream targets of the pathways involved have been identified that when individually overexpressed can partially support self-renewal. However, none of these targets is shared among the involved pathways. Here, we show that the CP2 family transcription factor Tfcp2l1 is a common target in LIF/Stat3- and 2i-mediated self-renewal, and forced expression of Tfcp2l1 can recapitulate the self-renewal-promoting effect of LIF or either of the 2i components. In addition, Tfcp2l1 can reprogram post-implantation epiblast stem cells to naïve pluripotent ESCs. Tfcp2l1 upregulates Nanog expression and promotes self-renewal in a Nanog-dependent manner. We conclude that Tfcp2l1 is at the intersection of LIF- and 2i-mediated self-renewal pathways and plays a critical role in maintaining ESC identity. Our study provides an expanded understanding of the current model of ground-state pluripotency. PMID:23942238

  9. Learning to perceive and recognize a second language: the L2LP model revised.

    Science.gov (United States)

    van Leussen, Jan-Willem; Escudero, Paola

    2015-01-01

    We present a test of a revised version of the Second Language Linguistic Perception (L2LP) model, a computational model of the acquisition of second language (L2) speech perception and recognition. The model draws on phonetic, phonological, and psycholinguistic constructs to explain a number of L2 learning scenarios. However, a recent computational implementation failed to validate a theoretical proposal for a learning scenario where the L2 has fewer phonemic categories than the native language (L1) along a given acoustic continuum. According to the L2LP, learners faced with this learning scenario must not only shift their old L1 phoneme boundaries but also reduce the number of categories employed in perception. Our proposed revision to L2LP successfully accounts for this updating in the number of perceptual categories as a process driven by the meaning of lexical items, rather than by the learners' awareness of the number and type of phonemes that are relevant in their new language, as the previous version of L2LP assumed. Results of our simulations show that meaning-driven learning correctly predicts the developmental path of L2 phoneme perception seen in empirical studies. Additionally, and to contribute to a long-standing debate in psycholinguistics, we test two versions of the model, with the stages of phonemic perception and lexical recognition being either sequential or interactive. Both versions succeed in learning to recognize minimal pairs in the new L2, but make diverging predictions on learners' resulting phonological representations. In sum, the proposed revision to the L2LP model contributes to our understanding of L2 acquisition, with implications for speech processing in general.

  10. Minimally Invasive Sacroiliac Joint Fusion Using a Novel Hydroxyapatite-Coated Screw: Preliminary 1-Year Clinical and Radiographic Results of a 2-Year Prospective Study.

    Science.gov (United States)

    Rappoport, Louis H; Luna, Ingrid Y; Joshua, Gita

    2017-05-01

    Proper diagnosis and treatment of sacroiliac joint (SIJ) pain remains a clinical challenge. Dysfunction of the SIJ can produce pain in the lower back, buttocks, and extremities. Triangular titanium implants for minimally invasive surgical arthrodesis have been available for several years, with reputed high levels of success and patient satisfaction. This study reports on a novel hydroxyapatite-coated screw for surgical treatment of SIJ pain. Data were prospectively collected on 32 consecutive patients who underwent minimally invasive SIJ fusion with a novel hydroxyapatite-coated screw. Clinical assessments and radiographs were collected and evaluated at 3, 6, and 12 months postoperatively. Mean (standard deviation) patient age was 55.2 ± 10.7 years, and 62.5% were female. More patients (53.1%) underwent left versus right SIJ treatment, mean operative time was 42.6 ± 20.4 minutes, and estimated blood loss did not exceed 50 mL. Overnight hospital stay was required for 84% of patients, and the remaining patients needed a 2-day stay (16%). Mean preoperative visual analog scale back and leg pain scores decreased significantly by 12 months postoperatively (P sacroiliac joint pain. Future clinical studies with larger samples are warranted to assess long-term patient outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Laccase-Functionalized Graphene Oxide Assemblies as Efficient Nanobiocatalysts for Oxidation Reactions

    NARCIS (Netherlands)

    Patila, Michaela; Kouloumpis, Antonios; Gournis, Dimitrios; Rudolf, Petra; Stamatis, Haralambos

    Multi-layer graphene oxide-enzyme nanoassemblies were prepared through the multi-point covalent immobilization of laccase from Trametes versicolor (TvL) on functionalized graphene oxide (fGO). The catalytic properties of the fGO-TvL nanoassemblies were found to depend on the number of the graphene

  12. RELATIONSHIP AMONG BRAIN HEMISPHERIC DOMINANCE, ATTITUDE TOWARDS L1 AND L2, GENDER, AND LEARNING SUPRASEGMENTAL FEATURES

    Directory of Open Access Journals (Sweden)

    Mohammad Hadi Mahmoodi

    2016-07-01

    Full Text Available Oral skills are important components of language competence. To have good and acceptable listening and speaking, one must have good pronunciation, which encompasses segmental and suprasegmental features. Despite extensive studies on the role of segmental features and related issues in listening and speaking, there is a paucity of research on the role of suprasegmental features in the same domain. Studies aimed at shedding light on the issues related to learning suprasegmental features can help language teachers and learners in the process of teaching/learning English as a foreign language. To this end, this study was designed to investigate the relationship among brain hemispheric dominance, gender, attitudes towards L1 and L2, and learning suprasegmental features in Iranian EFL learners. First, 200 intermediate EFL learners were selected from different English language teaching institutes in Hamedan and Isfahan, two provinces in Iran, as the sample. Prior to the main stage of the study, the Oxford Placement Test (OPT) was used to homogenize the proficiency level of all the participants. Then, the participants were asked to complete the Edinburgh Handedness Questionnaire to determine their dominant hemisphere. They were also required to answer two questionnaires regarding their attitudes towards L1 and L2. Finally, the participants took a suprasegmental features test. The results of the independent-samples t-tests indicated left-brained language learners' superiority in observing and learning suprasegmental features. It was also found that females are better than males in producing suprasegmental features. Furthermore, the results of Pearson product-moment correlations indicated that there is a significant relationship between attitude towards L2 and learning suprasegmental features. However, no significant relationship was found between attitude towards L1 and learning English suprasegmental features. The findings of this study can

  13. Arctic lead detection using a waveform mixture algorithm from CryoSat-2 data

    Science.gov (United States)

    Lee, Sanggyun; Kim, Hyun-cheol; Im, Jungho

    2018-05-01

    We propose a waveform mixture algorithm to detect leads from CryoSat-2 data, which is novel and different from the existing threshold-based lead detection methods. The waveform mixture algorithm adopts the concept of spectral mixture analysis, which is widely used in the field of hyperspectral image analysis. The method was evaluated with high-resolution (250 m) MODIS images and showed promising performance, comparable to previous methods, in detecting leads. The robustness of the proposed approach also lies in the fact that it does not require rescaling of parameters (i.e., stack standard deviation, stack skewness, stack kurtosis, pulse peakiness, and backscatter σ0), as it directly uses L1B waveform data, unlike the existing threshold-based methods. Monthly lead fraction maps produced by the waveform mixture algorithm show the interannual variability of recent sea ice cover during 2011-2016, excluding the summer season (i.e., June to September). We also compared these lead fraction maps to maps generated from previously published data sets and found similar spatiotemporal patterns.
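
The spectral-mixture idea can be sketched as a linear unmixing problem: model an observed waveform as a nonnegative combination of reference "lead" and "ice" waveforms and solve for the abundance fractions. The following is a minimal illustration; the endmember shapes, the projected-gradient solver, and all parameters are hypothetical, not the paper's actual CryoSat-2 processing.

```python
import numpy as np

def unmix(waveform, endmembers, iters=500):
    """Estimate nonnegative abundances a with endmembers @ a ~= waveform.

    endmembers: (n_bins, n_members) matrix of reference waveforms.
    Solved by projected gradient descent on the least-squares objective.
    """
    A = endmembers
    x = np.full(A.shape[1], 1.0 / A.shape[1])
    lr = 1.0 / np.linalg.norm(A.T @ A, 2)       # step size from spectral norm
    for _ in range(iters):
        grad = A.T @ (A @ x - waveform)
        x = np.clip(x - lr * grad, 0.0, None)   # project onto x >= 0
    return x / x.sum()                          # normalize to fractions

# toy endmembers: a "lead" return is peaky, an "ice" return is diffuse
bins = np.arange(64)
lead = np.exp(-0.5 * ((bins - 32) / 1.5) ** 2)
ice = np.exp(-0.5 * ((bins - 32) / 10.0) ** 2)
E = np.column_stack([lead, ice])
mixed = 0.7 * lead + 0.3 * ice                  # synthetic observed waveform
abund = unmix(mixed, E)
```

A large lead abundance for a given waveform would then flag that record as a lead candidate, playing the role the thresholded stack parameters play in earlier methods.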

  14. Minimizing shell-and-tube heat exchanger cost with genetic algorithms and considering maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Wildi-Tremblay, P.; Gosselin, L. [Universite Laval, Quebec (Canada). Dept. de genie mecanique

    2007-07-15

    This paper presents a procedure for minimizing the cost of a shell-and-tube heat exchanger based on genetic algorithms (GA). The global cost includes the operating cost (pumping power) and the initial cost expressed in terms of annuities. Eleven design variables associated with shell-and-tube heat exchanger geometry are considered: tube pitch, tube layout pattern, number of tube passes, baffle spacing at the centre, baffle spacing at the inlet and outlet, baffle cut, tube-to-baffle diametrical clearance, shell-to-baffle diametrical clearance, tube bundle outer diameter, shell diameter, and tube outer diameter. Evaluation of heat exchanger performance is based on an adapted version of the Bell-Delaware method. Pressure drop constraints are included in the procedure. Reliability and maintenance due to fouling are taken into account by restraining the coefficient of increase of surface to a given interval. Two case studies are presented. Results show that the procedure can properly and rapidly identify the optimal design for a specified heat transfer process. (author)
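
The GA-based cost minimization can be illustrated with a toy version: a few discrete design choices, a stand-in cost function combining capital and pumping terms, and a pressure-drop limit handled as a penalty. Everything below (the cost model, the variable lists, the GA parameters) is invented for illustration; the paper's actual objective is evaluated with the Bell-Delaware method.

```python
import random

random.seed(0)

# hypothetical discrete choices for three of the eleven design variables
TUBE_DIAM = [0.016, 0.020, 0.025]       # m
BAFFLE_SPACING = [0.2, 0.3, 0.4, 0.5]   # m
N_PASSES = [1, 2, 4]

def cost(ind):
    """Toy stand-in for capital + pumping cost, with a pressure-drop penalty."""
    d, b, p = TUBE_DIAM[ind[0]], BAFFLE_SPACING[ind[1]], N_PASSES[ind[2]]
    capital = 5000 / d + 200 / b            # smaller tubes/baffles -> more area
    pumping = 300 * p / b                   # more passes -> more pumping power
    dp = 1e4 * p / (b * d)                  # fake pressure drop
    penalty = 1e3 if dp > 5e6 else 0.0      # constraint as a penalty term
    return capital + pumping + penalty

def random_ind():
    return [random.randrange(len(TUBE_DIAM)),
            random.randrange(len(BAFFLE_SPACING)),
            random.randrange(len(N_PASSES))]

def evolve(pop_size=30, gens=40, mut=0.2):
    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]     # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, 3)
            child = p1[:cut] + p2[cut:]     # one-point crossover
            if random.random() < mut:       # point mutation
                i = random.randrange(3)
                child[i] = random_ind()[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
```

Encoding each design variable as an index into its allowed set keeps crossover and mutation trivially valid, which is one reason GAs suit such mixed discrete design spaces.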

  15. TCF7L2 Genetic Variants Contribute to Phenotypic Heterogeneity of Type 1 Diabetes.

    Science.gov (United States)

    Redondo, Maria J; Geyer, Susan; Steck, Andrea K; Sosenko, Jay; Anderson, Mark; Antinozzi, Peter; Michels, Aaron; Wentworth, John; Xu, Ping; Pugliese, Alberto

    2018-02-01

    The phenotypic diversity of type 1 diabetes suggests heterogeneous etiopathogenesis. We investigated the relationship of type 2 diabetes-associated transcription factor 7 like 2 (TCF7L2) single nucleotide polymorphisms (SNPs) with immunologic and metabolic characteristics at type 1 diabetes diagnosis. We studied TrialNet participants with newly diagnosed autoimmune type 1 diabetes with available TCF7L2 rs4506565 and rs7901695 SNP data (n = 810; median age 13.6 years; range 3.3-58.6). We modeled the influence of carrying a TCF7L2 variant (i.e., having 1 or 2 minor alleles) on the number of islet autoantibodies and on oral glucose tolerance test (OGTT)-stimulated C-peptide and glucose measures at diabetes diagnosis. All analyses were adjusted for known confounders. The rs4506565 variant was a significant independent factor for expressing a single autoantibody, instead of multiple autoantibodies, at diagnosis (odds ratio [OR] 1.66 [95% CI 1.07, 2.57], P = 0.024). Interaction analysis demonstrated that this association was significant only in participants ≥12 years old (n = 504; OR 2.12 [1.29, 3.47], P = 0.003) but not in younger ones (n = 306, P = 0.73). The rs4506565 variant was independently associated with higher C-peptide area under the curve (AUC) (P = 0.008) and lower mean glucose AUC (P = 0.0127). The results were similar for the rs7901695 SNP. In this cohort of individuals with new-onset type 1 diabetes, type 2 diabetes-linked TCF7L2 variants were associated with a single autoantibody (among those ≥12 years old), higher C-peptide AUC, and lower glucose AUC during an OGTT. Thus, carriers of the TCF7L2 variant had a milder immunologic and metabolic phenotype at type 1 diabetes diagnosis, which could be partly driven by type 2 diabetes-like pathogenic mechanisms. © 2017 by the American Diabetes Association.

  16. Using a Shared L1 to Reduce Cognitive Overload and Anxiety Levels in the L2 Classroom

    Science.gov (United States)

    Bruen, Jennifer; Kelly, Niamh

    2017-01-01

    This paper considers the attitudes and behaviours of university language lecturers and their students regarding the use of the L1 in the higher education L2 classroom. A case study of one Irish higher education institution was carried out and qualitative interviews conducted with six lecturers in Japanese and six in German. The results indicated…

  17. Context affects L1 but not L2 during bilingual word recognition: an MEG study.

    Science.gov (United States)

    Pellikka, Janne; Helenius, Päivi; Mäkelä, Jyrki P; Lehtonen, Minna

    2015-03-01

    How do bilinguals manage the activation levels of the two languages and prevent interference from the irrelevant language? Using magnetoencephalography, we studied the effect of context on the activation levels of languages by manipulating the composition of word lists (the probability of the languages) presented auditorily to late Finnish-English bilinguals. We first determined the upper limit time-window for semantic access, and then focused on the preceding responses during which the actual word recognition processes were assumedly ongoing. Between 300 and 500 ms in the temporal cortices (in the N400 m response) we found an asymmetric language switching effect: the responses to L1 Finnish words were affected by the presentation context unlike the responses to L2 English words. This finding suggests that the stronger language is suppressed in an L2 context, supporting models that allow auditory word recognition to be affected by contextual factors and the language system to be subject to inhibitory influence. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. L(2, 1)-Labelings of Some Families of Oriented Planar Graphs

    Directory of Open Access Journals (Sweden)

    Sen Sagnik

    2014-02-01

    Full Text Available In this paper we determine, or give lower and upper bounds on, the 2-dipath and oriented L(2, 1)-span of the family of planar graphs, planar graphs with girth 5, 11, 16, partial k-trees, outerplanar graphs and cacti.
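
For undirected graphs, an L(2,1)-labeling requires adjacent vertices to receive labels at least 2 apart and vertices at distance two to receive distinct labels; the span is the largest label used. (The paper studies an oriented variant, which differs.) A simple greedy sketch under this standard undirected definition:

```python
def l21_labeling(adj):
    """Greedy L(2,1)-labeling: adjacent vertices get labels differing by >= 2,
    vertices at distance two get distinct labels. adj: {v: set of neighbours}."""
    labels = {}
    for v in sorted(adj):                    # fixed order; not optimal in general
        banned_close = {labels[u] for u in adj[v] if u in labels}
        dist2 = {w for u in adj[v] for w in adj[u] if w != v}
        banned_exact = {labels[w] for w in dist2 if w in labels}
        k = 0
        while any(abs(k - b) < 2 for b in banned_close) or k in banned_exact:
            k += 1                           # smallest feasible label
        labels[v] = k
    return labels

# 5-cycle: the known L(2,1)-span of any cycle is 4 (Griggs and Yeh)
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
lab = l21_labeling(cycle5)
```

On the 5-cycle this greedy pass happens to reach the optimal span 4, though greedy labeling gives only an upper bound in general.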

  19. Neutral current in reduced minimal 3-3-1 model

    International Nuclear Information System (INIS)

    Vu Thi Ngoc Huyen; Hoang Ngoc Long; Tran Thanh Lam; Vo Quoc Phong

    2014-01-01

    This work is devoted to the gauge boson sector of the recently proposed model based on the SU(3)C ⊗ SU(3)L ⊗ U(1)X group with minimal content of leptons and Higgs bosons. The limits on the masses of the bilepton gauge bosons and on the mixing angle among the neutral ones are deduced. Using the Fritzsch ansatz on quark mixing, we show that the third family of quarks should be treated differently from the first two. We obtain a lower bound of 4.032 TeV on the mass of the new heavy neutral gauge boson. Using data on the branching decay rates of the Z boson, we constrain the Z-Z' mixing angle φ to -0.001 ≤ φ ≤ 0.0003. (author)

  20. Long-term convergence of speech rhythm in L1 and L2 English

    NARCIS (Netherlands)

    Quené, H; Orr, Rosemary

    2014-01-01

    When talkers from various language backgrounds use L2 English as a lingua franca, their accents of English are expected to converge, and talkers’ rhythmical patterns are predicted to converge too. Prosodic convergence was studied among talkers who lived in a community where L2 English is used

  1. Control of browning of minimally processed mangoes subjected to ultraviolet radiation pulses.

    Science.gov (United States)

    de Sousa, Aline Ellen Duarte; Fonseca, Kelem Silva; da Silva Gomes, Wilny Karen; Monteiro da Silva, Ana Priscila; de Oliveira Silva, Ebenézer; Puschmann, Rolf

    2017-01-01

    Pulsed ultraviolet radiation (UVP) has been used as an alternative strategy for the control of microorganisms in food. However, its application causes browning of minimally processed fruits and vegetables. To control the browning of minimally processed 'Tommy Atkins' mango treated with UVP (5.7 J cm⁻²), 1-methylcyclopropene (1-MCP; 0.5 μL L⁻¹), an ethylene action blocker, was applied at separate stages, comprising five treatments: control, UVP (U), 1-MCP + UVP (M + U), UVP + 1-MCP (U + M) and 1-MCP + UVP + 1-MCP (M + U + M). On the 1st, 7th and 14th days of storage at 12 °C, we evaluated color (L* and b*), electrolyte leakage, polyphenol oxidase, total extractable polyphenols, vitamin C and total antioxidant activity. When applied before UVP, 1-MCP prevented the loss of vitamin C, and when applied in a double dose, it retained the yellow color (b*) of the cubes. However, 1-MCP reduced the lightness (L*) of the mango cubes regardless of whether it was applied before and/or after UVP. Thus, the application of 1-MCP did not control but rather intensified the browning of minimally processed mangoes irradiated with UVP.

  2. minimal pairs of polytopes and their number of vertices

    African Journals Online (AJOL)

    Preferred Customer

    Using this operation we give a new algorithm to reduce and find a minimal pair of polytopes from the given ... Key words/phrases: Pairs of compact convex sets, Blaschke addition, Minkowski sum, minimality ... product K(X)×K(X) by K²(X).

  3. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    Science.gov (United States)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever.
In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less

  4. Smooth Approximation l0-Norm Constrained Affine Projection Algorithm and Its Applications in Sparse Channel Estimation

    Science.gov (United States)

    2014-01-01

    We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of convergence speed and steady-state error by combining a smooth approximation l0-norm (SL0) penalty on the coefficients with the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps and hence accelerates convergence and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware variants in terms of both convergence speed and steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have a smaller steady-state error than previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
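
The zero-attractor idea can be sketched as follows: a standard affine projection update is followed by a small step along the negative gradient of the smoothed l0 penalty sum(1 - exp(-w²/(2σ²))), which pulls near-zero taps toward zero while leaving large taps essentially untouched. All parameter values below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def sl0_apa(x, d, taps=16, K=4, mu=0.5, delta=1e-2, rho=5e-4, sigma=0.05):
    """Sketch of an affine projection update with a smoothed-l0 zero attractor."""
    w = np.zeros(taps)
    X = np.zeros((K, taps))                      # last K regressors
    D = np.zeros(K)                              # last K desired samples
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]          # current regressor
        X = np.vstack([u, X[:-1]])
        D = np.concatenate(([d[n]], D[:-1]))
        e = D - X @ w
        # regularized affine projection update
        w = w + mu * X.T @ np.linalg.solve(X @ X.T + delta * np.eye(K), e)
        # zero attractor: gradient of sum(1 - exp(-w^2 / (2 sigma^2)))
        w = w - rho * (w / sigma**2) * np.exp(-(w**2) / (2 * sigma**2))
    return w

# sparse channel: 3 nonzero taps out of 16
h = np.zeros(16)
h[[2, 7, 12]] = [1.0, -0.6, 0.3]
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = sl0_apa(x, d)
```

Because the attractor decays like exp(-w²/(2σ²)), it is effectively zero for taps larger than a few σ, so the large taps incur negligible bias while the zero taps are actively shrunk.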

  5. Exploring the minimal 4D N=1 SCFT

    Energy Technology Data Exchange (ETDEWEB)

    Poland, David [Department of Physics, Yale University,New Haven, CT 06520 (United States); School of Natural Sciences, Institute for Advanced Study,Princeton, NJ 08540 (United States); Stergiou, Andreas [Department of Physics, Yale University,New Haven, CT 06520 (United States)

    2015-12-17

    We study the conformal bootstrap constraints for 4D N=1 superconformal field theories containing a chiral operator ϕ with the chiral ring relation ϕ² = 0. Hints for a minimal interacting SCFT in this class have appeared in previous numerical bootstrap studies. We perform a detailed study of the properties of this conjectured theory, establishing that the corresponding solution to the bootstrap constraints contains a U(1)R current multiplet and estimating the central charge and low-lying operator spectrum of this theory.

  6. Avaliação da folha e do colmo de topo e base de perfilhos de três gramíneas forrageiras: 2. Anatomia Evaluation of top and bottom leaf and stem fractions from tiller of three forage grasses: 2. Anatomy

    Directory of Open Access Journals (Sweden)

    Domingos Sávio Queiroz

    2000-02-01

    Full Text Available ABSTRACT - The tissue proportions, the degree of simple linear correlation of this characteristic with the in vitro dry matter disappearance (IVDMD) and their chemical composition were determined in transversal sections of the botanical fractions (leaf blade, leaf sheath and stem) sampled from the top and bottom of tillers of dwarf elephantgrass (Pennisetum purpureum, Schumach cv. Mott), setaria (Setaria anceps, cv. Kazungula) and jaragua grass (Hyparrhenia rufa). Jaragua grass, with a higher proportion of parenchymatic bundle sheath (PBS) in the leaf blade and of lignified vascular tissue (LVT) and sclerenchyma (SCL) in the leaf blade and sheath, showed a tissue composition less compatible with a forage of high nutritive value than elephantgrass and setaria. The leaf blades were characterized by a high proportion of epidermis and a low proportion of SCL, LVT and parenchyma cells (PC) relative to the leaf sheath and stem. The proportion of SCL correlated negatively with the IVDMD of the top leaf blade, the stem and the total tiller fractions. The proportion of PC correlated positively with the IVDMD of the leaf sheath (r = 0.68), while the proportion of LVT correlated positively with IVDMD when all tiller fractions were considered (r = 0.31). The proportions of PBS, LVT and SCL correlated positively with the neutral detergent fiber and acid detergent fiber contents of the forages, while the proportions of mesophyll and epidermis correlated negatively.

  7. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    Science.gov (United States)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE, such as X-ray computed tomography (CT), ultrasonic and eddy current flaw characterization, and imaging. In many applications, it is common to bias the solution toward a minimum squared L2 norm without any physical justification. When it is known a priori that objects are compact, as with cracks and voids, choosing a "minimum support" functional instead of the minimum squared L2 norm yields an image that agrees equally well with the available data while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using the full nonlinear forward model, a linearized acoustic inversion was developed to increase the speed and efficiency of the imaging process. The results indicate that by using the minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support-minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback to this type of inversion is its computational intensiveness.
In order to make this type of constrained inversion available for common use, work
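
A common way to implement a minimum-support functional on a linear toy problem is iteratively reweighted least squares on the penalty sum of x_i²/(x_i² + β), which approaches the support (number of nonzeros) as β → 0. The sketch below is a linear, noiseless stand-in for the nonlinear acoustic inversion described above; the operator, parameters, and IRLS scheme are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def min_support_inversion(A, b, lam=0.1, beta=1e-3, iters=50):
    """Minimize ||A x - b||^2 + lam * sum(x_i^2 / (x_i^2 + beta))
    by iteratively reweighted ridge regression."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]     # least-squares start
    for _ in range(iters):
        # weight matching the gradient of the support term at the current x
        w = beta / (x**2 + beta)**2
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x

# toy "compact scatterer": 5 nonzero model cells out of 40
n, m = 40, 60
x_true = np.zeros(n)
x_true[[5, 6, 7, 20, 21]] = [1.0, 1.2, 0.8, -1.0, -0.9]
A = rng.standard_normal((m, n)) / np.sqrt(m)     # assumed linear forward operator
b = A @ x_true
x_hat = min_support_inversion(A, b)
```

The reweighting makes the effective ridge penalty huge on near-zero cells and negligible on large ones, which is exactly the "smallest volume" bias the minimum support functional encodes.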

  8. L-Myo-inositol 1-phosphate synthase in the aquatic fern Azolla filiculoides.

    Science.gov (United States)

    Benaroya, Rony Oren; Zamski, Eli; Tel-Or, Elisha

    2004-02-01

    L-Myo-inositol 1-phosphate synthase (INPS, EC 5.5.1.4) catalyzes the conversion of D-glucose 6-phosphate to L-myo-inositol 1-phosphate. INPS is a key enzyme in the biosynthesis of phytate, a common form of stored phosphate in higher plants. The present study monitored the increase of INPS expression in Azolla filiculoides resulting from exposure to inorganic phosphate, metals and salt stress. The expression of INPS was significantly higher in Azolla plants grown in rich mineral growth medium than in those maintained on nutritional growth medium. The expression of INPS protein and the corresponding mRNA increased in plants cultured in minimal nutritional growth medium when phosphate, Zn2+, Cd2+ or NaCl was added to the growth medium. In rich mineral growth medium, INPS protein content increased with the addition of Zn2+, but decreased in the presence of Cd2+ and NaCl. These results indicate that accumulation of phytate in Azolla results from intensified expression of INPS protein and mRNA, and that its regulation may be driven primarily by the uptake of inorganic phosphate, Zn2+, Cd2+ or NaCl.

  9. ε'/ε anomaly and neutron EDM in SU(2)L × SU(2)R × U(1)B-L model with charge symmetry

    Science.gov (United States)

    Haba, Naoyuki; Umeeda, Hiroyuki; Yamada, Toshifumi

    2018-05-01

    The Standard Model prediction for ε'/ε based on recent lattice QCD results exhibits a tension with the experimental data. We solve this tension through WR+ gauge boson exchange in the SU(2)L × SU(2)R × U(1)B-L model with 'charge symmetry', whose theoretical motivation is to attribute the chiral structure of the Standard Model to the spontaneous breaking of the SU(2)R × U(1)B-L gauge group and charge symmetry. We derive the constraint that ε'/ε places on the WR mass and study a correlation between ε'/ε and the neutron EDM. We confirm that the model can solve the ε'/ε anomaly without conflicting with the current bound on the neutron EDM, and further reveal that almost all parameter regions in which the ε'/ε anomaly is explained will be covered by future neutron EDM searches, which leads us to anticipate the discovery of the neutron EDM.

  10. Development of Real-Time Precise Positioning Algorithm Using GPS L1 Carrier Phase Data

    Directory of Open Access Journals (Sweden)

    Jeong-Ho Joh

    2002-12-01

    Full Text Available We have developed a Real-time Phase DAta Processor (RPDAP) for the GPS L1 carrier, and tested its positioning accuracy against results of real-time kinematic (RTK) positioning. While the quality of conventional L1 RTK positioning depends strongly on receiving conditions, the RPDAP gives more stable positioning results because it uses a different set of common GPS satellites, selected by elevation mask angle and signal strength. In this paper, we demonstrate the characteristics of the RPDAP compared with the L1 RTK technique, and discuss several ways to improve the RPDAP for precise real-time positioning with low-cost GPS receivers. With the discussed weak points corrected in the near future, the RPDAP could be used in precise real-time applications such as precise car navigation and personal location services.

  11. Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction.

    Science.gov (United States)

    Nikolova, Mila; Ng, Michael K; Tam, Chi-Pan

    2010-12-01

    Nonconvex nonsmooth regularization has advantages over convex regularization for restoring images with neat edges. However, its practical interest used to be limited by the difficulty of the computational stage, which requires a nonconvex nonsmooth minimization. In this paper, we deal with nonconvex nonsmooth minimization methods for image restoration and reconstruction. Our theoretical results show that the solution of the nonconvex nonsmooth minimization problem is composed of constant regions surrounded by closed contours and neat edges. The main goal of this paper is to develop fast minimization algorithms to solve the nonconvex nonsmooth minimization problem. Our experimental results show the effectiveness and efficiency of the proposed algorithms.

  12. Disturbance observer-based L1 robust tracking control for hypersonic vehicles with T-S disturbance modeling

    Directory of Open Access Journals (Sweden)

    Yang Yi

    2016-11-01

    Full Text Available This article concerns a disturbance observer-based L1 robust anti-disturbance tracking algorithm for the longitudinal models of hypersonic flight vehicles subject to different kinds of unknown disturbances. On one hand, by applying T-S fuzzy models to represent the modeled disturbances, a disturbance observer relying on the T-S disturbance models can be constructed to track the dynamics of exogenous disturbances. On the other hand, the L1 index is introduced to analyze the attenuation performance for the unmodeled disturbances. By utilizing an existing convex optimization algorithm, a disturbance observer-based proportional-integral control input is proposed such that the stability of the hypersonic flight vehicle can be ensured and the tracking errors for velocity and altitude converge to the equilibrium point. Furthermore, satisfactory disturbance rejection and attenuation with the L1 index can be obtained simultaneously. Simulation results on hypersonic flight vehicle models reflect the feasibility and effectiveness of the proposed control algorithm.

  13. Minimization of cogging torque in permanent magnet motors by teeth pairing and magnet arc design using genetic algorithm

    International Nuclear Information System (INIS)

    Eom, J.-B.; Hwang, S.-M.; Kim, T.-J.; Jeong, W.-B.; Kang, B.-S.

    2001-01-01

    Cogging torque is often a principal source of vibration and acoustic noise in high-precision spindle motor applications. In this paper, cogging torque is analytically calculated using the energy method with Fourier series expansion. It is shown that cogging torque is effectively minimized by controlling the airgap permeance function with teeth pairing design, and by controlling the flux density function with magnet arc design. As an optimization technique, a genetic algorithm is applied to handle the trade-off effects of the design parameters. Results show that the proposed method can reduce the cogging torque effectively.
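
The energy-method calculation can be sketched numerically: the airgap energy is approximated as the integral of the squared flux density times the slot permeance ripple, and cogging torque is the negative derivative of that energy with respect to rotor angle. The machine numbers and harmonic amplitudes below are invented for illustration; the point is that only harmonics common to both Fourier series (here the slot order) contribute, which is what teeth pairing and magnet arc design aim to suppress.

```python
import numpy as np

slots, poles = 12, 4                          # hypothetical 12-slot, 4-pole machine
phi = np.linspace(0.0, 2.0 * np.pi, 2048, endpoint=False)
dphi = 2.0 * np.pi / len(phi)

# squared airgap flux density: fundamental plus a small harmonic at the
# slot order (the only component that interacts with the slot ripple)
b_squared = 1.0 + 0.3 * np.cos(poles * phi) + 0.05 * np.cos(slots * phi)

def airgap_energy(theta):
    """Energy method: W(theta) ~ integral of B^2 times the slot permeance
    ripple, with the rotor turned by theta."""
    permeance = 1.0 + 0.1 * np.cos(slots * (phi - theta))
    return np.sum(b_squared * permeance) * dphi

def cogging_torque(theta, h=1e-4):
    # torque is the negative derivative of the stored energy (central difference)
    return -(airgap_energy(theta + h) - airgap_energy(theta - h)) / (2.0 * h)

period = 2.0 * np.pi / np.lcm(slots, poles)   # fundamental cogging period
thetas = np.linspace(0.0, period, 50)
torque = np.array([cogging_torque(t) for t in thetas])
```

Dropping the 0.05 slot-order term in b_squared makes the cross term integrate to zero and the torque vanish identically, mirroring how shaping the permeance or flux density function cancels a targeted cogging harmonic.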

  14. Anti-PD-L1 Treatment Induced Central Diabetes Insipidus.

    Science.gov (United States)

    Zhao, Chen; Tella, Sri Harsha; Del Rivero, Jaydira; Kommalapati, Anuhya; Ebenuwa, Ifechukwude; Gulley, James; Strauss, Julius; Brownell, Isaac

    2018-02-01

    Immune checkpoint inhibitors, including anti-programmed cell death protein 1 (PD-1), anti-programmed cell death protein ligand 1 (PD-L1), and anti-cytotoxic T-lymphocyte antigen 4 (anti-CTLA4) monoclonal antibodies, have been widely used in cancer treatment. They are known to cause immune-related adverse events (irAEs), which resemble autoimmune diseases. Anterior pituitary hypophysitis with secondary hypopituitarism is a frequently reported irAE, especially in patients receiving anti-CTLA4 treatment. In contrast, posterior pituitary involvement, such as central diabetes insipidus (DI), is relatively rare and is unreported in patients undergoing PD-1/PD-L1 blockade. We describe a case of a 73-year-old man with Merkel cell carcinoma who received the anti-PD-L1 monoclonal antibody avelumab and achieved partial response. The patient developed nocturia, polydipsia, and polyuria 3 months after starting avelumab. Further laboratory testing revealed central DI. Avelumab was held and he received desmopressin for the management of central DI. Within 6 weeks after discontinuation of avelumab, the patient's symptoms resolved and he was eventually taken off desmopressin. The patient remained off avelumab and there were no signs or symptoms of DI 2 months after the discontinuation of desmopressin. To our knowledge, this is the first report of central DI associated with anti-PD-L1 immunotherapy. The patient's endocrinopathy was successfully managed by holding treatment with the immune checkpoint inhibitor. This case highlights the importance of early screening and appropriate management of hormonal irAEs in subjects undergoing treatment with immune checkpoint inhibitors to minimize morbidity and mortality. Copyright © 2017 Endocrine Society

  15. Greedy algorithm with weights for decision tree construction

    KAUST Repository

    Moshkov, Mikhail

    2010-01-01

    An approximate algorithm for minimization of the weighted depth of decision trees is considered. A bound on the accuracy of this algorithm is obtained which is unimprovable in the general case. Under some natural assumptions on the class NP, the considered algorithm is close (from the point of view of accuracy) to the best polynomial approximate algorithms for minimization of the weighted depth of decision trees.
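
A greedy construction in this spirit picks, at each node, the attribute that separates the most differently-labelled row pairs per unit of attribute weight; the weighted depth of the resulting tree is the maximum total attribute weight along any root-to-leaf path. The pair-counting heuristic below is one plausible choice for illustration, not necessarily the paper's.

```python
def build_tree(rows, labels, weights, used=frozenset()):
    """Greedy decision-tree construction over a decision table.
    rows: list of tuples of attribute values; weights: per-attribute costs."""
    if len(set(labels)) <= 1:
        return labels[0]                      # leaf: a single decision remains
    best, best_score = None, -1.0
    for a in range(len(weights)):
        if a in used:
            continue
        # count differently-labelled row pairs this attribute separates
        pairs = sum(1 for i in range(len(rows)) for j in range(i + 1, len(rows))
                    if labels[i] != labels[j] and rows[i][a] != rows[j][a])
        score = pairs / weights[a]            # separation per unit weight
        if score > best_score:
            best, best_score = a, score
    children = {}
    for v in set(r[best] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[best] == v]
        children[v] = build_tree([rows[i] for i in idx], [labels[i] for i in idx],
                                 weights, used | {best})
    return (best, children)

def weighted_depth(tree, weights):
    """Maximum total attribute weight along any root-to-leaf path."""
    if not isinstance(tree, tuple):
        return 0.0
    a, children = tree
    return weights[a] + max(weighted_depth(c, weights) for c in children.values())

# XOR of the first two attributes; the third attribute equals the label
# but is expensive, so the greedy criterion avoids it
rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
labels = [0, 1, 1, 0]
weights = [1.0, 1.0, 3.0]
tree = build_tree(rows, labels, weights)
```

Here the cheap attributes win on the score and the tree reaches weighted depth 2, whereas rooting at the expensive third attribute would cost 3.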


  17. A Novel Role of IGF1 in Apo2L/TRAIL-Mediated Apoptosis of Ewing Tumor Cells

    Directory of Open Access Journals (Sweden)

    Frans van Valen

    2012-01-01

    Full Text Available Insulin-like growth factor 1 (IGF1) reputedly opposes chemotoxicity in Ewing sarcoma family of tumors (ESFT) cells. However, the effect of IGF1 on apoptosis induced by apoptosis ligand 2 (Apo2L)/tumor necrosis factor (TNF)-related apoptosis-inducing ligand (TRAIL) remains to be established. We find that, opposite to the partial survival effect of short-term IGF1 treatment, long-term IGF1 treatment amplified Apo2L/TRAIL-induced apoptosis in Apo2L/TRAIL-sensitive but not resistant ESFT cell lines. Remarkably, the specific IGF1 receptor (IGF1R) antibody α-IR3 was functionally equivalent to IGF1. Short-term IGF1 incubation of cells stimulated the survival kinase AKT and increased X-linked inhibitor of apoptosis (XIAP) protein, which was associated with Apo2L/TRAIL resistance. In contrast, long-term IGF1 incubation resulted in repression of XIAP protein through ceramide (Cer) formation derived from de novo synthesis, which was associated with Apo2L/TRAIL sensitization. Addition of the ceramide synthase (CerS) inhibitor fumonisin B1 during long-term IGF1 treatment reduced XIAP repression and Apo2L/TRAIL-induced apoptosis. Noteworthy, resistance to conventional chemotherapeutic agents was maintained in cells following chronic IGF1 treatment. Overall, the results suggest that chronic IGF1 treatment renders ESFT cells susceptible to Apo2L/TRAIL-induced apoptosis and may have important implications for the biology as well as the clinical management of refractory ESFT.

  18. Technical Note: A novel leaf sequencing optimization algorithm which considers previous underdose and overdose events for MLC tracking radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Wisotzky, Eric, E-mail: eric.wisotzky@charite.de, E-mail: eric.wisotzky@ipk.fraunhofer.de; O’Brien, Ricky; Keall, Paul J., E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, Sydney, NSW 2006 (Australia)

    2016-01-15

    Purpose: Multileaf collimator (MLC) tracking radiotherapy is complex, as the beam pattern needs to be modified due to the planned intensity modulation as well as the real-time target motion. The target motion cannot be planned; therefore, the modified beam pattern differs from the original plan and the MLC sequence needs to be recomputed online. Current MLC tracking algorithms use a greedy heuristic in that they optimize for a given time but ignore past errors. To overcome this problem, the authors have developed an improved algorithm that minimizes large underdose and overdose regions. Additionally, previous underdose and overdose events are taken into account to avoid regions with a high number of dose events. Methods: The authors improved the existing MLC motion control algorithm by introducing a cumulative underdose/overdose map. This map represents the actual projection of the planned tumor shape and logs occurring dose events at specific regions. These events influence the dose cost calculation and reduce the recurrence of dose events in each region. The authors studied the improvement of the new temporal optimization algorithm in terms of the L1-norm minimization of the sum of overdose and underdose, compared to not accounting for previous dose events. For evaluation, the authors simulated the delivery of 5 conformal and 14 intensity-modulated radiotherapy (IMRT) plans with 7 3D patient-measured tumor motion traces. Results: Simulations with conformal shapes showed an improvement of the L1-norm of up to 8.5% after 100 MLC modification steps. Experiments showed comparable improvements with the same types of treatment plans. Conclusions: A novel leaf sequencing optimization algorithm which considers previous dose events for MLC tracking radiotherapy has been developed and investigated. Reductions in underdose/overdose are observed for conformal and IMRT delivery.
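
The cumulative-map idea can be sketched for a single leaf pair in one dimension: each candidate leaf move within the speed limit is scored by the L1 underdose/overdose it would produce, with errors weighted up wherever the map has already logged dose events. The geometry, speed limit, and weighting below are hypothetical, not the paper's implementation.

```python
import numpy as np

def track_step(target_lo, target_hi, leaf_lo, leaf_hi, cum_map, grid,
               v_max=2.0, alpha=0.5, step=0.5):
    """One tracking step for a single 1-D leaf pair.
    Candidate positions within the speed limit are scored by the L1
    underdose/overdose of the exposed interval; the cumulative map raises
    the cost where errors have already accumulated."""
    target = (grid >= target_lo) & (grid <= target_hi)
    best = None
    for lo in np.arange(leaf_lo - v_max, leaf_lo + v_max + 1e-9, step):
        for hi in np.arange(leaf_hi - v_max, leaf_hi + v_max + 1e-9, step):
            if hi <= lo:
                continue
            opened = (grid >= lo) & (grid <= hi)
            err = opened != target               # under- or overdosed points
            cost = np.sum((1.0 + alpha * cum_map) * err)
            if best is None or cost < best[0]:
                best = (cost, lo, hi, err)
    cost, lo, hi, err = best
    cum_map += err                               # log dose events for next steps
    return lo, hi

grid = np.linspace(0.0, 10.0, 101)
cum = np.zeros_like(grid)
# the planned aperture shifts with target motion; the leaves chase it
lo1, hi1 = track_step(4.5, 8.5, 3.0, 7.0, cum, grid)   # shift within reach
lo2, hi2 = track_step(8.0, 9.5, lo1, hi1, cum, grid)   # shift beyond v_max
```

In the second step the leaves cannot reach the new aperture, so an overdose region is logged in the map; subsequent steps would then pay extra for repeating errors there, which is the temporal memory the greedy per-step optimization lacks.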

  19. Waste minimization handbook, Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.

  20. Waste minimization handbook, Volume 1

    International Nuclear Information System (INIS)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.

  1. Synthesis, crystal structure and magnetic studies of tetranuclear hydroxo- and ligand-bridged [Co4(μ3-OH)2(μ2-dea)2(L-L)4]·4Cl·8H2O [L-L = 2,2'-bipyridine or 1,10-phenanthroline] complexes with a mixed-valence defect dicubane core.

    Science.gov (United States)

    Siddiqi, Zafar A; Siddique, Armeen; Shahid, M; Khalid, Mohd; Sharma, Prashant K; Anjuli; Ahmad, Musheer; Kumar, Sarvendra; Lan, Yanhua; Powell, Annie K

    2013-07-14

    X-ray crystallography of the title complexes indicates a discrete mixed-valence (Co2(II)-Co2(III)) defect dicubane molecular unit in which each cobalt nucleus attains a distorted octahedral geometry. The α-diimine (L-L) chelator coordinated to each cobalt ion prevents further polymerization or increase in nuclearity. The water molecules in the lattice play a crucial role in the formation of the supramolecular architectures. Magnetic data were analyzed using the effective spin-1/2 Hamiltonian approach; the parameters are J = 115(6) K, ΔJ = -57.0(1.2) K, g(xy) = 3.001(25), and g(z) = 7.214(7) for 1 and J = 115(12) K, ΔJ = -58.5(2.5) K, g(xy) = 3.34(5), and g(z) = 6.599(12) for 2, suggesting that only the g matrices are prone to change with the α-diimine chelator.

  2. The phenomenon of L1/L2 alternation in the teaching of Italian as a foreign language: analysis of a corpus of didactic interactions

    Directory of Open Access Journals (Sweden)

    Paola Arrigoni

    2012-02-01

    Full Text Available Multilingual communication in foreign language classrooms can be considered one of the pedagogical strategies available to the teacher for achieving specific educational and formative aims. Starting from a brief analysis of what it means to be a multilingual speaker and of multilingualism itself, this article examines how education for multilingualism is carried out in instructional contexts and, in particular, whether and how the use of the L1 can be a supporting tool for foreign language teaching and learning. For this purpose, the data collected at Coventry University during Italian (L2) courses for English-speaking students were analyzed. The most frequent and meaningful phenomena of L1/L2 linguistic contact were grouped according to the speaker and to their functions.

  3. SU_L(3)xU_X(1)-invariant description of the bilepton contribution to the WWV vertex in the minimal 331 model

    International Nuclear Information System (INIS)

    Montano, J.; Tavares-Velasco, G.; Toscano, J.J.; Ramirez-Zavaleta, F.

    2005-01-01

    We study the one-loop sensitivity of the WWV (V = γ, Z) vertex to the new massive gauge bosons predicted by the minimal SU_L(3)xU_X(1) model, which have unusual couplings to the standard model (SM) gauge bosons. A gauge-fixing procedure covariant under the SU_L(2)xU_Y(1) group was introduced for these new gauge bosons (dubbed bileptons) in order to generate gauge-invariant Green functions. The similarities between this procedure and the unconventional quantization scheme of the background field method are discussed. It is found that, for relatively light bileptons, with a mass ranging from 2m_W to 6m_W, the radiative corrections to the form factors associated with the WWV vertex can be of the same order of magnitude as the SM ones. In the case of heavier bileptons, their contribution is about one to two orders of magnitude smaller than their SM counterpart.

  4. Nephrogenic diabetes insipidus in a patient with L1 syndrome: a new report of a contiguous gene deletion syndrome including L1CAM and AVPR2.

    Science.gov (United States)

    Knops, Noël B B; Bos, Krista K; Kerstjens, Mieke; van Dael, Karin; Vos, Yvonne J

    2008-07-15

    We report on an infant boy with congenital hydrocephalus due to L1 syndrome and polyuria due to diabetes insipidus. We initially believed his excessive urine loss was from central diabetes insipidus and that the cerebral malformation caused a secondary insufficient pituitary vasopressin release. However, he failed to respond to treatment with a vasopressin analogue, which pointed to nephrogenic diabetes insipidus (NDI). L1 syndrome and X-linked NDI are distinct clinical disorders caused by mutations in the L1CAM and AVPR2 genes, respectively, located in adjacent positions in Xq28. In this boy we found a deletion of 61,577 base pairs encompassing the entire L1CAM and AVPR2 genes and extending into intron 7 of the ARHGAP4 gene. To our knowledge this is the first description of a patient with a deletion of these three genes. He is the second patient to be described with L1 syndrome and NDI. During follow-up he manifested complications from the hydrocephalus and NDI including global developmental delay and growth failure with low IGF-1 and hypothyroidism. (c) 2008 Wiley-Liss, Inc.

  5. A Superlinearly Convergent O(√n L)-Iteration Algorithm for Linear Programming

    National Research Council Canada - National Science Library

    Ye, Y; Tapia, Richard A; Zhang, Y

    1991-01-01

    .... We demonstrate that the modified algorithm maintains its O(√n L)-iteration complexity, while exhibiting superlinear convergence for general problems and quadratic convergence for nondegenerate problems...

  6. Algorithm 426: Merge sort algorithm [M1]

    NARCIS (Netherlands)

    Bron, C.

    1972-01-01

    Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives...
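    The recursive two-way merge the abstract describes stays compact in any language with recursion; a Python transcription of the scheme (not the published ALGOL 60 text) is:

    ```python
    def merge_sort(a):
        """Recursive two-way merge sort; returns a new sorted list."""
        if len(a) <= 1:
            return list(a)
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):   # merge the sorted halves
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:])                      # append the leftover tail
        out.extend(right[j:])
        return out
    ```

    The recursion splits until sublists are trivially sorted, and each merge is a single linear pass, which is what makes the correctness argument short.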

  7. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    Science.gov (United States)

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
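    For concreteness, the SC algorithm the brief reports as fastest, orthogonal matching pursuit, can be sketched with NumPy (generic textbook OMP, not the authors' implementation):

    ```python
    import numpy as np

    def omp(D, y, k):
        """Orthogonal matching pursuit: greedily pick k atoms (columns of D,
        assumed unit norm) and least-squares refit y on the selected support."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
            support.append(j)
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef          # re-fit, then update
        x = np.zeros(D.shape[1])
        x[support] = coef
        return x
    ```

    Each iteration adds the atom most correlated with the current residual and refits all selected coefficients by least squares, so k iterations yield a k-sparse approximation of y.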

  8. Experiment prediction for LOFT nuclear experiments L5-1 and L8-2

    International Nuclear Information System (INIS)

    Chen, T.H.; Modro, S.M.

    1983-01-01

    The LOFT Experiments L5-1 and L8-2 simulated intermediate break loss-of-coolant accidents with core uncovery. This paper compares the predictions with the measured data for these experiments. The RELAP5 code was used to perform best estimate double-blind and single-blind predictions. The double-blind calculations are performed prior to the experiment and use specified nominal initial and boundary conditions. The single-blind calculations are performed after the experiment and use measured initial and boundary conditions while maintaining all other parameters constant, including the code version. Comparisons of calculated results with experimental results are discussed; the possible causes of discrepancies are explored and explained. RELAP5 calculated system pressure, mass inventory, and fuel cladding temperature agree reasonably well with the experimental results, and only slight changes are noted between the double-blind and single-blind predictions

  9. The Role of SES in Chinese (L1) and English (L2) Word Reading in Chinese-Speaking Kindergarteners

    Science.gov (United States)

    Liu, Duo; Chung, Kevin K. H.; McBride, Catherine

    2016-01-01

    The present study investigated the relationships between socioeconomic status (SES) and word reading in both Chinese (L1) and English (L2), with children's cognitive/linguistic skills considered as mediators and/or moderators. One hundred ninety-nine Chinese kindergarteners in Hong Kong with diverse SES backgrounds participated in this study. SES…

  10. L1 and L2 Picture Naming in Mandarin-English Bilinguals: A Test of Bilingual Dual Coding Theory

    Science.gov (United States)

    Jared, Debra; Poh, Rebecca Pei Yun; Paivio, Allan

    2013-01-01

    This study examined the nature of bilinguals' conceptual representations and the links from these representations to words in L1 and L2. Specifically, we tested an assumption of the Bilingual Dual Coding Theory that conceptual representations include image representations, and that learning two languages in separate contexts can result in…

  11. Comparative studies of the endonucleases from two related Xenopus laevis retrotransposons, Tx1L and Tx2L: target site specificity and evolutionary implications.

    Science.gov (United States)

    Christensen, S; Pont-Kingdon, G; Carroll, D

    2000-01-01

    In the genome of the South African frog, Xenopus laevis, there are two complex families of transposable elements, Tx1 and Tx2, that have identical overall structures, but distinct sequences. In each family there are approximately 1500 copies of an apparent DNA-based element (Tx1D and Tx2D). Roughly 10% of these elements in each family are interrupted by a non-LTR retrotransposon (Tx1L and Tx2L). Each retrotransposon is flanked by a 23-bp target duplication of a specific D element sequence. In earlier work, we showed that the endonuclease domain (Tx1L EN) located in the second open reading frame (ORF2) of Tx1L encodes a protein that makes a single-strand cut precisely at the expected site within its target sequence, supporting the idea that Tx1L is a site-specific retrotransposon. In this study, we express the endonuclease domain of Tx2L (Tx2L EN) and compare the target preferences of the two enzymes. Each endonuclease shows some preference for its cognate target, on the order of 5-fold over the non-cognate target. The observed discrimination is not sufficient, however, to explain the observation that no cross-occupancy is observed - that is, L elements of one family have never been found within D elements of the other family. Possible sources of additional specificity are discussed. We also compare two hypotheses regarding the genome duplication event that led to the contemporary pseudotetraploid character of Xenopus laevis in light of the Tx1L and Tx2L data.

  12. Bioactivation mechanism of the cytotoxic and nephrotoxic S-conjugate S-(2-chloro-1,1,2-trifluoroethyl)-L-cysteine

    International Nuclear Information System (INIS)

    Dekant, W.; Lash, L.H.; Anders, M.W.

    1987-01-01

    The bioactivation of S-(2-chloro-1,1,2-trifluoroethyl)-L-cysteine (CTFC) was studied with purified bovine kidney cysteine conjugate β-lyase and with N-dodecylpyridoxal bromide in cetyltrimethylammonium bromide micelles as a pyridoxal model system. The β-lyase and the pyridoxal model system converted CTFC to chlorofluoroacetic acid and inorganic fluoride, which were identified by 19F NMR spectrometry. 2-Chloro-1,1,2-trifluoroethanethiol and chlorofluorothionoacetyl fluoride were formed as metabolites of CTFC and were trapped with benzyl bromide and diethylamine, respectively, to yield benzyl 2-chloro-1,1,2-trifluoroethyl sulfide and N,N-diethyl chlorofluorothioacetamide, which were identified by gas chromatography/mass spectrometry. The bioactivation mechanism of CTFC therefore involves the initial formation of the unstable thiol 2-chloro-1,1,2-trifluoroethanethiol, which loses hydrogen fluoride to form the acylating agent chlorofluorothionoacetyl fluoride; hydrolysis of the thionoacyl fluoride affords the stable, terminal metabolites chlorofluoroacetic acid and inorganic fluoride. The intermediate acylating agent and chlorofluoroacetic acid may contribute to the cytotoxic effects of CTFC

  13. Minimizing Total Completion Time For Preemptive Scheduling With Release Dates And Deadline Constraints

    Directory of Open Access Journals (Sweden)

    He Cheng

    2014-02-01

    Full Text Available It is known that the single machine preemptive scheduling problem of minimizing total completion time with release date and deadline constraints is NP-hard. Du and Leung solved some special cases by the generalized Baker's algorithm and the generalized Smith's algorithm in O(n^2) time. In this paper we give an O(n^2) algorithm for the special case where the processing times and deadlines are agreeable. Moreover, for the case where the processing times and deadlines are disagreeable, we present two properties which could enable us to reduce the range of the enumeration algorithm
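    Dropping the deadline constraints gives the classical baseline for this problem: the preemptive shortest-remaining-processing-time (SRPT) rule minimizes total completion time with release dates on a single machine. A sketch of that baseline (the deadline-constrained cases above require the generalized Baker/Smith algorithms instead):

    ```python
    import heapq

    def srpt(jobs):
        """Preemptive SRPT schedule on one machine.
        jobs: list of (release date, processing time).
        Returns the sum of completion times."""
        jobs = sorted(jobs)                  # by release date
        n = len(jobs)
        t, i, done, total = 0, 0, 0, 0
        heap = []                            # (remaining time, job id)
        while done < n:
            if not heap and i < n and t < jobs[i][0]:
                t = jobs[i][0]               # machine idles until next release
            while i < n and jobs[i][0] <= t:
                heapq.heappush(heap, (jobs[i][1], i))
                i += 1
            rem, j = heapq.heappop(heap)     # job with least remaining work
            nxt = jobs[i][0] if i < n else float("inf")
            if t + rem <= nxt:
                t += rem                     # job finishes before next release
                done += 1
                total += t
            else:                            # next release may preempt it
                heapq.heappush(heap, (rem - (nxt - t), j))
                t = nxt
        return total
    ```

    With jobs (0, 3) and (1, 1), SRPT preempts the long job at time 1 and achieves total completion time 6, whereas running the first job to completion would give 7.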

  14. Evaluation of the Effects of S-Allyl-L-cysteine, S-Methyl-L-cysteine, trans-S-1-Propenyl-L-cysteine, and Their N-Acetylated and S-Oxidized Metabolites on Human CYP Activities.

    Science.gov (United States)

    Amano, Hirotaka; Kazamori, Daichi; Itoh, Kenji

    2016-01-01

    Three major organosulfur compounds of aged garlic extract, S-allyl-L-cysteine (SAC), S-methyl-L-cysteine (SMC), and trans-S-1-propenyl-L-cysteine (S1PC), were examined for their effects on the activities of five major isoforms of human CYP enzymes: CYP1A2, 2C9, 2C19, 2D6, and 3A4. The metabolite formation from probe substrates for the CYP isoforms was examined in human liver microsomes in the presence of organosulfur compounds at 0.01-1 mM by using liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis. Allicin, a major component of garlic, inhibited CYP1A2 and CYP3A4 activity by 21-45% at 0.03 mM. In contrast, a CYP2C9-catalyzed reaction was enhanced by up to 1.9 times in the presence of allicin at 0.003-0.3 mM. SAC, SMC, and S1PC had no effect on the activities of the five isoforms, except that S1PC inhibited CYP3A4-catalyzed midazolam 1'-hydroxylation by 31% at 1 mM. The N-acetylated metabolites of the three compounds inhibited the activities of several isoforms to a varying degree at 1 mM. N-Acetyl-S-allyl-L-cysteine and N-acetyl-S-methyl-L-cysteine inhibited the reactions catalyzed by CYP2D6 and CYP1A2, by 19 and 26%, respectively, whereas trans-N-acetyl-S-1-propenyl-L-cysteine showed weak to moderate inhibition (19-49%) of CYP1A2, 2C19, 2D6, and 3A4 activities. On the other hand, both the N-acetylated and S-oxidized metabolites of SAC, SMC, and S1PC had little effect on the reactions catalyzed by the five isoforms. These results indicated that SAC, SMC, and S1PC have little potential to cause drug-drug interaction due to CYP inhibition or activation in vivo, as judged by their minimal effects (IC50 > 1 mM) on the activities of five major isoforms of human CYP in vitro.

  15. Minimization of decision tree depth for multi-label decision tables

    KAUST Repository

    Azad, Mohammad

    2014-10-01

    In this paper, we consider multi-label decision tables that have a set of decisions attached to each row. Our goal is to find one decision from the set of decisions for each row by using a decision tree as our tool. With the target of minimizing the depth of the decision tree, we devised various kinds of greedy algorithms as well as a dynamic programming algorithm. When comparing with the optimal result obtained from the dynamic programming algorithm, we found that some greedy algorithms produce results which are close to the optimal result for the minimization of the depth of decision trees.

  16. Minimization of decision tree depth for multi-label decision tables

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

    In this paper, we consider multi-label decision tables that have a set of decisions attached to each row. Our goal is to find one decision from the set of decisions for each row by using a decision tree as our tool. With the target of minimizing the depth of the decision tree, we devised various kinds of greedy algorithms as well as a dynamic programming algorithm. When comparing with the optimal result obtained from the dynamic programming algorithm, we found that some greedy algorithms produce results which are close to the optimal result for the minimization of the depth of decision trees.
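    One way to picture such a greedy algorithm: split recursively on an attribute, and stop as soon as all rows in a node share a common decision. The split criterion below (minimize the largest branch) is one illustrative choice, not necessarily one of the paper's heuristics:

    ```python
    def greedy_depth(rows, attrs):
        """Greedy upper bound on decision-tree depth for a multi-label
        decision table. rows: list of (attribute-value tuple, set of
        admissible decisions); attrs: attribute indices still usable.
        Assumes the table admits a tree, i.e. rows with identical values
        share at least one decision."""
        if set.intersection(*(d for _, d in rows)):
            return 0                       # one decision covers every row
        def branches_of(a):
            branches = {}
            for v, d in rows:
                branches.setdefault(v[a], []).append((v, d))
            return branches
        best = min(attrs,
                   key=lambda a: max(len(b) for b in branches_of(a).values()))
        rest = [a for a in attrs if a != best]
        return 1 + max(greedy_depth(b, rest)
                       for b in branches_of(best).values())
    ```

    On the toy table in the test below the greedy tree has depth 2, while splitting first on the second attribute would give depth 1; this is exactly the kind of gap the comparison with the dynamic programming optimum measures.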

  17. PM1 steganographic algorithm using ternary Hamming Code

    Directory of Open Access Journals (Sweden)

    Kamil Kaczyński

    2015-12-01

    Full Text Available The PM1 algorithm is a modification of the well-known LSB steganographic algorithm. It has increased resistance to selected steganalytic attacks and increased embedding efficiency. Due to its uniqueness, the PM1 algorithm allows the use of a larger alphabet of symbols, making it possible to further increase steganographic capacity. In this paper, we present a modified PM1 algorithm which utilizes so-called syndrome coding and a ternary Hamming code. The modified algorithm has increased embedding efficiency, which means fewer changes introduced to the carrier and increased capacity. Keywords: steganography, linear codes, PM1, LSB, ternary Hamming code
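    Syndrome coding with a ternary Hamming code can be sketched directly: with the [4, 2] ternary Hamming code, 2 message trits are embedded into 4 cover symbols (values mod 3) by changing at most one symbol by ±1 mod 3. The parity-check matrix H below is an assumed standard choice, not taken from the paper:

    ```python
    import numpy as np

    # Parity-check matrix of the ternary [4, 2] Hamming code: its columns
    # cover every nonzero syndrome in GF(3)^2 up to a nonzero scalar.
    H = np.array([[0, 1, 1, 1],
                  [1, 0, 1, 2]])

    def embed(x, m):
        """Hide two message trits m in four cover symbols x (all mod 3),
        changing at most one symbol."""
        x = x.copy()
        s = (m - H @ x) % 3                      # syndrome to be produced
        if s.any():
            for j in range(4):
                for c in (1, 2):
                    if np.array_equal((c * H[:, j]) % 3, s):
                        x[j] = (x[j] + c) % 3    # the single required change
                        return x
        return x

    def extract(x):
        """Recover the two message trits from four stego symbols."""
        return (H @ x) % 3
    ```

    Four cover symbols thus carry two trits with at most one ±1 change, which is the kind of embedding-efficiency gain over plain LSB substitution the abstract refers to.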

  18. Minimal DBM Subtraction

    DEFF Research Database (Denmark)

    David, Alexandre; Håkansson, John; G. Larsen, Kim

    In this paper we present an algorithm to compute DBM subtractions with a guaranteed minimal number of splits and disjoint DBMs to avoid any redundancy. The subtraction is one of the few operations that result in a non-convex zone and thus requires splitting. It is of prime importance to reduce...

  19. Technical note: Intercomparison of three AATSR Level 2 (L2) AOD products over China

    Directory of Open Access Journals (Sweden)

    Y. Che

    2016-08-01

    Full Text Available One of four main focus areas of the PEEX initiative is to establish and sustain long-term, continuous, and comprehensive ground-based, airborne, and seaborne observation infrastructure together with satellite data. The Advanced Along-Track Scanning Radiometer (AATSR) aboard ENVISAT is used to observe the Earth in dual view. The AATSR data can be used to retrieve aerosol optical depth (AOD) over both land and ocean, which is an important parameter in the characterization of aerosol properties. In recent years, aerosol retrieval algorithms have been developed both over land and ocean, taking advantage of the dual-view feature, which helps eliminate the contribution of Earth's surface to top-of-atmosphere (TOA) reflectance. The Aerosol_cci project, as part of the Climate Change Initiative (CCI), provides users with three AOD retrieval algorithms for AATSR data: the Swansea algorithm (SU), the ATSR-2/AATSR dual-view aerosol retrieval algorithm (ADV), and the Oxford-RAL Retrieval of Aerosol and Cloud algorithm (ORAC). The validation team of the Aerosol_cci project has validated the AOD (both Level 2 and Level 3) products and the AE (Ångström exponent; Level 2 product only) against AERONET data in a round-robin evaluation using the validation tool of the AeroCOM (Aerosol Comparison between Observations and Models) project. For the purpose of evaluating the different performance of these three algorithms in calculating AOD over mainland China, we introduce ground-based data from CARSNET (China Aerosol Remote Sensing Network), which was designed for aerosol observations in China. Because China is vast in territory and has great differences in terms of land surfaces, the combination of the AERONET and CARSNET data can validate the L2 AOD products more comprehensively. The validation results show different performances of these products in 2007, 2008, and 2010. The SU algorithm performs very well over sites with different surface conditions in mainland China.

  20. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    Science.gov (United States)

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
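    The two accelerations named in the title, momentum and adaptive restart, can be shown on a generic soft-thresholding iteration for a LASSO problem. This sketch uses a scalar Lipschitz constant L; BARISTA's contribution is precisely to replace that scalar with a majorizing matrix adapted to the shift-variant (coil-sensitivity) structure:

    ```python
    import numpy as np

    def fista_restart(A, b, lam, L, iters=200):
        """Soft-thresholded gradient descent with momentum and function-value
        adaptive restart for min 0.5*||Ax - b||^2 + lam*||x||_1.
        Generic sketch of the ingredients the paper builds on."""
        soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
        f = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
        x = z = np.zeros(A.shape[1])
        t, fx = 1.0, f(x)
        for _ in range(iters):
            x_new = soft(z - A.T @ (A @ z - b) / L, lam / L)
            if f(x_new) > fx:            # cost went up: drop the momentum
                t, z = 1.0, x
                x_new = soft(z - A.T @ (A @ z - b) / L, lam / L)
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum step
            x, t, fx = x_new, t_new, f(x_new)
        return x
    ```

    The restart test discards the momentum whenever the cost increases, which is what keeps the accelerated iteration monotone in practice.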

  1. A Local and Global Search Combine Particle Swarm Optimization Algorithm for Job-Shop Scheduling to Minimize Makespan

    Directory of Open Access Journals (Sweden)

    Zhigang Lian

    2010-01-01

    Full Text Available The job-shop scheduling problem (JSSP) is a branch of production scheduling and is among the hardest combinatorial optimization problems. Many different approaches have been applied to JSSP, but even instances of moderate size cannot always be solved with guaranteed optimality. The original particle swarm optimization algorithm (OPSOA) is generally used to solve continuous problems and rarely to optimize discrete problems such as JSSP. We find that OPSOA has a tendency to get stuck in a near-optimal solution, especially for middle- and large-size problems. The local and global search combine particle swarm optimization algorithm (LGSCPSOA) is used to solve JSSP, where the particle-updating mechanism benefits from the searching experience of the particle itself, the best of all particles in the swarm, and the best of the particles in the neighborhood population. A new coding method is used in LGSCPSOA to optimize JSSP, which guarantees that all generated sequences are feasible solutions. Computational experiments are carried out on three representative instances, and simulation shows that LGSCPSOA is efficacious for JSSP to minimize makespan.
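    The three-attractor particle update can be sketched on a continuous test function (the paper itself uses a permutation coding for JSSP; the coefficients below are illustrative assumptions):

    ```python
    import random

    def lgsc_pso_sketch(f, dim, n=20, iters=200, seed=0):
        """Continuous PSO with the three attractors described above:
        a particle's own best, its ring-neighborhood best, and the
        swarm-wide best."""
        rng = random.Random(seed)
        X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        V = [[0.0] * dim for _ in range(n)]
        P = [x[:] for x in X]              # personal best positions
        pf = [f(x) for x in X]             # personal best values
        for _ in range(iters):
            g = min(range(n), key=lambda i: pf[i])            # swarm best
            for i in range(n):
                nb = min((i - 1) % n, i, (i + 1) % n,
                         key=lambda j: pf[j])                 # ring neighbor best
                for d in range(dim):
                    V[i][d] = (0.7 * V[i][d]
                               + 1.2 * rng.random() * (P[i][d] - X[i][d])
                               + 1.2 * rng.random() * (P[g][d] - X[i][d])
                               + 0.6 * rng.random() * (P[nb][d] - X[i][d]))
                    X[i][d] += V[i][d]
                fx = f(X[i])
                if fx < pf[i]:
                    pf[i], P[i] = fx, X[i][:]
        g = min(range(n), key=lambda i: pf[i])
        return P[g], pf[g]
    ```

    The extra neighborhood term gives each particle a local attractor besides the global one, which is the mechanism the abstract credits for escaping near-optimal traps.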

  2. Evaluation of the effects of gamma radiation on physical and chemical characteristics of pineapple (Ananas comosus (L.) Merr.) cv. Smooth Cayenne, minimally processed

    Energy Technology Data Exchange (ETDEWEB)

    Perecin, Thalita N.; Oliveira, Ana Claudia S.; Silva, Lucia C.A.; Costa, Marcia H.N.; Arthur, Valter [Centro de Energia Nuclear na Agricultura (CENA/USP), Piracicaba, SP (Brazil). Lab. de Radiobiologia e Ambiente], e-mail: arthur@cena.usp.br

    2009-07-01

    This study aimed to evaluate the effects of gamma radiation, polypropylene packaging, and temperature (8 deg C) on the physico-chemical characteristics of minimally processed pineapple 'Smooth Cayenne'. The fruits were selected, washed, peeled, sliced crosswise into four parts, placed in sodium hypochlorite at 10 mL/L for three minutes, dried, and packaged. The samples were irradiated in a Cobalt-60 source, type Gammacell-220 (dose rate 0.543 kGy/hour), with doses of 0 (control), 1, and 2 kGy and stored at a temperature of 8 deg C. Color (L, a, b factors), pH, deg Brix, and texture were analyzed during the 5 days after irradiation. The experiment used a completely randomized design with 3 replicates for each treatment. For the statistical analysis the Tukey test was used at the 5% level of probability. (author)

  3. Evaluation of the effects of gamma radiation on physical and chemical characteristics of pineapple (Ananas comosus (L.) Merr.) cv. Smooth Cayenne, minimally processed

    International Nuclear Information System (INIS)

    Perecin, Thalita N.; Oliveira, Ana Claudia S.; Silva, Lucia C.A.; Costa, Marcia H.N.; Arthur, Valter

    2009-01-01

    This study aimed to evaluate the effects of gamma radiation, polypropylene packaging, and temperature (8 deg C) on the physico-chemical characteristics of minimally processed pineapple 'Smooth Cayenne'. The fruits were selected, washed, peeled, sliced crosswise into four parts, placed in sodium hypochlorite at 10 mL/L for three minutes, dried, and packaged. The samples were irradiated in a Cobalt-60 source, type Gammacell-220 (dose rate 0.543 kGy/hour), with doses of 0 (control), 1, and 2 kGy and stored at a temperature of 8 deg C. Color (L, a, b factors), pH, deg Brix, and texture were analyzed during the 5 days after irradiation. The experiment used a completely randomized design with 3 replicates for each treatment. For the statistical analysis the Tukey test was used at the 5% level of probability. (author)

  4. Minimally inconsistent reasoning in Semantic Web.

    Science.gov (United States)

    Zhang, Xiaowang

    2017-01-01

    Reasoning with inconsistencies is an important issue for the Semantic Web, as imperfect information is unavoidable in real applications. For this, different paraconsistent approaches, due to their capacity to draw nontrivial conclusions while tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. To this end, this paper presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, where the inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Some desirable properties are studied, which shows that the new semantics inherits advantages of both non-monotonic reasoning and paraconsistent reasoning. A complete and sound tableau-based algorithm, called multi-valued tableaux, is developed to capture the minimally inconsistent reasoning. In fact, the tableaux algorithm is designed as a framework for multi-valued DL, allowing for different underlying paraconsistent semantics, with the mere difference in the clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as (classical) description logic reasoning.

  5. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    CERN Document Server

    AUTHOR|(CDS)2090481

    2016-01-01

    At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new track trigger will be to build L1 tracks from the tracker information. The architecture that will be implemented in future to process tracker data is still under discussion. One possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and it is currently being demonstrated in hardware, using the "MP7", which is a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough tran...
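    The Hough transform at the heart of the algorithm can be sketched for generic straight-line finding: each hit votes for every parameter bin consistent with it, and aligned hits pile their votes into a common peak (the L1 track finder bins in track parameters and runs the equivalent logic in FPGAs, which this Python sketch does not attempt to model):

    ```python
    import math
    from collections import Counter

    def hough_lines(points, n_theta=180, rho_step=1.0):
        """Straight-line Hough transform over (theta, rho) bins; returns the
        vote accumulator. Peaks correspond to candidate lines."""
        acc = Counter()
        for x, y in points:
            for i in range(n_theta):
                theta = math.pi * i / n_theta
                rho = x * math.cos(theta) + y * math.sin(theta)
                acc[(i, round(rho / rho_step))] += 1   # one vote per bin
        return acc

    # ten simulated hits lying on the line y = 2x + 1
    points = [(t, 2 * t + 1) for t in range(10)]
    acc = hough_lines(points)
    peak_bin, votes = acc.most_common(1)[0]            # the candidate track
    ```

    All ten collinear hits contribute to one accumulator bin, so the peak stands well above the combinatorial background, which is what makes the method attractive for a fixed-latency trigger.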

  6. Transcatheter aortic valve implantation of the direct flow medical aortic valve with minimal or no contrast

    Energy Technology Data Exchange (ETDEWEB)

    Latib, Azeem, E-mail: alatib@gmail.com [Interventional Cardiology Unit, San Raffaele Scientific Institute and EMO-GVM Centro Cuore Columbus, Milan (Italy); Maisano, Francesco; Colombo, Antonio [Interventional Cardiology Unit, San Raffaele Scientific Institute and EMO-GVM Centro Cuore Columbus, Milan (Italy); Klugmann, Silvio [Azienda Ospedaliera Niguarda Ca Granda, Piazza Ospedale Maggiore 3, Milan (Italy); Low, Reginald; Smith, Thomas [University of California Davis, Davis, CA 95616 (United States); Davidson, Charles [Northwestern Memorial Hospital, Chicago, IL 60611 (United States); Harreld, John H. [Clinical Imaging Analytics, Guerneville, CA (United States); Bruschi, Giuseppe; DeMarco, Federico [Azienda Ospedaliera Niguarda Ca Granda, Piazza Ospedale Maggiore 3, Milan (Italy)

    2014-06-15

    The 18F Direct Flow Medical (DFM) THV has conformable sealing rings, which minimizes aortic regurgitation and permits full hemodynamic assessment of valve performance prior to permanent implantation. During the DISCOVER trial, three patients who were at risk from contrast media, two due to severe CKD and one due to a recent hyperthyroid reaction to contrast, underwent DFM implantation under fluoroscopic and transesophageal guidance without aortography during either positioning or to confirm the final position. Valve positioning was based on the optimal angiographic projection as calculated from the pre-procedural multislice CT scan. Precise optimization of valve position was performed to minimize transvalve gradient and aortic regurgitation. Prior to final implantation, transvalve hemodynamics were assessed invasively and by TEE. The post-procedure mean gradients were 7, 10, and 11 mm Hg. The final AVA by echo was 1.70, 1.40, and 1.68 cm^2. Total aortic regurgitation post-procedure was none or trace in all three patients. Total positioning and assessment of valve performance time was 4, 6, and 12 minutes. Contrast was only used to confirm successful percutaneous closure of the femoral access site. The total contrast dose was 5, 8, and 12 cc. Baseline eGFR and creatinine were 28, 22, and 74 mL/min/1.73 m^2 and 2.35, 2.98, and 1.03 mg/dL, respectively. Renal function was unchanged post-procedure: eGFR = 25, 35, and 96 mL/min/1.73 m^2 and creatinine = 2.58, 1.99, and 1.03 mg/dL, respectively. In conclusion, the DFM THV provides the ability to perform TAVI with minimal or no contrast. The precise and predictable implantation technique can be performed with fluoro and echo guidance.

  7. A New Finite Continuation Algorithm for Linear Programming

    DEFF Research Database (Denmark)

    Madsen, Kaj; Nielsen, Hans Bruun; Pinar, Mustafa

    1996-01-01

    We describe a new finite continuation algorithm for linear programming. The dual of the linear programming problem with unit lower and upper bounds is formulated as an $\\ell_1$ minimization problem augmented with the addition of a linear term. This nondifferentiable problem is approximated...... by a smooth problem. It is shown that the minimizers of the smooth problem define a family of piecewise-linear paths as a function of a smoothing parameter. Based on this property, a finite algorithm that traces these paths to arrive at an optimal solution of the linear program is developed. The smooth...
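
    The smoothing step can be illustrated with a Huber-type approximation of the absolute value (an assumption for illustration; the paper's exact smoothing function may differ). As the smoothing parameter shrinks, the smooth minimizer approaches the exact $\\ell_1$ minimizer:

```python
def huber(t, gamma):
    # Smooth approximation of |t|: quadratic near zero, linear outside.
    return t * t / (2.0 * gamma) if abs(t) <= gamma else abs(t) - gamma / 2.0

def smooth_l1(x, data, gamma):
    # Smoothed version of the piecewise-linear objective sum_i |x - b_i|.
    return sum(huber(x - b, gamma) for b in data)

data = [1.0, 2.0, 7.0]
grid = [i / 100.0 for i in range(0, 1001)]  # crude search grid on [0, 10]
# The exact l1 minimizer of sum_i |x - b_i| is the median of the data (2.0);
# the smooth minimizers approach it as the smoothing parameter shrinks.
minimizers = [min(grid, key=lambda x: smooth_l1(x, data, g))
              for g in (1.0, 0.1, 0.01)]
print(minimizers)  # -> [2.0, 2.0, 2.0]
```

Tracing how the minimizer moves as the smoothing parameter varies is the piecewise-linear path idea exploited by the finite continuation algorithm.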

  8. ApoSOD1 lacking dismutase activity neuroprotects motor neurons exposed to beta-methylamino-L-alanine through the Ca2+/Akt/ERK1/2 prosurvival pathway

    Science.gov (United States)

    Petrozziello, Tiziana; Secondo, Agnese; Tedeschi, Valentina; Esposito, Alba; Sisalli, MariaJosè; Scorziello, Antonella; Di Renzo, Gianfranco; Annunziato, Lucio

    2017-01-01

    Amyotrophic lateral sclerosis (ALS) is a severe human adult-onset neurodegenerative disease affecting lower and upper motor neurons. In >20% of cases, the familial form of ALS is caused by mutations in the gene encoding Cu,Zn-superoxide dismutase (SOD1). Interestingly, administration of wild-type SOD1 to SOD1G93A transgenic rats ameliorates motor symptoms through an unknown mechanism. Here we investigated whether the neuroprotective effects of SOD1 are due to the Ca2+-dependent activation of a prosurvival signaling pathway rather than to its catalytic activity. To this aim, we also examined the mechanism of neuroprotective action of ApoSOD1, the metal-depleted state of SOD1 that lacks dismutase activity, in differentiated motor neuron-like NSC-34 cells and in primary motor neurons exposed to the cycad neurotoxin beta-methylamino-L-alanine (L-BMAA). Preincubation of ApoSOD1 and SOD1, but not of human recombinant SOD1G93A, prevented cell death in motor neurons exposed to L-BMAA. Moreover, ApoSOD1 elicited ERK1/2 and Akt phosphorylation in motor neurons through an early increase of intracellular Ca2+ concentration ([Ca2+]i). Accordingly, inhibition of ERK1/2 by siMEK1 and PD98059 counteracted ApoSOD1- and SOD1-induced neuroprotection. Similarly, transfection of the dominant-negative form of Akt in NSC-34 motor neurons and treatment with the selective PI3K inhibitor LY294002 prevented ApoSOD1- and SOD1-mediated neuroprotective effects in L-BMAA-treated motor neurons. Furthermore, ApoSOD1 and SOD1 prevented the expression of the two markers of L-BMAA-induced ER stress, GRP78 and caspase-12. Collectively, our data indicate that ApoSOD1, which is devoid of any catalytic dismutase activity, exerts a neuroprotective effect through an early activation of the Ca2+/Akt/ERK1/2 prosurvival pathway that, in turn, prevents ER stress in a neurotoxic model of ALS. PMID:28085149

  9. Hybrid genetic algorithm for minimizing non productive machining ...

    African Journals Online (AJOL)

    user

    The movement of tool is synchronized with the help of these CNC codes. Total ... Lot of work has been reported for minimizing the productive time by ..... Optimal path for automated drilling operations by a new heuristic approach using particle.

  10. The Level of Autoantibodies Targeting Eukaryote Translation Elongation Factor 1 α1 and Ubiquitin-Conjugating Enzyme 2L3 in Nondiabetic Young Adults

    Directory of Open Access Journals (Sweden)

    Eunhee G. Kim

    2016-01-01

    Full Text Available Background: The prevalence of novel type 1 diabetes mellitus (T1DM) antibodies targeting eukaryote translation elongation factor 1 alpha 1 autoantibody (EEF1A1-AAb) and ubiquitin-conjugating enzyme 2L3 autoantibody (UBE2L3-AAb) has been shown to be negatively correlated with age in T1DM subjects. Therefore, we aimed to investigate whether age affects the levels of these two antibodies in nondiabetic subjects. Methods: EEF1A1-AAb and UBE2L3-AAb levels in nondiabetic control subjects (n=150) and T1DM subjects (n=101) in various ranges of age (18 to 69 years) were measured using an enzyme-linked immunosorbent assay. The cutoff point for the presence of each autoantibody was determined based on control subjects using the formula: [mean absorbance+3×standard deviation]. Results: In nondiabetic subjects, there were no significant correlations between age and EEF1A1-AAb and UBE2L3-AAb levels. However, there was wide variation in EEF1A1-AAb and UBE2L3-AAb levels among control subjects <40 years old; the prevalence of both EEF1A1-AAb and UBE2L3-AAb in these subjects was 4.4%. When using cutoff points determined from the control subjects <40 years old, the prevalence of both autoantibodies in T1DM subjects was decreased (EEF1A1-AAb, 15.8% to 8.9%; UBE2L3-AAb, 10.9% to 7.9%) when compared to the prevalence using the cutoff derived from the totals for control subjects. Conclusion: There was no association between age and EEF1A1-AAb or UBE2L3-AAb levels in nondiabetic subjects. However, the wide variation in EEF1A1-AAb and UBE2L3-AAb levels apparent among the control subjects <40 years old should be taken into consideration when determining the cutoff reference range for the diagnosis of T1DM.

  11. Arctic lead detection using a waveform mixture algorithm from CryoSat-2 data

    Directory of Open Access Journals (Sweden)

    S. Lee

    2018-05-01

    Full Text Available We propose a waveform mixture algorithm to detect leads from CryoSat-2 data, which is novel and different from the existing threshold-based lead detection methods. The waveform mixture algorithm adopts the concept of spectral mixture analysis, which is widely used in the field of hyperspectral image analysis. This lead detection method was evaluated with high-resolution (250 m) MODIS images and showed comparable and promising performance in detecting leads when compared to the previous methods. The robustness of the proposed approach also lies in the fact that it does not require the rescaling of parameters (i.e., stack standard deviation, stack skewness, stack kurtosis, pulse peakiness, and backscatter σ0), as it directly uses L1B waveform data, unlike the existing threshold-based methods. Monthly lead fraction maps were produced by the waveform mixture algorithm, which show the interannual variability of recent sea ice cover during 2011–2016, excluding the summer season (i.e., June to September). We also compared the lead fraction maps to other lead fraction maps generated from previously published data sets, resulting in similar spatiotemporal patterns.
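
    The mixture idea is, at its core, linear unmixing: an observed waveform is modeled as a weighted sum of reference ("endmember") waveforms, and the weights are estimated by least squares. A toy sketch with hypothetical reference shapes (not the CryoSat-2 processing chain; a specular "lead" return and a diffuse "ice" return are made up for illustration):

```python
def unmix(obs, lead, ice):
    # Least-squares abundances (a, b) with obs ≈ a*lead + b*ice,
    # solved via the 2x2 normal equations.
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    A = [[dot(lead, lead), dot(lead, ice)],
         [dot(lead, ice), dot(ice, ice)]]
    rhs = [dot(lead, obs), dot(ice, obs)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    a = (rhs[0] * A[1][1] - A[0][1] * rhs[1]) / det
    b = (A[0][0] * rhs[1] - A[1][0] * rhs[0]) / det
    return a, b

# Hypothetical reference waveforms: leads give a sharp specular peak,
# sea ice a broad diffuse return.
lead = [0.0, 1.0, 0.1, 0.0, 0.0]
ice  = [0.2, 0.3, 0.3, 0.3, 0.2]
obs  = [0.06, 0.79, 0.16, 0.09, 0.06]  # built as 0.7*lead + 0.3*ice
a, b = unmix(obs, lead, ice)
print(round(a, 2), round(b, 2))  # abundances ≈ 0.7 0.3
```

The estimated lead abundance per waveform is what aggregates into a lead fraction map.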

  12. A New Evolutionary Algorithm Based on Bacterial Evolution and Its Application for Scheduling A Flexible Manufacturing System

    Directory of Open Access Journals (Sweden)

    Chandramouli Anandaraman

    2012-01-01

    Full Text Available A new evolutionary computation algorithm, the Superbug algorithm, which simulates the evolution of bacteria in a culture, is proposed. The algorithm is developed for solving large-scale optimization problems such as scheduling, transportation, and assignment problems. In this work, the algorithm optimizes machine schedules in a Flexible Manufacturing System (FMS) by minimizing makespan. The FMS comprises four machines and two identical Automated Guided Vehicles (AGVs). AGVs are used for carrying jobs between the Load/Unload (L/U) station and the machines. Experimental results indicate that the optimization performance of the proposed algorithm in scheduling is noticeably superior to that of other evolutionary algorithms, when compared to the best results reported in the literature for FMS scheduling.
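
    The objective being minimized is the makespan, i.e., the load of the busiest machine. A minimal (1+1) evolutionary sketch on a toy job set illustrates the idea (a generic evolutionary loop, not the Superbug algorithm itself; job times and parameters are made up):

```python
import random

def makespan(assign, times, n_machines):
    # Load of the busiest machine for a job -> machine assignment.
    load = [0.0] * n_machines
    for job, m in enumerate(assign):
        load[m] += times[job]
    return max(load)

def one_plus_one_ea(times, n_machines, iters=2000, seed=1):
    # Minimal (1+1) EA: mutate one gene, keep the child if it is no worse.
    rng = random.Random(seed)
    best = [rng.randrange(n_machines) for _ in times]
    best_val = makespan(best, times, n_machines)
    for _ in range(iters):
        cand = list(best)
        cand[rng.randrange(len(times))] = rng.randrange(n_machines)
        val = makespan(cand, times, n_machines)
        if val <= best_val:  # accepting equal moves lets it walk plateaus
            best, best_val = cand, val
    return best, best_val

times = [3, 3, 2, 2, 2, 4, 4, 4]   # processing times, total work = 24
best, val = one_plus_one_ea(times, n_machines=4)
print(val)  # the lower bound is 24/4 = 6
```

Real FMS scheduling adds AGV transport times and precedence constraints on top of this plain assignment model.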

  13. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.

  14. 2D Tsallis Entropy for Image Segmentation Based on Modified Chaotic Bat Algorithm

    Directory of Open Access Journals (Sweden)

    Zhiwei Ye

    2018-03-01

    Full Text Available Image segmentation is a significant step in image analysis and computer vision. Many entropy-based approaches have been presented on this topic; among them, Tsallis entropy is one of the best performing methods. However, 1D Tsallis entropy does not make use of the spatial correlation information within the neighborhood, so results might be ruined by noise. Therefore, 2D Tsallis entropy is proposed to solve the problem, and results are compared with 1D Fisher, 1D maximum entropy, 1D cross entropy, 1D Tsallis entropy, fuzzy entropy, 2D Fisher, 2D maximum entropy and 2D cross entropy. On the other hand, due to the existence of huge computational costs, meta-heuristic algorithms like the genetic algorithm (GA), particle swarm optimization (PSO), the ant colony optimization algorithm (ACO) and the differential evolution algorithm (DE) are used to accelerate the 2D Tsallis entropy thresholding method. In this paper, considering 2D Tsallis entropy as a constrained optimization problem, the optimal thresholds are acquired by maximizing the objective function using a modified chaotic Bat algorithm (MCBA). The proposed algorithm has been tested on some actual and infrared images. The results are compared with those of PSO, GA, ACO and DE and demonstrate that the proposed method outperforms the other approaches involved in the paper, which is a feasible and effective option for image segmentation.
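
    For the 1D case the criterion is easy to sketch: split the gray-level histogram at a threshold t, compute the Tsallis entropy of each class, and pick the t maximizing their pseudo-additive combination S_A + S_B + (1-q)*S_A*S_B. A minimal sketch on a toy bimodal histogram (the value of q and the histogram are illustrative):

```python
def tsallis_threshold(hist, q=0.8):
    # 1D Tsallis entropy thresholding: choose t maximizing the
    # pseudo-additive combination of the two class entropies.
    total = sum(hist)
    p = [h / total for h in hist]

    def S(lo, hi):
        # Tsallis entropy of the class occupying bins [lo, hi).
        P = sum(p[lo:hi])
        if P == 0:
            return 0.0
        return (1.0 - sum((pi / P) ** q for pi in p[lo:hi] if pi > 0)) / (q - 1.0)

    best_t, best_val = 0, float("-inf")
    for t in range(1, len(hist)):
        sa, sb = S(0, t), S(t, len(hist))
        val = sa + sb + (1.0 - q) * sa * sb
        if val > best_val:
            best_t, best_val = t, val
    return best_t

# Bimodal toy histogram: a dark peak around bin 2, a bright peak around bin 10.
hist = [1, 8, 20, 8, 1, 0, 0, 0, 1, 6, 18, 6, 1, 0, 0, 0]
t = tsallis_threshold(hist)
print(t)  # a threshold in the valley between the two modes
```

The 2D version replaces the histogram with a joint (pixel, neighborhood-mean) histogram, which is what makes exhaustive search expensive and motivates the meta-heuristics.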

  15. Printing of polymer microcapsules for enzyme immobilization on paper substrate.

    Science.gov (United States)

    Savolainen, Anne; Zhang, Yufen; Rochefort, Dominic; Holopainen, Ulla; Erho, Tomi; Virtanen, Jouko; Smolander, Maria

    2011-06-13

    Poly(ethyleneimine) (PEI) microcapsules containing laccase from Trametes hirsuta (ThL) and Trametes versicolor (TvL) were printed onto paper substrate by three different methods: screen printing, rod coating, and flexo printing. Microcapsules were fabricated via interfacial polycondensation of PEI with the cross-linker sebacoyl chloride, incorporated into an ink, and printed or coated on the paper substrate. The same ink components were used for three printing methods, and it was found that laccase microcapsules were compatible with the ink. Enzymatic activity of microencapsulated TvL was maintained constant in polymer-based ink for at least eight weeks. Thick layers with high enzymatic activity were obtained when laccase-containing microcapsules were screen printed on paper substrate. Flexo printed bioactive paper showed very low activity, since by using this printing method the paper surface was not fully covered by enzyme microcapsules. Finally, screen printing provided a bioactive paper with high water-resistance and the highest enzyme lifetime.

  16. Wind reconstruction algorithm for Viking Lander 1

    Science.gov (United States)

    Kynkäänniemi, Tuomas; Kemppinen, Osku; Harri, Ari-Matti; Schmidt, Walter

    2017-06-01

    The wind measurement sensors of Viking Lander 1 (VL1) were only fully operational for the first 45 sols of the mission. We have developed an algorithm for reconstructing the wind measurement data after the wind measurement sensor failures. The algorithm for wind reconstruction enables the processing of wind data during the complete VL1 mission. The heater element of the quadrant sensor, which provided auxiliary measurement for wind direction, failed during the 45th sol of the VL1 mission. Additionally, one of the wind sensors of VL1 broke down during sol 378. Regardless of the failures, it was still possible to reconstruct the wind measurement data, because the failed components of the sensors did not prevent the determination of the wind direction and speed, as some of the components of the wind measurement setup remained intact for the complete mission. This article concentrates on presenting the wind reconstruction algorithm and methods for validating the operation of the algorithm. The algorithm enables the reconstruction of wind measurements for the complete VL1 mission. The amount of available sols is extended from 350 to 2245 sols.

  17. Wind reconstruction algorithm for Viking Lander 1

    Directory of Open Access Journals (Sweden)

    T. Kynkäänniemi

    2017-06-01

    Full Text Available The wind measurement sensors of Viking Lander 1 (VL1) were only fully operational for the first 45 sols of the mission. We have developed an algorithm for reconstructing the wind measurement data after the wind measurement sensor failures. The algorithm for wind reconstruction enables the processing of wind data during the complete VL1 mission. The heater element of the quadrant sensor, which provided auxiliary measurement for wind direction, failed during the 45th sol of the VL1 mission. Additionally, one of the wind sensors of VL1 broke down during sol 378. Regardless of the failures, it was still possible to reconstruct the wind measurement data, because the failed components of the sensors did not prevent the determination of the wind direction and speed, as some of the components of the wind measurement setup remained intact for the complete mission. This article concentrates on presenting the wind reconstruction algorithm and methods for validating the operation of the algorithm. The algorithm enables the reconstruction of wind measurements for the complete VL1 mission. The amount of available sols is extended from 350 to 2245 sols.

  18. Joint 2D-DOA and Frequency Estimation for L-Shaped Array Using Iterative Least Squares Method

    Directory of Open Access Journals (Sweden)

    Ling-yun Xu

    2012-01-01

    Full Text Available We introduce an iterative least squares method (ILS) for estimating the 2D-DOA and frequency based on an L-shaped array. The ILS iteratively finds the direction matrix and delay matrix; then the 2D-DOA and frequency can be obtained by the least squares method. Without spectral peak searching and pairing, this algorithm works well and pairs the parameters automatically. Moreover, our algorithm has better performance than the conventional ESPRIT algorithm and propagator method. The behavior of the proposed algorithm is verified by simulations.

  19. Feedback stabilization of an l = 0, 1, 2 high-beta stellarator

    International Nuclear Information System (INIS)

    Bartsch, R.R.; Cantrell, E.L.; Gribble, R.F.; Klare, K.A.; Kutac, K.J.; Miller, G.; Quinn, W.E.

    1978-05-01

    Feedback stabilization of the Scyllac 120° toroidal sector is reported. The confinement time was increased by 10-20 μs using feedback, to a maximum time of 35-45 μs, which is over 10 growth times of the long-wavelength m = 1 instability. These results were obtained after circuits providing flexible waveforms were used to drive auxiliary equilibrium windings. The resultant improved equilibrium agrees well with recent theory. It was observed that normally stable short-wavelength m = 1 modes could be driven unstable by feedback. This instability, caused by local feedback control, increases the feedback system energy consumption. An instability involving direct coupling of the feedback l = 2 field to the plasma l = 1 motion was also observed. The plasma parameters were: temperature, T_e ≈ T_i ≈ 100 eV; density, n_e ≈ 2 x 10^16 cm^-3; radius, a ≈ 1 cm; and β ≈ 0.7. Beta decreased significantly in 40 μs, which can be accounted for by classical resistivity and particle loss from the sector ends.

  20. Greedy algorithms with weights for construction of partial association rules

    KAUST Repository

    Moshkov, Mikhail; Piliszczu, Marcin; Zielosko, Beata Marta

    2009-01-01

    This paper is devoted to the study of approximate algorithms for minimization of the total weight of attributes occurring in partial association rules. We consider mainly greedy algorithms with weights for construction of rules. The paper contains bounds on the precision of these algorithms and bounds on the minimal weight of partial association rules, based on information obtained during the greedy algorithm run.
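
    The flavor of such a greedy algorithm with weights can be sketched as a weighted cover problem: repeatedly pick the attribute (set) with the best weight per newly covered element. This is a generic weighted-cover sketch, not the paper's exact rule-construction criterion; the sets and weights are made up:

```python
def greedy_weighted_cover(universe, sets, weight):
    # Greedy heuristic: repeatedly pick the set minimizing
    # weight per newly covered element, until everything is covered.
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (s for s in sets if uncovered & set(sets[s])),
            key=lambda s: weight[s] / len(uncovered & set(sets[s])),
        )
        chosen.append(best)
        uncovered -= set(sets[best])
    return chosen

universe = {1, 2, 3, 4, 5}
sets = {"a": [1, 2, 3], "b": [3, 4], "c": [4, 5], "d": [1, 2, 3, 4, 5]}
weight = {"a": 1.0, "b": 1.0, "c": 1.0, "d": 4.0}
picked = greedy_weighted_cover(universe, sets, weight)
print(picked)  # ["a", "c"]: total weight 2.0, cheaper than the big set "d"
```

Bounds on how far such a greedy choice can be from the minimal total weight are exactly the kind of precision results the paper studies.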

  1. Greedy algorithms with weights for construction of partial association rules

    KAUST Repository

    Moshkov, Mikhail

    2009-09-10

    This paper is devoted to the study of approximate algorithms for minimization of the total weight of attributes occurring in partial association rules. We consider mainly greedy algorithms with weights for construction of rules. The paper contains bounds on the precision of these algorithms and bounds on the minimal weight of partial association rules, based on information obtained during the greedy algorithm run.

  2. A Contextual Fire Detection Algorithm for Simulated HJ-1B Imagery

    Directory of Open Access Journals (Sweden)

    Xiangsheng Kong

    2009-02-01

    Full Text Available The HJ-1B satellite, which was launched on September 6, 2008, is one of the small satellites placed in the constellation for disaster prediction and monitoring. HJ-1B imagery, which contains fires of various sizes and temperatures in a wide range of terrestrial biomes and climates, including RED, NIR, MIR and TIR channels, was simulated in this paper. Based on the MODIS version 4 contextual algorithm and the characteristics of the HJ-1B sensor, a contextual fire detection algorithm was proposed and tested using simulated HJ-1B data. It was evaluated by the probability of fire detection and false alarm as functions of fire temperature and fire area. Results indicate that when the simulated fire area is larger than 45 m^2 and the simulated fire temperature is larger than 800 K, the algorithm has a high probability of detection. But if the simulated fire area is smaller than 10 m^2, the fire may be detected only when the simulated fire temperature is larger than 900 K. For fire areas of about 100 m^2, the proposed algorithm has a higher detection probability than that of the MODIS product. Finally, the omission and commission errors were evaluated, which are important factors affecting the performance of this algorithm. It has been demonstrated that HJ-1B satellite data are much more sensitive to smaller and cooler fires than MODIS or AVHRR data, and the improved capabilities of HJ-1B data will offer a fine opportunity for fire detection.

  3. English Language Learners' Nonword Repetition Performance: The Influence of Age, L2 Vocabulary Size, Length of L2 Exposure, and L1 Phonology.

    Science.gov (United States)

    Duncan, Tamara Sorenson; Paradis, Johanne

    2016-02-01

    This study examined individual differences in English language learners' (ELLs) nonword repetition (NWR) accuracy, focusing on the effects of age, English vocabulary size, length of exposure to English, and first-language (L1) phonology. Participants were 75 typically developing ELLs (mean age 5;8 [years;months]) whose exposure to English began on average at age 4;4. Children spoke either a Chinese language or South Asian language as an L1 and were given English standardized tests for NWR and receptive vocabulary. Although the majority of ELLs scored within or above the monolingual normal range (71%), 29% scored below. Mixed logistic regression modeling revealed that a larger English vocabulary, longer English exposure, South Asian L1, and older age all had significant and positive effects on ELLs' NWR accuracy. Error analyses revealed the following L1 effect: onset consonants were produced more accurately than codas overall, but this effect was stronger for the Chinese group whose L1s have a more limited coda inventory compared with English. ELLs' NWR performance is influenced by a number of factors. Consideration of these factors is important in deciding whether monolingual norm referencing is appropriate for ELL children.

  4. Cognitive Algorithms for Signal Processing

    Science.gov (United States)

    2011-03-18

    [Figure: (a) true ‘smile’ and ‘frown’ patterns shown without clutter; (b) actual image available for recognition, with signal below clutter.] The likelihood of clutter in 2 dimensions of X(n) = (X, Y) is given by l(X(n) | m = clutter) = 1/(ΔX · ΔY), where ΔX = Xmax - Xmin and ΔY = Ymax - Ymin. (6)

  5. Correlates of minimal dating.

    Science.gov (United States)

    Leck, Kira

    2006-10-01

    Researchers have associated minimal dating with numerous factors. The present author tested shyness, introversion, physical attractiveness, performance evaluation, anxiety, social skill, social self-esteem, and loneliness to determine the nature of their relationships with 2 measures of self-reported minimal dating in a sample of 175 college students. For women, shyness, introversion, physical attractiveness, self-rated anxiety, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. For men, physical attractiveness, observer-rated social skill, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. The patterns of relationships were not identical for the 2 indicators of minimal dating, indicating the possibility that minimal dating is not a single construct as researchers previously believed. The present author discussed implications and suggestions for future researchers.

  6. Neural changes underlying early stages of L2 vocabulary acquisition.

    Science.gov (United States)

    Pu, He; Holcomb, Phillip J; Midgley, Katherine J

    2016-11-01

    Research has shown neural changes following second language (L2) acquisition after weeks or months of instruction. But are such changes detectable even earlier than previously shown? The present study examines the electrophysiological changes underlying the earliest stages of second language vocabulary acquisition by recording event-related potentials (ERPs) within the first week of learning. Adult native English speakers with no previous Spanish experience completed less than four hours of Spanish vocabulary training, with pre- and post-training ERPs recorded to a backward translation task. Results indicate that beginning L2 learners show rapid neural changes following learning, manifested in changes to the N400, an ERP component sensitive to lexicosemantic processing and degree of L2 proficiency. Specifically, learners in early stages of L2 acquisition show growth in N400 amplitude to L2 words following learning as well as a backward translation N400 priming effect that was absent pre-training. These results were shown within days of minimal L2 training, suggesting that the neural changes captured during adult second language acquisition are more rapid than previously shown. Such findings are consistent with models of early stages of bilingualism in adult learners of L2 (e.g., Kroll and Stewart's RHM) and reinforce the use of ERP measures to assess L2 learning.

  7. Robust imaging of localized scatterers using the singular value decomposition and ℓ1 minimization

    International Nuclear Information System (INIS)

    Chai, A; Moscoso, M; Papanicolaou, G

    2013-01-01

    We consider narrow band, active array imaging of localized scatterers in a homogeneous medium with and without additive noise. We consider both single and multiple illuminations and study ℓ1 minimization-based imaging methods. We show that for large arrays, with array diameter comparable to range, and when scatterers are sparse and well separated, ℓ1 minimization using a single illumination and without additive noise can recover the location and reflectivity of the scatterers exactly. For multiple illuminations, we introduce a hybrid method which combines the singular value decomposition and ℓ1 minimization. This method can be used when the essential singular vectors of the array response matrix are available. We show that with this hybrid method we can recover the location and reflectivity of the scatterers exactly when there is no noise in the data. Numerical simulations indicate that the hybrid method is, in addition, robust to noise in the data. We also compare the ℓ1 minimization-based methods with others including Kirchhoff migration, ℓ2 minimization and multiple signal classification.
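
    A standard workhorse for ℓ1 minimization problems of this type is the iterative shrinkage-thresholding algorithm (ISTA), which alternates a gradient step on the data-fit term with soft thresholding. A toy sparse-recovery sketch (the sensing matrix and parameters are made up; this is not the paper's imaging setup):

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def soft(t, tau):
    # Soft thresholding: the proximal operator of tau*|t|.
    if t > tau:
        return t - tau
    if t < -tau:
        return t + tau
    return 0.0

def ista(A, b, lam, step, iters=500):
    # ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1
    # (step must be below 1/||A^T A|| for convergence).
    x = [0.0] * len(A[0])
    At = [list(col) for col in zip(*A)]
    for _ in range(iters):
        r = [ai - bi for ai, bi in zip(matvec(A, x), b)]  # residual Ax - b
        g = matvec(At, r)                                 # gradient A^T r
        x = [soft(xi - step * gi, step * lam) for xi, gi in zip(x, g)]
    return x

# Toy sensing matrix and a 1-sparse target reflectivity vector.
A = [[1.0, 0.2, 0.0, 0.1],
     [0.0, 1.0, 0.3, 0.0],
     [0.2, 0.0, 1.0, 0.2],
     [0.1, 0.0, 0.0, 1.0]]
x_true = [0.0, 2.0, 0.0, 0.0]
b = matvec(A, x_true)            # noise-free data
x = ista(A, b, lam=0.01, step=0.5, iters=500)
print([round(v, 2) for v in x])  # close to [0, 2, 0, 0]
```

With a well-conditioned matrix and a well-separated sparse target, the ℓ1 solution coincides with the true support, mirroring the exact-recovery statements in the abstract.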

  8. Fourier-based reconstruction via alternating direction total variation minimization in linear scan CT

    International Nuclear Information System (INIS)

    Cai, Ailong; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Li, Lei; Xi, Xiaoqi; Li, Jianxin

    2015-01-01

    In this study, we consider a novel form of computed tomography (CT), that is, linear scan CT (LCT), which applies a straight line trajectory. Furthermore, an iterative algorithm is proposed for pseudo-polar Fourier reconstruction through total variation minimization (PPF-TVM). Considering that the sampled Fourier data are distributed in pseudo-polar coordinates, the reconstruction model minimizes the TV of the image subject to the constraint that the estimated 2D Fourier data for the image are consistent with the 1D Fourier transform of the projection data. PPF-TVM employs the alternating direction method (ADM) to develop a robust and efficient iteration scheme, which ensures stable convergence provided that appropriate parameter values are given. In the ADM scheme, PPF-TVM applies the pseudo-polar fast Fourier transform and its adjoint to iterate back and forth between the image and frequency domains. Thus, there is no interpolation in the Fourier domain, which makes the algorithm both fast and accurate. PPF-TVM is particularly useful for limited angle reconstruction in LCT and it appears to be robust against artifacts. The PPF-TVM algorithm was tested with the FORBILD head phantom and real data in comparisons with state-of-the-art algorithms. Simulation studies and real data verification suggest that PPF-TVM can reconstruct higher accuracy images with lower time consumption
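
    The TV-minimization ingredient can be illustrated in one dimension: penalizing the sum of absolute differences flattens noise while keeping sharp jumps. A crude subgradient-descent sketch (a toy 1D stand-in, not the pseudo-polar ADM scheme of the paper; all parameters are illustrative):

```python
def tv_denoise_1d(y, lam, step=0.1, iters=3000):
    # Subgradient descent sketch for min_x 0.5*||x - y||^2 + lam*TV(x),
    # with TV(x) = sum_i |x[i+1] - x[i]| (a 1D stand-in for the 2D problem).
    sign = lambda t: (t > 0) - (t < 0)
    x = list(y)
    for _ in range(iters):
        g = [xi - yi for xi, yi in zip(x, y)]      # data-fidelity gradient
        for i in range(len(x) - 1):                # TV subgradient
            s = sign(x[i + 1] - x[i])
            g[i] -= lam * s
            g[i + 1] += lam * s
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Noisy step signal: the TV term flattens the small wiggles in each half
# while the large jump between samples 3 and 4 survives.
y = [0.0, 0.1, -0.1, 0.05, 1.0, 0.9, 1.1, 0.95]
x = tv_denoise_1d(y, lam=0.05)
print([round(v, 2) for v in x])
```

ADM-type schemes like PPF-TVM reach the same kind of minimizer far more efficiently by splitting the TV term from the data-consistency constraint.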

  9. SAR image regularization with fast approximate discrete minimization.

    Science.gov (United States)

    Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc

    2009-07-01

    Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task, and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modeling provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the alpha-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in urban area SAR images.

  10. Magnetic behavior of MnPS3 phases intercalated by [Zn2L]2+ (LH2: macrocyclic ligand obtained by condensation of 2-hydroxy-5-methyl-1,3-benzenedicarbaldehyde and 1,2-diaminobenzene)

    International Nuclear Information System (INIS)

    Spodine, E.; Valencia-Galvez, P.; Fuentealba, P.; Manzur, J.; Ruiz, D.; Venegas-Yazigi, D.; Paredes-Garcia, V.; Cardoso-Gil, R.; Schnelle, W.; Kniep, R.

    2011-01-01

    The intercalation of the cationic binuclear macrocyclic complex [Zn2L]2+ (LH2: macrocyclic ligand obtained by the template condensation of 2-hydroxy-5-methyl-1,3-benzenedicarbaldehyde and 1,2-diaminobenzene) was achieved by a cationic exchange process, using K0.4Mn0.8PS3 as a precursor. Three intercalated materials were obtained and characterized: (Zn2L)0.05K0.3Mn0.8PS3 (1), (Zn2L)0.1K0.2Mn0.8PS3 (2) and (Zn2L)0.05K0.3Mn0.8PS3 (3), the latter phase being obtained by an assisted microwave radiation process. The magnetic data permit an estimate of the Weiss temperature θ of ∼-130 K for (1), ∼-155 K for (2) and ∼-130 K for (3). The spin canting present in the potassium precursor remains unperturbed in composite (3), and spontaneous magnetization is observed below 50 K in both materials. However, composites (1) and (2) do not present this spontaneous magnetization at low temperatures. The electronic properties of the intercalates do not appear to be significantly altered. The reflectance spectra of the intercalated phases (1), (2) and (3) show a gap value between 1.90 and 1.80 eV, lower than the value of 2.8 eV observed for the K0.4Mn0.8PS3 precursor. -- Graphical Abstract: Microwave-assisted synthesis was used to obtain an intercalated MnPS3 phase with a binuclear Zn(II) macrocyclic complex. A comparative magnetic study of the composites obtained by assisted microwave and traditional synthetic methods is reported. Highlights: → A rapid and efficient preparation of intercalated MnPS3 composites by assisted microwave synthesis is described. → The exchange of potassium ions of the precursor by the macrocyclic Zn(II) complex is partial. → The composite obtained by assisted microwave synthesis retains the spontaneous magnetization observed in the low-temperature range of the magnetic susceptibility of the potassium precursor. → The materials obtained by the conventional method lose the spontaneous

  11. Using SETS to find minimal cut sets in large fault trees

    International Nuclear Information System (INIS)

    Worrell, R.B.; Stack, D.W.

    1978-01-01

    An efficient algebraic algorithm for finding the minimal cut sets of a large fault tree was defined, and a new procedure implementing the algorithm was added to the Set Equation Transformation System (SETS). The algorithm includes the identification and separate processing of independent subtrees, the coalescing of consecutive gates of the same kind, the creation of additional independent subtrees, and the derivation of the fault tree stem equation in stages. The computer time required to determine the minimal cut sets using these techniques is shown to be substantially less than the computer time required when these techniques are not employed. It is shown for a given example that the execution time required to determine the minimal cut sets can be reduced from 7,686 seconds to 7 seconds when all of these techniques are employed.
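
    The minimal-cut-set idea the abstract describes can be sketched in a few lines. This is not SETS itself (which manipulates set equations algebraically, with independent-subtree and gate-coalescing optimizations); it is a hypothetical top-down expansion that enumerates cut sets and then discards non-minimal ones. The gate names and the example tree are invented for illustration:

```python
from itertools import product

def cut_sets(gate, tree):
    """Expand a gate into its (not necessarily minimal) cut sets.

    tree maps gate names to ("AND" | "OR", [children]); names not in
    tree are basic events.
    """
    if gate not in tree:                      # basic event
        return [frozenset([gate])]
    kind, children = tree[gate]
    child_sets = [cut_sets(c, tree) for c in children]
    if kind == "OR":                          # union of the children's cut sets
        return [s for sets in child_sets for s in sets]
    # AND gate: every combination of one cut set per child, merged
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal_cut_sets(top, tree):
    sets_ = cut_sets(top, tree)
    # keep only cut sets that contain no strictly smaller cut set
    return {s for s in sets_ if not any(t < s for t in sets_)}

# Toy fault tree: TOP = (A OR (B AND C)) AND (A OR B)
tree = {
    "TOP": ("AND", ["G1", "G2"]),
    "G1": ("OR", ["A", "G3"]),
    "G3": ("AND", ["B", "C"]),
    "G2": ("OR", ["A", "B"]),
}
print(sorted(sorted(s) for s in minimal_cut_sets("TOP", tree)))  # → [['A'], ['B', 'C']]
```

The brute-force expansion grows exponentially with tree size, which is exactly why the techniques listed in the abstract (independent subtrees, gate coalescing, staged derivation) matter for large trees.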

  12. Impact of L1 Use in L2 English Writing Classes

    African Journals Online (AJOL)

    This experimental study endeavored to assess the impact of L1 use in .... Two sections were selected using simple ... practice. In all the four writing practice activities, the experimental group ..... during the pre-test, but this was not true for.

  13. Solution of single linear tridiagonal systems and vectorization of the ICCG algorithm on the Cray 1

    International Nuclear Information System (INIS)

    Kershaw, D.S.

    1981-01-01

    The numerical algorithms used to solve the physics equations in codes which model laser fusion are examined. It is found that a large number of subroutines require the solution of tridiagonal linear systems of equations. One-dimensional radiation transport, thermal and suprathermal electron transport, ion thermal conduction, and charged particle and neutron transport all require the solution of tridiagonal systems of equations. The standard algorithm that has been used in the past on CDC 7600's will not vectorize and so cannot take advantage of the large speed increases possible on the Cray-1 through vectorization. There is, however, an alternate algorithm for solving tridiagonal systems, called cyclic reduction, which allows for vectorization and which is optimal for the Cray-1. Software based on this algorithm is now being used in LASNEX to solve tridiagonal linear systems in the subroutines mentioned above. The new algorithm runs as much as five times faster than the standard algorithm on the Cray-1. The ICCG method is being used to solve the diffusion equation with a nine-point coupling scheme on the CDC 7600. In going from the CDC 7600 to the Cray-1, a large part of the algorithm consists of solving tridiagonal linear systems on each L line of the Lagrangian mesh in a manner which is not vectorizable. An alternate ICCG algorithm for the Cray-1 was therefore developed which utilizes a block form of the cyclic reduction algorithm. This new algorithm allows full vectorization and runs as much as five times faster than the old algorithm on the Cray-1. It is now being used in Cray LASNEX to solve the two-dimensional diffusion equation in all the physics subroutines mentioned above.
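
    The cyclic reduction idea can be sketched in NumPy. This is a simplified illustration, not the LASNEX implementation; it assumes a system of size n = 2^k − 1 and zero padding in the unused corner coefficients a[0] and c[-1]. The key point is that each reduction level eliminates all odd rows at once with whole-array operations, which is exactly the structure that vectorizes on machines like the Cray-1, unlike the sequential sweep of the standard (Thomas) algorithm:

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system A x = d by cyclic reduction.

    a: sub-diagonal (a[0] must be 0), b: main diagonal,
    c: super-diagonal (c[-1] must be 0), d: right-hand side.
    Requires n = 2**k - 1 so every level halves cleanly.
    """
    a, b, c, d = (np.asarray(v, float) for v in (a, b, c, d))
    n = len(b)
    if n == 1:
        return d / b
    i = np.arange(1, n, 2)          # odd rows, all processed at once -> vectorizes
    alpha = -a[i] / b[i - 1]        # multiplier for row i-1 (kills x_{i-1})
    beta = -c[i] / b[i + 1]         # multiplier for row i+1 (kills x_{i+1})
    x = np.zeros(n)
    x[i] = cyclic_reduction(        # half-size system in the odd unknowns
        alpha * a[i - 1],                           # new sub-diagonal
        b[i] + alpha * c[i - 1] + beta * a[i + 1],  # new diagonal
        beta * c[i + 1],                            # new super-diagonal
        d[i] + alpha * d[i - 1] + beta * d[i + 1],  # new right-hand side
    )
    j = np.arange(0, n, 2)          # back-substitute the even rows, also vectorized
    left = np.where(j > 0, x[j - 1], 0.0)
    right = np.where(j < n - 1, x[np.minimum(j + 1, n - 1)], 0.0)
    x[j] = (d[j] - a[j] * left - c[j] * right) / b[j]
    return x

# 7-unknown diagonally dominant test system
n = 7
a = np.r_[0.0, np.ones(n - 1)]
b = 4.0 * np.ones(n)
c = np.r_[np.ones(n - 1), 0.0]
d = np.arange(1.0, n + 1)
x = cyclic_reduction(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(A @ x, d))  # → True
```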

  14. Gamma radiation in the conservation Cucurbita moschata processed minimally

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Lucia C.A.S.; Franco, Suely S.H.; Arthur, Valter, E-mail: lcasilva@cena.usp.br, E-mail: arthur@cena.usp.br [Centro de Energia Nuclear na Agricultura (CENA/USP), Piracicaba, SP (Brazil). Laboratório de Radiobiologia e Ambiente; Harder, Márcia N.C., E-mail: marcia.harder@fatec.sp.gov.br [Faculdade de Tecnologia de Piracicaba (FATEC), Piracicaba, SP (Brazil). Dep. Roque Trevisan; Arthur, Paula B.; Pires, Juliana; Filho, Jorge C., E-mail: paula.arthur@hotmail.com, E-mail: gilmita@uol.com.br, E-mail: juliana.angelo@gmail.com [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    The objective of this work was to evaluate the effect of gamma radiation on minimally processed squash (Cucurbita moschata). The zucchinis were acquired from the Horticulture Department of ESALQ/USP, Piracicaba, SP, Brazil, and taken to the Food Irradiation Laboratory of CENA/USP, where they were washed in running water, peeled and cut into cubes. The squash cubes were dipped in a 15 mL/L sodium hypochlorite solution for 4 minutes and kept in polypropylene plastic boxes. They were irradiated with doses of 0 (control), 1.0 and 2.0 kGy in a Cobalt-60 source, type Gammacell-220, with a dose rate of 0.666 kGy/h, and stored at 5°C. At 1, 3 and 7 days after irradiation, analyses of color (L, a, b factors), pH, Brix and acidity were performed. The results show no statistical difference between the irradiated treatments and the control. Therefore, a dose of 2.0 kGy can be used to reduce the microbial load without affecting the physico-chemical characteristics of minimally processed zucchini. (author)

  15. Gamma radiation in the conservation Cucurbita moschata processed minimally

    International Nuclear Information System (INIS)

    Silva, Lucia C.A.S.; Franco, Suely S.H.; Arthur, Valter

    2017-01-01

    The objective of this work was to evaluate the effect of gamma radiation on minimally processed squash (Cucurbita moschata). The zucchinis were acquired from the Horticulture Department of ESALQ/USP, Piracicaba, SP, Brazil, and taken to the Food Irradiation Laboratory of CENA/USP, where they were washed in running water, peeled and cut into cubes. The squash cubes were dipped in a 15 mL/L sodium hypochlorite solution for 4 minutes and kept in polypropylene plastic boxes. They were irradiated with doses of 0 (control), 1.0 and 2.0 kGy in a Cobalt-60 source, type Gammacell-220, with a dose rate of 0.666 kGy/h, and stored at 5°C. At 1, 3 and 7 days after irradiation, analyses of color (L, a, b factors), pH, Brix and acidity were performed. The results show no statistical difference between the irradiated treatments and the control. Therefore, a dose of 2.0 kGy can be used to reduce the microbial load without affecting the physico-chemical characteristics of minimally processed zucchini. (author)

  16. A Trust-region-based Sequential Quadratic Programming Algorithm

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.
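
    The problem class the note addresses (convex nonlinear cost, linear inequality constraints, nonlinear equality constraints) can be illustrated with SciPy's trust-region constrained solver. This is a stand-in sketch, not the authors' algorithm, and the toy cost function and constraints are invented for the example:

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint, NonlinearConstraint

# Convex quadratic cost: squared distance to the point (1, 2.5)
cost = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

# Linear inequality: x0 + x1 <= 3
lin = LinearConstraint([[1.0, 1.0]], -np.inf, 3.0)

# Nonlinear equality (equal bounds): x0^2 + x1^2 = 2
nonlin = NonlinearConstraint(lambda x: x[0] ** 2 + x[1] ** 2, 2.0, 2.0)

res = minimize(cost, x0=np.array([0.5, 1.0]), method="trust-constr",
               constraints=[lin, nonlin])
print(res.x)  # expected near (0.525, 1.313): the circle point closest to (1, 2.5)
```

At the optimum the cost gradient is parallel to the constraint gradient, so the solution lies on the circle in the direction of (1, 2.5).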

  17. Lexical statistics of competition in L2 versus L1 listening

    NARCIS (Netherlands)

    Cutler, A.

    2005-01-01

    Spoken-word recognition involves multiple activation of alternative word candidates and competition between these alternatives. Phonemic confusions in L2 listening increase the number of potentially active words, thus slowing word recognition by adding competitors. This study used a 70,000-word

  18. MODIS/Aqua Clouds 5-Min L2 Swath 1km and 5km V006

    Data.gov (United States)

    National Aeronautics and Space Administration — The MODIS/Aqua Clouds 5-Min L2 Swath 1km and 5km (MYD06_L2) product consists of cloud optical and physical parameters. These parameters are derived using remotely...

  19. The quantum group structure of 2D gravity and minimal models. Pt. 1

    International Nuclear Information System (INIS)

    Gervais, J.L.

    1990-01-01

    On the unit circle, an infinite family of chiral operators is constructed, whose exchange algebra is given by the universal R-matrix of the quantum group SL(2)_q. This establishes the precise connection between the chiral algebra of two-dimensional gravity or minimal models and this quantum group. The method is to relate the monodromy properties of the operator differential equations satisfied by the generalized vertex operators with the exchange algebra of SL(2)_q. The formulae so derived, which generalize an earlier particular case worked out by Babelon, are remarkably compact and may be entirely written in terms of 'q-deformed' factorials and binomial coefficients. (orig.)

  20. Minimally Invasive Surgical Treatment of Acute Epidural Hematoma: Case Series

    Directory of Open Access Journals (Sweden)

    Weijun Wang

    2016-01-01

    Full Text Available Background and Objective. Although minimally invasive surgical treatment of acute epidural hematoma attracts increasing attention, no generalized indications for the surgery have been adopted. This study aimed to evaluate the effects of minimally invasive surgery in acute epidural hematoma with various hematoma volumes. Methods. Minimally invasive puncture and aspiration surgery were performed in 59 cases of acute epidural hematoma with various hematoma volumes (13–145 mL); postoperative follow-up was 3 months. Clinical data, including surgical trauma, surgery time, complications, and outcome of hematoma drainage, recovery, and Barthel index scores, were assessed, as well as treatment outcome. Results. Surgical trauma was minimal and surgery time was short (10–20 minutes); no anesthesia accidents or surgical complications occurred. Two patients died. Drainage was completed within 7 days in the remaining 57 cases. Barthel index scores of ADL were ≤40 (n=1), 41–60 (n=1), and >60 (n=55); scores of 100 were obtained in 48 cases, with no dysfunctions. Conclusion. Satisfactory results can be achieved with minimally invasive surgery in treating acute epidural hematoma with hematoma volumes ranging from 13 to 145 mL. For patients with hematoma volume >50 mL and even cerebral herniation, flexible application of minimally invasive surgery would help improve treatment efficacy.

  1. Taxonomies in L1 and L2 Reading Strategies: A Critical Review of Issues Surrounding Strategy-Use Definitions and Classifications in Previous Think-Aloud Research

    Science.gov (United States)

    Alkhaleefah, Tarek A.

    2016-01-01

    Considering the various classifications of L1 and L2 reading strategies in previous think-aloud studies, the present review aims to provide a comprehensive look into those various taxonomies reported in major L1 and L2 reading studies. The rationale for this review is not only to offer a comprehensive overview of the different classifications in…

  2. Minimally inconsistent reasoning in Semantic Web.

    Directory of Open Access Journals (Sweden)

    Xiaowang Zhang

    Full Text Available Reasoning with inconsistencies is an important issue for the Semantic Web, as imperfect information is unavoidable in real applications. For this, different paraconsistent approaches, due to their capacity to draw nontrivial conclusions by tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. To this end, this paper presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, where inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Some desirable properties are studied, which shows that the new semantics inherits the advantages of both non-monotonic reasoning and paraconsistent reasoning. A complete and sound tableau-based algorithm, called multi-valued tableaux, is developed to capture the minimally inconsistent reasoning. In fact, the tableaux algorithm is designed as a framework for multi-valued DL, to allow for different underlying paraconsistent semantics, with the mere difference in the clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as that of (classical) description logic reasoning.

  3. The Impact of Different Support Vectors on GOSAT-2 CAI-2 L2 Cloud Discrimination

    Directory of Open Access Journals (Sweden)

    Yu Oishi

    2017-11-01

    Full Text Available Greenhouse gases Observing SATellite-2 (GOSAT-2) will be launched in fiscal year 2018. GOSAT-2 will be equipped with two sensors: the Thermal and Near-infrared Sensor for Carbon Observation-Fourier Transform Spectrometer 2 (TANSO-FTS-2) and the TANSO-Cloud and Aerosol Imager 2 (CAI-2). CAI-2 is a push-broom imaging sensor that has forward- and backward-looking bands to observe the optical properties of aerosols and clouds and to monitor the status of urban air pollution and transboundary air pollution over oceans, such as PM2.5 (particles less than 2.5 micrometers in diameter). CAI-2 has important applications for cloud discrimination in each direction. The Cloud and Aerosol Unbiased Decision Intellectual Algorithm (CLAUDIA1), which applies sequential threshold tests to features, is used for GOSAT CAI L2 cloud flag processing. If CLAUDIA1 is used with CAI-2, it is necessary to optimize the thresholds in accordance with CAI-2. However, CLAUDIA3, which uses support vector machines (SVM), a supervised pattern recognition method, was developed, and we therefore applied CLAUDIA3 to GOSAT-2 CAI-2 L2 cloud discrimination processing. CLAUDIA3 can thus automatically find the optimized boundary between clear and cloudy areas. Improvements in CLAUDIA3 using CAI (CLAUDIA3-CAI) continue to be made. In this study, we examined the impact of various support vectors (SV) on GOSAT-2 CAI-2 L2 cloud discrimination by analyzing (1) the impact of the choice of different time periods for the training data and (2) the impact of different generation procedures for SV on the cloud discrimination efficiency. To generate SV for CLAUDIA3-CAI from MODIS data, there are two stages at which features corresponding to CAI bands can be extracted. One procedure is equivalent to generating SV using CAI data. Another procedure generates SV for MODIS cloud discrimination at the beginning, and then extracts the decision function, thresholds, and SV corresponding to CAI bands. Our results indicated the following
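
    The role the support vectors play in a two-class clear/cloudy decision can be illustrated with a minimal linear SVM trained by sub-gradient descent on the hinge loss. The two "band" features and their class statistics below are invented stand-ins for CAI-2 radiances, not actual CLAUDIA3 training data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic two-band pixels (assumed statistics, not real data):
# clear pixels darker/warmer, cloudy pixels brighter/colder.
clear = rng.normal([0.1, 290.0], [0.05, 5.0], size=(200, 2))
cloud = rng.normal([0.6, 250.0], [0.10, 8.0], size=(200, 2))
X = np.vstack([clear, cloud])
y = np.r_[-np.ones(200), np.ones(200)]      # -1 = clear, +1 = cloudy
X = (X - X.mean(0)) / X.std(0)              # standardize both bands

w, b, lam, lr = np.zeros(2), 0.0, 1e-3, 0.1
for epoch in range(200):                    # sub-gradient descent on the hinge loss
    margin = y * (X @ w + b)
    mask = margin < 1.0                     # only margin violators contribute
    gw = lam * w - (y[mask, None] * X[mask]).sum(0) / len(y)
    gb = -y[mask].sum() / len(y)
    w -= lr * gw
    b -= lr * gb

pred = np.sign(X @ w + b)
print("training accuracy:", (pred == y).mean())
```

The training points that end up on or inside the margin are the support vectors; as the abstract notes, changing the training period or the SV generation procedure changes that set and hence the learned clear/cloudy boundary.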

  4. Reexamination of M2,3 atomic level widths and L1M2,3 transition energies of elements 69≤Z≤95

    Science.gov (United States)

    Fennane, K.; Berset, M.; Dousse, J.-Cl.; Hoszowska, J.; Raboud, P.-A.; Campbell, J. L.

    2013-11-01

    We report on high-resolution measurements of the photoinduced L1M2 and L1M3 x-ray emission lines of 69Tm, 70Yb, 71Lu, 73Ta, 74W, 75Re, 77Ir, 81Tl, 83Bi, and 95Am. From the linewidths of the measured transitions an accurate set of M2 and M3 level widths is determined, assuming for the L1 level widths the values reported by Raboud [P.-A. Raboud et al., Phys. Rev. A 65, 022512 (2002)]. Furthermore, the present experimental M2,3 data set is extended to 80Hg, 90Th, and 92U, using former L1M2,3 high-resolution x-ray emission spectroscopy measurements performed by our group. A detailed comparison is made between the M2 and M3 level widths determined in the present work and those recommended by Campbell and Papp [J. L. Campbell and T. Papp, At. Data Nucl. Data Tables 77, 1 (2001)], as well as other available experimental data and theoretical predictions. The observed abrupt changes of the M2,3 level widths versus atomic number Z can be explained satisfactorily by the cutoffs and onsets of the M2M4N1, respectively M3M4N3,4,5 and M3M5N2,3, Coster-Kronig transitions deduced from the semiempirical (Z+1) approximation. As a spin-off result of this study, precise L1M2 and L1M3 transition energies are obtained for the investigated elements. Very good agreement with transition energies calculated within many-body perturbation theory is found.

  5. Evaluation of the Aware™ OMT HIV-1/2 rapid oral test for ...

    African Journals Online (AJOL)

    Each participant provided an oral fluid sample for the Aware™ OMT HIV-1/2 test, and blood tested following the sequential algorithm of Murex® HIV-1.2.0 ELISA tests (Abbott Laboratories, Japan) and the CeDReS in-house peptide ELISA test. Results: the sensitivity, the specificity, the Predictive Value ...

  6. Fast algorithms for chiral fermions in 2 dimensions

    Directory of Open Access Journals (Sweden)

    Hyka (Xhako) Dafina

    2018-01-01

    Full Text Available In lattice QCD simulations the formulation of the theory on the lattice should be chiral in order that symmetry breaking happens dynamically from interactions. In order to guarantee this symmetry on the lattice one uses overlap and domain wall fermions. On the other hand, the high computational cost of lattice QCD simulations with overlap or domain wall fermions remains a major obstacle for research in the field of elementary particles. We have developed the preconditioned GMRESR algorithm as a fast inverting algorithm for chiral fermions in U(1) lattice gauge theory. In this algorithm we used the geometric multigrid idea along the extra dimension. The main result of this work is that the preconditioned GMRESR is capable of accelerating the convergence 2 to 12 times more than the other optimal algorithm (SHUMR) for different coupling constants and a 32x32 lattice. In this paper we also tested it for a larger lattice size, 64x64. From the results of the simulations we can see that our algorithm is faster than SHUMR. This is a very promising result, suggesting that this algorithm can be adapted to 4 dimensions as well.
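
    The preconditioning idea can be illustrated with SciPy's GMRES and a simple Jacobi (diagonal) preconditioner supplied as a LinearOperator. This sketches plain preconditioned GMRES rather than the GMRESR variant with a multigrid preconditioner used in the paper, and the matrix below is a generic diagonally dominant stand-in, not a lattice Dirac operator:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

# Stand-in sparse system (assumed example, not lattice QCD data)
n = 64
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioner: M approximates A^{-1} by inverting the diagonal
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)

x_pre, info = gmres(A, b, M=M, atol=1e-10)  # info == 0 signals convergence
print(info, np.linalg.norm(A @ x_pre - b))
```

In practice the preconditioner, not the Krylov loop, carries most of the speedup; replacing the diagonal `matvec` with a multigrid cycle is what the abstract's approach amounts to structurally.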

  7. Nonsmooth Optimization Algorithms, System Theory, and Software Tools

    Science.gov (United States)

    1993-04-13

    "Nonsmooth Optimization Algorithms, System Theory, and Software Tools", AFOSR-90-0068. Author: Elijah Polak, Professor and Principal Investigator.

  8. Learner-Generated Noticing Behavior by Novice Learners: Tracing the Effects of Learners' L1 on Their Emerging L2

    Science.gov (United States)

    Park, Eun Sung

    2013-01-01

    This study examines novice learners' self-generated input noticing approaches and strategies. It is motivated by previous research on input enhancement which yielded insights that learners are naturally prone to notice certain aspects of L2 input on their own without any external means to channel their attention. Two L1 groups (Japanese and…

  9. FIMP and muon (g−2) in a U(1)_{Lμ−Lτ} model

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, Anirban [Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211 019 (India); Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094 (India); Choubey, Sandhya [Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211 019 (India); Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094 (India); Department of Theoretical Physics, School of Engineering Sciences, KTH Royal Institute of Technology, AlbaNova University Center, 106 91 Stockholm (Sweden); Khan, Sarif [Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211 019 (India); Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094 (India)

    2017-02-23

    The tightening of the constraints on the standard thermal WIMP scenario has forced physicists to propose alternative dark matter (DM) models. One of the most popular alternative explanations of the origin of DM is the non-thermal production of DM via freeze-in. In this scenario the DM never attains thermal equilibrium with the thermal soup because of its feeble coupling strength (∼10^−12) with the other particles in the thermal bath, and is generally called a Feebly Interacting Massive Particle (FIMP). In this work, we present a gauged U(1)_{Lμ−Lτ} extension of the Standard Model (SM) which has a scalar FIMP DM candidate and can consistently explain the DM relic density bound. In addition, the spontaneous breaking of the U(1)_{Lμ−Lτ} gauge symmetry gives an extra massive neutral gauge boson Z_{μτ} which can explain the muon (g−2) data through its additional one-loop contribution to the process. Lastly, the presence of three right-handed neutrinos enables the model to successfully explain the small neutrino masses via the Type-I seesaw mechanism. The presence of the spontaneously broken U(1)_{Lμ−Lτ} gives a particular structure to the light neutrino mass matrix which can explain the peculiar mixing pattern of the light neutrinos.

  10. A look-ahead variant of the Lanczos algorithm and its application to the quasi-minimal residual method for non-Hermitian linear systems. Ph.D. Thesis - Massachusetts Inst. of Technology, Aug. 1991

    Science.gov (United States)

    Nachtigal, Noel M.

    1991-01-01

    The Lanczos algorithm can be used both for eigenvalue problems and to solve linear systems. However, when applied to non-Hermitian matrices, the classical Lanczos algorithm is susceptible to breakdowns and potential instabilities. In addition, the biconjugate gradient (BCG) algorithm, which is the natural generalization of the conjugate gradient algorithm to non-Hermitian linear systems, has a second source of breakdowns, independent of the Lanczos breakdowns. Here, we present two new results. We propose an implementation of a look-ahead variant of the Lanczos algorithm which overcomes the breakdowns by skipping over those steps where a breakdown or a near-breakdown would occur. The new algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products per step as the classical Lanczos algorithm without look-ahead. Based on the proposed look-ahead Lanczos algorithm, we then present a novel BCG-like approach, the quasi-minimal residual (QMR) method, which avoids the second source of breakdowns in the BCG algorithm. We present details of the new method and discuss some of its properties. In particular, we discuss the relationship between QMR and BCG, showing how one can recover the BCG iterates, when they exist, from the QMR iterates. We also present convergence results for QMR, showing the connection between QMR and the generalized minimal residual (GMRES) algorithm, the optimal method in this class of methods. Finally, we give some numerical examples, both for eigenvalue computations and for non-Hermitian linear systems.
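
    SciPy ships an implementation of the QMR method described above, so its behavior on a small non-Hermitian system can be sketched directly. The test matrix is an invented diagonally dominant example, not one of the thesis's problems:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import qmr

# Small nonsymmetric (non-Hermitian) tridiagonal system
n = 50
A = diags([-1.0, 4.0, -2.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = qmr(A, b)        # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```

Like BCG, QMR needs products with both A and its transpose each step (SciPy obtains the transpose product from the sparse matrix automatically), but its quasi-minimization of the residual avoids the erratic convergence BCG can show.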

  11. Improving Generalization Based on l1-Norm Regularization for EEG-Based Motor Imagery Classification

    Directory of Open Access Journals (Sweden)

    Yuwei Zhao

    2018-05-01

    Full Text Available Multichannel electroencephalography (EEG) is widely used in typical brain-computer interface (BCI) systems. In general, a number of parameters are essential for an EEG classification algorithm due to the redundant features involved in EEG signals. However, the generalization of an EEG method is often adversely affected by the model complexity, which is largely determined by its number of free parameters, further leading to heavy overfitting. To decrease the complexity and improve the generalization of the EEG method, we present a novel l1-norm-based approach that combines the decision values obtained from each EEG channel directly. By extracting the information from different channels on independent frequency bands (FB) with l1-norm regularization, the proposed method fits the training data with far fewer parameters than common spatial pattern (CSP) methods, in order to reduce overfitting. Moreover, an effective and efficient solution to minimize the optimization objective is proposed. The experimental results on dataset IVa of BCI competition III and dataset I of BCI competition IV show that the proposed method achieves high classification accuracy and increases generalization performance for the classification of MI EEG. As the training set ratio decreases from 80 to 20%, the average classification accuracy on the two datasets changes from 85.86 and 86.13% to 84.81 and 76.59%, respectively. The classification performance and generalization of the proposed method contribute to the practical application of MI-based BCI systems.
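
    The l1-regularization mechanism the abstract relies on can be sketched with a proximal-gradient (ISTA) solver for a lasso-style objective. The synthetic "per-channel decision values" below are invented, not EEG data, and this is not the authors' exact model; it only shows how the l1 penalty drives uninformative channel weights to zero:

```python
import numpy as np

def ista(X, y, lam=0.1, lr=0.1, iters=1000):
    """Minimize 0.5/n * ||X w - y||^2 + lam * ||w||_1 by proximal gradient (ISTA)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        z = w - lr * (X.T @ (X @ w - y)) / len(y)   # gradient step on the smooth part
        w = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)  # soft-thresholding
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))      # stand-ins for 10 channels' decision values
w_true = np.zeros(10)
w_true[:3] = [1.5, -2.0, 1.0]       # only three informative "channels"
y = X @ w_true + 0.05 * rng.normal(size=200)

w = ista(X, y)
print(np.round(w, 2))               # uninformative channels driven to ~0
```

The soft-thresholding step is exactly where the sparsity comes from: any coordinate whose gradient update stays within lr·lam of zero is clipped to zero, so the fitted model uses few parameters, which is the generalization argument the abstract makes.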

  12. L2 English Intonation: Relations between Form-Meaning Associations, Access to Meaning, and L1 Transfer

    Science.gov (United States)

    Ortega-Llebaria, Marta; Colantoni, Laura

    2014-01-01

    Although there is consistent evidence that higher levels of processing, such as learning the form-meaning associations specific to the second language (L2), are a source of difficulty in acquiring L2 speech, no study has addressed how these levels interact in shaping L2 perception and production of intonation. We examine the hypothesis of whether…

  13. Standardization and performance evaluation of "modified" and "ultrasensitive" versions of the Abbott RealTime HIV-1 assay, adapted to quantify minimal residual viremia.

    Science.gov (United States)

    Amendola, Alessandra; Bloisi, Maria; Marsella, Patrizia; Sabatini, Rosella; Bibbò, Angela; Angeletti, Claudio; Capobianchi, Maria Rosaria

    2011-09-01

    Numerous studies investigating the clinical significance of HIV-1 minimal residual viremia (MRV) suggest the potential utility of assays more sensitive than those routinely used to monitor viral suppression. However, currently available methods, based on different technologies, show great variation in detection limit and input plasma volume, and generally suffer from a lack of standardization. In order to establish new tools suitable for routine quantification of minimal residual viremia in patients under virological suppression, some modifications were introduced into the standard procedure of the Abbott RealTime HIV-1 assay, leading to a "modified" and an "ultrasensitive" protocol. The following modifications were introduced: a calibration curve extended towards low HIV-1 RNA concentrations; a 4-fold increase in sample volume, by concentrating starting material; a reduced volume of internal control; and adoption of "open-mode" software for quantification. Analytical performances were evaluated using the HIV-1 RNA Working Reagent 1 for NAT assays (NIBSC). Both tests were applied to clinical samples from virologically suppressed patients. The "modified" and "ultrasensitive" configurations of the assay reached limits of detection of 18.8 cp/mL (95% CI: 11.1-51.0 cp/mL) and 4.8 cp/mL (95% CI: 2.6-9.1 cp/mL), respectively, with high precision and accuracy. In clinical samples from virologically suppressed patients, the "modified" and "ultrasensitive" protocols allowed HIV RNA to be detected and quantified in 12.7% and 46.6%, respectively, of samples that had resulted "not-detectable", and in 70.0% and 69.5%, respectively, of samples "detected laboratories for measuring MRV. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Detection of high PD-L1 expression in oral cancers by a novel monoclonal antibody L1Mab-4.

    Science.gov (United States)

    Yamada, Shinji; Itai, Shunsuke; Kaneko, Mika K; Kato, Yukinari

    2018-03-01

    Programmed cell death-ligand 1 (PD-L1), which is a ligand of programmed cell death-1 (PD-1), is a type I transmembrane glycoprotein that is expressed on antigen-presenting cells and several tumor cells, including melanoma and lung cancer cells. There is a strong correlation between human PD-L1 (hPD-L1) expression on tumor cells and negative prognosis in cancer patients. In this study, we produced a novel anti-hPD-L1 monoclonal antibody (mAb), L1Mab-4 (IgG2b, kappa), using the cell-based immunization and screening (CBIS) method and investigated hPD-L1 expression in oral cancers. L1Mab-4 reacted with oral cancer cell lines (Ca9-22, HO-1-u-1, SAS, HSC-2, HSC-3, and HSC-4) in flow cytometry and stained oral cancers in a membrane-staining pattern. L1Mab-4 stained 106/150 (70.7%) of oral squamous cell carcinomas, indicating the very high sensitivity of L1Mab-4. These results indicate that L1Mab-4 could be useful for investigating the function of hPD-L1 in oral cancers.

  15. Versatility of {M(30-crown-10)} (M = K+, Ba2+) as a guest in UO2^2+ complexes of [3.1.3.1]- and [3.3.3]homo-oxa-calixarenes

    Energy Technology Data Exchange (ETDEWEB)

    Masci, B. [Univ Roma La Sapienza, Dipartimento Chim, I-00185 Rome, (Italy); Thuery, P. [CEA Saclay, DSM/DRECAM/SCM, CNRS-URA 331, F-91191 Gif Sur Yvette, (France)

    2007-07-01

    The reaction between p-R-[3.1.3.1]- or [3.3.3]homo-oxa-calixarenes and uranyl salts in the presence of 30-crown-10 and the alkali or alkaline-earth metal cations K+ or Ba2+ gives various supramolecular assemblages characterized by 'complex-within-complex' architectures. These can be of the simple nesting or sandwich types, as in [{Ba(30-crown-10)}{UO2(L1)}]·2H2O·3CHCl3 (L1H4 = p-tert-butyl[3.1.3.1]homo-oxa-calixarene) and [{Ba(30-crown-10)}{UO2(L4)}2]·2CHCl3 (L4H3 = p-bromo[3.3.3]homo-oxa-calixarene), respectively, with the cation held in the cavity of the homo-oxa-calixarene complexes in cone conformation by weak interactions, but more original structures arise when uranyl-cation bonds are present. In [{Ba(30-crown-10)}{UO2(L2)}] (L2H4 = p-phenyl[3.1.3.1]homo-oxa-calixarene), the barium ion included in the crown ether is bound to the uranyl oxo group located out of the calixarene cavity, resulting in the formation of a neutral species which self-organizes to form a columnar assembly by auto-inclusion. In [{K(30-crown-10)}{UO2K(L1)(H2O)3}]2·6H2O, the nesting-type subunit dimerizes around two oxo-bound potassium ions. Finally, the use of the coordinating solvent dimethylsulfoxide leads to the neutral complex [UO2Ba(L3)(dmso)2(MeOH)]2 (L3H4 = p-methyl[3.1.3.1]homo-oxa-calixarene), in which the crown ether is absent and two oxo-, phenoxo- and ether-bound barium atoms ensure the dimerization of the uranyl complex. (authors)

  16. Stereoselective chemoenzymatic synthesis of the four stereoisomers of l-2-(2-carboxycyclobutyl)glycine and pharmacological characterization at human excitatory amino acid transporter subtypes 1, 2, and 3

    DEFF Research Database (Denmark)

    Faure, Sophie; Jensen, Anders A.; Maurat, Vincent

    2006-01-01

    The four stereoisomers of l-2-(2-carboxycyclobutyl)glycine, l-CBG-I, l-CBG-II, l-CBG-III, and l-CBG-IV, were synthesized in good yield and high enantiomeric excess from the corresponding cis- and trans-2-oxalylcyclobutanecarboxylic acids 5 and 6, using the enzymes aspartate aminotransferase (AAT) and branched chain aminotransferase (BCAT) from Escherichia coli. The four stereoisomeric compounds were evaluated as potential ligands for the human excitatory amino acid transporters, subtypes 1, 2, and 3 (EAAT1, EAAT2, and EAAT3), in the FLIPR membrane potential assay. While the one trans-stereoisomer, l...

  17. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    A direct relation between the conformal formalism for 2d-quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p^\\prime,p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p^\\prime/p$ and the level $l$.

  18. The new 'Earth Dreams Technology i-DTEC' 1.6 l diesel engine from Honda

    Energy Technology Data Exchange (ETDEWEB)

    Yamano, J.; Ikoma, K.; Matsui, R.; Ikegami, N.; Mori, S.; Yano, T. [Honda R and D Co., Ltd., Tochigi (Japan)

    2013-08-01

    Honda has developed a 3rd-generation diesel engine, seeking to balance further CO{sub 2} reductions with dynamic performance. This development focused on downsizing the engine and succeeded in developing a compact, lightweight and high-efficiency 1.6 L in-line 4-cylinder turbocharged i-DTEC diesel engine. Optimization of engine rigidity in the newly developed 1.6 L diesel engine has made it possible to use an aluminum cylinder block with an open-deck structure. Furthermore, weight could be reduced by means of an efficient structure and engine layout. In addition, mechanical friction has been minimized via reducing weight of the reciprocating components and downsizing auxiliary equipment. These innovations made it possible for the engine to achieve the same level of friction as a Honda petrol engine of the same displacement. Thermal management has also been optimized by enhancement of the engine cooling system. In addition, low-pressure loop exhaust gas recirculation (LP-EGR) was applied to achieve increased thermal efficiency. These measures have helped the engine to realize a high level of boost and high EGR, increasing fuel efficiency and reducing emissions across a wide range of operating conditions. Like the 2.2 L model, the Civic fitted with this 1.6 L diesel engine uses idle-stop and deceleration energy regeneration control. With all these measures, the Civic achieved CO{sub 2} emissions of 94 g/km (3.6 L/100km) in NEDC, a reduction of 14.5% in CO{sub 2} emissions against the 110 g/km recorded by the 2.2 L model. (orig.)

  19. A Novel Integrated Algorithm for Wind Vector Retrieval from Conically Scanning Scatterometers

    Directory of Open Access Journals (Sweden)

    Xuetong Xie

    2013-11-01

    Full Text Available Due to the lower efficiency and the larger wind direction error of traditional algorithms, a novel integrated wind retrieval algorithm is proposed for conically scanning scatterometers. The proposed algorithm has the dual advantages of less computational cost and higher wind direction retrieval accuracy by integrating the wind speed standard deviation (WSSD) algorithm and the wind direction interval retrieval (DIR) algorithm. It adopts wind speed standard deviation as a criterion for searching possible wind vector solutions and retrieving a potential wind direction interval based on the change rate of the wind speed standard deviation. Moreover, a modified three-step ambiguity removal method is designed to let more wind directions be selected in the process of nudging and filtering. The performance of the new algorithm is illustrated by retrieval experiments using 300 orbits of SeaWinds/QuikSCAT L2A data (backscatter coefficients at 25 km resolution) and co-located buoy data. Experimental results indicate that the new algorithm can evidently enhance the wind direction retrieval accuracy, especially in the nadir region. In comparison with the SeaWinds L2B Version 2 25 km selected wind product (retrieved wind fields), an improvement of 5.1° in wind direction retrieval can be made by the new algorithm for that region.
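The WSSD criterion described above can be sketched as follows. The cosine-form geophysical model function (`toy_gmf`), the look geometry, and all numerical values are invented for illustration; they stand in for the real tabulated GMF used in QuikSCAT processing:

```python
import numpy as np

def toy_gmf(speed, chi):
    # Hypothetical geophysical model function (GMF): backscatter as a
    # function of wind speed and relative azimuth chi. Real retrievals
    # use tabulated GMFs (e.g. NSCAT-2); this cosine form is a stand-in.
    return 0.01 * speed**1.6 * (1.0 + 0.4 * np.cos(2 * chi))

def invert_speed(sigma0, chi, speeds):
    # Invert the GMF for one measurement: the speed whose predicted
    # backscatter is closest to the observed sigma0.
    preds = toy_gmf(speeds, chi)
    return speeds[np.argmin(np.abs(preds - sigma0))]

def wssd(sigma0s, azimuths, wind_dir, speeds):
    # Wind speed standard deviation (WSSD) criterion: for a trial wind
    # direction, invert each look independently and measure how much
    # the per-look speeds disagree; the true direction gives consistent
    # speeds, hence a small standard deviation.
    chis = azimuths - wind_dir
    s = [invert_speed(s0, chi, speeds) for s0, chi in zip(sigma0s, chis)]
    return np.std(s)

# Simulate 4 looks of a 7 m/s wind blowing toward 1.0 rad.
azimuths = np.array([0.0, 0.8, 1.6, 2.4])
truth_dir, truth_speed = 1.0, 7.0
sigma0s = toy_gmf(truth_speed, azimuths - truth_dir)

speeds = np.linspace(0.5, 25.0, 500)
dirs = np.linspace(0.0, 2 * np.pi, 180, endpoint=False)
costs = [wssd(sigma0s, azimuths, d, speeds) for d in dirs]
best = dirs[int(np.argmin(costs))]
print(best)  # near 1.0, or its 180-degree GMF ambiguity
```

The directional ambiguities visible here (the cos(2χ) GMF cannot distinguish a direction from its opposite) are exactly what the three-step ambiguity removal stage of the paper resolves.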

  20. Student Teachers' Cognition about L2 Pronunciation Instruction: A Case Study

    Science.gov (United States)

    Burri, Michael

    2015-01-01

    In view of the minimal attention pronunciation teacher preparation has received in second language (L2) teacher education, this study examined the cognition (i.e. beliefs, thoughts, attitudes and knowledge) development of 15 student teachers during a postgraduate subject on pronunciation pedagogy offered at an Australian tertiary institution.…

  1. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    Full Text Available This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.
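As an illustration of the general idea of camera-motion detection from a motion vector field (not the paper's actual algorithm), a robust global-motion estimate can be taken as the median of the per-macroblock vectors; the function, its labels, and the threshold below are all hypothetical, and sign conventions for motion vectors vary between codecs:

```python
import numpy as np

def camera_motion(mv_field, thresh=0.5):
    # mv_field: (H, W, 2) array of per-macroblock motion vectors (dx, dy).
    # The median vector is robust against foreground objects that move
    # against the dominant camera motion.
    dx = np.median(mv_field[..., 0])
    dy = np.median(mv_field[..., 1])
    if abs(dx) < thresh and abs(dy) < thresh:
        return "static", (dx, dy)
    if abs(dx) >= abs(dy):
        return ("pan_right" if dx > 0 else "pan_left"), (dx, dy)
    return ("tilt_down" if dy > 0 else "tilt_up"), (dx, dy)

# Synthetic field: dominant horizontal block motion of +3 pixels,
# with a small foreground object moving the opposite way.
field = np.zeros((9, 11, 2))
field[..., 0] = 3.0
field[2:4, 3:6, 0] = -8.0
label, (dx, dy) = camera_motion(field)
print(label, dx, dy)
```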

  2. Decision Tree Algorithm-Generated Single-Nucleotide Polymorphism Barcodes of rbcL Genes for 38 Brassicaceae Species Tagging.

    Science.gov (United States)

    Yang, Cheng-Hong; Wu, Kuo-Chuan; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2018-01-01

    DNA barcode sequences are accumulating in large data sets. A barcode is generally a sequence larger than 1000 base pairs and generates a computational burden. Although the DNA barcode was originally envisioned as a straightforward species tag, the identification usage of barcode sequences is currently rarely emphasized. Single-nucleotide polymorphism (SNP) association studies provide us the idea that SNPs may be the ideal target of feature selection to discriminate between different species. We hypothesize that SNP-based barcodes may be more effective than the full length of DNA barcode sequences for species discrimination. To address this issue, we tested a ribulose diphosphate carboxylase (rbcL) SNP barcoding (RSB) strategy using a decision tree algorithm. After alignment and trimming, 31 SNPs were discovered in the rbcL sequences from 38 Brassicaceae plant species. In the decision tree construction, these SNPs were computed to set up decision rules that assign the sequences into 2 groups, level by level. After algorithm processing, 37 nodes and 31 loci were required for discriminating the 38 species. Finally, sequence tags consisting of 31 rbcL SNP barcodes were identified for discriminating the 38 Brassicaceae species based on the decision tree-selected SNP pattern using the RSB method. Taken together, this study provides the rationale that the SNP aspect of the DNA barcode for the rbcL gene is a useful and effective sequence for tagging 38 Brassicaceae species.
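The level-by-level splitting idea can be sketched with a hand-rolled decision tree over a biallelic SNP matrix. The four species and their SNP calls below are invented, and the split criterion (the most balanced locus) is a simplification of what a CART-style tool would choose:

```python
# Minimal sketch of the RSB idea: recursively pick the SNP locus that
# best splits the remaining species into two groups, building decision
# rules until every species sits alone in a leaf. Real rbcL data would
# supply the SNP matrix; the 4-species matrix below is invented.
def build_tree(snps, species):
    if len(species) <= 1:
        return species
    n_loci = len(next(iter(snps.values())))
    # Prefer the locus whose 0/1 split is closest to half-and-half.
    best = min(
        range(n_loci),
        key=lambda j: abs(sum(snps[s][j] for s in species) - len(species) / 2),
    )
    left = [s for s in species if snps[s][best] == 0]
    right = [s for s in species if snps[s][best] == 1]
    if not left or not right:
        return species  # this locus cannot separate the group further
    return (best, build_tree(snps, left), build_tree(snps, right))

snps = {  # hypothetical biallelic SNP calls (0/1) at 3 loci
    "sp_A": [0, 0, 1],
    "sp_B": [0, 1, 0],
    "sp_C": [1, 0, 0],
    "sp_D": [1, 1, 1],
}
tree = build_tree(snps, sorted(snps))
print(tree)  # nested (locus, left-subtree, right-subtree) rules
```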

  3. Robust Discontinuity Preserving Optical Flow Methods

    Directory of Open Access Journals (Sweden)

    Nelson Monzón

    2016-11-01

    Full Text Available In this work, we present an implementation of discontinuity-preserving strategies in TV-L1 optical flow methods. These are based on exponential functions that mitigate the regularization at image edges, which usually provide precise flow boundaries. Nevertheless, if the smoothing is not well controlled, it may produce instabilities in the computed motion fields. We present an algorithm that allows three regularization strategies: the first one uses an exponential function together with a TV process; the second one combines this strategy with a small constant that ensures a minimum isotropic smoothing; the third one is a fully automatic approach that adapts the diffusion depending on the histogram of the image gradients. The last two alternatives are aimed at reducing the effect of instabilities. In the experiments, we observe that the pure exponential function is highly unstable while the other strategies preserve accurate motion contours for a large range of parameters.
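The first two regularization strategies can be sketched as a per-pixel diffusion weight: an exponential of the image gradient magnitude, optionally lifted by a small constant that guarantees a minimum isotropic smoothing. Parameter names and values are illustrative, not those of the paper:

```python
import numpy as np

def diffusion_weights(img, lam=5.0, floor=0.0):
    # Per-pixel regularization weight for a TV-L1-style smoothness term:
    # the exponential reduces smoothing across strong image edges.
    # `floor` adds the small constant of the second strategy, ensuring a
    # minimum isotropic diffusion so the flow does not become unstable.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    w = np.exp(-lam * mag)
    return floor + (1.0 - floor) * w

img = np.zeros((8, 8))
img[:, 4:] = 1.0                      # a vertical step edge
w = diffusion_weights(img, lam=5.0, floor=0.05)
print(w[4, 0], w[4, 4])               # interior weight ~1, edge weight small
```

The third, fully automatic strategy of the paper would additionally adapt `lam` from the histogram of the image gradients rather than fixing it by hand.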

  4. JNK1 Controls Dendritic Field Size in L2/3 and L5 of the Motor Cortex, Constrains Soma Size and Influences Fine Motor Coordination

    Directory of Open Access Journals (Sweden)

    Emilia eKomulainen

    2014-09-01

    Full Text Available Genetic anomalies on the JNK pathway confer susceptibility to autism spectrum disorders, schizophrenia and intellectual disability. The mechanism whereby a gain or loss of function in JNK signaling predisposes to these prevalent dendrite disorders, with associated motor dysfunction, remains unclear. Here we find that JNK1 regulates the dendritic field of L2/3 and L5 pyramidal neurons of the mouse motor cortex (M1), the main excitatory pathway controlling voluntary movement. In Jnk1-/- mice, basal dendrite branching of L5 pyramidal neurons is increased in M1, as is cell soma size, whereas in L2/3, dendritic arborization is decreased. We show that JNK1 phosphorylates rat HMW-MAP2 on T1619, T1622 and T1625 (Uniprot P15146; corresponding to mouse T1617, T1620, T1623) to create a binding motif that is critical for MAP2 interaction with and stabilization of microtubules, and dendrite growth control. Targeted expression in M1 of GFP-HMW-MAP2 that is pseudo-phosphorylated on T1619, T1622 and T1625 increases dendrite complexity in L2/3, indicating that JNK1 phosphorylation of HMW-MAP2 regulates the dendritic field. Consistent with the morphological changes observed in L2/3 and L5, Jnk1-/- mice exhibit deficits in limb placement and motor coordination, while stride length is reduced in older animals. In summary, JNK1 phosphorylates HMW-MAP2 to increase its stabilization of microtubules while at the same time controlling dendritic fields in the main excitatory pathway of M1. Moreover, JNK1 contributes to normal functioning of fine motor coordination. We report for the first time a quantitative Sholl analysis of dendrite architecture and of motor behavior in Jnk1-/- mice. Our results illustrate the molecular and behavioral consequences of interrupted JNK1 signaling and provide new ground for mechanistic understanding of those prevalent neuropsychiatric disorders where genetic disruption of the JNK pathway is central.

  5. Neurocognitive Development and Predictors of L1 and L2 Literacy Skills in Dyslexia: A Longitudinal Study of Children 5-11 Years Old.

    Science.gov (United States)

    Helland, Turid; Morken, Frøydis

    2016-02-01

    The aim of this study was to find valid neurocognitive precursors of literacy development in first language (L1, Norwegian) and second language (L2, English) in a group of children during their Pre-literacy, Emergent Literacy and Literacy stages, by comparing children with dyslexia and a typical group. Children who were 5 years old at project start were followed until the age of 11, when dyslexia was identified and data could be analysed in retrospect. The children's neurocognitive pattern changed both by literacy stage and domain. Visuo-spatial recall and RAN appeared as early precursors of L1 literacy, while phonological awareness appeared as early precursor of L2 English. Verbal long term memory was associated with both L1 and L2 skills in the Literacy stage. Significant group differences seen in the Pre-literacy and Emergent literacy stages decreased in the Literacy stage. The developmental variations by stage and domain may explain some of the inconsistencies seen in dyslexia research. Early identification and training are essential to avoid academic failure, and our data show that visuo-spatial memory and RAN could be suitable early markers in transparent orthographies like Norwegian. Phonological awareness was here seen as an early precursor of L2 English, but not of L1 Norwegian. © 2015 The Authors. Dyslexia published by John Wiley & Sons Ltd.

  6. Detection of high PD-L1 expression in oral cancers by a novel monoclonal antibody L1Mab-4

    Directory of Open Access Journals (Sweden)

    Shinji Yamada

    2018-03-01

    Full Text Available Programmed cell death-ligand 1 (PD-L1), which is a ligand of programmed cell death-1 (PD-1), is a type I transmembrane glycoprotein that is expressed on antigen-presenting cells and several tumor cells, including melanoma and lung cancer cells. There is a strong correlation between human PD-L1 (hPD-L1) expression on tumor cells and negative prognosis in cancer patients. In this study, we produced a novel anti-hPD-L1 monoclonal antibody (mAb), L1Mab-4 (IgG2b, kappa), using the cell-based immunization and screening (CBIS) method and investigated hPD-L1 expression in oral cancers. L1Mab-4 reacted with oral cancer cell lines (Ca9-22, HO-1-u-1, SAS, HSC-2, HSC-3, and HSC-4) in flow cytometry and stained oral cancers in a membrane-staining pattern. L1Mab-4 stained 106/150 (70.7%) of oral squamous cell carcinomas, indicating the very high sensitivity of L1Mab-4. These results indicate that L1Mab-4 could be useful for investigating the function of hPD-L1 in oral cancers. Keywords: Programmed cell death-ligand 1, Monoclonal antibody, Oral cancer

  7. The zebrafish galectins Drgal1-L2 and Drgal3-L1 bind in vitro to the infectious hematopoietic necrosis virus (IHNV) glycoprotein and reduce viral adhesion to fish epithelial cells

    Digital Repository Service at National Institute of Oceanography (India)

    Nita-Lazar, M.; Mancini, J.; Feng, C.; Gonzalez-Montalban, N.; Ravindran, C.; Jackson, S.; Heras-Sanchez, A.D.L.; Giomarelli, B.; Ahmed, H.; Haslam, S.M.; Wu, G.; Dell, A.; Ammayappan, A.; Vakharia, V.N.; Vasta, G.R.

    galectin-3 (Drgal3-L1) in IHNV adhesion to epithelial cells. Our results suggest that the extracellular Drgal1-L2 and Drgal3-L1 interact directly and in a carbohydrate-dependent manner with the IHNV glycosylated envelope and glycans on the epithelial cell...

  8. Proton induced L{sub 1}, L{sub 2}, L{sub 3}-sub-shell X-ray production cross sections of Hf and Au

    Energy Technology Data Exchange (ETDEWEB)

    Bertol, A.P.L., E-mail: anapaula.bertol@gmail.com [Programa de Pós-graduação em Física, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil); Hinrichs, R. [Programa de Pós-graduação em Física, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil); Instituto de Geociências, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil); Vasconcellos, M.A.Z. [Programa de Pós-graduação em Física, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil); Instituto de Física, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil)

    2015-11-15

    Experimental data for proton induced X-ray production cross sections of L-sub-shells of Hf and Au were obtained, in order to contribute to the existing data sets and to support refinements of the ECPSSR theory. X-ray emissions of mono-elemental 10 nm films of Hf and Au were excited with 0.7–1.5 MeV protons. The measured L-line spectra were fitted assuming Gaussian shapes and constraining peak positions and line widths. The transition energy values used to establish peak positions were based on values proposed in the literature, while the line widths were matched to the detector resolution, obtained independently. The intensity of each line was obtained from the adjusted areas, without the use of emission rates. The line intensities were summed in α, β, γ, and ℓ groups to validate the measurements by comparison with existing data, and in L{sub 1}, L{sub 2}, L{sub 3}-sub-shells for comparison with ECPSSR-UA theory. Ratios of the emission rates of lines of the same sub-shell were obtained and compared with the data from the literature.
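Because the peak positions and line widths are fixed (literature transition energies and the independently measured detector resolution), fitting the line intensities reduces to linear least squares in the amplitudes alone. A sketch with synthetic data (all energies, widths and amplitudes invented):

```python
import numpy as np

def fit_line_intensities(energy, counts, centers, sigma):
    # Fit a sum of Gaussians whose centers and common width are fixed,
    # as in the constrained L-line fits described above; only the
    # amplitudes are free, so the problem is linear least squares.
    basis = np.exp(
        -0.5 * ((energy[:, None] - np.array(centers)[None, :]) / sigma) ** 2
    )
    amps, *_ = np.linalg.lstsq(basis, counts, rcond=None)
    return amps

energy = np.linspace(7.0, 12.0, 400)   # keV, synthetic axis
centers = [8.0, 9.7, 11.3]             # illustrative line positions (keV)
sigma = 0.08                           # detector resolution (keV)
true_amps = np.array([1000.0, 400.0, 120.0])
counts = sum(a * np.exp(-0.5 * ((energy - c) / sigma) ** 2)
             for a, c in zip(true_amps, centers))
amps = fit_line_intensities(energy, counts, centers, sigma)
print(np.round(amps, 1))  # recovers the three line intensities
```

With real spectra the counts carry Poisson noise and a background term, but the structure of the fit (fixed shapes, free amplitudes) is the same.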

  9. Motion compensated frame interpolation with a symmetric optical flow constraint

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Roholm, Lars; Bruhn, Andrés

    2012-01-01

    We consider the problem of interpolating frames in an image sequence. For this purpose accurate motion estimation can be very helpful. We propose to move the motion estimation from the surrounding frames directly to the unknown frame by parametrizing the optical flow objective function such that [...] methods. The proposed reparametrization is generic and can be applied to almost every existing algorithm. In this paper we illustrate its advantages by considering the classic TV-L1 optical flow algorithm as a prototype. We demonstrate that this widely used method can produce results that are competitive with current state-of-the-art methods. Finally we show that the scheme can be implemented on graphics hardware such that it becomes possible to double the frame rate of 640 × 480 video footage at 30 fps, i.e. to perform frame doubling in real time.

  10. SMOS/SMAP Synergy for SMAP Level 2 Soil Moisture Algorithm Evaluation

    Science.gov (United States)

    Bindlish, Rajat; Jackson, Thomas J.; Zhao, Tianjie; Cosh, Michael; Chan, Steven; O'Neill, Peggy; Njoku, Eni; Colliander, Andreas; Kerr, Yann

    2011-01-01

    Soil Moisture Active Passive (SMAP) satellite has been proposed to provide global measurements of soil moisture and land freeze/thaw state at 10 km and 3 km resolutions, respectively. SMAP would also provide a radiometer-only soil moisture product at 40-km spatial resolution. This product and the supporting brightness temperature observations are common to both SMAP and European Space Agency's Soil Moisture and Ocean Salinity (SMOS) mission. As a result, there are opportunities for synergies between the two missions. These include exploiting the data for calibration and validation and establishing longer term L-band brightness temperature and derived soil moisture products. In this investigation we will be using SMOS brightness temperature, ancillary data, and soil moisture products to develop and evaluate a candidate SMAP L2 passive soil moisture retrieval algorithm. This work will begin with evaluations based on the SMOS product grids and ancillary data sets and transition to those that will be used by SMAP. An important step in this analysis is reprocessing the multiple incidence angle observations provided by SMOS to a global brightness temperature product that simulates the constant 40 degree incidence angle observations that SMAP will provide. The reprocessed brightness temperature data provide a basis for evaluating different SMAP algorithm alternatives. Several algorithms are being considered for the SMAP radiometer-only soil moisture retrieval. In this first phase, we utilized only the Single Channel Algorithm (SCA), which is based on the radiative transfer equation and uses the channel that is most sensitive to soil moisture (H-pol). Brightness temperature is corrected sequentially for the effects of temperature, vegetation, roughness (dynamic ancillary data sets) and soil texture (static ancillary data set). European Centre for Medium-Range Weather Forecasts (ECMWF) estimates of soil temperature for the top layer (as provided as part of the SMOS
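The SCA step that corrects brightness temperature for temperature and vegetation can be sketched with the standard tau-omega radiative transfer model; solving it for the surface reflectivity is a simple algebraic inversion. All numeric values below are illustrative, and the final reflectivity-to-soil-moisture step (Fresnel equations plus a dielectric mixing model) is omitted:

```python
import numpy as np

def tau_omega_tb(r, t_soil, tau, omega, theta_deg):
    # Forward tau-omega model for single-polarization brightness
    # temperature: r = rough-surface reflectivity, tau = vegetation
    # optical depth, omega = single-scattering albedo.
    g = np.exp(-tau / np.cos(np.radians(theta_deg)))  # veg transmissivity
    e = 1.0 - r                                       # surface emissivity
    return t_soil * (e * g + (1 - omega) * (1 - g) * (1 + r * g))

def sca_reflectivity(tb, t_soil, tau, omega, theta_deg=40.0):
    # Invert the forward model above for r (closed form, since tb is
    # linear in r). This mirrors the sequential corrections of the SCA.
    g = np.exp(-tau / np.cos(np.radians(theta_deg)))
    a = tb / t_soil
    return (a - g - (1 - omega) * (1 - g)) / (g * ((1 - omega) * (1 - g) - 1.0))

# Round trip with plausible L-band values (all numbers illustrative).
r_true = 0.22
tb = tau_omega_tb(r_true, t_soil=295.0, tau=0.12, omega=0.05, theta_deg=40.0)
r_hat = sca_reflectivity(tb, 295.0, 0.12, 0.05)
print(round(r_hat, 4))  # recovers 0.22
```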

  11. SU-G-JeP1-07: Development of a Programmable Motion Testbed for the Validation of Ultrasound Tracking Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, A; Matrosic, C; Zagzebski, J; Bednarz, B [University of Wisconsin, Madison, WI (United States)

    2016-06-15

    Purpose: To develop an advanced testbed that combines a 3D motion stage and ultrasound phantom to optimize and validate 2D and 3D tracking algorithms for real-time motion management during radiation therapy. Methods: A Siemens S2000 Ultrasound scanner utilizing a 9L4 transducer was coupled with the Washington University 4D Phantom to simulate patient motion. The transducer was securely fastened to the 3D stage and positioned to image three cylinders of varying contrast in a Gammex 404GS LE phantom. The transducer was placed within a water bath above the phantom in order to maintain sufficient coupling for the entire range of simulated motion. A programmed motion sequence was used to move the transducer during image acquisition and a cine video was acquired for one minute to allow for long sequence tracking. Images were analyzed using a normalized cross-correlation block matching tracking algorithm and compared to the known motion of the transducer relative to the phantom. Results: The setup produced stable ultrasound motion traces consistent with those programmed into the 3D motion stage. The acquired ultrasound images showed minimal artifacts and an image quality that was more than suitable for tracking algorithm verification. Comparisons of a block matching tracking algorithm with the known motion trace for the three features resulted in an average tracking error of 0.59 mm. Conclusion: The high accuracy and programmability of the 4D phantom allows for the acquisition of ultrasound motion sequences that are highly customizable; allowing for focused analysis of some common pitfalls of tracking algorithms such as partial feature occlusion or feature disappearance, among others. The design can easily be modified to adapt to any probe such that the process can be extended to 3D acquisition. Further development of an anatomy specific phantom better resembling true anatomical landmarks could lead to an even more robust validation. 
This work is partially funded by NIH
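The normalized cross-correlation block matching used for the comparison above can be sketched as an exhaustive search over integer displacements (no subpixel refinement; window sizes and the synthetic images are invented):

```python
import numpy as np

def ncc(a, b):
    # Zero-normalized cross-correlation between two equal-size patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_block(prev, curr, top, left, size=8, search=4):
    # Exhaustive block matching: find the displacement within +/-search
    # pixels that maximizes NCC between the template in `prev` and a
    # candidate block in `curr`.
    tmpl = prev[top:top + size, left:left + size]
    best, best_score = (0, 0), -2.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue
            score = ncc(tmpl, curr[y:y + size, x:x + size])
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

rng = np.random.default_rng(0)
prev = rng.random((32, 32))
curr = np.roll(prev, shift=(2, -1), axis=(0, 1))  # feature moved by (2, -1)
print(track_block(prev, curr, top=10, left=10))
```

Feature occlusion or disappearance, mentioned in the abstract as common pitfalls, would show up here as a low best NCC score, which a practical tracker must detect and handle.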

  12. SU-G-JeP1-07: Development of a Programmable Motion Testbed for the Validation of Ultrasound Tracking Algorithms

    International Nuclear Information System (INIS)

    Shepard, A; Matrosic, C; Zagzebski, J; Bednarz, B

    2016-01-01

    Purpose: To develop an advanced testbed that combines a 3D motion stage and ultrasound phantom to optimize and validate 2D and 3D tracking algorithms for real-time motion management during radiation therapy. Methods: A Siemens S2000 Ultrasound scanner utilizing a 9L4 transducer was coupled with the Washington University 4D Phantom to simulate patient motion. The transducer was securely fastened to the 3D stage and positioned to image three cylinders of varying contrast in a Gammex 404GS LE phantom. The transducer was placed within a water bath above the phantom in order to maintain sufficient coupling for the entire range of simulated motion. A programmed motion sequence was used to move the transducer during image acquisition and a cine video was acquired for one minute to allow for long sequence tracking. Images were analyzed using a normalized cross-correlation block matching tracking algorithm and compared to the known motion of the transducer relative to the phantom. Results: The setup produced stable ultrasound motion traces consistent with those programmed into the 3D motion stage. The acquired ultrasound images showed minimal artifacts and an image quality that was more than suitable for tracking algorithm verification. Comparisons of a block matching tracking algorithm with the known motion trace for the three features resulted in an average tracking error of 0.59 mm. Conclusion: The high accuracy and programmability of the 4D phantom allows for the acquisition of ultrasound motion sequences that are highly customizable; allowing for focused analysis of some common pitfalls of tracking algorithms such as partial feature occlusion or feature disappearance, among others. The design can easily be modified to adapt to any probe such that the process can be extended to 3D acquisition. Further development of an anatomy specific phantom better resembling true anatomical landmarks could lead to an even more robust validation. 
This work is partially funded by NIH

  13. Alterations of the tunica vasculosa lentis in the rat model of retinopathy of prematurity.

    Science.gov (United States)

    Favazza, Tara L; Tanimoto, Naoyuki; Munro, Robert J; Beck, Susanne C; Garcia Garrido, Marina; Seide, Christina; Sothilingam, Vithiyanjali; Hansen, Ronald M; Fulton, Anne B; Seeliger, Mathias W; Akula, James D

    2013-08-01

    To study the relationship between retinal and tunica vasculosa lentis (TVL) disease in retinopathy of prematurity (ROP). Although the clinical hallmark of ROP is abnormal retinal blood vessels, the vessels of the anterior segment, including the TVL, are also altered. ROP was induced in Long-Evans pigmented and Sprague Dawley albino rats; room-air-reared (RAR) rats served as controls. Then, fluorescein angiographic images of the TVL and retinal vessels were serially obtained with a scanning laser ophthalmoscope near the height of retinal vascular disease, ~20 days of age, and again at 30 and 64 days of age. Additionally, electroretinograms (ERGs) were obtained prior to the first imaging session. The TVL images were analyzed for percent coverage of the posterior lens. The tortuosity of the retinal arterioles was determined using Retinal Image multiScale Analysis (Gelman et al. in Invest Ophthalmol Vis Sci 46:4734-4738, 2005). In the youngest ROP rats, the TVL was dense, while in RAR rats, it was relatively sparse. By 30 days, the TVL in RAR rats had almost fully regressed, while in ROP rats, it was still pronounced. By the final test age, the TVL had completely regressed in both ROP and RAR rats. In parallel, the tortuous retinal arterioles in ROP rats resolved with increasing age. ERG components indicating postreceptoral dysfunction, the b-wave, and oscillatory potentials were attenuated in ROP rats. These findings underscore the retinal vascular abnormalities and, for the first time, show abnormal anterior segment vasculature in the rat model of ROP. There is delayed regression of the TVL in the rat model of ROP. This demonstrates that ROP is a disease of the whole eye.

  14. Development of a Web-Based L-THIA 2012 Direct Runoff and Pollutant Auto-Calibration Module Using a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Chunhwa Jang

    2013-11-01

    Full Text Available The Long-Term Hydrology Impact Assessment (L-THIA) model has been used as a screening evaluation tool in assessing not only urbanization, but also land-use changes on hydrology in many countries. However, L-THIA has limitations due to the number of available land-use data that can represent a watershed and the land surface complexity, causing uncertainties in manually calibrating various input parameters of L-THIA. Thus, we modified the L-THIA model so that it could use various (twenty-three) land-use categories by considering various hydrologic responses and nonpoint source (NPS) pollutant loads. Then, we developed a web-based auto-calibration module by integrating a Genetic Algorithm (GA) into L-THIA 2012 that can automatically calibrate Curve Numbers (CNs) for direct runoff estimations. Based on the optimized CNs and Event Mean Concentrations (EMCs), our approach calibrated surface runoff and NPS pollution loads by minimizing the differences between the observed and simulated data. Here, we used default EMCs of biochemical oxygen demand (BOD), total nitrogen (TN), and total phosphorus (TP) collected at various local regions in South Korea, corresponding to the classifications of different rainfall intensities and land use, for improving predicted NPS pollutions. For assessing the model performance, the Yeoju-Gun and Icheon-Si sites in South Korea were selected. The calibrated runoff and NPS (BOD, TN, and TP) pollutions matched the observations with the correlation (R2: 0.908 for runoff; R2: 0.882–0.981 for NPS) and Nash-Sutcliffe Efficiency (NSE: 0.794 for runoff; NSE: 0.882–0.981 for NPS) for the sites. We also compared the NPS pollution differences between the calibrated and averaged (default) EMCs. The calibrated TN and TP (only for Yeoju-Gun) EMC-based pollution loads identified well with the measured data at the study sites, but the BOD loads with the averaged EMCs were slightly better than
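The GA-based CN auto-calibration can be sketched for a single curve number: simulate direct runoff with the SCS relation and evolve CN to minimize the squared error against observed events. The GA operators, their parameters, and the synthetic events are all illustrative; the real module calibrates one CN per land-use category:

```python
import random

def scs_runoff(p_mm, cn):
    # SCS Curve Number direct runoff (mm), the relation L-THIA uses:
    # S = 25400/CN - 254 (mm), Ia = 0.2 S, Q = (P - Ia)^2 / (P + 0.8 S).
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s) if p_mm > ia else 0.0

def calibrate_cn(events, pop=30, gens=60, seed=1):
    # A bare-bones genetic algorithm over a single CN gene: tournament
    # selection, blend crossover, Gaussian mutation, bounds [30, 98].
    rng = random.Random(seed)
    def fitness(cn):
        return -sum((q - scs_runoff(p, cn)) ** 2 for p, q in events)
    genes = [rng.uniform(30.0, 98.0) for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a = max(rng.sample(genes, 3), key=fitness)   # tournament pick
            b = max(rng.sample(genes, 3), key=fitness)
            child = 0.5 * (a + b) + rng.gauss(0.0, 1.5)  # crossover + mutation
            nxt.append(min(98.0, max(30.0, child)))
        genes = nxt
    return max(genes, key=fitness)

# Synthetic observed rainfall/runoff events generated with CN = 75.
true_cn = 75.0
events = [(p, scs_runoff(p, true_cn)) for p in (20.0, 40.0, 60.0, 90.0)]
cn_hat = calibrate_cn(events)
print(round(cn_hat, 1))  # converges near 75
```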

  15. Efficient sequential and parallel algorithms for finding edit distance based motifs.

    Science.gov (United States)

    Pal, Soumitra; Xiao, Peng; Rajasekaran, Sanguthevar

    2016-08-18

    Motif search is an important step in extracting meaningful patterns from biological data. The general problem of motif search is intractable and there is a pressing need to develop efficient, exact and approximation algorithms to solve this problem. In this paper, we present several novel, exact, sequential and parallel algorithms for solving the (l,d) Edit-distance-based Motif Search (EMS) problem: given two integers l,d and n biological strings, find all strings of length l that appear in each input string with at most d errors of types substitution, insertion and deletion. One popular technique to solve the problem is to explore for each input string the set of all possible l-mers that belong to the d-neighborhood of any substring of the input string and output those which are common to all input strings. We introduce a novel and provably efficient neighborhood exploration technique. We show that it is enough to consider the candidates in the neighborhood which are at a distance exactly d. We compactly represent these candidate motifs using wildcard characters and efficiently explore them with very few repetitions. Our sequential algorithm uses a trie-based data structure to efficiently store and sort the candidate motifs. Our parallel algorithm in a multi-core shared memory setting uses arrays for storing and a novel modification of radix sort for sorting the candidate motifs. The algorithms for EMS are customarily evaluated on several challenging instances such as (8,1), (12,2), (16,3), (20,4), and so on. The best previously known algorithm, EMS1, is sequential and in an estimated 3 days solves up to instance (16,3). Our sequential algorithms are more than 20 times faster on (16,3). On other hard instances such as (9,2), (11,3), (13,4), our algorithms are much faster. Our parallel algorithm has more than 600% scaling performance while using 16 threads.
Our algorithms have pushed up the state-of-the-art of EMS solvers and we believe that the techniques introduced in
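For very small instances, the (l, d) EMS problem can be solved by brute force, which makes the problem statement concrete; the clever neighborhood exploration of the paper is precisely what avoids this exponential enumeration:

```python
from itertools import product

def min_edits(pattern, text):
    # Semi-global alignment: minimum edit distance between `pattern`
    # and any substring of `text` (free start and end in the text).
    prev = [0] * (len(text) + 1)            # row 0: match may start anywhere
    for i, pc in enumerate(pattern, 1):
        curr = [i]
        for j, tc in enumerate(text, 1):
            curr.append(min(prev[j] + 1,                # skip pattern char
                            curr[j - 1] + 1,            # skip text char
                            prev[j - 1] + (pc != tc)))  # match / substitution
        prev = curr
    return min(prev)                        # match may end anywhere

def ems_brute_force(strings, l, d, alphabet="ACGT"):
    # Brute-force (l, d) EMS: report every l-mer within edit distance d
    # of some substring of *every* input string. Exponential in l, so
    # only usable on toy instances.
    return sorted(
        "".join(m) for m in product(alphabet, repeat=l)
        if all(min_edits("".join(m), s) <= d for s in strings)
    )

motifs = ems_brute_force(["ACGTT", "CCGTA", "TACGG"], l=3, d=1)
print(motifs[:5], len(motifs))
```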

  16. Effects of MCF2L2, ADIPOQ and SOX2 genetic polymorphisms on the development of nephropathy in type 1 Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Gu Harvest F

    2010-07-01

    Full Text Available Abstract Background MCF2L2, ADIPOQ and SOX2 genes are located in chromosome 3q26-27, which is linked to diabetic nephropathy (DN). ADIPOQ and SOX2 genetic polymorphisms are found to be associated with DN. In the present study, we first investigated the association between MCF2L2 and DN, and then evaluated the effects of these three genes on the development of DN. Methods A total of 1177 type 1 diabetes patients with and without DN from the GoKinD study were genotyped with TaqMan allelic discrimination. All subjects were of European descent. Results The Leu359Ile T/G variant in the MCF2L2 gene was found to be associated with DN in female subjects (P = 0.017, OR = 0.701, 95% CI 0.524-0.938) but not in males. The GG genotype carriers among female patients with DN tended to have decreased creatinine and cystatin levels compared to carriers of either the TT or TG genotype. This polymorphism, MCF2L2-rs7639705, together with the SNPs ADIPOQ-rs266729 and SOX2-rs11915160, had combined effects on decreased risk of DN in females (P = 0.001). Conclusion The present study provides evidence that MCF2L2, ADIPOQ and SOX2 genetic polymorphisms have effects on the resistance to DN in female T1D patients, and suggests that the linkage with DN in chromosome 3q may be explained by the cumulative genetic effects.

  17. Mechanism for the decrease in the FIP1L1-PDGFRalpha protein level in EoL-1 cells by histone deacetylase inhibitors.

    Science.gov (United States)

    Ishihara, Kenji; Kaneko, Motoko; Kitamura, Hajime; Takahashi, Aki; Hong, Jang Ja; Seyama, Toshio; Iida, Koji; Wada, Hiroshi; Hirasawa, Noriyasu; Ohuchi, Kazuo

    2008-01-01

    Acetylation and deacetylation of proteins occur in cells in response to various stimuli, and are reversibly catalyzed by histone acetyltransferase and histone deacetylase (HDAC), respectively. EoL-1 cells have an FIP1L1-PDGFRA fusion gene that causes transformation of eosinophilic precursor cells into leukemia cells. The HDAC inhibitors apicidin and n-butyrate suppress the proliferation of EoL-1 cells and induce differentiation into eosinophils by a decrease in the protein level of FIP1L1-PDGFRalpha without affecting the mRNA level for FIP1L1-PDGFRA. In this study, we analyzed the mechanism by which the protein level of FIP1L1-PDGFRalpha is decreased by apicidin and n-butyrate. EoL-1 cells were incubated in the presence of the HDAC inhibitors apicidin, trichostatin A or n-butyrate. The protein levels of FIP1L1-PDGFRalpha and phosphorylated eIF-2alpha were determined by Western blotting. Actinomycin D and cycloheximide were used to block RNA synthesis and protein synthesis, respectively, in the chasing experiment of the amount of FIP1L1-PDGFRalpha protein. When apicidin- and n-butyrate-treated EoL-1 cells were incubated in the presence of actinomycin D, the decrease in the protein level of FIP1L1-PDGFRalpha was significantly enhanced when compared with controls. In contrast, the protein levels were not changed by cycloheximide among these groups. Apicidin and n-butyrate induced the continuous phosphorylation of eIF-2alpha for up to 8 days. The decrease in the level of FIP1L1-PDGFRalpha protein by continuous inhibition of HDAC may be due to the decrease in the translation rate of FIP1L1-PDGFRA. Copyright 2008 S. Karger AG, Basel.

  18. Final guidance document for extended Level 2 PSA Volume 1. Summary report for external hazards implementation in extended L2 PSA, validation of SAMG strategy and complement of ASAMPSA2 L2PSA guidance

    International Nuclear Information System (INIS)

    Loeffler, H.; Raimond, E.

    2016-01-01

    The present document is a summary of the deliverables produced within the ASAMPSA-E project for extended L2 PSA. These deliverables are: D30.7 vol. 2, 'Implementing external events modelling in Level 2 PSA'; D30.7 vol. 3, 'Verification and improvement of SAM strategy'; D30.7 vol. 4, 'Consideration of shutdown states, spent fuel pools and recent R and D results'. Among many others, the following summary statements are provided: Analyses of external events: - No need for new methodology. - It is necessary to develop the L1 PSA first, and then clearly defined boundary conditions for the L2 PSA must be generated. - The remaining challenge is how to address adverse environmental conditions due to external hazards. Multi-unit issues: - No practical methodology exists to treat the problem. - A new methodology needs to be developed first for the L1 PSA. This should, from the beginning, take into account the specific needs of L2 PSA so that the boundary conditions for the subsequent Level 2 analysis can be generated adequately. SAM strategy verification and improvement: - L2 PSA methodology can usefully be applied, and experience exists for internal initiating events L2 PSA. - How to address adverse environmental conditions due to external hazards - needs new methodology or examples of experience. - How to model the decision process when there is a conflict of interest - needs new methodology or examples of experience. For L2 PSA in shutdown states with an open RPV, some new technical issues (fission product release, thermal load to structures above the RPV) have to be addressed. Spent fuel pool issues have been developed, in particular: - Heat load from the melting spent fuel to structures above (e.g. to the containment roof) is a severe challenge for the plant, and at present a methodology is missing. Recent R and D achievements with relevance for L2 PSA: - Basic research has been continued in the radiochemistry (iodine and ruthenium chemistry) field, but the existing

  19. Corrosion inhibition of iron in 0.5 mol L-1 H2SO4 by halide ions

    Directory of Open Access Journals (Sweden)

    Jeyaprabha C.

    2006-01-01

    Full Text Available The inhibition effect of halide ions such as iodide, bromide and chloride ions on the corrosion of iron in 0.5 mol L-1 H2SO4 and the adsorption behaviour of these ions on the electrode surface have been studied by polarization and impedance methods. It has been found that inhibition of nearly 90% is observed for iodide ions at 2.5 × 10-3 mol L-1 and for bromide ions at 10 × 10-3 mol L-1, and 80% for chloride ions at 2.5 × 10-3 mol L-1. The inhibition effect increased with increasing halide ion concentration in the case of I- and Br- ions, whereas it decreased in the case of Cl- ions at concentrations higher than 5 × 10-3 mol L-1. The double layer capacitance values decreased considerably in the presence of halide ions, which indicates that these anions are adsorbed on iron at the corrosion potential.

  20. Bog bilberry (Vaccinium uliginosum L.) extract reduces cultured Hep-G2, Caco-2, and 3T3-L1 cell viability, affects cell cycle progression, and has variable effects on membrane permeability.

    Science.gov (United States)

    Liu, Jia; Zhang, Wei; Jing, Hao; Popovich, David G

    2010-04-01

    Bog bilberry (Vaccinium uliginosum L.) is a blue-pigmented edible berry related to bilberry (Vaccinium myrtillus L.) and the common blueberry (Vaccinium corymbosum). The objective of this study was to investigate the effect of a bog bilberry anthocyanin extract (BBAE) on cell growth, membrane permeability, and cell cycle of 2 malignant cancer cell lines, Caco-2 and Hep-G2, and a nonmalignant murine 3T3-L1 cell line. BBAE contained 3 identified anthocyanins. The most abundant anthocyanin was cyanidin-3-glucoside (140.9 +/- 2.6 microg/mg of dry weight), followed by malvidin-3-glucoside (10.3 +/- 0.3 microg/mg) and malvidin-3-galactoside (8.1 +/- 0.4 microg/mg). The Hep-G2 LC50 was calculated to be 0.563 +/- 0.04 mg/mL, the Caco-2 LC50 was 0.390 +/- 0.30 mg/mL, and that of 3T3-L1 cells was 0.214 +/- 0.02 mg/mL. LDH release, a marker of membrane permeability, was significantly increased in Hep-G2 cells and Caco-2 cells after 48 and 72 h compared to 24 h. The increase was 21% at 48 h and 57% at 72 h in Caco-2 cells, and 66% and 139% in Hep-G2 cells, compared to 24 h. However, 3T3-L1 cells showed an unexpected, significantly lower LDH activity (P < or = 0.05) after 72 h of exposure, corresponding to a 21% reduction in LDH release. BBAE treatment increased sub-G1 in all 3 cell lines without influencing cells in the G2/M phase. BBAE treatment reduced the growth and increased the accumulation of sub-G1 cells in 2 malignant and 1 nonmalignant cell line; however, the effect on membrane permeability differs considerably between the malignant and nonmalignant cells and may in part be due to differences in cellular membrane composition.

  1. Sensitivity computation of the ℓ1 minimization problem and its application to dictionary design of ill-posed problems

    Science.gov (United States)

    Horesh, L.; Haber, E.

    2009-09-01

    The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, a sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
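
    The sparse-representation step that such dictionary methods build on is ordinarily an ℓ1-regularized least-squares solve. The sketch below shows one classical solver for it, iterative shrinkage-thresholding (ISTA); the random dictionary A, data b, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ista(A, b, lam=0.1, iters=3000):
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    A generic sparse-coding solver, not the paper's dictionary-design method."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))       # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x

# Illustrative use: recover a 3-sparse code in a random 20x50 dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 0.8]
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
```

    The soft-threshold step is what produces sparsity; a small lam keeps the data fit tight while still zeroing most coefficients.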

  2. On the Role of L1 Markedness and L2 Input Robustness in Determining Potentially Fossilizable Language Forms in Iranian EFL Learners' Writing

    Science.gov (United States)

    Nushi, Musa

    2016-01-01

    Han's (2009, 2013) selective fossilization hypothesis (SFH) claims that L1 markedness and L2 input robustness determine the fossilizability (and learnability) of an L2 feature. To test the validity of the model, a pseudo-longitudinal study was designed in which the errors in the argumentative essays of 52 Iranian EFL learners were identified and…

  3. From Enumerating to Generating: A Linear Time Algorithm for Generating 2D Lattice Paths with a Given Number of Turns

    Directory of Open Access Journals (Sweden)

    Ting Kuo

    2015-05-01

    Full Text Available We propose a linear time algorithm, called G2DLP, for generating 2D lattice L(n1, n2) paths, equivalent to two-item multiset permutations, with a given number of turns. The term 'turn' has three meanings: in the context of multiset permutations, it means that two consecutive elements of a permutation belong to two different items; in lattice path enumeration, it means that the path changes its direction, either from eastward to northward or from northward to eastward; in open shop scheduling, it means that we transfer a job from one type of machine to another. The strategy of G2DLP is divide-and-combine; the division is based on the enumeration results of a previous study and is achieved with the aid of an integer partition algorithm and a multiset permutation algorithm; the combination is accomplished by a concatenation algorithm that constructs the paths we require. The advantage of G2DLP is twofold. First, it is optimal in the sense that it directly generates all feasible paths without visiting an infeasible one. Second, it can generate all paths in any specified order of turns, for example, a decreasing order or an increasing order. In practice, two applications, scheduling and cryptography, are discussed.
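
    To make the notion of 'turns' concrete, here is a brute-force reference generator (not the linear-time G2DLP itself): it enumerates all E/N multiset permutations from (0,0) to (n1, n2) and keeps those with exactly k direction changes. The function name and the tiny example are illustrative.

```python
from itertools import permutations

def paths_with_turns(n1, n2, k):
    """All distinct eastward/northward lattice paths from (0,0) to (n1,n2)
    with exactly k turns. Brute force over multiset permutations, so only
    practical for small n1+n2; G2DLP produces the same set in linear time
    per path without visiting infeasible candidates."""
    paths = set(permutations('E' * n1 + 'N' * n2))
    return sorted(''.join(p) for p in paths
                  if sum(a != b for a, b in zip(p, p[1:])) == k)

# For a 2x2 lattice there are 6 paths in total; exactly two have a single turn.
print(paths_with_turns(2, 2, 1))  # ['EENN', 'NNEE']
```

    Note that a monotone path from (0,0) to (n1, n2) can have at most 2*min(n1, n2) + 1 runs, which bounds the feasible turn counts.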

  4. Fabrication of a new samarium(III) ion-selective electrode based on 3-{[2-oxo-1(2H)-acenaphthylenyliden]amino}-2-thioxo-1,3-thiazolidin-4-one

    Energy Technology Data Exchange (ETDEWEB)

    Zamani, Hassan Ali [Islamic Azad University, Quchan (Iran, islamic Republic of). Quchan Branch. Dept. of Chemistry]. E-mail: haszamani@yahoo.com; Ganjali, Mohammad Reza [Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of). Endocrine and Metabolism Research Center; Adib, Mehdi [University of Tehran, Tehran (Iran, Islamic Republic of). Faculty of Chemistry. Center of Excellence in Electrochemistry

    2007-07-01

    This paper introduces the development of an original PVC membrane electrode, based on 3-{[2-oxo-1(2H)-acenaphthylenyliden]amino}-2-thioxo-1,3-thiazolidin-4-one (ATTO), which has revealed to be a suitable carrier for Sm3+ ions. The resulting data illustrate that the electrode shows a Nernstian slope of 19.3 ± 0.6 mV per decade for Sm3+ ions over a broad working concentration range of 1.0 × 10-6 to 1.0 × 10-1 mol L-1. The lower detection limit was found to be equal to (5.5 ± 0.3) × 10-7 mol L-1 in the pH range of 3.5-7.5, and the response time was very short (~10 s). The potentiometric sensor displayed good selectivities for a number of cations such as alkali, alkaline earth, transition and heavy metal ions. (author)

  5. Lexical knowledge of Serbian L1 English L2 learners: Reception vs. production

    Directory of Open Access Journals (Sweden)

    Danilović-Jeremić Jelena

    2015-01-01

    Full Text Available The acquisition of lexical knowledge in a second/foreign language is often investigated by means of vocabulary size tests which assess two aspects of the learners' competence: reception and production. Estimates of these two dimensions, as well as the (potential) gap between them, have important pedagogical implications in that they indicate the degree to which the learners can comprehend or use the language autonomously. Therefore, the aim of this paper is to explore the vocabulary size of three generations of B2-level (CEFR) L2 learners, first-year students majoring in English at the Faculty of Philology and Arts in Kragujevac, Serbia, by means of Vocabulary Levels Tests (Laufer & Nation, 1999; Nation, 1990). The results of the statistical analyses show that the receptive vocabulary of Serbian L2 learners is much more developed than their productive vocabulary, and that the gap between lexical production and reception changes depending on the frequency of the lexemes and the proficiency level of L2 learners. The findings imply that, at the primary and secondary levels of education, more attention should be paid to the development of productive lexical knowledge, which is crucial not only for success in English degree courses but for communication in English in general.

  6. FROM ENGLISH AS L1 TO PORTUGUESE AS L3 THROUGH SPANISH AS L2: TRANSFERS IN VERB REGENCY/TRANSITIVITY, WITH SPECIAL EMPHASIS ON PREPOSITIONS

    OpenAIRE

    RENATA DE OLIVEIRA RAZUK

    2008-01-01

    From English as L1 to Portuguese as L3 through Spanish as L2: transfers in verb regency/transitivity, with special emphasis on prepositions moves through themes that have been little explored, not only because of the combination of languages adopted, but also because of the specificity of the phenomenon analyzed and the grammatical topic chosen, thereby contributing to the development of an extremely recent and promising research area: third language acquisition. Studies in L3 acquisition are still...

  7. Transport and activation of S-(1,2-dichlorovinyl)-L-cysteine and N-acetyl-S-(1,2-dichlorovinyl)-L-cysteine in rat kidney proximal tubules

    International Nuclear Information System (INIS)

    Zhang, G.H.; Stevens, J.L.

    1989-01-01

    An important step in understanding the mechanism underlying the tubular specificity of the nephrotoxicity of toxic cysteine conjugates is to identify the rate-limiting steps in their activation. The rate-limiting steps in the activation of toxic cysteine conjugates were characterized using isolated proximal tubules from the rat and 35S-labeled S-(1,2-dichlorovinyl)-L-cysteine (DCVC) and N-acetyl-S-(1,2-dichlorovinyl)-L-cysteine (NAC-DCVC) as model compounds. The accumulation by tubules of 35S radiolabel from both DCVC and NAC-DCVC was time and temperature dependent and was mediated by both Na+-dependent and Na+-independent processes. Kinetic studies with DCVC in the presence of sodium revealed the presence of two components with apparent Km and Vmax values of (1) 46 microM and 0.21 nmol/mg.min and (2) 2080 microM and 7.3 nmol/mg.min. NAC-DCVC uptake was via a single system with apparent Km and Vmax values of 157 microM and 0.65 nmol/mg.min, respectively. Probenecid, an inhibitor of the renal organic anion transport system, inhibited accumulation of radiolabel from NAC-DCVC, but not from DCVC. The covalent binding of 35S label to cellular macromolecules was much greater from [35S]DCVC than from NAC-[35S]DCVC. Analysis of metabolites showed that a substantial amount of the cellular NAC-[35S]DCVC was unmetabolized, while [35S]DCVC was rapidly metabolized to bound 35S-labeled material and unidentified products. The data suggest that DCVC is rapidly metabolized following transport, but that activation of NAC-DCVC depends on a slower rate of deacetylation. The results are discussed with regard to the segment specificity of cysteine conjugate toxicity and the role of disposition in vivo in the nephrotoxicity of glutathione conjugates.

  8. User's Manual for the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA)

    Science.gov (United States)

    Gnoffo, Peter A.; Cheatwood, F. McNeil

    1996-01-01

    This user's manual provides detailed instructions for the installation and application of version 4.1 of the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA). LAURA provides simulation of flow fields in thermochemical nonequilibrium around vehicles traveling at hypersonic velocities through the atmosphere. Earlier versions of LAURA were predominantly research codes, and they had minimal (or no) documentation. This manual describes UNIX-based utilities for customizing the code for special applications that also minimize system resource requirements. The algorithm is reviewed, and the various program options are related to specific equations and variables in the theoretical development.

  9. Multivalent human papillomavirus l1 DNA vaccination utilizing electroporation.

    Directory of Open Access Journals (Sweden)

    Kihyuck Kwak

    Full Text Available Naked DNA vaccines can be manufactured simply and are stable at ambient temperature, but require improved delivery technologies to boost immunogenicity. Here we explore in vivo electroporation for multivalent codon-optimized human papillomavirus (HPV) L1 and L2 DNA vaccination. Balb/c mice were vaccinated three times at two week intervals with a fusion protein comprising L2 residues ∼11-88 of 8 different HPV types (11-88×8) or its DNA expression vector, DNA constructs expressing L1 only or L1+L2 of a single HPV type, or a mixture of several high-risk HPV types, administered utilizing electroporation, i.m. injection or gene gun. Serum was collected two weeks and 3 months after the last vaccination. Sera from immunized mice were tested for in-vitro neutralization titer, and protective efficacy upon passive transfer to naive mice and vaginal HPV challenge. Heterotypic interactions between L1 proteins of HPV6, HPV16 and HPV18 in 293TT cells were tested by co-precipitation using type-specific monoclonal antibodies. Electroporation with L2 multimer DNA did not elicit detectable antibody titer, whereas DNA expressing L1 or L1+L2 induced L1-specific, type-restricted neutralizing antibodies, with titers approaching those induced by Gardasil. Co-expression of L2 neither augmented L1-specific responses nor induced L2-specific antibodies. Delivery of HPV L1 DNA via in vivo electroporation produces a stronger antibody response compared to i.m. injection or i.d. ballistic delivery via gene gun. Reduced neutralizing antibody titers were observed for certain types when vaccinating with a mixture of L1 (or L1+L2) vectors of multiple HPV types, likely resulting from the heterotypic L1 interactions observed in co-immunoprecipitation studies. High titers were restored by vaccinating with individual constructs at different sites, or partially recovered by co-expression of L2, such that durable protective antibody titers were achieved for each type.

  10. GPS 2.1: enhanced prediction of kinase-specific phosphorylation sites with an algorithm of motif length selection.

    Science.gov (United States)

    Xue, Yu; Liu, Zexian; Cao, Jun; Ma, Qian; Gao, Xinjiao; Wang, Qingqi; Jin, Changjiang; Zhou, Yanhong; Wen, Longping; Ren, Jian

    2011-03-01

    As the most important post-translational modification of proteins, phosphorylation plays essential roles in all aspects of biological processes. Besides experimental approaches, computational prediction of phosphorylated proteins with their kinase-specific phosphorylation sites has also emerged as a popular strategy, for its low cost, speed and convenience. In this work, we developed a kinase-specific phosphorylation site predictor, GPS 2.1 (Group-based Prediction System), with a novel but simple approach of motif length selection (MLS). With this approach, the robustness of the prediction system was greatly improved. All algorithms in older GPS versions were also retained and integrated in GPS 2.1. The online service and local packages of GPS 2.1 were implemented in JAVA 1.5 (J2SE 5.0) and are freely available for academic research at: http://gps.biocuckoo.org.

  11. Canonical Primal-Dual Method for Solving Non-convex Minimization Problems

    OpenAIRE

    Wu, Changzhi; Li, Chaojie; Gao, David Yang

    2012-01-01

    A new primal-dual algorithm is presented for solving a class of non-convex minimization problems. This algorithm is based on canonical duality theory, such that the original non-convex minimization problem is first reformulated as a convex-concave saddle point optimization problem, which is then solved by a quadratically perturbed primal-dual method. Numerical examples are illustrated. Comparing...

  12. Reversing multidrug resistance in Caco-2 by silencing MDR1, MRP1, MRP2, and BCL-2/BCL-xL using liposomal antisense oligonucleotides.

    Directory of Open Access Journals (Sweden)

    Yu-Li Lo

    Full Text Available Multidrug resistance (MDR) is a major impediment to chemotherapy. In the present study, we designed antisense oligonucleotides (ASOs) against MDR1, MDR-associated protein 1 (MRP1), MRP2, and/or BCL-2/BCL-xL to reverse MDR transporters and induce apoptosis, respectively. Cationic liposomes (100 nm) composed of an N-[1-(2,3-dioleyloxy)propyl]-N,N,N-trimethylammonium chloride and dioleoyl phosphatidylethanolamine core surrounded by a polyethylene glycol (PEG) shell were prepared to carry ASOs and/or epirubicin, an antineoplastic agent. We aimed to simultaneously suppress efflux pumps, provoke apoptosis, and enhance the chemosensitivity of human colon adenocarcinoma Caco-2 cells to epirubicin. We evaluated the encapsulation efficiency, particle size, cytotoxicity, intracellular accumulation, mRNA levels, cell cycle distribution, and caspase activity of these formulations. We found that PEGylated liposomal ASOs significantly reduced Caco-2 cell viability and thus intensified epirubicin-mediated apoptosis. These formulations also decreased MDR1 promoter activity levels and enhanced the intracellular retention of epirubicin in Caco-2 cells. Epirubicin and ASOs in PEGylated liposomes remarkably decreased the mRNA expression levels of human MDR1, MRP1, MRP2, and BCL-2. The combined treatments all significantly increased the mRNA expression of p53 and BAX, and the activity levels of caspase-3, -8, and -9. The formulation of epirubicin and ASOs targeting both the pump resistance of MDR1, MRP1, and MRP2 and the nonpump resistance of BCL-2/BCL-xL was superior to all the other formulations used in this study. Our results provide a novel insight into the mechanisms by which PEGylated liposomal ASOs against both resistance types act as activators of epirubicin-induced apoptosis through suppressing MDR1, MRP1, and MRP2, as well as triggering the intrinsic mitochondrial and extrinsic death receptor pathways.
The complicated regulation of MDR highlights the necessity

  13. Evaluation of a novel tool for bone graft delivery in minimally invasive transforaminal lumbar interbody fusion

    Directory of Open Access Journals (Sweden)

    Kleiner JB

    2016-05-01

    Full Text Available Jeffrey B Kleiner, Hannah M Kleiner, E John Grimberg Jr, Stefanie J Throlson The Spine Center of Innovation, The Medical Center of Aurora, Aurora, CO, USA Study design: Disk material removed (DMR) during L4-5 and L5-S1 transforaminal lumbar interbody fusion (T-LIF) surgery was compared to the corresponding bone graft (BG) volumes inserted at the time of fusion. A novel BG delivery tool (BGDT) was used to apply the BG. In order to establish the percentage of DMR during T-LIF, it was compared to DMR during anterior diskectomy (AD). This study was performed prospectively. Summary of background data: Minimal information is available as to the volume of DMR during a T-LIF procedure, and the relationship between DMR and the BG delivered is unknown. BG insertion has been empiric and technically challenging. Since the volume of BG applied to the prepared disk space likely impacts the probability of arthrodesis, an investigation is justified. Methods: A total of 65 patients with pathology at L4-5 and/or L5-S1 necessitating fusion were treated with a minimally invasive T-LIF procedure. DMR was volumetrically measured during disk space preparation. BG material consisting of local autograft, BG extender, and bone marrow aspirate was mixed to form a slurry. The BG slurry was injected into the disk space using a novel BGDT and measured volumetrically. An additional 29 patients who were treated with L5-S1 AD were compared to L5-S1 T-LIF DMR to determine the percent of T-LIF DMR relative to AD. Results: DMR volumes averaged 3.6±2.2 mL. This represented 34% of the disk space relative to AD. The amount of BG delivered to the disk spaces was 9.3±3.2 mL, which is 2.6±2.2 times the amount of DMR. The BGDT allowed uncomplicated filling of the disk space in <1 minute. Conclusion: The volume of DMR during T-LIF allows for a predictable volume of BG delivery. The BGDT allowed complete filling of the entire prepared disk space. The T-LIF diskectomy debrides 34% of the disk

  14. Minimizing the effects of oxygen interference on l-lactate sensors by a single amino acid mutation in Aerococcus viridans l-lactate oxidase.

    Science.gov (United States)

    Hiraka, Kentaro; Kojima, Katsuhiro; Lin, Chi-En; Tsugawa, Wakako; Asano, Ryutaro; La Belle, Jeffrey T; Sode, Koji

    2018-04-30

    l-lactate biosensors employing l-lactate oxidase (LOx) have been developed mainly to measure l-lactate concentration for clinical diagnostics, sports medicine, and the food industry. Some l-lactate biosensors employ artificial electron mediators, but these can negatively impact the detection of l-lactate by competing with the primary electron acceptor: molecular oxygen. In this paper, a strategic approach to engineering an AvLOx that minimizes the effects of oxygen interference on sensor strips is reported. First, we predicted an oxygen access pathway in Aerococcus viridans LOx (AvLOx) based on its crystal structure. This was subsequently blocked by a bulky amino acid substitution. The resulting Ala96Leu mutant showed a drastic reduction in oxidase activity using molecular oxygen as the electron acceptor and a small increase in dehydrogenase activity employing an artificial electron acceptor. Secondly, the Ala96Leu mutant was immobilized on a screen-printed carbon electrode using a glutaraldehyde cross-linking method. Amperometric analysis was performed with potassium ferricyanide as an electron mediator under argon or atmospheric conditions. Under argon conditions, the response current increased linearly from 0.05 to 0.5 mM l-lactate for both wild-type and Ala96Leu. However, under atmospheric conditions, the response of the wild-type AvLOx electrode was suppressed by 9-12% due to oxygen interference. The Ala96Leu mutant maintained 56-69% of the response current at the same l-lactate levels and reduced the relative bias error to -19% from the -49% of wild-type. This study provided significant insight into the enzymatic reaction mechanism of AvLOx and presented a novel approach to minimize oxygen interference in sensor applications, which will enable accurate detection of l-lactate concentrations. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Electrode Potentials of l-Tryptophan, l-Tyrosine, 3-Nitro-l-tyrosine, 2,3-Difluoro-l-tyrosine, and 2,3,5-Trifluoro-l-tyrosine.

    Science.gov (United States)

    Mahmoudi, Leila; Kissner, Reinhard; Nauser, Thomas; Koppenol, Willem H

    2016-05-24

    Electrode potentials for aromatic amino acid radical/amino acid couples were deduced from cyclic voltammograms and pulse radiolysis experiments. The amino acids investigated were l-tryptophan, l-tyrosine, N-acetyl-l-tyrosine methyl ester, N-acetyl-3-nitro-l-tyrosine ethyl ester, N-acetyl-2,3-difluoro-l-tyrosine methyl ester, and N-acetyl-2,3,5-trifluoro-l-tyrosine methyl ester. Conditional potentials were determined at pH 7.4 for all compounds listed; furthermore, Pourbaix diagrams for l-tryptophan, l-tyrosine, and N-acetyl-3-nitro-l-tyrosine ethyl ester were obtained. Electron transfer accompanied by proton transfer is reversible, as confirmed by detailed analysis of the current waves, and because the slopes of the Pourbaix diagrams obey Nernst's law. E°'(Trp(•),H(+)/TrpH) and E°'(TyrO(•),H(+)/TyrOH) at pH 7 are 0.99 ± 0.01 and 0.97 ± 0.01 V, respectively. Pulse radiolysis studies of two dipeptides that contain both amino acids indicate a difference in E°' of approximately 0.06 V. Thus, in small peptides, we recommend values of 1.00 and 0.96 V for E°'(Trp(•),H(+)/TrpH) and E°'(TyrO(•),H(+)/TyrOH), respectively. The electrode potential of N-acetyl-3-nitro-l-tyrosine ethyl ester is higher, while because of mesomeric stabilization of the radical, those of N-acetyl-2,3-difluoro-l-tyrosine methyl ester and N-acetyl-2,3,5-trifluoro-l-tyrosine methyl ester are lower than that of tyrosine. Given that the electrode potentials at pH 7 of E°'(Trp(•),H(+)/TrpH) and E°'(TyrO(•),H(+)/TyrOH) are nearly equal, they would be, in principle, interchangeable. Proton-coupled electron transfer pathways in proteins that use TrpH and TyrOH are thus nearly thermoneutral.

  16. Minimal residual HIV viremia: verification of the Abbott Real-Time HIV-1 assay sensitivity

    Directory of Open Access Journals (Sweden)

    Alessandra Amendola

    2010-06-01

    Full Text Available Introduction: In HIV-1 infection, the increase in the number of CD4 T lymphocytes and the decline in viral load are the main indicators of the effectiveness of antiretroviral therapy. On average, 85% of patients receiving effective treatment have persistent suppression of plasma viral load below the detection limit (<50 copies/mL of clinically used viral load assays, regardless of the treatment regimen in use. It is known, however, that even when viremia is reduced below the sensitivity limit of current diagnostic assays, the virus persists in "reservoirs" and traces of free virions can be detected in plasma. There is considerable interest in investigating the clinical significance of residual viremia. Advances in molecular diagnostics nowadays allow a wide dynamic range to be coupled with high sensitivity. The Abbott Real-Time HIV-1 test is linear from 40 to 10^7 copies/mL and provides, below 40 copies/mL, additional information such as "<40 cp/mL, target detected" or "target not detected". HIV-1 detection is verified by the max-Ratio algorithm software. We assessed the test sensitivity when the qualitative response is considered as well. Methods: A 'probit' analysis was performed using dilutions of the HIV-1 RNA Working Reagent 1 for NAT assays (NIBSC code: 99/634), defined in IU/mL and different from that used by the manufacturer (VQA, Virology Quality Assurance Laboratory of the AIDS Clinical Trial Group) for standardization and definition of performance. The sample input volume (0.6 mL) was the same as used in clinical routine. A total of 196 replicates at concentrations decreasing from 120 to 5 copies/mL, in three different sessions, have been tested. The 'probit' analysis (binomial dose-response model, 95% "hit-rate") was carried out with the SAS 9.1.3 software package. Results: The sensitivity of the "<40 cp/mL, target detected" response was equal to 28.76 copies/mL, with 95% confidence limits between 22.19 and 52.27 copies/mL
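
    The 'probit' step above fits a cumulative-normal dose-response curve to per-dilution hit rates and reads off the concentration detected 95% of the time. The sketch below reproduces that idea on made-up dilution data with a deliberately crude grid-search maximum-likelihood fit; it is not the SAS procedure, and the counts are not the assay's actual data.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def probit_fit(conc, n_pos, n_tot):
    """Maximum-likelihood probit fit of P(detect) = Phi(a + b*log10(c)),
    by coarse grid search over (a, b). Illustrative only."""
    x = np.log10(conc)
    best, best_ll = (0.0, 1.0), -np.inf
    for a in np.linspace(-10, 10, 201):
        for b in np.linspace(0.1, 10, 100):
            p = np.clip([norm_cdf(a + b * xi) for xi in x], 1e-9, 1 - 1e-9)
            ll = float(np.sum(n_pos * np.log(p) + (n_tot - n_pos) * np.log(1 - p)))
            if ll > best_ll:
                best_ll, best = ll, (a, b)
    return best

def c95(a, b):
    """Concentration detected with 95% probability: Phi(a + b*log10(c)) = 0.95."""
    return 10 ** ((1.6449 - a) / b)   # 1.6449 = Phi^-1(0.95)

# Hypothetical dilution series: hits out of 30 replicates per concentration.
conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 120.0])
n_pos = np.array([3.0, 9.0, 18.0, 25.0, 29.0, 30.0])
n_tot = np.full(6, 30.0)
a, b = probit_fit(conc, n_pos, n_tot)
```

    In practice one would use a proper optimizer and report confidence limits on the 95% hit-rate concentration, as the abstract does.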

  17. Appearance of a Minimal Length in $e^+ e^-$ Annihilation

    CERN Document Server

    Dymnikova, Irina; Ulbricht, Jürgen

    2014-01-01

    Experimental data reveal with a 5-sigma significance the existence of a characteristic minimal length l_e = 1.57 × 10^-17 cm at the scale E = 1.253 TeV in the annihilation reaction e+e- → γγ(γ). Nonlinear electrodynamics coupled to gravity and satisfying the weak energy condition predicts, for an arbitrary gauge invariant Lagrangian, the existence of a spinning charged electromagnetic soliton, asymptotically Kerr-Newman for a distant observer, with the gyromagnetic ratio g = 2. Its internal structure includes a rotating equatorial disk of de Sitter vacuum which has the properties of a perfect conductor and ideal diamagnetic, displays superconducting behavior, supplies the particle with a finite positive electromagnetic mass related to breaking of space-time symmetry, and gives some idea about the physical origin of a minimal length in annihilation.

  18. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    International Nuclear Information System (INIS)

    Cieri, D.

    2016-01-01

    At the HL-LHC, proton bunches will collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger, able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new track trigger will be to build L1 tracks from the tracker information. The architecture that will be implemented in future to process tracker data is still under discussion. One possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and is currently being demonstrated in hardware, using the “MP7”, a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough transform technique are currently under investigation: one utilizes a systolic array to represent the Hough space, while the other exploits a pipelined approach. (paper)
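
    As a toy illustration of the Hough-transform idea (not the CMS firmware or its track parameterization): each hit votes for every (slope, intercept) cell consistent with it, and hits lying on a common line pile their votes into one cell. All geometry and binning choices below are invented for the example.

```python
import numpy as np

def hough_best_line(points, m_range=(-2.0, 2.0), c_range=(-5.0, 5.0), bins=100):
    """Vote each hit (x, y) into a (slope, intercept) accumulator and return
    the winning cell. Toy line-finder in the spirit of a Hough-transform trigger."""
    acc = np.zeros((bins, bins), dtype=int)
    ms = np.linspace(m_range[0], m_range[1], bins)
    cs = np.linspace(c_range[0], c_range[1], bins)
    for x, y in points:
        for i, m in enumerate(ms):
            c = y - m * x                        # intercept consistent with this hit
            j = int(round((c - c_range[0]) / (c_range[1] - c_range[0]) * (bins - 1)))
            if 0 <= j < bins:
                acc[i, j] += 1
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return ms[i], cs[j], int(acc[i, j])

# Six hits on y = 0.5x + 1 plus three noise hits.
hits = [(float(x), 0.5 * x + 1.0) for x in range(6)] + [(0.0, -3.0), (2.0, 4.0), (4.0, -1.0)]
m, c, votes = hough_best_line(hits)
```

    The hardware implementations mentioned above differ mainly in how this accumulator is laid out and filled (systolic array versus pipeline), not in the voting principle itself.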

  19. The Quest for Minimal Quotients for Probabilistic Automata

    DEFF Research Database (Denmark)

    Eisentraut, Christian; Hermanns, Holger; Schuster, Johann

    2013-01-01

    One of the prevailing ideas in applied concurrency theory and verification is the concept of automata minimization with respect to strong or weak bisimilarity. The minimal automata can be seen as canonical representations of the behaviour modulo the bisimilarity considered. Together with congruence results wrt. process algebraic operators, this can be exploited to alleviate the notorious state space explosion problem. In this paper, we aim at identifying minimal automata and canonical representations for concurrent probabilistic models. We present minimality and canonicity results for probabilistic automata wrt. strong and weak bisimilarity, together with polynomial time minimization algorithms.
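
    A standard way to compute such minimal quotients for strong bisimilarity is partition refinement: blocks of states are repeatedly split until every block is closed under the labelled-transition signature. The sketch below is a naive version for plain labelled transition systems, not the probabilistic automata of the paper; efficient implementations use Paige-Tarjan-style splitter selection.

```python
def bisim_quotient(states, trans):
    """Coarsest partition of `states` respecting `trans` (a set of (s, a, t)
    triples): the quotient classes under strong bisimilarity. Naive refinement
    loop, recomputing every state's signature each round."""
    part = [set(states)]
    while True:
        index = {s: i for i, block in enumerate(part) for s in block}
        new = []
        for block in part:
            groups = {}
            for s in block:
                # Signature: which actions lead into which current blocks.
                sig = frozenset((a, index[t]) for (u, a, t) in trans if u == s)
                groups.setdefault(sig, set()).add(s)
            new.extend(groups.values())
        if len(new) == len(part):      # no block was split: fixed point reached
            return new
        part = new

# Two branches that mirror each other collapse into the same classes:
# states 2 and 3 both deadlock, so 0 and 1 both do 'a' into the same class.
classes = bisim_quotient([0, 1, 2, 3], {(0, 'a', 2), (1, 'a', 3)})
```

    Termination follows because each round either splits a block or stops, and the number of blocks is bounded by the number of states.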

  20. Optimal Allocation of Renewable Energy Sources for Energy Loss Minimization

    Directory of Open Access Journals (Sweden)

    Vaiju Kalkhambkar

    2017-03-01

    Full Text Available Optimal allocation of renewable distributed generation (RDG), i.e., solar and wind, in a distribution system becomes challenging due to intermittent generation and uncertainty of loads. This paper proposes an optimal allocation methodology for single and hybrid RDGs for energy loss minimization. The deterministic generation-load model integrated with optimal power flow provides optimal solutions for single and hybrid RDG. Considering the complexity of the proposed nonlinear, constrained optimization problem, it is solved by a robust and high-performance meta-heuristic, the Symbiotic Organisms Search (SOS) algorithm. Results obtained from the SOS algorithm offer better solutions than the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and the Firefly Algorithm (FFA). Economic analysis is carried out to quantify the economic benefits of energy loss minimization over the life span of RDGs.
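
    The loss-minimization objective can be illustrated on a toy radial feeder, where placing generation near load reduces the power (and hence the quadratic I²R loss) carried by upstream segments; siting then becomes a search over candidate buses. Everything below (the per-unit model with current proportional to power, and exhaustive search standing in for the SOS meta-heuristic) is a simplifying assumption for illustration.

```python
def feeder_loss(loads, r, dg_bus=0, dg_p=0.0):
    """Quadratic (I^2*R-style) loss on a radial feeder in per-unit terms,
    treating segment current as proportional to the power it carries.
    loads[i] is the demand at bus i+1, r[i] the resistance of segment i;
    a DG of output dg_p at bus dg_bus offsets flow on all upstream segments."""
    loss = 0.0
    for seg in range(len(loads)):
        flow = sum(loads[seg:])        # power flowing through this segment
        if seg < dg_bus:               # segment is upstream of the DG
            flow -= dg_p
        loss += r[seg] * flow ** 2
    return loss

def best_site(loads, r, dg_p):
    """Exhaustive search over buses -- a stand-in for the SOS meta-heuristic."""
    return min(range(1, len(loads) + 1),
               key=lambda bus: feeder_loss(loads, r, dg_bus=bus, dg_p=dg_p))

# Uniform feeder: siting 1 p.u. of DG at the far end cuts losses from 30 to 14 p.u.
loads, r = [1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0]
site = best_site(loads, r, dg_p=1.0)
```

    Real allocation studies replace this with a full power-flow model, add voltage and capacity constraints, and search jointly over size and site, which is where meta-heuristics such as SOS earn their keep.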