WorldWideScience

Sample records for regular prn-ideal percentage

  1. PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...

    African Journals Online (AJOL)

    HOD

    … their low power requirements, are relatively cheap and are environmentally friendly. … PREDICTED PERCENTAGE DISSATISFIED MODEL EVALUATION OF EVAPORATIVE COOLING … The performance of direct evaporative coolers is a…

  2. Adaptive regularization

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.

    1994-01-01

    Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient desce...
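
The weight-decay idea this record refers to can be illustrated in a few lines. The sketch below shows plain gradient descent with an L2 penalty; it is an illustrative toy under invented data, not the authors' asymptotic sampling estimator.

```python
import numpy as np

# Illustrative sketch of weight decay (L2 regularization) in plain
# gradient descent -- not the cited iterative estimation scheme.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=50)

def fit(alpha, steps=2000, lr=0.01):
    """Linear model trained with weight-decay strength `alpha`."""
    w = np.zeros(5)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + alpha * w  # data grad + decay
        w -= lr * grad
    return w

# Stronger decay shrinks the weight vector toward zero.
print(np.linalg.norm(fit(0.0)) > np.linalg.norm(fit(10.0)))  # True
```

Larger decay parameters trade training fit for smaller weight vectors; the cited work tunes that trade-off iteratively rather than fixing it by hand.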

  3. Percentage Retail Mark-Ups

    OpenAIRE

    Thomas von Ungern-Sternberg

    1999-01-01

    A common assumption in the literature on the double marginalization problem is that the retailer can set his mark-up only in the second stage of the game after the producer has moved. To the extent that the sequence of moves is designed to reflect the relative bargaining power of the two parties it is just as plausible to let the retailer move first. Furthermore, retailers frequently calculate their selling prices by adding a percentage mark-up to their wholesale prices. This allows a retaile...

  4. UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA

    Directory of Open Access Journals (Sweden)

    IONIŢĂ Elena

    2015-06-01

    Full Text Available This paper presents the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygonal faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. The modeling and unfolding of the Platonic and Archimedean polyhedra is carried out with the 3ds Max program. This paper is intended as an example of applications of descriptive geometry.

  5. Coordinate-invariant regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-01-01

    A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc

  6. Percentage Energy from Fat Screener: Overview

    Science.gov (United States)

    A short assessment instrument to estimate an individual's usual intake of percentage energy from fat. The foods asked about on the instrument were selected because they were the most important predictors of variability in percentage energy.

  7. Solving Problems with the Percentage Bar

    Science.gov (United States)

    van Galen, Frans; van Eerde, Dolly

    2013-01-01

    At the end of primary school all children more or less know what a percentage is, yet they often struggle with percentage problems. This article describes a study in which students of 13 and 14 years old were given a written test with percentage problems and a week later were interviewed about the way they solved some of these problems. In a…

  8. Making Sense of Fractions and Percentages

    Science.gov (United States)

    Whitin, David J.; Whitin, Phyllis

    2012-01-01

    Because fractions and percentages can be difficult for children to grasp, connecting them whenever possible is beneficial. Linking them can foster representational fluency as children simultaneously see the part-whole relationship expressed numerically (as a fraction and as a percentage) and visually (as a pie chart). NCTM advocates these…

  9. Percentage of Fast-Track Receipts

    Data.gov (United States)

    Social Security Administration — The dataset provides the percentage of fast-track receipts by state during the reporting fiscal year. Fast-tracked cases consist of those cases identified as Quick...

  10. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN' [Brouwer, A.E., Cohen, A.M., Neumaier,…
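
Distance-regularity has a concrete, checkable definition: for every pair of vertices v, w at distance d, the number of neighbours of w at distance d-1, d, and d+1 from v must depend only on d. The sketch below verifies this for the Petersen graph, a standard example; the code is illustrative, not from the survey.

```python
from collections import deque

def distances_from(adj, v):
    """BFS distances from v in an undirected graph given as adjacency sets."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for x in adj[u]:
            if x not in dist:
                dist[x] = dist[u] + 1
                q.append(x)
    return dist

def intersection_numbers(adj):
    """Return {d: (c_d, a_d, b_d)} if distance-regular, else None."""
    params = {}
    for v in adj:
        dist = distances_from(adj, v)
        for w in adj:
            d = dist[w]
            c = sum(dist[x] == d - 1 for x in adj[w])
            a = sum(dist[x] == d for x in adj[w])
            b = sum(dist[x] == d + 1 for x in adj[w])
            if d in params and params[d] != (c, a, b):
                return None  # counts differ for equal d: not distance-regular
            params[d] = (c, a, b)
    return params

# Petersen graph: outer 5-cycle, inner pentagram, five spokes.
edges = [(i, (i + 1) % 5) for i in range(5)]           # outer cycle
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]  # inner pentagram
edges += [(i, i + 5) for i in range(5)]                # spokes
adj = {v: set() for v in range(10)}
for u, w in edges:
    adj[u].add(w)
    adj[w].add(u)

print(intersection_numbers(adj))
# {0: (0, 0, 3), 1: (1, 0, 2), 2: (1, 2, 0)} -> intersection array {3,2;1,1}
```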

  11. LL-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    1980-01-01

    Culik II and Cogen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular

  12. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp…
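
As a flavour of the syntax such a reference documents, here is a short example using Python's standard `re` module (the log string and pattern are invented for illustration):

```python
import re

# Character classes, quantifiers, and capture groups in one pattern:
# pull the level and date out of each ERROR entry in a log line.
log = "ERROR 2023-04-01 disk full; WARN 2023-04-02 retry; ERROR 2023-04-03 oom"

pattern = re.compile(r"(ERROR)\s+(\d{4}-\d{2}-\d{2})")
print(pattern.findall(log))
# [('ERROR', '2023-04-01'), ('ERROR', '2023-04-03')]
```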

  13. Maximizing percentage depletion in solid minerals

    International Nuclear Information System (INIS)

    Tripp, J.; Grove, H.D.; McGrath, M.

    1982-01-01

    This article develops a strategy for maximizing percentage depletion deductions when extracting uranium or other solid minerals. The goal is to avoid losing percentage depletion deductions by staying below the 50% limitation on taxable income from the property. The article is divided into two major sections. The first section is comprised of depletion calculations that illustrate the problem and corresponding solutions. The last section deals with the feasibility of applying the strategy and complying with the Internal Revenue Code and appropriate regulations. Three separate strategies or appropriate situations are developed and illustrated. 13 references, 3 figures, 7 tables
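
The 50% limitation the article works around can be shown with hypothetical figures; both the amounts and the 22% rate below are illustrative assumptions, not tax guidance.

```python
# Hypothetical figures illustrating the 50%-of-taxable-income ceiling
# on percentage depletion (all numbers, including the rate, are invented).
gross_income = 1_000_000     # gross income from the property
depletion_rate = 0.22        # assumed statutory percentage for the mineral
taxable_income = 300_000     # taxable income from the property

tentative = gross_income * depletion_rate   # ~220,000 tentative deduction
limit = 0.50 * taxable_income               # 150,000 ceiling
allowed = min(tentative, limit)
print(allowed)  # 150000.0
```

In this example the ceiling, not the statutory rate, determines the deduction; the article's strategy aims to keep taxable income high enough that the tentative deduction survives intact.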

  14. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well-known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization…

  15. Regular expressions cookbook

    CERN Document Server

    Goyvaerts, Jan

    2009-01-01

    This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a

  16. The Language of Comparisons: Communicating about Percentages

    Directory of Open Access Journals (Sweden)

    Jessica Polito

    2014-01-01

    Full Text Available While comparisons between percentages or rates appear frequently in journalism and advertising, and are an essential component of quantitative writing, many students fail to understand precisely what percentages mean, and lack fluency with the language used for comparisons. After reviewing evidence demonstrating this weakness, this experience-based perspective lays out a framework for teaching the language of comparisons in a structured way, and illustrates it with several authentic examples that exemplify mistaken or misleading uses of such numbers. The framework includes three common types of erroneous or misleading quantitative writing: the missing comparison, where a key number is omitted; the apples-to-pineapples comparison, where two subtly incomparable rates are presented; and the implied fallacy, where an invalid quantitative conclusion is left to the reader to infer.
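
A frequent instance of the language problem this article addresses is the confusion between percentage points and percent change; a minimal illustration:

```python
# A rise from 10% to 15% is a change of 5 percentage points,
# but a 50% relative increase -- two very different claims.
old_rate, new_rate = 10, 15                               # rates, in percent

point_change = new_rate - old_rate                        # percentage points
relative_change = 100 * (new_rate - old_rate) / old_rate  # percent

print(point_change)     # 5
print(relative_change)  # 50.0
```

Headlines reporting the second number where readers expect the first are a classic example of the misleading comparisons the framework targets.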

  17. Percentage compensation arrangements: suspect, but not illegal.

    Science.gov (United States)

    Fedor, F P

    2001-01-01

    Percentage compensation arrangements, in which a service is outsourced to a contractor that is paid in accordance with the level of its performance, are widely used in many business sectors. The HHS Office of Inspector General (OIG) has shown concern that these arrangements in the healthcare industry may offer incentives for the performance of unnecessary services or cause false claims to be made to Federal healthcare programs in violation of the antikickback statute and the False Claims Act. Percentage compensation arrangements can work and need not run afoul of the law as long as the healthcare organization carefully oversees the arrangement and sets specific safeguards in place. These safeguards include screening contractors, carefully evaluating their compliance programs, and obligating them contractually to perform within the limits of the law.

  18. Regularities of Multifractal Measures

    Indian Academy of Sciences (India)

    First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R R d . This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...

  19. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  20. Uncertainties in pipeline water percentage measurement

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Bentley N.

    2005-07-01

    Measurement of the quantity, density, average temperature and water percentage in petroleum pipelines has been an issue of prime importance. The methods of measurement have been investigated and have seen continued improvement over the years. Questions are being asked as to the reliability of the measurement of water in the oil through sampling systems originally designed and tested for a narrow range of densities. Today most facilities' sampling systems handle vastly increased ranges of density and types of crude oils. Issues of pipeline integrity, product loss and production balances are placing further demands on the issues of accurate measurement. Water percentage is one area that has not received the attention necessary to understand the many factors involved in making a reliable measurement. A previous paper [1] discussed the issues of uncertainty of the measurement from a statistical perspective. This paper will outline many of the issues of where the errors lie in the manual and automatic methods in use today. A routine to use the data collected by the analyzers in the on-line system for validation of the measurements will be described. (author) (tk)

  1. Predicted percentage dissatisfied with ankle draft.

    Science.gov (United States)

    Liu, S; Schiavon, S; Kabanshi, A; Nazaroff, W W

    2017-07-01

    Draft is unwanted local convective cooling. The draft risk model of Fanger et al. (Energy and Buildings 12, 21-39, 1988) estimates the percentage of people dissatisfied with air movement due to overcooling at the neck. There is no model for predicting draft at ankles, which is more relevant to stratified air distribution systems such as underfloor air distribution (UFAD) and displacement ventilation (DV). We developed a model for predicted percentage dissatisfied with ankle draft (PPD_AD) based on laboratory experiments with 110 college students. We assessed the effect on ankle draft of various combinations of air speed (nominal range: 0.1-0.6 m/s), temperature (nominal range: 16.5-22.5°C), turbulence intensity (at ankles), sex, and clothing insulation. Thermal sensation and air speed at ankles are the dominant parameters affecting draft. The seated subjects accepted a vertical temperature difference of up to 8°C between ankles (0.1 m) and head (1.1 m) at neutral whole-body thermal sensation, 5°C more than the maximum difference recommended in existing standards. The developed ankle draft model can be implemented in thermal comfort and air diffuser testing standards. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin

    2014-01-01

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse…

  3. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: Containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We…

  4. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.

    1980-01-01

    There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) First do all algebra exactly as in D = 4; (2) Then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher order loops are also discussed

  5. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  6. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
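
The robustness argument in the two records above can be made concrete: a correntropy-style loss applies a Gaussian kernel to the error, so a wild label saturates instead of growing quadratically. This is a qualitative sketch of the criterion, not the paper's alternating optimization algorithm.

```python
import numpy as np

# Correntropy-induced loss: a Gaussian kernel on the error saturates
# near 1 for outliers, unlike the unbounded squared loss.
def corr_loss(err, sigma=1.0):
    return 1.0 - np.exp(-err**2 / (2 * sigma**2))

errors = np.array([0.1, 1.0, 10.0])
print(corr_loss(errors))   # bounded: rises toward 1 for the outlier
print(errors**2)           # squared loss: the outlier dominates
```

Because each sample's contribution is bounded, a few noisy or outlying labels cannot dominate the training objective, which is the behaviour the paper exploits.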

  7. Manifold Regularized Reinforcement Learning.

    Science.gov (United States)

    Li, Hongliang; Liu, Derong; Wang, Ding

    2018-04-01

    This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.

  8. Diverse Regular Employees and Non-regular Employment (Japanese)

    OpenAIRE

    MORISHIMA Motohiro

    2011-01-01

    Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...

  9. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.

  10. 'Regular' and 'emergency' repair

    International Nuclear Information System (INIS)

    Luchnik, N.V.

    1975-01-01

    Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)

  11. Regularization of divergent integrals

    OpenAIRE

    Felder, Giovanni; Kazhdan, David

    2016-01-01

    We study the Hadamard finite part of divergent integrals of differential forms with singularities on submanifolds. We give formulae for the dependence of the finite part on the choice of regularization and express them in terms of a suitable local residue map. The cases where the submanifold is a complex hypersurface in a complex manifold and where it is a boundary component of a manifold with boundary, arising in string perturbation theory, are treated in more detail.

  12. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
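
The role of the L2 regularizer described above can be sketched with a toy minimum-variance portfolio: with only a budget constraint, the regularized solution has the closed form w ∝ (Σ + λI)⁻¹1. The data below is synthetic, and the expected-shortfall and support-vector-regression connections are beyond this sketch.

```python
import numpy as np

# Toy L2-regularized minimum-variance portfolio on synthetic returns.
rng = np.random.default_rng(1)
returns = rng.normal(0.001, 0.02, size=(60, 8))   # 60 days, 8 assets
cov = np.cov(returns, rowvar=False)

def weights(cov, lam):
    """Budget-constrained minimum-variance weights: w ~ (cov + lam*I)^-1 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov + lam * np.eye(cov.shape[0]), ones)
    return w / w.sum()

w_raw = weights(cov, lam=0.0)
w_reg = weights(cov, lam=0.05)
# The penalty pulls the noisy estimate toward the diversified 1/N
# portfolio, stabilizing it against estimation error.
print(np.abs(w_reg - 1 / 8).sum() < np.abs(w_raw - 1 / 8).sum())  # True
```

This is the "diversification pressure" the abstract mentions: the regularizer damps the sample-noise-driven corners of the unregularized solution.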

  13. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  14. Regular Single Valued Neutrosophic Hypergraphs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Malik

    2016-12-01

    Full Text Available In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.

  15. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations

  16. Annotation of Regular Polysemy

    DEFF Research Database (Denmark)

    Martinez Alonso, Hector

    Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words… and metonymic. We have conducted an analysis in English, Danish and Spanish. Later on, we have tried to replicate the human judgments by means of unsupervised and semi-supervised sense prediction. The automatic sense-prediction systems have been unable to find empiric evidence for the underspecified sense, even…

  17. Regularity of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

    2010-01-01

    "Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t

  18. Regularities of radiation heredity

    International Nuclear Information System (INIS)

    Skakov, M.K.; Melikhov, V.D.

    2001-01-01

    Regularities of radiation heredity in metals and alloys are analyzed, and it is concluded that irradiation produces thermodynamically irreversible changes in the structure of materials. Possible ways in which radiation effects are inherited through high-temperature transformations in the materials are proposed. The phenomenon of radiation heredity may be put to practical use to control the structure of liquid metal and, correspondingly, the structure of the ingot, via preliminary radiation treatment of the charge. Concentration microheterogeneities in the material defect structure induced by preliminary irradiation represent the genetic factor of radiation heredity [ru]

  19. Effective field theory dimensional regularization

    International Nuclear Information System (INIS)

    Lehmann, Dirk; Prezeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed

  20. Effective field theory dimensional regularization

    Science.gov (United States)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.

  1. 75 FR 76006 - Regular Meeting

    Science.gov (United States)

    2010-12-07

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...

  2. Selection of regularization parameter for l1-regularized damage detection

    Science.gov (United States)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
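
The discrepancy principle described in this abstract can be sketched on a toy inverse problem. For simplicity the sketch uses ridge (l2) rather than l1 regularization, and scans a grid for the parameter whose residual variance best matches the known noise variance; the matrix, sparsity pattern, and noise level are all invented.

```python
import numpy as np

# Toy discrepancy-principle sketch (ridge stands in for l1 here).
rng = np.random.default_rng(2)
A = rng.normal(size=(100, 20))
x_true = np.zeros(20)
x_true[[3, 7]] = [1.0, -2.0]           # sparse "damage" vector
sigma = 0.1                            # known measurement-noise std
y = A @ x_true + sigma * rng.normal(size=100)

def residual_var(lam):
    """Mean squared residual of the ridge solution for parameter lam."""
    x = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ y)
    r = y - A @ x
    return r @ r / len(y)

# Pick the parameter whose residual variance is closest to sigma**2,
# i.e., the fit explains everything except the noise.
grid = np.logspace(-3, 3, 61)
lam = min(grid, key=lambda l: abs(residual_var(l) - sigma**2))
print(lam, residual_var(lam))
```

Scanning a grid mirrors the paper's observation that a range of acceptable parameters, rather than a single value, is typically identified.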

  3. Ensemble manifold regularization.

    Science.gov (United States)

    Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng

    2012-06-01

    We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Further problems stem from the suboptimality incurred by discrete grid search and from overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so that it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
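To make the idea concrete, here is a small numerical sketch (an illustration under assumed settings, not the authors' implementation): several kNN-graph Laplacians serve as the "initial guesses" of the intrinsic manifold, and the algorithm alternates between solving a Laplacian-regularized labeling problem on the composite graph and re-weighting the candidate Laplacians on the simplex with an l2 penalty. The constants gamma and beta and the toy two-cluster data are invented.

```python
import numpy as np

def knn_laplacian(X, k):
    """Unnormalized graph Laplacian of the symmetrized kNN graph."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:   # skip the point itself
            W[i, j] = W[j, i] = 1.0
    return np.diag(W.sum(1)) - W

rng = np.random.default_rng(1)
# two well-separated clusters; only one label is known per cluster
X = np.vstack([rng.normal(0, .3, (20, 2)), rng.normal(3, .3, (20, 2))])
y = np.zeros(40)
y[0], y[20] = -1.0, 1.0
labeled = np.zeros(40)
labeled[[0, 20]] = 1.0
J = np.diag(labeled)

# candidate manifolds: kNN Laplacians for several k ("initial guesses")
Ls = [knn_laplacian(X, k) for k in (3, 5, 8)]
mu = np.full(len(Ls), 1.0 / len(Ls))
gamma, beta = 1.0, 10.0   # manifold weight and mu-penalty (assumed values)

for _ in range(10):
    Lc = sum(m * L for m, L in zip(mu, Ls))          # composite Laplacian
    f = np.linalg.solve(J + gamma * Lc + 1e-6 * np.eye(40), J @ y)
    s = np.array([f @ L @ f for f, L in zip([f] * len(Ls), Ls)])
    # KKT solution of min gamma*mu.s + beta*||mu||^2 on the simplex,
    # followed by clipping and renormalization
    mu = np.maximum(1.0 / len(Ls) + gamma * (s.mean() - s) / (2 * beta), 0.0)
    mu /= mu.sum()

print(np.sign(f))
```

The labeling f propagates the two anchors across each cluster, while mu stays a convex combination over the candidate graphs.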

  4. AC Own Motion Percentage of Randomly Sampled Cases

    Data.gov (United States)

    Social Security Administration — Longitudinal report detailing the numbers and percentages of Appeals Council (AC) own motion review actions taken on un-appealed favorable hearing level decisions...

  5. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...
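A toy version of the scheme, with ridge (weight-decay) regression standing in for the neural network and a finite-difference gradient standing in for an analytic one, might look as follows; the data sizes, learning rate, and step count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 60, 20
X = rng.standard_normal((n, p))
w_true = rng.standard_normal(p)
y = X @ w_true + 0.5 * rng.standard_normal(n)
Xt, yt, Xv, yv = X[:40], y[:40], X[40:], y[40:]   # train / validation split

def fit(alpha):
    """Weight-decay (ridge) solution on the training split."""
    return np.linalg.solve(Xt.T @ Xt + alpha * np.eye(p), Xt.T @ yt)

def val_err(alpha):
    """Validation error of the model trained with decay alpha."""
    return np.mean((Xv @ fit(alpha) - yv) ** 2)

# Iteratively adapt log(alpha) by gradient descent on the validation
# error, using a central finite difference in log-space.
log_a, lr, h = 0.0, 0.2, 1e-3
for _ in range(100):
    g = (val_err(np.exp(log_a + h)) - val_err(np.exp(log_a - h))) / (2 * h)
    log_a -= lr * g
alpha = np.exp(log_a)
print(alpha, val_err(alpha))
```

The adapted decay is at least as good on the validation set as the initial guess alpha = 1.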

  6. 26 CFR 1.613-1 - Percentage depletion; general rule.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Percentage depletion; general rule. 1.613-1... TAX (CONTINUED) INCOME TAXES (CONTINUED) Natural Resources § 1.613-1 Percentage depletion; general rule. (a) In general. In the case of a taxpayer computing the deduction for depletion under section 611...

  7. Determination of percentage of caffeine content in some analgesic ...

    African Journals Online (AJOL)

    Two methods were employed for the determination of the percentage caffeine content in three brands of analgesic tablets: extraction using only water as a solvent, and extraction using both water and chloroform as solvents. A watch glass was used as the weighing apparatus, and the percentage of caffeine ...

  8. 78 FR 48789 - Loan Guaranty: Percentage to Determine Net Value

    Science.gov (United States)

    2013-08-09

    ... DEPARTMENT OF VETERANS AFFAIRS Loan Guaranty: Percentage to Determine Net Value AGENCY: Department... mortgage holders in the Department of Veterans Affairs (VA) loan guaranty program concerning the percentage to be used in calculating the purchase price of a property that secured a terminated loan. The new...

  9. 7 CFR 982.41 - Free and restricted percentages.

    Science.gov (United States)

    2010-01-01

    ... percentages in effect at the end of the previous marketing year shall be applicable. [51 FR 29548, Aug. 19... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing... WASHINGTON Order Regulating Handling Marketing Policy § 982.41 Free and restricted percentages. The free and...

  10. Determination of percentage of caffeine content

    African Journals Online (AJOL)

    ABSTRACT. Two methods were employed for the determination of the percentage caffeine content in three brands of analgesic tablets: extraction using only water as a solvent, and extraction using both water and chloroform as solvents. A watch glass was used as the weighing apparatus. The percentages of caffeine using only water in ..., Boska, and Panadol Extra were 7.40%, 5.60... The percentage of caffeine using both water and chloroform i...

  11. 78 FR 33757 - Rural Determination and Financing Percentage

    Science.gov (United States)

    2013-06-05

    ... Agency for determining what percentage of a project is eligible for RUS financing if the Rural Percentage... defined as rural. As the Agency investigates financing options for projects owned by entities other than... inability to fund 100 percent of the financing needs of a given project has undermined the Agency's effort...

  12. 75 FR 53966 - Regular Meeting

    Science.gov (United States)

    2010-09-02

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...

  13. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  14. Nursing Home - Pain - Percentage of Residents Reporting Pain

    Data.gov (United States)

    U.S. Department of Health & Human Services — Adequate pain management is an important indicator of quality of care and quality of life. Nursing home staff should check patients regularly to see if they are...

  15. Continuum-regularized quantum gravity

    International Nuclear Information System (INIS)

    Chan Huesum; Halpern, M.B.

    1987-01-01

    The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)

  16. Dressing percentage and Carcass characteristics of four Indigenous ...

    African Journals Online (AJOL)

    Dressing percentage and Carcass characteristics of four Indigenous cattle breeds in Nigeria. ... Nigerian Journal of Animal Production ... Their feed intake, live and carcasses weights and the weights of their major carcass components and ...

  17. New regular black hole solutions

    International Nuclear Information System (INIS)

    Lemos, Jose P. S.; Zanchin, Vilson T.

    2011-01-01

    In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, the exterior region is Reissner-Nordstroem and there is a charged thin-layer in-between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.

  18. Regular variation on measure chains

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Vitovec, J.

    2010-01-01

    Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475

  19. Manifold Regularized Correlation Object Tracking

    OpenAIRE

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2017-01-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...

  20. On geodesics in low regularity

    Science.gov (United States)

    Sämann, Clemens; Steinbauer, Roland

    2018-02-01

    We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.

  1. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n " setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
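A simplified sketch of the idea (not the paper's closed-form estimator): clip the sample-covariance eigenvalues into an interval [tau, kappa*tau] so that the condition number is capped at kappa, and choose tau by minimizing the Gaussian negative log-likelihood over a grid; the paper instead derives the solution path analytically and selects kappa adaptively. The dimensions and kappa below are invented.

```python
import numpy as np

def cond_reg_cov(S, kappa):
    """Covariance estimate with condition number capped at kappa.

    Eigenvalues of the sample covariance S are clipped into [tau, kappa*tau];
    tau is chosen by a grid search on the Gaussian negative log-likelihood
    (a simple stand-in for the closed-form path in the paper).
    """
    lam, U = np.linalg.eigh(S)
    taus = np.geomspace(max(lam.min(), 1e-8), lam.max(), 200)

    def nll(tau):
        d = np.clip(lam, tau, kappa * tau)
        return np.sum(np.log(d) + lam / d)   # log det + trace term

    tau = min(taus, key=nll)
    d = np.clip(lam, tau, kappa * tau)
    return (U * d) @ U.T                     # U diag(d) U^T

rng = np.random.default_rng(3)
p, n = 30, 20                     # "large p small n": S itself is singular
X = rng.standard_normal((n, p))
S = X.T @ X / n
Sig = cond_reg_cov(S, kappa=10.0)
print(np.linalg.cond(Sig))        # bounded by roughly kappa, and invertible
```

Unlike S, the clipped estimate is positive definite, so it can be inverted, e.g. for portfolio weights.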

  2. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  3. Geometric continuum regularization of quantum field theory

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1989-01-01

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs

  4. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  5. Manifold Regularized Correlation Object Tracking.

    Science.gov (United States)

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2018-05-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.
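The block-circulant device mentioned above is what makes correlation filters fast: ridge regression over all cyclic shifts of a base sample diagonalizes in the Fourier domain. A minimal 1-D sketch with invented settings (not the authors' tracker, which adds the manifold and unlabeled-sample terms on top of this core):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
x = rng.standard_normal(n)                     # base sample (1-D for clarity)
d = np.minimum(np.arange(n), n - np.arange(n)) # circular distance to index 0
y = np.exp(-0.5 * (d / 2.0) ** 2)              # desired response, peak at 0

# Ridge regression over all cyclic shifts of x reduces to an elementwise
# division in the Fourier domain (the block-circulant trick).
xf, yf = np.fft.fft(x), np.fft.fft(y)
lam = 1e-2
alphaf = yf / (np.conj(xf) * xf + lam)

z = np.roll(x, 5)                              # new frame: target shifted by 5
resp = np.real(np.fft.ifft(np.conj(xf) * np.fft.fft(z) * alphaf))
print(int(np.argmax(resp)))                    # response peaks at the shift
```

The response map peaks at the applied shift, which is how the tracker localizes the target in the next frame.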

  6. Dimensional regularization in configuration space

    International Nuclear Information System (INIS)

    Bollini, C.G.; Giambiagi, J.J.

    1995-09-01

    Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum-space Green functions. For this transformation, Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs
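As an illustration of the mechanism (a standard result, not a formula quoted from the paper; identifying the dimension D with the authors' parameter ν is an assumption), the D-dimensional Fourier transform of the massless momentum-space propagator already shows the moderated power-law singularity at the origin:

```latex
\Delta_D(x) \;=\; \int \frac{d^{D}k}{(2\pi)^{D}}\,\frac{e^{\,ik\cdot x}}{k^{2}}
\;=\; \frac{\Gamma\!\left(\frac{D}{2}-1\right)}{4\,\pi^{D/2}}\,
\left(x^{2}\right)^{1-\frac{D}{2}} .
```

For non-integer D such factors are locally integrable, so they can be multiplied and transformed without ill-defined products at x = 0, and the ultraviolet divergences reappear as poles in D.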

  7. Regular algebra and finite machines

    CERN Document Server

    Conway, John Horton

    2012-01-01

    World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regular algebras, context-free languages, commutative regular alg...

  8. Matrix regularization of 4-manifolds

    OpenAIRE

    Trzetrzelewski, M.

    2012-01-01

    We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...

  9. Artificial neural networks for prediction of percentage of water ...

    Indian Academy of Sciences (India)

    ...have high compressive strengths in comparison with concrete specimens ... presenting a suitable model based on artificial neural networks (ANNs) to ... by experimental ones to evaluate the software's power for predicting the ... [Figure 7: Correlation of measured and predicted percentage of water absorption values.]

  10. Quantitative trait locus (QTL) analysis of percentage grains ...

    African Journals Online (AJOL)

    user

    2011-03-28

    Mar 28, 2011 ... ATA/M-CGT; (B) AFLP results using primer E-AAA/M-CTC; (C) AFLP results using primer E-AAA/M-CTA. 1, Minghui63; 2, Zhengshan97A; 3, low PGC bulk; 4, high PGC bulk. The arrows show linkage segments of percentage chalky grain in rice. Table 1. Chromosomal location of AFLP segments linked to ...

  11. 7 CFR 987.44 - Free and restricted percentages.

    Science.gov (United States)

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing... applicable grade and size available to supply the trade demand for free dates of any variety is likely to be... effectuate the declared policy of the act, it shall recommend such percentages to the Secretary. If the...

  12. 75 FR 35098 - Federal Employees' Retirement System; Normal Cost Percentages

    Science.gov (United States)

    2010-06-21

    ... normal cost percentages and requests for actuarial assumptions and data to the Board of Actuaries, care of Gregory Kissel, Actuary, Office of Planning and Policy Analysis, Office of Personnel Management... Regulations, regulates how normal costs are determined. Recently, the Board of Actuaries of the Civil Service...

  13. Artificial neural networks for prediction of percentage of water

    Indian Academy of Sciences (India)

    Artificial neural networks for prediction of percentage of water absorption of geopolymers produced by waste ashes. Ali Nazari. Volume 35 Issue 6 November 2012 pp 1019-1029 ...

  14. Coral Reef Coverage Percentage on Binor Paiton-Probolinggo Seashore

    Directory of Open Access Journals (Sweden)

    Dwi Budi Wiyanto

    2016-01-01

    Full Text Available The coral reef damage in the Probolinggo region is thought to be caused by several factors. The first comes from a society that exploits the fishery using cyanide toxin and bombs. The second is the extraction of coral reef, which is used as decoration or construction material. The other factor is likely the existence of large industry on the seashore, such as the Paiton Electric Steam Power Plant (PLTU) and the like. Related to the development of the coral reef ecosystem, the availability of accurate data is crucially needed to support future policy, so research on the coral reef coverage percentage needs to be conducted continuously. The aim of this research is to collect biological data on the coral reef and to identify the coral reef coverage percentage, in an effort to construct baseline data on coral reef condition on the Binor, Paiton, Probolinggo regency seashore. The method used in this research is the Line Intercept Transect (LIT) method, which determines the benthic community on a coral reef based on growth percentage and records benthic quantities along a transect line. The percentage of living coral coverage at 3 meters depth on the Binor Paiton seashore, which may be categorized as in good condition, is 57.65%, while the rest are dead coral at only 1.45%, other life forms at 23.2%, and non-life forms at 17.7%. The good condition of the coral reef is due to coral reef transplantation on the seashore, so the reef is dominated by Acropora Branching. The Mortality Index (IM) of the coral reef is 24.5%. Observation and calculation show that the reef is dominated by hard coral in the form of Acropora Branching (ACB) with a coverage percentage of 39%, Coral Massive (CM) with 2.85%, Coral Foliose (CF) with 1.6%, and Coral Mushroom (CRM) with 8.5%.
Observation in 10 meters depth
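For reference, the LIT percent-cover calculation used in such surveys reduces to summing intercept lengths per benthic category along the transect tape. The segment lengths below are invented for illustration (category codes follow the abstract: ACB, CM, CF, CRM; DC = dead coral, SA = sand, and OT = other are standard LIT codes, assumed here):

```python
# Percent cover from a Line Intercept Transect (LIT): for each benthic
# category, sum its intercept lengths and divide by the transect length.
segments = [                      # (category code, intercept length in cm)
    ("ACB", 390), ("DC", 15), ("CM", 28), ("OT", 230),
    ("SA", 180), ("CF", 16), ("CRM", 85), ("ACB", 56),
]
transect_cm = 1000                # a 10 m transect tape (illustrative)

cover_cm = {}
for cat, length in segments:
    cover_cm[cat] = cover_cm.get(cat, 0) + length
percent = {cat: 100.0 * l / transect_cm for cat, l in cover_cm.items()}
print(percent)                    # e.g. ACB covers 44.6% of this transect
```

Summed over all categories the percentages account for the whole transect length.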

  15. Coral Reef Coverage Percentage on Binor Paiton-Probolinggo Seashore

    Directory of Open Access Journals (Sweden)

    Dwi Budi Wiyanto

    2016-02-01

    Full Text Available The coral reef damage in the Probolinggo region is thought to be caused by several factors. The first comes from a society that exploits the fishery using cyanide toxin and bombs. The second is the extraction of coral reef, which is used as decoration or construction material. The other factor is likely the existence of large industry on the seashore, such as the Paiton Electric Steam Power Plant (PLTU) and the like. Related to the development of the coral reef ecosystem, the availability of accurate data is crucially needed to support future policy, so research on the coral reef coverage percentage needs to be conducted continuously. The aim of this research is to collect biological data on the coral reef and to identify the coral reef coverage percentage, in an effort to construct baseline data on coral reef condition on the Binor, Paiton, Probolinggo regency seashore. The method used in this research is the Line Intercept Transect (LIT) method, which determines the benthic community on a coral reef based on growth percentage and records benthic quantities along a transect line. The percentage of living coral coverage at 3 meters depth on the Binor Paiton seashore, which may be categorized as in good condition, is 57.65%, while the rest are dead coral at only 1.45%, other life forms at 23.2%, and non-life forms at 17.7%. The good condition of the coral reef is due to coral reef transplantation on the seashore, so the reef is dominated by Acropora Branching. The Mortality Index (IM) of the coral reef is 24.5%. Observation and calculation show that the reef is dominated by hard coral in the form of Acropora Branching (ACB) with a coverage percentage of 39%, Coral Massive (CM) with 2.85%, Coral Foliose (CF) with 1.6%, and Coral Mushroom (CRM) with 8.5%.
Observation in 10 meters depth

  16. Regularization of Nonmonotone Variational Inequalities

    International Nuclear Information System (INIS)

    Konnov, Igor V.; Ali, M.S.S.; Mazurkevich, E.O.

    2006-01-01

    In this paper we extend the Tikhonov-Browder regularization scheme from monotone to a rather general class of nonmonotone multivalued variational inequalities. We show that their convergence conditions hold for some classes of perfectly and nonperfectly competitive economic equilibrium problems.

  17. Lattice regularized chiral perturbation theory

    International Nuclear Information System (INIS)

    Borasoy, Bugra; Lewis, Randy; Ouimet, Pierre-Philippe A.

    2004-01-01

    Chiral perturbation theory can be defined and regularized on a spacetime lattice. A few motivations are discussed here, and an explicit lattice Lagrangian is reviewed. A particular aspect of the connection between lattice chiral perturbation theory and lattice QCD is explored through a study of the Wess-Zumino-Witten term

  18. 76 FR 3629 - Regular Meeting

    Science.gov (United States)

    2011-01-20

    ... Meeting SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held at the offices of the Farm... meeting of the Board will be open to the...

  19. Forcing absoluteness and regularity properties

    NARCIS (Netherlands)

    Ikegami, D.

    2010-01-01

    For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.

  20. Globals of Completely Regular Monoids

    Institute of Scientific and Technical Information of China (English)

    Wu Qian-qian; Gan Ai-ping; Du Xian-kun

    2015-01-01

    An element of a semigroup S is called irreducible if it cannot be expressed as a product of two elements in S both distinct from itself. In this paper we show that the class C of all completely regular monoids with irreducible identity elements satisfies the strong isomorphism property and so it is globally determined.

  1. Fluid queues and regular variation

    NARCIS (Netherlands)

    Boxma, O.J.

    1996-01-01

    This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail gives rise to an even

  2. Fluid queues and regular variation

    NARCIS (Netherlands)

    O.J. Boxma (Onno)

    1996-01-01

    This paper considers a fluid queueing system, fed by $N$ independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index $\zeta$. We show that its fat tail
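For reference, a measurable function $f$ is regularly varying at infinity with index $\zeta$ when

```latex
\lim_{x\to\infty}\frac{f(tx)}{f(x)} \;=\; t^{\zeta}
\qquad\text{for every } t>0 ,
```

equivalently $f(x) = x^{\zeta} L(x)$ with $L$ slowly varying, so $f$ behaves like a power law ("fat tail") up to a slowly varying factor.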

  3. Empirical laws, regularity and necessity

    NARCIS (Netherlands)

    Koningsveld, H.

    1973-01-01

    In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyse's found in contemporary literature dealing with the subject.

    I am referring especially to two well-known views, viz. the regularity and

  4. Interval matrices: Regularity generates singularity

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří; Shary, S.P.

    2018-01-01

    Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords : interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016

  5. Regularization in Matrix Relevance Learning

    NARCIS (Netherlands)

    Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael

    In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can

  6. Regular and conformal regular cores for static and rotating solutions

    Energy Technology Data Exchange (ETDEWEB)

    Azreg-Aïnou, Mustapha

    2014-03-07

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  7. Regular and conformal regular cores for static and rotating solutions

    International Nuclear Information System (INIS)

    Azreg-Aïnou, Mustapha

    2014-01-01

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  8. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties such as invariance with Euclidean transformations or invariance with parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature for planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance with rotation and parameterization.
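As a small illustration of the issue (not code from the paper), the classical second-difference smoothness energy vanishes for a straight line but not for a circle, which is why fits under that stabilizer are biased toward flat curves:

```python
import numpy as np

def bending_energy(pts, closed=True):
    """Classical smoothness energy: squared norm of the discrete second
    derivative along the curve (the stabilizer whose curvature bias the
    paper addresses)."""
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)
    if not closed:
        d2 = d2[1:-1]          # drop the wrap-around differences
    return float((d2 ** 2).sum())

t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]               # unit circle, 100 samples
line = np.c_[np.linspace(0, 1, 100), np.zeros(100)]

# The line has zero energy, the circle a strictly positive one: with this
# stabilizer, noisy circular data is pulled toward flatter (less curved) fits.
print(bending_energy(line, closed=False), bending_energy(circle))
```

A stabilizer for which circles are the smoothest curves, as proposed in the paper, removes exactly this bias.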

  9. Physical model of dimensional regularization

    Energy Technology Data Exchange (ETDEWEB)

    Schonfeld, Jonathan F.

    2016-12-15

    We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)

  10. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

In this paper, a novel pattern classification approach is proposed that regularizes classifier learning to maximize the mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced as much as possible by knowing its classification response. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling the mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  11. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

In this paper, a novel pattern classification approach is proposed that regularizes classifier learning to maximize the mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced as much as possible by knowing its classification response. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling the mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
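
To make the regularizer concrete: the mutual information between (discretized) classifier responses and class labels can be estimated with a plug-in joint histogram. The paper models mutual information via entropy estimation, so the histogram estimator below is only an illustrative stand-in, not the authors' formulation:

```python
import numpy as np

def empirical_mi(responses, labels, bins=10):
    """Plug-in estimate of I(response; label) from a joint histogram,
    the quantity the regularizer seeks to maximize."""
    r_bin = np.digitize(responses, np.histogram_bin_edges(responses, bins))
    joint = {}
    for r, y in zip(r_bin, labels):
        joint[(r, y)] = joint.get((r, y), 0) + 1
    n = len(labels)
    pr, py = {}, {}
    for (r, y), c in joint.items():
        pr[r] = pr.get(r, 0) + c
        py[y] = py.get(y, 0) + c
    mi = 0.0
    for (r, y), c in joint.items():
        # p(r,y) * log( p(r,y) / (p(r) p(y)) ), with counts c, pr, py over n samples
        mi += (c / n) * np.log(c * n / (pr[r] * py[y]))
    return mi

# Responses that track the label carry more information than pure noise:
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
informative = y + 0.1 * rng.standard_normal(500)
noise = rng.standard_normal(500)
print(empirical_mi(informative, y) > empirical_mi(noise, y))  # True
```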

  12. Regularized strings with extrinsic curvature

    International Nuclear Information System (INIS)

    Ambjoern, J.; Durhuus, B.

    1987-07-01

    We analyze models of discretized string theories, where the path integral over world sheet variables is regularized by summing over triangulated surfaces. The inclusion of curvature in the action is a necessity for the scaling of the string tension. We discuss the physical properties of models with extrinsic curvature terms in the action and show that the string tension vanishes at the critical point where the bare extrinsic curvature coupling tends to infinity. Similar results are derived for models with intrinsic curvature. (orig.)

  13. Circuit complexity of regular languages

    Czech Academy of Sciences Publication Activity Database

    Koucký, Michal

    2009-01-01

    Roč. 45, č. 4 (2009), s. 865-879 ISSN 1432-4350 R&D Projects: GA ČR GP201/07/P276; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : regular languages * circuit complexity * upper and lower bounds Subject RIV: BA - General Mathematics Impact factor: 0.726, year: 2009

  14. General inverse problems for regular variation

    DEFF Research Database (Denmark)

    Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan

    2014-01-01

    Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...

  15. Population-Attributable Risk Percentages for Racialized Risk Environments

    Science.gov (United States)

    Arriola, Kimberly Jacob; Haardörfer, Regine; McBride, Colleen M.

    2016-01-01

    Research about relationships between place characteristics and racial/ethnic inequities in health has largely ignored conceptual advances about race and place within the discipline of geography. Research has also almost exclusively quantified these relationships using effect estimates (e.g., odds ratios), statistics that fail to adequately capture the full impact of place characteristics on inequities and thus undermine our ability to translate research into action. We draw on geography to further develop the concept of “racialized risk environments,” and we argue for the routine calculation of race/ethnicity-specific population-attributable risk percentages. PMID:27552263
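
The population-attributable risk percentage the authors advocate reduces, in its simplest form, to Levin's formula from the exposure prevalence and the relative risk; a race/ethnicity-specific version applies the same computation within each stratum. A minimal sketch (variable names are mine):

```python
def par_percent(p_exposed: float, relative_risk: float) -> float:
    """Levin's population-attributable risk percent.

    p_exposed: proportion of the population exposed (0..1).
    relative_risk: risk ratio for exposed vs. unexposed.
    """
    excess = p_exposed * (relative_risk - 1.0)
    return 100.0 * excess / (1.0 + excess)

# Example: 30% of a population exposed to a place characteristic with RR = 2.0
print(round(par_percent(0.30, 2.0), 1))  # 23.1
```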

  16. Regularized Statistical Analysis of Anatomy

    DEFF Research Database (Denmark)

    Sjöstrand, Karl

    2007-01-01

This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer's disease and shape changes of the corpus... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications...

  17. Regularization methods in Banach spaces

    CERN Document Server

    Schuster, Thomas; Hofmann, Bernd; Kazimierski, Kamil S

    2012-01-01

Regularization methods aimed at finding stable approximate solutions are a necessary tool for tackling inverse and ill-posed problems. Usually the mathematical model of an inverse problem consists of an operator equation of the first kind, and often the associated forward operator acts between Hilbert spaces. However, for numerous problems the reasons for using a Hilbert space setting seem to be based on convention rather than on an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, sparsity constraints using general Lp-norms or the B

  18. Academic Training Lecture - Regular Programme

    CERN Multimedia

    PH Department

    2011-01-01

Regular Lecture Programme, from 11:00 to 12:00 at CERN (Bldg. 222-R-001 - Filtration Plant):
    9 May 2011: ACT Lectures on Detectors - Inner Tracking Detectors, by Pippa Wells (CERN)
    10 May 2011: ACT Lectures on Detectors - Calorimeters (2/5), by Philippe Bloch (CERN)
    11 May 2011: ACT Lectures on Detectors - Muon systems (3/5), by Kerstin Hoepfner (RWTH Aachen)
    12 May 2011: ACT Lectures on Detectors - Particle Identification and Forward Detectors, by Peter Krizan (University of Ljubljana and J. Stefan Institute, Ljubljana, Slovenia)
    13 May 2011: ACT Lectures on Detectors - Trigger and Data Acquisition (5/5), by Dr. Brian Petersen (CERN)

  19. RES: Regularized Stochastic BFGS Algorithm

    Science.gov (United States)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computing inverses of objective function Hessians incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the eigenvalues of the sample functions' Hessians are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
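
The idea can be caricatured in a few lines: take descent steps along a stochastic gradient preconditioned by a regularized Hessian approximation, and update that approximation with the BFGS formula from gradient differences evaluated on the same noise sample. This is a simplified sketch under my own choices of step size and regularization, not the paper's exact RES update:

```python
import numpy as np

def res_sketch(grad, x0, n_iter=200, eps=0.05, delta=1e-2, rng=None):
    """Simplified regularized stochastic BFGS: descend along the
    (B + delta*I)^{-1}-preconditioned stochastic gradient, updating B
    from gradient differences computed on the *same* noise sample."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(x0)
    x, B = x0.astype(float), np.eye(n)
    for _ in range(n_iter):
        xi = rng.standard_normal(n)                       # noise sample
        g = grad(x, xi)
        x_new = x - eps * np.linalg.solve(B + delta * np.eye(n), g)
        s, yv = x_new - x, grad(x_new, xi) - g            # same sample xi
        if s @ yv > 1e-10:                                # curvature condition
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(yv, yv) / (s @ yv)
        x = x_new
    return x

# Stochastic quadratic: f(x) = 0.5 x^T A x, gradient observed with noise
A = np.diag([10.0, 1.0])
grad = lambda x, xi: A @ x + 0.1 * xi
x_star = res_sketch(grad, np.array([5.0, 5.0]))
print(np.linalg.norm(x_star) < 1.0)  # True: iterates settle near the optimum at 0
```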

  20. Regularized Label Relaxation Linear Regression.

    Science.gov (United States)

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu

    2018-04-01

Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization term to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are based, respectively, on -norm and -norm loss functions, are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of classification accuracy and running time.

  1. The percentage of migration as indicator of femoral head position

    International Nuclear Information System (INIS)

    Ekloef, O.; Ringertz, H.; Samuelsson, L.; Karolinska Sjukhuset, Stockholm; Karolinska Sjukhuset, Stockholm

    1988-01-01

In childhood, subluxation of one or both hips may develop rather insidiously. For lack of generally accepted objective methods of assessment, ambiguous interpretations of findings in serial examinations are common, and many subluxations are overlooked during the early stages. To overcome these disadvantages, determination of the percentage of migration is a reasonably easy and reliable technique facilitating evaluation of impending dislocation. This investigation was carried out in order to establish norms applicable to patients in the pediatric age interval. The 98th percentile of migration increases with age, from 16% in patients < 4 years of age to 24% in patients ≥ 12 years. Higher figures represent subluxation. If the migration exceeds 80%, a manifest luxation is present. A difference in migration between the two hips larger than 12% indicates abnormality calling for clinical and radiologic follow-up. (orig.)
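
As a sketch of how the reported norms might be applied, the migration percentage (the fraction of femoral head width lying lateral to Perkins' line) can be classified against the age-dependent limits quoted in the abstract. The 20% cutoff used for intermediate ages below is my own interpolation, not a value from the study:

```python
from typing import Optional

def migration_percentage(uncovered_width_mm: float, head_width_mm: float) -> float:
    """Migration percentage: femoral head width lateral to Perkins' line
    over total head width, expressed in percent."""
    return 100.0 * uncovered_width_mm / head_width_mm

def classify(mp: float, age_years: float, other_hip_mp: Optional[float] = None) -> str:
    # Upper normal limits from the abstract: 16% below 4 years, 24% at >= 12 years.
    # The 20% used between those ages is an assumed interpolation.
    limit = 16.0 if age_years < 4 else (24.0 if age_years >= 12 else 20.0)
    if mp > 80.0:
        return "luxation"
    if mp > limit:
        return "subluxation"
    if other_hip_mp is not None and abs(mp - other_hip_mp) > 12.0:
        return "follow-up advised"   # >12% side difference per the abstract
    return "normal"

print(classify(migration_percentage(18.0, 40.0), age_years=6))  # subluxation
```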

  2. Budgetary Approach to Project Management by Percentage of Completion Method

    Directory of Open Access Journals (Sweden)

    Leszek Borowiec

    2011-07-01

Efficient and effective project management is made possible by the use of project management methods and techniques. The aim of this paper is to present the problems of project management using the Percentage of Completion (POC) method. The research material was gathered from experience implementing this method at the Johnson Controls International Company. The article attempts to demonstrate the validity of the thesis that the POC project management method allows for effective implementation and monitoring of a project and is thus an effective tool for managing companies that take a budgetary approach. The study presents the planning process for the basic parameters affecting the effectiveness of the project (such as costs, revenue, and margin) and characterizes the primary measurements used to evaluate it. The theme is illustrated with numerous examples showing the essence of the problems raised, and the results are presented using descriptive, graphical, and tabular methods.
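
The core of the cost-to-cost percentage-of-completion calculation can be sketched in a few lines; the figures and the simple cost-to-cost measure below are illustrative assumptions, not the Johnson Controls implementation:

```python
def poc_revenue(total_contract_revenue: float, cost_to_date: float,
                estimated_total_cost: float, revenue_recognized_to_date: float) -> float:
    """Cost-to-cost percentage-of-completion: recognize revenue in
    proportion to the share of estimated total cost already incurred.
    Returns the revenue to recognize in the current period."""
    percent_complete = cost_to_date / estimated_total_cost
    cumulative_revenue = percent_complete * total_contract_revenue
    return cumulative_revenue - revenue_recognized_to_date

# A project with 1,000,000 contract revenue, 800,000 estimated total cost,
# 400,000 spent so far (50% complete), and nothing recognized yet:
print(poc_revenue(1_000_000, 400_000, 800_000, 0))  # 500000.0
```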

  3. From inactive to regular jogger

    DEFF Research Database (Denmark)

    Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup

Title: From inactive to regular jogger - a qualitative study of achieved behavioral change among recreational joggers. Authors: Pernille Lund-Cramer & Vibeke Brinkmann Løite. Purpose: Despite extensive knowledge of barriers to physical activity, most interventions promoting physical activity have proven ... A qualitative study was conducted using individual semi-structured interviews on how a successful long-term behavior change had been achieved. Ten informants were purposely selected from participants in the DANO-RUN research project (7 men, 3 women, average age 41.5). Interviews were performed on the basis of the Theory of Planned Behavior (TPB) and the Transtheoretical Model (TTM). Coding and analysis of interviews were performed using NVivo 10 software. Results (TPB): During the behavior change process, the intention to jog shifted from a focus on weight loss and improved fitness to both physical health, psychological ...

  4. Tessellating the Sphere with Regular Polygons

    Science.gov (United States)

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares, and pentagons.

  5. On the equivalence of different regularization methods

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1985-01-01

The R̂-operation preceded by the regularization procedure is discussed. Some arguments are given, according to which the results may depend on the method of regularization, introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)

  6. The uniqueness of the regularization procedure

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1981-01-01

    On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)

  7. Application of Turchin's method of statistical regularization

    Science.gov (United States)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.
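
Turchin's method places a smoothness prior on the unknown signal and treats the regularization strength in a Bayesian way; its deterministic cousin is ordinary Tikhonov regularization with a smoothness penalty. The sketch below, an illustration rather than the authors' implementation, deconvolves a Gaussian apparatus function using a fixed second-difference penalty:

```python
import numpy as np

def tikhonov_deconvolve(K, f, alpha):
    """Solve min ||K phi - f||^2 + alpha ||L phi||^2, where L is the
    second-difference operator (a smoothness prior, the deterministic
    limit of Turchin's statistical regularization)."""
    n = K.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)          # (n-2) x n second differences
    A = K.T @ K + alpha * (L.T @ L)
    return np.linalg.solve(A, K.T @ f)

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0, 1, n)
signal = np.exp(-((x - 0.5) / 0.1) ** 2)         # "true" signal
K = np.array([[np.exp(-((xi - xj) / 0.05) ** 2) for xj in x] for xi in x])
K /= K.sum(axis=1, keepdims=True)                # normalized apparatus function
f = K @ signal + 0.001 * rng.standard_normal(n)  # convolved, noisy data
phi = tikhonov_deconvolve(K, f, alpha=1e-6)      # regularized restoration
```

The choice alpha=1e-6 is arbitrary here; in Turchin's method the analogous strength is inferred from the data rather than fixed by hand.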

  8. Regular extensions of some classes of grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular

  9. Used percentage veto for LIGO and virgo binary inspiral searches

    International Nuclear Information System (INIS)

    Isogai, Tomoki

    2010-01-01

    A challenge for ground-based gravitational wave detectors such as LIGO and Virgo is to understand the origin of non-astrophysical transients that contribute to the background noise, obscuring real astrophysically produced signals. To help this effort, there are a number of environmental and instrumental sensors around the site, recording data in 'channels'. We developed a method called the used percentage veto to eliminate corrupted data based on the statistical correlation between transients in the gravitational wave channel and in the auxiliary channels. The results are used to improve inspiral binary searches on LIGO and Virgo data. We also developed a way to apply this method to help find the physical origin of such transients for detector characterization. After identifying statistically correlated channels, a follow-up code clusters coincident events between the gravitational wave channel and auxiliary channels, and thereby classifies noise by correlated channels. For each selected event, the code also gathers and creates information that is helpful for further investigations. The method is contributing to identifying problems and improving data quality for the LIGO S6 and Virgo VSR2 science runs.
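
The statistic at the heart of the method can be sketched as follows: for a given auxiliary channel, the used percentage is the fraction of its transients that are coincident in time with transients in the gravitational wave channel. The coincidence window and trigger lists below are illustrative; the actual pipeline's significance thresholds and veto logic are more involved:

```python
import bisect

def used_percentage(aux_trigger_times, gw_trigger_times, window=0.1):
    """Percentage of auxiliary-channel triggers coincident (within
    +/- window seconds) with a trigger in the gravitational wave channel."""
    gw = sorted(gw_trigger_times)
    used = 0
    for t in aux_trigger_times:
        i = bisect.bisect_left(gw, t - window)       # first GW trigger >= t - window
        if i < len(gw) and gw[i] <= t + window:
            used += 1
    return 100.0 * used / len(aux_trigger_times)

aux = [1.00, 5.30, 9.12, 14.70]   # hypothetical trigger times (s)
gw = [1.05, 9.10, 20.00]
print(used_percentage(aux, gw))   # 50.0
```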

  10. Percentage depth dose evaluation in heterogeneous media using thermoluminescent dosimetry

    Science.gov (United States)

    da Rosa, L.A.R.; Campos, L.T.; Alves, V.G.L.; Batista, D.V.S.; Facure, A.

    2010-01-01

The purpose of this study is to investigate the influence of lung heterogeneity inside a soft tissue phantom on percentage depth dose (PDD). PDD curves were obtained experimentally using LiF:Mg,Ti (TLD-100) thermoluminescent detectors and applying the Eclipse treatment planning system algorithms Batho, modified Batho (M-Batho or BMod), equivalent TAR (E-TAR or EQTAR), and the anisotropic analytical algorithm (AAA) for a 15 MV photon beam and field sizes of 1×1, 2×2, 5×5, and 10×10 cm². Monte Carlo simulations were performed using the DOSRZnrc user code of EGSnrc. The experimental results agree with Monte Carlo simulations for all irradiation field sizes. Comparisons with Monte Carlo calculations show that the AAA algorithm provides the best simulations of PDD curves for all field sizes investigated. However, even this algorithm cannot accurately predict PDD values in the lung for field sizes of 1×1 and 2×2 cm². An overdosage in the lung of about 40% and 20% is calculated by the AAA algorithm close to the soft tissue/lung interface for 1×1 and 2×2 cm² field sizes, respectively. It was demonstrated that differences of 100% between Monte Carlo results and the responses of the Batho, modified Batho, and equivalent TAR algorithms may exist inside the lung region for the 1×1 cm² field. PACS number: 87.55.kd
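
For reference, a percentage depth dose curve is simply the central-axis dose normalized to its maximum; the readings below are hypothetical values, not measurements from the study:

```python
def percentage_depth_dose(dose_along_axis):
    """Normalize a central-axis depth-dose curve to its maximum (PDD in %)."""
    d_max = max(dose_along_axis)
    return [100.0 * d / d_max for d in dose_along_axis]

# Hypothetical central-axis readings (arbitrary units) at increasing depth:
doses = [30.0, 47.5, 50.0, 49.0, 46.5]
print(percentage_depth_dose(doses))  # [60.0, 95.0, 100.0, 98.0, 93.0]
```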

  11. Body Fat Percentage Prediction Using Intelligent Hybrid Approaches

    Directory of Open Access Journals (Sweden)

    Yuehjen E. Shao

    2014-01-01

Excess body fat often leads to obesity. Obesity is typically associated with serious medical diseases, such as cancer, heart disease, and diabetes. Accordingly, knowing the body fat is an extremely important issue since it affects everyone's health. Although there are several ways to measure the body fat percentage (BFP), the accurate methods are often associated with hassle and/or high costs. Traditional single-stage approaches may use certain body measurements or explanatory variables to predict the BFP. Diverging from existing approaches, this study proposes new intelligent hybrid approaches that use fewer explanatory variables, and the proposed forecasting models are able to effectively predict the BFP. The proposed hybrid models consist of multiple regression (MR), artificial neural network (ANN), multivariate adaptive regression splines (MARS), and support vector regression (SVR) techniques. The first stage of the modeling includes the use of MR and MARS to obtain a smaller but more important set of explanatory variables. In the second stage, the remaining important variables serve as inputs for the other forecasting methods. A real dataset was used to demonstrate the development of the proposed hybrid models. The prediction results revealed that the proposed hybrid schemes outperformed the typical, single-stage forecasting models.
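
The two-stage structure can be sketched with ordinary least squares standing in for the MR/MARS variable-screening stage and a refit standing in for the second-stage model; this is a toy illustration of the scheme's shape, not the paper's models:

```python
import numpy as np

def two_stage_predict(X, y, keep=2):
    """Toy two-stage scheme: stage 1 ranks standardized regression
    coefficients from a full least-squares fit and keeps the `keep`
    strongest explanatory variables; stage 2 refits on those alone."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    beta = np.linalg.lstsq(np.c_[np.ones(len(y)), Xs], y, rcond=None)[0]
    chosen = np.argsort(-np.abs(beta[1:]))[:keep]         # strongest variables
    X2 = np.c_[np.ones(len(y)), Xs[:, chosen]]
    beta2 = np.linalg.lstsq(X2, y, rcond=None)[0]         # stage-2 refit
    return chosen, X2 @ beta2

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 1] - 2.0 * X[:, 4] + 0.1 * rng.normal(size=100)  # only 1 and 4 matter
chosen, fitted = two_stage_predict(X, y)
print(sorted(chosen.tolist()))  # [1, 4]
```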

  12. Utility of Immature Granulocyte Percentage in Pediatric Appendicitis

    Science.gov (United States)

    Mathews, Eleanor K.; Griffin, Russell L.; Mortellaro, Vincent; Beierle, Elizabeth A.; Harmon, Carroll M.; Chen, Mike K.; Russell, Robert T.

    2014-01-01

Background: Acute appendicitis is the most common cause of abdominal surgery in children. Adjuncts are utilized to help clinicians predict acute or perforated appendicitis, which may affect treatment decisions. Automated hematologic analyzers can perform more accurate automated differentials, including immature granulocyte percentages (IG%). Elevated IG% has demonstrated better accuracy for predicting sepsis in the neonatal population than traditional immature-to-total neutrophil count (I/T) ratios. We intended to assess the discriminatory ability that IG% adds to traditionally assessed parameters in differentiating between acute and perforated appendicitis. Materials and Methods: We identified all patients with appendicitis from July 2012 to June 2013 by ICD-9 code. Charts were reviewed for relevant demographic, clinical, and outcome data, which were compared between acute and perforated appendicitis groups using Fisher's exact test and the t-test for categorical and continuous variables, respectively. We fit an adjusted logistic regression model using clinical lab values to predict the odds of perforated appendicitis. Results: 251 patients were included in the analysis. Those with perforated appendicitis had a higher white blood cell (WBC) count (p=0.0063) and a higher C-reactive protein (CRP) than those with acute appendicitis. The c-statistic of the final model was 0.70, suggesting fair discriminatory ability in predicting perforated appendicitis. Conclusions: IG% did not provide any additional benefit over elevated CRP and the presence of a left shift in differentiating between acute and perforated appendicitis. PMID:24793450

  13. [Sedentary lifestyle: physical activity duration versus percentage of energy expenditure].

    Science.gov (United States)

    Cabrera de León, Antonio; Rodríguez-Pérez, María del C; Rodríguez-Benjumeda, Luis M; Anía-Lafuente, Basilio; Brito-Díaz, Buenaventura; Muros de Fuentes, Mercedes; Almeida-González, Delia; Batista-Medina, Marta; Aguirre-Jaime, Armando

    2007-03-01

    To compare different definitions of a sedentary lifestyle and to determine which is the most appropriate for demonstrating its relationship with the metabolic syndrome and other cardiovascular risk factors. A cross-sectional study of 5814 individuals was carried out. Comparisons were made between two definitions of a sedentary lifestyle: one based on active energy expenditure being less than 10% of total energy expenditure, and the other, on performing less than 25-30 minutes of physical activity per day. Reported levels of physical activity, anthropometric measurements, and biochemical markers of cardiovascular risk were recorded. The associations between a sedentary lifestyle and metabolic syndrome and other risk factors were adjusted for gender, age and tobacco use. The prevalence of a sedentary lifestyle was higher in women (70%) than in men (45-60%, according to the definition used). The definitions based on physical activity duration and on energy expenditure were equally useful: there were direct associations between a sedentary lifestyle and metabolic syndrome, body mass index, abdominal and pelvic circumferences, systolic blood pressure, heart rate, apolipoprotein B, and triglycerides, and inverse associations with high-density lipoprotein cholesterol and paraoxonase activity, which demonstrated the greatest percentage difference between sedentary and active individuals. An incidental finding was that both definitions of a sedentary lifestyle were more strongly associated with the metabolic syndrome as defined by International Diabetes Federation criteria than by Adult Treatment Panel III criteria. Given that it is relatively easy to determine whether a patient performs less than 25 minutes of physical activity per day, use of this definition of a sedentary lifestyle is recommended for clinical practice. The serum paraoxonase activity level could provide a useful marker for studying sedentary lifestyles.
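
The two competing definitions compared in the study are easy to operationalize; the helper functions below simply encode the thresholds quoted in the abstract (active energy expenditure under 10% of total, or under 25 minutes of activity per day):

```python
def is_sedentary_energy(active_kcal: float, total_kcal: float) -> bool:
    """Definition 1: active energy expenditure under 10% of total expenditure."""
    return active_kcal / total_kcal < 0.10

def is_sedentary_duration(activity_minutes_per_day: float) -> bool:
    """Definition 2: under 25 minutes of physical activity per day."""
    return activity_minutes_per_day < 25

# Someone expending 180 of 2400 kcal actively (7.5%) and active 20 min/day
# is sedentary under both definitions:
print(is_sedentary_energy(180, 2400), is_sedentary_duration(20))  # True True
```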

  14. Class of regular bouncing cosmologies

    Science.gov (United States)

    Vasilić, Milovan

    2017-06-01

    In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field has failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that graviton mass can be made arbitrarily small.

  15. Do the majority of South Africans regularly consult traditional healers?

    Directory of Open Access Journals (Sweden)

    Gabriel Louw

    2016-12-01

Background: The statutory recognition of traditional healers as healthcare practitioners in South Africa, in terms of the Traditional Health Practitioners Act 22 of 2007, is based on various assumptions, opinions and generalizations. One of the prominent views is that the majority of South Africans regularly consult traditional healers. It has even been alleged that this number may be as high as 80 per cent of the South African population. For medical doctors and other health practitioners registered with the Health Professions Council of South Africa (HPCSA), this new statutory status of traditional health practitioners means the presence not only of a healthcare competitor that can overstock the healthcare market with service lending, medical claims and healthcare costs, but also of a competitor prone to malpractice. Aims: The study aimed to determine whether the majority of South Africans regularly consult traditional healers. Methods: This is an exploratory and descriptive study following the modern historical approach of investigation and literature review. The emphasis is on using current documentation, such as articles, books and newspapers, as primary sources to determine whether the majority of South Africans regularly consult traditional healers. The findings are offered in narrative form. Results: It is clear that there are no trustworthy statistics on the percentage of South Africans using traditional healers. A scientific survey is needed to determine the extent to which traditional healers are consulted. This will only be possible after the Traditional Health Practitioners Act No 22 has been fully enacted and traditional health practitioners have become fully active in the healthcare sector. Conclusion: In poorer, rural areas no more than 11.2 per cent of the South African population regularly consult traditional healers, while the figure for the total population seems to be no more than 1.4 per cent. The argument that the majority of South

  16. Glaucoma screening during regular optician visits : can the population at risk of developing glaucoma be reached?

    NARCIS (Netherlands)

    Stoutenbeek, R.; Jansonius, N. M.

    2006-01-01

    Aim: To determine the percentage of the population at risk of developing glaucoma, which can potentially be reached by conducting glaucoma screening during regular optician visits. Methods: 1200 inhabitants aged > 40 years were randomly selected from Dutch community population databases. A

  17. Antimicrobial Resistance Percentages of Salmonella and Shigella in Seafood Imported to Jordan: Higher Percentages and More Diverse Profiles in Shigella.

    Science.gov (United States)

    Obaidat, Mohammad M; Bani Salman, Alaa E

    2017-03-01

This study determined the prevalence and antimicrobial resistance of human-specific (Shigella spp.) and zoonotic (Salmonella enterica) foodborne pathogens in internationally traded seafood. Sixty-four Salmonella and 61 Shigella isolates were obtained from 330 imported fresh fish samples from Egypt, Yemen, and India. The pathogens were isolated on selective media, confirmed by PCR, and tested for antimicrobial resistance. Approximately 79 and 98% of the Salmonella and Shigella isolates, respectively, exhibited resistance to at least one antimicrobial, and 8 and 49% exhibited multidrug resistance (resistance to three or more antimicrobial classes). Generally, Salmonella exhibited high resistance to amoxicillin-clavulanic acid, cephalothin, streptomycin, and ampicillin; very low resistance to kanamycin, tetracycline, gentamicin, chloramphenicol, nalidixic acid, sulfamethoxazole-trimethoprim, and ciprofloxacin; and no resistance to ceftriaxone. Meanwhile, Shigella spp. exhibited high resistance to tetracycline, amoxicillin-clavulanic acid, cephalothin, streptomycin, and ampicillin; low resistance to kanamycin, nalidixic acid, sulfamethoxazole-trimethoprim, and ceftriaxone; and very low resistance to gentamicin and ciprofloxacin. Salmonella isolates exhibited 14 resistance profiles; Shigella isolates, 42. This study is novel in showing that a human-specific pathogen has higher antimicrobial resistance percentages and more diverse profiles than a zoonotic pathogen. Thus, the impact of antimicrobial use in humans is as significant as, if not more significant than, its use in animals in spreading antibiotic resistance through food. This study also demonstrates that locally derived antimicrobial resistance can spread and pose a public health risk worldwide through the seafood trade, and that such high resistance would make a possible outbreak difficult to control. Capacity building and monitoring of harvest water areas are therefore encouraged in fish-producing countries.
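
The two headline percentages (resistance to at least one agent, and multidrug resistance defined as resistance to three or more antimicrobial classes) can be computed from per-isolate resistance profiles; the mini-panel below is hypothetical:

```python
def resistance_summary(isolates):
    """isolates: list of sets, each the antimicrobial *classes* one isolate
    resists. Returns (% resistant to >= 1 agent, % multidrug-resistant,
    where MDR means resistance to >= 3 classes)."""
    n = len(isolates)
    any_resistant = sum(1 for s in isolates if len(s) >= 1)
    mdr = sum(1 for s in isolates if len(s) >= 3)
    return 100.0 * any_resistant / n, 100.0 * mdr / n

# Hypothetical mini-panel of 4 isolates:
panel = [set(),
         {"penicillins"},
         {"penicillins", "tetracyclines"},
         {"penicillins", "tetracyclines", "aminoglycosides"}]
print(resistance_summary(panel))  # (75.0, 25.0)
```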

  18. Adaptive regularization of noisy linear inverse problems

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue

    2006-01-01

In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and the prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.
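
In symbols, with D the data, θ the parameters, α the regularization strength, and R the regularization function (notation mine, chosen to match the abstract), the stated optimality condition reads:

```latex
% Optimal hyper-parameter \alpha: expected regularizer matches under
% the posterior and the prior.
\mathbb{E}_{p(\theta \mid D, \alpha)}\left[ R(\theta) \right]
  = \mathbb{E}_{p(\theta \mid \alpha)}\left[ R(\theta) \right]
```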

  19. Higher derivative regularization and chiral anomaly

    International Nuclear Information System (INIS)

    Nagahama, Yoshinori.

    1985-02-01

    A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)

  20. Regularity effect in prospective memory during aging

    Directory of Open Access Journals (Sweden)

    Geoffrey Blondelle

    2016-10-01

    Full Text Available Background: Regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting, and binding), short-term memory, and retrospective episodic memory, to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recall of regular activities involved only planning for both intermediate and older adults, while recall of irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical…

  1. Regularity effect in prospective memory during aging

    OpenAIRE

    Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique

    2016-01-01

    Background: Regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...

  2. Regularization and error assignment to unfolded distributions

    CERN Document Server

    Zech, Gunter

    2011-01-01

    The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data, and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization, and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.

  3. Iterative Regularization with Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2007-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES: their success as regularization methods is highly problem dependent.
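
The projection viewpoint in this record can be sketched directly: restrict the least-squares problem to a low-dimensional Krylov subspace and compare with the naive solve. This is only an illustration of the principle, using an explicitly built Krylov basis rather than the MINRES/MR-II recurrences, and the ill-posed test problem is an assumed one:

```python
import numpy as np
from scipy.linalg import hilbert

# Severely ill-conditioned symmetric test problem (assumed example)
n = 12
A = hilbert(n)
x_true = np.ones(n)
rng = np.random.default_rng(1)
b = A @ x_true + 1e-8 * rng.normal(size=n)

def krylov_ls(A, b, k, shifted=False):
    """Least-squares solution restricted to a k-dim Krylov subspace.

    shifted=False uses span{b, Ab, ...} (MINRES-like basis);
    shifted=True uses span{Ab, A^2 b, ...} (MR-II-like basis)."""
    v = A @ b if shifted else b
    V = []
    for _ in range(k):
        V.append(v / np.linalg.norm(v))
        v = A @ V[-1]
    Q, _ = np.linalg.qr(np.array(V).T)           # orthonormal basis of the subspace
    y, *_ = np.linalg.lstsq(A @ Q, b, rcond=None)
    return Q @ y

x_naive = np.linalg.solve(A, b)                  # noise blown up by cond(A)
x_proj = krylov_ls(A, b, k=5)                    # projection acts as regularizer

err_naive = np.linalg.norm(x_naive - x_true)
err_proj = np.linalg.norm(x_proj - x_true)
```

The few-dimensional projected solution stays close to the true solution while the naive solve is dominated by amplified noise; stopping the iteration early plays the role of the regularization parameter.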

  4. Iterative regularization with minimum-residual methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2006-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES: their success as regularization methods is highly problem dependent.

  5. Preference mapping of lemon lime carbonated beverages with regular and diet beverage consumers.

    Science.gov (United States)

    Leksrisompong, P P; Lopetcharat, K; Guthrie, B; Drake, M A

    2013-02-01

    The drivers of liking of lemon-lime carbonated beverages were investigated with regular and diet beverage consumers. Ten beverages were selected from a category survey of commercial beverages using a D-optimal procedure. Beverages were subjected to consumer testing (n = 101 regular beverage consumers, n = 100 diet beverage consumers). Segmentation of consumers was performed on overall liking scores followed by external preference mapping of selected samples. Diet beverage consumers liked 2 diet beverages more than regular beverage consumers. There were no differences in the overall liking scores between diet and regular beverage consumers for other products except for a sparkling beverage sweetened with juice, which was more liked by regular beverage consumers. Three subtle but distinct consumer preference clusters were identified. Two segments had evenly distributed diet and regular beverage consumers, but one segment had a greater percentage of regular beverage consumers (P < 0.05). Consumer type (diet or regular beverage consumers) did not have a large impact on carbonated beverage liking. Instead, mouthfeel attributes were major drivers of liking when these beverages were tested in a blind tasting. Preference mapping of lemon-lime carbonated beverages with diet and regular beverage consumers allowed the determination of drivers of liking of both populations. The understanding of how mouthfeel attributes, aromatics, and basic tastes impact liking or disliking of products was achieved. Preference drivers established in this study provide product developers of carbonated lemon-lime beverages with additional information to develop beverages that may be suitable for different groups of consumers. © 2013 Institute of Food Technologists®

  6. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-01-01

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  7. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-04-19

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  8. On infinite regular and chiral maps

    OpenAIRE

    Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán

    2015-01-01

    We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.

  9. From recreational to regular drug use

    DEFF Research Database (Denmark)

    Järvinen, Margaretha; Ravn, Signe

    2011-01-01

    This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...

  10. Automating InDesign with Regular Expressions

    CERN Document Server

    Kahrel, Peter

    2006-01-01

    If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.

  11. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  12. 29 CFR 779.18 - Regular rate.

    Science.gov (United States)

    2010-07-01

    ... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  13. Near infrared reactance for the estimation of body fatness in regularly exercising individuals.

    Science.gov (United States)

    Evans, J; Lambert, M I; Micklesfield, L K; Goedecke, J H; Jennings, C L; Savides, L; Claassen, A; Lambert, E V

    2013-07-01

    Near infrared reactance (NIR) is used to measure body fat percentage (BF%), but there are few data on its use in non-obese, regularly exercising individuals. Therefore, this study aimed to examine the limits of agreement (LOA) between NIR and dual x-ray absorptiometry (DXA) for the measurement of BF% in 2 cohorts of regularly exercising individuals. BF% was measured using DXA and NIR in a regularly exercising (≥3 sessions/week), healthy active cohort (HA; n=57), and in a regularly exercising and resistance-trained (≥2 sessions/week) cohort (RT; n=59). The RT cohort had lower BF% than the HA cohort (15.3±5.5% and 25.8±7.1%, P < 0.05); agreement between NIR and DXA was then assessed in both cohorts of regularly exercising individuals. However, the rather broad LOA of NIR need to be considered when using NIR to screen for overweight and obesity, or to measure and track changes in body composition. © Georg Thieme Verlag KG Stuttgart · New York.

  14. An iterative method for Tikhonov regularization with a general linear regularization operator

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.

    2010-01-01

    Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
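
For context, Tikhonov regularization with a general operator L minimizes ||Ax - b||^2 + lam^2 ||Lx||^2, which can be solved as a single stacked least-squares problem. A minimal sketch of this generic formulation (not the paper's Golub-Kahan-based iterative method); the test problem and the first-difference choice of L are assumed examples:

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 via the stacked
    least-squares system [A; lam*L] x ~= [b; 0]."""
    K = np.vstack([A, lam * L])
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return x

# Assumed test problem: smooth signal, noisy observations
n = 40
L = np.diff(np.eye(n), axis=0)          # first-difference operator, (n-1) x n
rng = np.random.default_rng(2)
A = rng.normal(size=(60, n)) / np.sqrt(n)
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 0.05 * rng.normal(size=60)

x_reg = tikhonov(A, b, L, lam=1.0)      # penalized: smoother solution
x_ls = tikhonov(A, b, L, lam=0.0)       # plain least squares
```

Increasing lam shrinks the seminorm ||L x|| of the solution; choosing lam well is exactly the parameter-selection problem the record refers to.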

  15. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.
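
A stripped-down sketch of graph regularized ranking with a fixed convex combination of Laplacians can show the core computation; MultiG-Rank itself additionally learns the graph weights by alternating minimization, and the toy graphs, weights, and parameters below are ours:

```python
import numpy as np

def laplacian(W):
    """Unnormalized graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

def graph_rank(y, graphs, weights, alpha=1.0):
    """Ranking scores f = argmin_f alpha * f^T L_mix f + ||f - y||^2,
    where L_mix is a weighted combination of the graphs' Laplacians."""
    L_mix = sum(w * laplacian(W) for w, W in zip(weights, graphs))
    return np.linalg.solve(np.eye(len(y)) + alpha * L_mix, y)

def chain(n, w):
    """Toy similarity graph: a weighted chain over n items."""
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = w
    return W

# Query vector: item 0 is the query-relevant item; two candidate graphs
y = np.zeros(6)
y[0] = 1.0
f = graph_rank(y, [chain(6, 1.0), chain(6, 0.5)], [0.7, 0.3])
```

The scores diffuse from the query along the combined graph, so items closer to the query in the mixed manifold rank higher.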

  16. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-01-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  17. Hierarchical regular small-world networks

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan

    2008-01-01

    Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log₂N²), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. It suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)

  18. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  19. Coupling regularizes individual units in noisy populations

    International Nuclear Information System (INIS)

    Ly Cheng; Ermentrout, G. Bard

    2010-01-01

    The regularity of a noisy system can modulate in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
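
The O-U part of this claim can be checked in closed form: for two linearly coupled O-U processes the stationary covariance solves a Lyapunov equation, so no simulation is needed. A sketch in which the coupling strength and noise amplitudes are assumed values chosen for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def stationary_var_x(c, sig_x=1.0, sig_y=1.5):
    """Stationary Var(x) for the diffusively coupled pair
       dx = (-x + c*(y - x)) dt + sig_x dW1
       dy = (-y + c*(x - y)) dt + sig_y dW2."""
    A = np.array([[-1.0 - c, c], [c, -1.0 - c]])
    Q = np.diag([sig_x ** 2, sig_y ** 2])
    P = solve_continuous_lyapunov(A, -Q)   # solves A P + P A^T = -Q
    return P[0, 0]

v_uncoupled = stationary_var_x(0.0)   # = sig_x^2 / 2
v_coupled = stationary_var_x(5.0)     # x coupled to the noisier y
```

With these values Var(x) drops below its uncoupled level even though the partner y carries more noise; diagonalizing into sum and difference modes shows this happens whenever sig_y^2 < 3 * sig_x^2 and the coupling is strong enough, which is consistent with the record's point that coupling to a noisier unit can still regularize the individual.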

  20. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Full Text Available Abstract Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  1. Diagrammatic methods in phase-space regularization

    International Nuclear Information System (INIS)

    Bern, Z.; Halpern, M.B.; California Univ., Berkeley

    1987-11-01

    Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the nogrowth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)

  2. J-regular rings with injectivities

    OpenAIRE

    Shen, Liang

    2010-01-01

    A ring $R$ is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.

  3. 26 CFR 1.1502-44 - Percentage depletion for independent producers and royalty owners.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 12 2010-04-01 2010-04-01 false Percentage depletion for independent producers...-44 Percentage depletion for independent producers and royalty owners. (a) In general. The sum of the percentage depletion deductions for the taxable year for all oil or gas property owned by all members, plus...

  4. 7 CFR 51.308 - Methods of sampling and calculation of percentages.

    Science.gov (United States)

    2010-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and calculation of percentages. (a) When the numerical... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of sampling and calculation of percentages. 51...

  5. Generalized regular genus for manifolds with boundary

    Directory of Open Access Journals (Sweden)

    Paola Cristofori

    2003-05-01

    Full Text Available We introduce a generalization of the regular genus, a combinatorial invariant of PL manifolds ([10]), which is proved to be strictly related, in dimension three, to generalized Heegaard splittings defined in [12].

  6. Geometric regularizations and dual conifold transitions

    International Nuclear Information System (INIS)

    Landsteiner, Karl; Lazaroiu, Calin I.

    2003-01-01

    We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)

  7. Fast and compact regular expression matching

    DEFF Research Database (Denmark)

    Bille, Philip; Farach-Colton, Martin

    2008-01-01

    We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way.

  8. Regular-fat dairy and human health

    DEFF Research Database (Denmark)

    Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas

    2016-01-01

    In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular fat dairy products and human health. In an effort to......, cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted....

  9. Deterministic automata for extended regular expressions

    Directory of Open Access Journals (Sweden)

    Syzdykov Mirzakhmet

    2017-12-01

    Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions such as intersection, subtraction, and complement. A method of "overriding" the source NFA (an NFA not defined by the subset-construction rules) is used. Past work described only the algorithm for the AND operator (intersection of regular languages); in this paper the construction for the MINUS operator (and complement) is shown.
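
For comparison, the textbook route to a DFA for intersection is the product construction, with subtraction and complement differing only in the accepting condition. A small sketch (this is the standard construction, not the paper's NFA-"overriding" method; the example automata are ours):

```python
def run(dfa, s):
    """Run a DFA given as (delta, start, accepting) over string s."""
    delta, q, accept = dfa
    for ch in s:
        q = delta[(q, ch)]
    return q in accept

def product_dfa(d1, d2, alphabet):
    """Intersection by the product construction.  For subtraction (MINUS),
    change the accepting condition to: p in a1 and q not in a2."""
    (t1, s1, a1), (t2, s2, a2) = d1, d2
    delta, seen, todo = {}, {(s1, s2)}, [(s1, s2)]
    while todo:                              # explore reachable state pairs
        p, q = todo.pop()
        for ch in alphabet:
            nxt = (t1[(p, ch)], t2[(q, ch)])
            delta[((p, q), ch)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    accept = {s for s in seen if s[0] in a1 and s[1] in a2}
    return delta, (s1, s2), accept

# d1: even number of 'a's; d2: strings ending in 'b'
d1 = ({(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}, 0, {0})
d2 = ({(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}, 0, {1})
both = product_dfa(d1, d2, 'ab')
```

The product DFA accepts exactly the strings accepted by both inputs, e.g. "aab" (two a's, ends in b) but not "ab".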

  10. Regularities of intermediate adsorption complex relaxation

    International Nuclear Information System (INIS)

    Manukova, L.A.

    1982-01-01

    The experimental data characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N₂ system at 77 K are given. The molecular beam method has been used in the investigation. Analytical expressions are obtained for the regularities of change, during relaxation, of the full and specific rates of transition from the intermediate state into the 'non-reversible' state, of desorption into the gas phase, and of accumulation of particles in the intermediate state.

  11. Online Manifold Regularization by Dual Ascending Procedure

    OpenAIRE

    Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui

    2013-01-01

    We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transferring manifold regularization from the offline to the online setting in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches....

  12. Improvements in GRACE Gravity Fields Using Regularization

    Science.gov (United States)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. 
The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or

  13. Regular Expression Matching and Operational Semantics

    Directory of Open Access Journals (Sweden)

    Asiri Rathnayake

    2011-08-01

    Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
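
The lockstep idea, tracking the set of all NFA states the match could currently be in rather than building a DFA first, can be shown in a few lines. A sketch over a hand-built NFA (the data format and the example pattern a(b|c)* are ours; a real engine would construct the NFA from the regular expression via Thompson's construction):

```python
def lockstep_match(nfa, start, accept, s):
    """Lockstep NFA simulation: after each input symbol, keep the set of
    all states reachable so far (epsilon edges use the key '')."""
    def eps_closure(states):
        stack, seen = list(states), set(states)
        while stack:
            q = stack.pop()
            for nxt in nfa.get((q, ''), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    current = eps_closure({start})
    for ch in s:
        moved = {n for q in current for n in nfa.get((q, ch), ())}
        current = eps_closure(moved)
    return accept in current

# Hand-built NFA for a(b|c)*: 0 -a-> 1, then epsilon to the loop body (2)
# or the accepting state (3); b/c return from 2 to 1.
nfa = {
    (0, 'a'): [1],
    (1, ''): [2, 3],
    (2, 'b'): [1],
    (2, 'c'): [1],
}
```

Because each step processes the whole state set at once, the running time is linear in the input length regardless of how nondeterministic the pattern is, which is the property the abstract's lockstep and parallel machines build on.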

  14. Regularities, Natural Patterns and Laws of Nature

    Directory of Open Access Journals (Sweden)

    Stathis Psillos

    2014-02-01

    Full Text Available  The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology.  Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.

  15. Fractional Regularization Term for Variational Image Registration

    Directory of Open Access Journals (Sweden)

    Rafael Verdú-Monedero

    2009-01-01

    Full Text Available Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, being applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a real gradual transition from a diffusion registration to a curvature registration which is best suited to some applications and it is not possible in the spatial domain. Results with 3D actual images show the validity of this approach.
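    The core mechanism, a fractional-order derivative realized as a frequency-domain multiplier, can be sketched in 1-D. This is a loose illustration of the idea, not the paper's registration functional; the principal branch taken for non-integer orders and all numerical values are assumptions of this sketch.

```python
import numpy as np

# Sketch: a fractional-order derivative implemented as the frequency-
# domain multiplier (2*pi*i*f)**alpha.  For alpha = 1 this reduces to
# the ordinary derivative; the branch chosen by ** for non-integer
# alpha is an assumption of this sketch.

def fractional_derivative(u, alpha, length=1.0):
    n = len(u)
    freq = np.fft.fftfreq(n, d=length / n)     # frequencies in cycles per unit length
    mult = (2j * np.pi * freq) ** alpha
    return np.fft.ifft(mult * np.fft.fft(u))

# Ordinary-derivative check: d/dx sin(2*pi*x) = 2*pi*cos(2*pi*x)
x = np.arange(64) / 64.0
du = fractional_derivative(np.sin(2 * np.pi * x), alpha=1).real
```

    Varying alpha continuously between 1 and 2 then interpolates between first- and second-order penalties, which is the gradual diffusion-to-curvature transition the abstract describes.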

  16. Regular non-twisting S-branes

    International Nuclear Information System (INIS)

    Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.

    2004-01-01

    We construct a family of time and angular dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general Lorentzian symmetry. Several generalizations of this regular solution are derived which include a charged S-brane and an additional dilatonic field. (author)

  17. Online Manifold Regularization by Dual Ascending Procedure

    Directory of Open Access Journals (Sweden)

    Boliang Sun

    2013-01-01

    Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transfer manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle the settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves the way for the design and analysis of online manifold regularization algorithms.

  18. Regular transport dynamics produce chaotic travel times.

    Science.gov (United States)

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  19. Regularity of difference equations on Banach spaces

    CERN Document Server

    Agarwal, Ravi P; Lizama, Carlos

    2014-01-01

    This work introduces readers to the topic of maximal regularity for difference equations. The authors systematically present the method of maximal regularity, outlining basic linear difference equations along with relevant results. They address recent advances in the field, as well as basic semigroup and cosine operator theories in the discrete setting. The authors also identify some open problems that readers may wish to take up for further research. This book is intended for graduate students and researchers in the area of difference equations, particularly those with advanced knowledge of and interest in functional analysis.

  20. PET regularization by envelope guided conjugate gradients

    International Nuclear Information System (INIS)

    Kaufman, L.; Neumaier, A.

    1996-01-01

    The authors propose a new way to iteratively solve large-scale ill-posed problems, and in particular the image reconstruction problem in positron emission tomography, by exploiting the relation between Tikhonov regularization and multiobjective optimization to iteratively obtain approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows us to adjust the regularization parameter adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations
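    The L-curve that the iteration approximates can be sketched on a toy problem. The sketch below uses direct solves on a small random system, not the authors' envelope-guided conjugate gradients and not PET data; all sizes and the parameter grid are illustrative assumptions.

```python
import numpy as np

# Toy Tikhonov L-curve: for each lambda, record the residual norm
# ||A x - b|| and the solution norm ||x||.  The corner of the
# (residual, solution)-norm curve balances data fit against
# regularization and is the classical choice of lambda.

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = A @ rng.standard_normal(10) + 0.1 * rng.standard_normal(30)

def tikhonov(lam):
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

lams = np.logspace(-4, 2, 25)
residual_norms = [np.linalg.norm(A @ tikhonov(l) - b) for l in lams]
solution_norms = [np.linalg.norm(tikhonov(l)) for l in lams]
# As lambda grows, the residual norm rises monotonically while the
# solution norm falls; the L-shaped trade-off gives the method its name.
```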

  1. Matrix regularization of embedded 4-manifolds

    International Nuclear Information System (INIS)

    Trzetrzelewski, Maciej

    2012-01-01

    We consider products of two 2-manifolds such as S²×S², embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N²×N² matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S³ also possible).

  2. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    Science.gov (United States)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization. Selecting an appropriate forecasting method matters, but the percentage error of a method matters even more if decision makers are to adopt the right one. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least-squares method yielded a percentage error of 9.77%, and it was decided that the least-squares method works for time series and trend data.
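    The two error measures are simple to compute. The sketch below fits a least-squares trend line and reports MAD and MAPE of the fit; the data and function names are illustrative, not the paper's.

```python
# MAD = mean of |actual - forecast|
# MAPE = mean of |actual - forecast| / |actual|, expressed in percent.

def least_squares_fit(y):
    """Fit y = a + b*t by ordinary least squares over t = 0..n-1."""
    n = len(y)
    xs = list(range(n))
    x_mean, y_mean = sum(xs) / n, sum(y) / n
    b = sum((x - x_mean) * (v - y_mean) for x, v in zip(xs, y)) / \
        sum((x - x_mean) ** 2 for x in xs)
    a = y_mean - b * x_mean
    return [a + b * x for x in xs]

def mad(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)
```

    On a perfectly linear series the trend fit is exact and both errors are zero; on real data the two measures weight errors differently (MAPE is relative to the actual values, so it penalizes errors on small values more).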

  3. Appeals to AC as a Percentage of Appealable Hearing Level Dispositions

    Data.gov (United States)

    Social Security Administration — Longitudinal report detailing the numbers and percentages of Requests for Review (RR) of hearing level decisions or dismissals filed with the Appeals Council (AC)...

  4. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....

  5. Regularity and irreversibility of weekly travel behavior

    NARCIS (Netherlands)

    Kitamura, R.; van der Hoorn, A.I.J.M.

    1987-01-01

    Dynamic characteristics of travel behavior are analyzed in this paper using weekly travel diaries from two waves of panel surveys conducted six months apart. An analysis of activity engagement indicates the presence of significant regularity in weekly activity participation between the two waves.

  6. Regular and context-free nominal traces

    DEFF Research Database (Denmark)

    Degano, Pierpaolo; Ferrari, Gian-Luigi; Mezzetti, Gianluca

    2017-01-01

    Two kinds of automata are presented, for recognising new classes of regular and context-free nominal languages. We compare their expressive power with analogous proposals in the literature, showing that they express novel classes of languages. Although many properties of classical languages hold ...

  7. Faster 2-regular information-set decoding

    NARCIS (Netherlands)

    Bernstein, D.J.; Lange, T.; Peters, C.P.; Schwabe, P.; Chee, Y.M.

    2011-01-01

    Fix positive integers B and w. Let C be a linear code over F₂ of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and

  8. Complexity in union-free regular languages

    Czech Academy of Sciences Publication Activity Database

    Jirásková, G.; Masopust, Tomáš

    2011-01-01

    Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords : Union-free regular language * one-cycle-free-path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html

  9. Regular Gleason Measures and Generalized Effect Algebras

    Science.gov (United States)

    Dvurečenskij, Anatolij; Janda, Jiří

    2015-12-01

    We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the frame of generalized effect algebras.

  10. Regularization of finite temperature string theories

    International Nuclear Information System (INIS)

    Leblanc, Y.; Knecht, M.; Wallet, J.C.

    1990-01-01

    The tachyonic divergences occurring in the free energy of various string theories at finite temperature are eliminated through the use of regularization schemes and analytic continuations. For closed strings, we obtain finite expressions which, however, develop an imaginary part above the Hagedorn temperature, whereas open string theories are still plagued with dilatonic divergences. (orig.)

  11. A Sim(2) invariant dimensional regularization

    Directory of Open Access Journals (Sweden)

    J. Alfaro

    2017-09-01

    Full Text Available We introduce a Sim(2) invariant dimensional regularization of loop integrals. Then we can compute the one loop quantum corrections to the photon self energy, electron self energy and vertex in the Electrodynamics sector of the Very Special Relativity Standard Model (VSRSM).

  12. Continuum regularized Yang-Mills theory

    International Nuclear Information System (INIS)

    Sadun, L.A.

    1987-01-01

    Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions

  13. Gravitational lensing by a regular black hole

    International Nuclear Information System (INIS)

    Eiroa, Ernesto F; Sendra, Carlos M

    2011-01-01

    In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.

  14. Gravitational lensing by a regular black hole

    Energy Technology Data Exchange (ETDEWEB)

    Eiroa, Ernesto F; Sendra, Carlos M, E-mail: eiroa@iafe.uba.ar, E-mail: cmsendra@iafe.uba.ar [Instituto de Astronomia y Fisica del Espacio, CC 67, Suc. 28, 1428, Buenos Aires (Argentina)

    2011-04-21

    In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.

  15. Analytic stochastic regularization and gauge invariance

    International Nuclear Information System (INIS)

    Abdalla, E.; Gomes, M.; Lima-Santos, A.

    1986-05-01

    A proof that analytic stochastic regularization breaks gauge invariance is presented. This is done by an explicit one loop calculation of the vacuum polarization tensor in scalar electrodynamics, which turns out not to be transversal. The counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization are also analysed. (Author) [pt]

  16. Annotation of regular polysemy and underspecification

    DEFF Research Database (Denmark)

    Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria

    2013-01-01

    We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...

  17. Stabilization, pole placement, and regular implementability

    NARCIS (Netherlands)

    Belur, MN; Trentelman, HL

    In this paper, we study control by interconnection of linear differential systems. We give necessary and sufficient conditions for regular implementability of a-given linear, differential system. We formulate the problems of stabilization and pole placement as problems of finding a suitable,

  18. 12 CFR 725.3 - Regular membership.

    Science.gov (United States)

    2010-01-01

    ... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit....5(b) of this part, and forwarding with its completed application funds equal to one-half of this... 1, 1979, is not required to forward these funds to the Facility until October 1, 1979. (3...

  19. Supervised scale-regularized linear convolutionary filters

    DEFF Research Database (Denmark)

    Loog, Marco; Lauze, Francois Bernard

    2017-01-01

    also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...

  20. On regular riesz operators | Raubenheimer | Quaestiones ...

    African Journals Online (AJOL)

    The r-asymptotically quasi finite rank operators on Banach lattices are examples of regular Riesz operators. We characterise Riesz elements in a subalgebra of a Banach algebra in terms of Riesz elements in the Banach algebra. This enables us to characterise r-asymptotically quasi finite rank operators in terms of adjoint ...

  1. Regularized Discriminant Analysis: A Large Dimensional Study

    KAUST Repository

    Yang, Xiaoke

    2018-04-28

    In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis is assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that only depends on the data statistical parameters and dimensions. This result not only reveals mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on the synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the underlying statistics are unknown. We benchmark the performance of our proposed consistent estimator against the classical estimator on synthetic data. The observations demonstrate that the general estimator outperforms others in terms of mean squared error (MSE).
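    The family of classifiers under study can be sketched concretely. The code below is a small RDA-style classifier with Friedman-type shrinkage toward the pooled covariance and toward a multiple of the identity; the synthetic data, the λ/γ values and all function names are illustrative assumptions, not the thesis's estimator.

```python
import numpy as np

# RDA sketch: per-class covariances are shrunk toward the pooled
# covariance (lam) and then toward (trace/p)*I (gamma), and samples
# are classified by a Gaussian discriminant with equal priors.

rng = np.random.default_rng(1)
X0 = rng.standard_normal((200, 2)) + np.array([-2.0, 0.0])   # class 0
X1 = rng.standard_normal((200, 2)) + np.array([2.0, 0.0])    # class 1

def rda_fit(Xs, lam=0.5, gamma=0.1):
    pooled = sum(np.cov(X.T) * (len(X) - 1) for X in Xs) / (sum(len(X) for X in Xs) - len(Xs))
    params = []
    for X in Xs:
        S = (1 - lam) * np.cov(X.T) + lam * pooled
        S = (1 - gamma) * S + gamma * (np.trace(S) / S.shape[0]) * np.eye(S.shape[0])
        params.append((X.mean(axis=0), np.linalg.inv(S), np.linalg.slogdet(S)[1]))
    return params

def rda_predict(params, x):
    # Quadratic discriminant score: -(x-m)^T S^{-1} (x-m) - log det S
    scores = [-(x - m) @ P @ (x - m) - logdet for m, P, logdet in params]
    return int(np.argmax(scores))

model = rda_fit([X0, X1])
```

    Setting lam = 1 collapses all class covariances to the pooled one (the RLDA end of the family), while lam = 0 keeps them separate (the RQDA end), mirroring the special cases named in the abstract.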

  2. Complexity in union-free regular languages

    Czech Academy of Sciences Publication Activity Database

    Jirásková, G.; Masopust, Tomáš

    2011-01-01

    Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords : Union-free regular language * one-cycle-free-path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html

  3. Bit-coded regular expression parsing

    DEFF Research Database (Denmark)

    Nielsen, Lasse; Henglein, Fritz

    2011-01-01

    the DFA-based parsing algorithm due to Dub ´e and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli’s greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...

  4. Tetravalent one-regular graphs of order 4p²

    DEFF Research Database (Denmark)

    Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan

    2014-01-01

    A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p², where p is a prime, are classified.

  5. Effect of breed and non-genetic factors on percentage milk ...

    African Journals Online (AJOL)

    This study was done to determine the effect of breed and non-genetic factors on percentage milk composition of smallholders' dual-purpose cattle on-farm in the Ashanti Region. Fresh milk samples from various breeds of cows were assessed for percentage components of protein, fat, lactose, cholesterol, solid-non-fat and ...

  6. 12 CFR Appendix A to Part 230 - Annual Percentage Yield Calculation

    Science.gov (United States)

    2010-01-01

    ... following simple formula: APY=100 (Interest/Principal) Examples (1) If an institution pays $61.68 in... percentage yield is 5.39%, using the simple formula: APY=100(134.75/2,500) APY=5.39% For $15,000, interest is... Yield Calculation The annual percentage yield measures the total amount of interest paid on an account...
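    The simple formula quoted in the excerpt can be checked directly. The sketch below applies to the 365-day-term case shown in the excerpt (the regulation prescribes a compounding-based formula for other terms); the function name is mine.

```python
# APY via the regulation's simple formula for a 365-day term:
#   APY = 100 * (Interest / Principal)

def annual_percentage_yield(interest, principal):
    return 100 * interest / principal

# The excerpt's worked example: $134.75 of interest on $2,500.
apy = annual_percentage_yield(134.75, 2500)   # -> 5.39
```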

  7. 7 CFR 981.47 - Method of establishing salable and reserve percentages.

    Science.gov (United States)

    2010-01-01

    ...) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF... effectuate the declared policy of the act, he shall designate such percentages. Except as provided in § 981... percentages, the Secretary shall give consideration to the ratio of estimated trade demand (domestic plus...

  8. High body fat percentage among adult women in Malaysia: the role ...

    African Journals Online (AJOL)

    Body fat percentage is regarded as an important measurement for diagnosis of obesity. The aim of this study is to determine the association of high body fat percentage (BF%) and lifestyle among adult women. The study was conducted on 327 women, aged 40-59 years, recruited during a health screening program. Data on ...

  9. 13 CFR 126.701 - Can these subcontracting percentages requirements change?

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Can these subcontracting percentages requirements change? 126.701 Section 126.701 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION HUBZONE PROGRAM Contract Performance Requirements § 126.701 Can these subcontracting percentages...

  10. Infants with Down Syndrome: Percentage and Age for Acquisition of Gross Motor Skills

    Science.gov (United States)

    Pereira, Karina; Basso, Renata Pedrolongo; Lindquist, Ana Raquel Rodrigues; da Silva, Louise Gracelli Pereira; Tudella, Eloisa

    2013-01-01

    The literature is bereft of information about the age at which infants with Down syndrome (DS) acquire motor skills and the percentage of infants that do so by the age of 12 months. Therefore, it is necessary to identify the difference in age, in relation to typical infants, at which motor skills were acquired and the percentage of infants with DS…

  11. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    Science.gov (United States)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
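    A 1-D stand-in for the de-striping idea can be sketched with Tikhonov regularization and a roughness penalty. Everything below is an illustrative assumption (the actual problem is posed on spherical-harmonic coefficients, not a 1-D signal): high-frequency "stripe-like" components are damped while the smooth signal survives.

```python
import numpy as np

# 1-D stand-in for Tikhonov de-striping:
#   min_x ||x - y||^2 + lam * ||D x||^2
# where D is a first-difference operator penalizing rough,
# stripe-like components.  Signal, noise level and lam are
# illustrative choices.

n = 100
t = np.linspace(0.0, 1.0, n)
signal = np.sin(2 * np.pi * t)
rng = np.random.default_rng(0)
y = signal + 0.3 * rng.standard_normal(n)     # noisy ("striped") observation

D = np.diff(np.eye(n), axis=0)                # (n-1) x n first differences
lam = 5.0
x_smooth = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
# x_smooth recovers the smooth signal much more closely than y does.
```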

  12. EFFECT OF ADDING AN EXERCISE REGIMEN TO DIET THERAPY IN DECREASING BODY FAT PERCENTAGE AND BODY MASS INDEX AMONG OBESE FEMALES

    Directory of Open Access Journals (Sweden)

    Rajeena Haneefa

    2017-10-01

    Full Text Available BACKGROUND Obesity is one of the leading health problems in many developing countries, including India. Lifestyle modifications, which include diet therapy and regular exercise, are considered the mainstay in the management of this health issue. Brisk walking is the preferred socially and economically acceptable mode of exercise. This randomised controlled trial evaluates the efficacy of adding an exercise regimen to diet therapy in reducing body fat percentage and Body Mass Index (BMI) among obese females. MATERIALS AND METHODS One hundred female patients aged between 20 and 60 years with BMI greater than 25 were recruited for this study of 6 months' duration. Participants were randomised into either a diet-therapy-alone group or a diet-therapy-with-exercise group. All participants were prescribed a low-calorie diet of 1500 kcal per day. The exercise intervention group was subjected to a home-based exercise regimen: walking for 30 minutes, 5 days a week. Outcomes were measured by BMI and body fat percentage, documented every month. RESULTS Both groups showed significant reduction in body fat percentage and BMI, but the reduction was greater in the exercise-with-diet-therapy group (p value <0.001). CONCLUSION Adding a simple exercise like walking to other lifestyle modification measures can bring down BMI and body fat percentage more efficiently, in turn significantly reducing cardiovascular risk, morbidity and mortality in women.

  13. Extreme values, regular variation and point processes

    CERN Document Server

    Resnick, Sidney I

    1987-01-01

    Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...

  14. Stream Processing Using Grammars and Regular Expressions

    DEFF Research Database (Denmark)

    Rasmussen, Ulrik Terp

    disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs...... as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present...... Kleenex, a language for expressing high-performance streaming string processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...

  15. Describing chaotic attractors: Regular and perpetual points

    Science.gov (United States)

    Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz

    2018-03-01

    We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have been recently introduced to theoretical investigations, is thoroughly discussed and extended into new types of models. We analyze the correlation between regular and perpetual points, as well as their relation with phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points to find attractors is indicated, along with its potential cause. The location of chaotic trajectories and sets of considered points is investigated and a study of the stability of the systems is shown. A statistical analysis of observing the desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.

  16. Chaos regularization of quantum tunneling rates

    International Nuclear Information System (INIS)

    Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward

    2011-01-01

    Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics, the tunneling rates fluctuate greatly with the eigenenergies of the states, sometimes by over two orders of magnitude. By contrast, shapes that lead to completely chaotic trajectories yield tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.

  17. Least square regularized regression in sum space.

    Science.gov (United States)

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in a sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and the regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
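    The sum-space idea can be sketched with kernel ridge regression using a sum of two Gaussian kernels, so the estimator lives in the sum of the two RKHSs and can capture both broad and fine structure. Bandwidths, regularization strength and data are illustrative assumptions of this sketch, not the paper's algorithm.

```python
import numpy as np

# Kernel ridge regression with a sum kernel: a large-scale Gaussian
# kernel for the low-frequency component plus a small-scale one for
# the high-frequency component of the target function.

def gauss_kernel(X, Y, sigma):
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 1.0, 80))
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(80)

K = gauss_kernel(X, X, 0.3) + gauss_kernel(X, X, 0.03)   # sum-space kernel
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)    # ridge solution

def predict(x_new):
    k = gauss_kernel(np.atleast_1d(x_new), X, 0.3) + gauss_kernel(np.atleast_1d(x_new), X, 0.03)
    return k @ alpha
```

    Because the sum kernel is itself a valid kernel, the standard single-kernel ridge machinery applies unchanged; only the Gram matrix is a sum of the per-scale Gram matrices.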

  18. Contour Propagation With Riemannian Elasticity Regularization

    DEFF Research Database (Denmark)

    Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.

    2011-01-01

Purpose/Objective(s): Adaptive techniques allow for correction of spatial changes during the time course of the fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image...... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations and volumetric changes, was used. Regularization parameters were defined...... on the original delineation and tissue deformation in the time course between scans form a better starting point than rigid propagation. There was no significant difference between locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...

  19. Thin accretion disk around regular black hole

    Directory of Open Access Journals (Sweden)

    QIU Tianqi

    2014-08-01

Full Text Available The Penrose cosmic censorship conjecture says that naked singularities do not exist in nature. So, it seems reasonable to further conjecture that no singularity at all exists in nature. In this paper, a regular black hole without a singularity is studied in detail, especially its thin accretion disk, energy flux, radiation temperature and accretion efficiency. It is found that the interaction of a regular black hole is stronger than that of a Schwarzschild black hole. Furthermore, the thin accretion disk loses energy more efficiently as the mass of the black hole decreases. These particular properties may be used to distinguish between black holes.

  20. Convex nonnegative matrix factorization with manifold regularization.

    Science.gov (United States)

    Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong

    2015-03-01

Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Moreover, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrices. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Automated objective determination of percentage of malignant nuclei for mutation testing.

    Science.gov (United States)

    Viray, Hollis; Coulter, Madeline; Li, Kevin; Lane, Kristin; Madan, Aruna; Mitchell, Kisha; Schalper, Kurt; Hoyt, Clifford; Rimm, David L

    2014-01-01

Detection of DNA mutations in tumor tissue can be a critical companion diagnostic test before prescription of a targeted therapy. Each method for detection of these mutations is associated with an analytic sensitivity that is a function of the percentage of tumor cells present in the specimen. Currently, tumor cell percentage is visually estimated, resulting in an ordinal and highly variable result for a biologically continuous variable. We proposed that this aspect of DNA mutation testing could be standardized by developing a computer algorithm capable of accurately determining the percentage of malignant nuclei in an image of a hematoxylin and eosin-stained tissue. Using inForm software, we developed an algorithm to calculate the percentage of malignant cells in histologic specimens of colon adenocarcinoma. A criterion standard was established by manually counting malignant and benign nuclei. Three pathologists also estimated the percentage of malignant nuclei in each image. Algorithm #9 had a median deviation from the criterion standard of 5.4% on the training set and 6.2% on the validation set. Compared with pathologist estimation, Algorithm #9 showed a similar ability to determine the percentage of malignant nuclei. This method represents a potential future tool to assist in determining the percentage of malignant nuclei present in a tissue section. Further validation of this algorithm or an improved algorithm may have value in more accurately assessing the percentage of malignant cells for companion diagnostic mutation testing.
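The metric the study reports reduces to simple arithmetic; a hedged sketch (the counts below are hypothetical, not the study's data) of the percent-malignant computation and the median-deviation summary:

```python
def percent_malignant(malignant, benign):
    """Percentage of malignant nuclei among all counted nuclei."""
    return 100.0 * malignant / (malignant + benign)

def median_deviation(estimates, standard):
    # Median absolute deviation of algorithm estimates from the manually
    # counted criterion standard, in percentage points.
    devs = sorted(abs(e - s) for e, s in zip(estimates, standard))
    n, mid = len(devs), len(devs) // 2
    return devs[mid] if n % 2 else 0.5 * (devs[mid - 1] + devs[mid])

# Hypothetical manual counts (malignant, benign) for three images
standard = [percent_malignant(m, b) for m, b in [(60, 40), (45, 55), (80, 20)]]
estimates = [55.0, 50.0, 78.0]  # hypothetical algorithm outputs
print(median_deviation(estimates, standard))  # 5.0
```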

  2. A short proof of increased parabolic regularity

    Directory of Open Access Journals (Sweden)

    Stephen Pankavich

    2015-08-01

    Full Text Available We present a short proof of the increased regularity obtained by solutions to uniformly parabolic partial differential equations. Though this setting is fairly introductory, our new method of proof, which uses a priori estimates and an inductive method, can be extended to prove analogous results for problems with time-dependent coefficients, advection-diffusion or reaction diffusion equations, and nonlinear PDEs even when other tools, such as semigroup methods or the use of explicit fundamental solutions, are unavailable.

  3. Regular black hole in three dimensions

    OpenAIRE

    Myung, Yun Soo; Yoon, Myungseok

    2008-01-01

    We find a new black hole in three dimensional anti-de Sitter space by introducing an anisotropic perfect fluid inspired by the noncommutative black hole. This is a regular black hole with two horizons. We compare thermodynamics of this black hole with that of non-rotating BTZ black hole. The first-law of thermodynamics is not compatible with the Bekenstein-Hawking entropy.

  4. Sparse regularization for force identification using dictionaries

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces, including sinusoidal, square and triangular forces. By contrast, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
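SpaRSA itself uses adaptive (Barzilai-Borwein) step sizes; the same ℓ1-regularized problem can be sketched with plain iterative shrinkage-thresholding (ISTA) on a synthetic random dictionary (our stand-in, not the paper's setup):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=1000):
    """Minimize 0.5*||A c - y||^2 + lam*||c||_1 by iterative
    shrinkage-thresholding (a simple stand-in for SpaRSA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        c = soft_threshold(c - A.T @ (A @ c - y) / L, lam / L)
    return c

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 256))        # dictionary matrix
c_true = np.zeros(256)
c_true[[30, 171]] = [2.0, -1.5]            # two "impact" coefficients
y = A @ c_true + 0.01 * rng.standard_normal(100)
c_hat = ista(A, y, lam=0.2)
print(np.argsort(np.abs(c_hat))[-2:])      # indices of the recovered impacts
```

The ℓ1 penalty drives all but a few coefficients to exactly zero, so the number of active basis functions is selected automatically rather than fixed in advance.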

  5. Analytic stochastic regularization and gauge theories

    International Nuclear Information System (INIS)

    Abdalla, E.; Gomes, M.; Lima-Santos, A.

    1987-04-01

We prove that analytic stochastic regularization breaks gauge invariance. This is done by an explicit one-loop calculation of the two-, three- and four-point vertex functions of the gluon field in scalar chromodynamics, which turn out not to be gauge invariant. We analyse the counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization. (author) [pt

  6. Preconditioners for regularized saddle point matrices

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe

    2011-01-01

Roč. 19, č. 2 (2011), s. 91-112 ISSN 1570-2820 Institutional research plan: CEZ:AV0Z30860518 Keywords : saddle point matrices * preconditioning * regularization * eigenvalue clustering Subject RIV: BA - General Mathematics Impact factor: 0.533, year: 2011 http://www.degruyter.com/view/j/jnma.2011.19.issue-2/jnum.2011.005/jnum.2011.005.xml

  7. Analytic stochastic regularization: gauge and supersymmetry theories

    International Nuclear Information System (INIS)

    Abdalla, M.C.B.

    1988-01-01

Analytic stochastic regularization for gauge and supersymmetric theories is considered. Gauge invariance in spinor and scalar QCD is verified to break down by an explicit one-loop computation of the two-, three- and four-point vertex functions of the gluon field. As a result, non-gauge-invariant counterterms must be added. However, in the supersymmetric multiplets there is a cancellation rendering the counterterms gauge invariant. The calculation is considered at one-loop order. (author) [pt

  8. Regularized forecasting of chaotic dynamical systems

    International Nuclear Information System (INIS)

    Bollt, Erik M.

    2017-01-01

While local models of dynamical systems have been highly successful in using extensive data sets observing even a chaotic dynamical system to produce useful forecasts, there is a typical problem, as follows. With the k-nearest neighbors (kNN) method, local observations occur due to recurrences in a chaotic system, and this allows local models to be built by regression to low-dimensional polynomial approximations of the underlying system, estimating a Taylor series. This has been a popular approach, particularly in the context of scalar data observations which have been represented by time-delay embedding methods. However, such local models can allow for spatial discontinuities of forecasts when considered globally, meaning jumps in predictions, because the collected near neighbors vary from point to point. The source of these discontinuities is that the set of near neighbors varies discontinuously with respect to the position of the sample point, and so therefore does the model built from the near neighbors. It is possible to utilize local information inferred from near neighbors as usual but at the same time to impose a degree of regularity on a global scale. We present here a new global perspective extending the general local modeling concept. We then show how this perspective allows us to impose presumed prior regularity on the model by invoking Tikhonov regularization theory, since this classic perspective on optimization in ill-posed problems naturally balances fitting an objective with some prior assumed form of the result, such as continuity or derivative regularity. This all reduces to matrix manipulations, which we demonstrate on a simple data set, with the implication that it may find much broader context.
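The "matrix manipulations" of Tikhonov-regularized least squares can be sketched on a chaotic toy system (our own illustration, not the paper's code): fit a polynomial forecast model for the logistic map, with the ridge term stabilizing the solve.

```python
import numpy as np

def tikhonov_fit(X, y, alpha):
    """Solve min_w ||X w - y||^2 + alpha*||w||^2 in closed form:
    w = (X^T X + alpha I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# Forecast pairs (x_t, x_{t+1}) from the logistic map x' = r x (1 - x);
# the regularizer keeps the normal equations well-posed when the feature
# matrix is nearly collinear.
r = 3.9
x = np.empty(200); x[0] = 0.4
for t in range(199):
    x[t + 1] = r * x[t] * (1 - x[t])
X = np.column_stack([np.ones(199), x[:199], x[:199] ** 2])  # quadratic features
w = tikhonov_fit(X, x[1:], alpha=1e-8)
print(np.round(w, 3))  # ≈ [0, 3.9, -3.9], recovering the map itself
```

Larger `alpha` trades fit for smoothness of the recovered model, exactly the bias the abstract describes imposing on the global forecast.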

  9. Minimal length uncertainty relation and ultraviolet regularization

    Science.gov (United States)

    Kempf, Achim; Mangano, Gianpiero

    1997-06-01

    Studies in string theory and quantum gravity suggest the existence of a finite lower limit Δx0 to the possible resolution of distances, at the latest on the scale of the Planck length of 10-35 m. Within the framework of the Euclidean path integral we explicitly show ultraviolet regularization in field theory through this short distance structure. Both rotation and translation invariance can be preserved. An example is studied in detail.

  10. Body fat percentage of urban South African children: implications for health and fitness.

    Science.gov (United States)

    Goon, D T; Toriola, A L; Shaw, B S; Amusa, L O; Khoza, L B; Shaw, I

    2013-09-01

To explore gender and racial profiling of percentage body fat of 1136 urban South African children attending public schools in Pretoria Central. This is a cross-sectional survey of 1136 randomly selected children (548 boys and 588 girls) aged 9-13 years in urban (Pretoria Central) South Africa. Body mass, stature and skinfolds (subscapular and triceps) were measured. Data were analysed using descriptive statistics (means and standard deviations). Differences in mean body fat percentage between boys and girls according to age group and race were examined using independent-samples t-tests. Girls had a significantly (p = 0.001) higher percentage body fat (22.7 ± 5.7%, 95% CI = 22.3, 23.2) compared to boys (16.1 ± 7.7%, 95% CI = 15.5, 16.8). Percentage body fat fluctuated with age in both boys and girls. Additionally, girls had significantly (p = 0.001) higher percentage body fat measurements at all ages compared to boys. Viewed racially, black children (20.1 ± 7.5) were significantly (p = 0.010) fatter than white children (19.0 ± 7.4) with a mean difference of 4.0. Black children were fatter than white children at ages 9, 10, 12 and 13 years, with a significant difference (p = 0.009) observed at age 12 years. There was a considerably high level of excessive percentage body fat among school children in Central Pretoria, South Africa, with girls having significantly higher percentage body fat compared to boys. Racially, black children were fatter than white children. The excessive percentage body fat observed among the children in this study has implications for their health and fitness. Therefore, an intervention programme should be instituted in schools to prevent and control excessive percentage body fat in this age group.

  11. Regularity and chaos in cavity QED

    International Nuclear Information System (INIS)

    Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G

    2017-01-01

The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (PR) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)

  12. Solution path for manifold regularized semisupervised classification.

    Science.gov (United States)

    Wang, Gang; Wang, Fei; Chen, Tao; Yeung, Dit-Yan; Lochovsky, Frederick H

    2012-04-01

    Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time consuming to obtain since they require substantial human labeling efforts. On the other hand, unlabeled data are often relatively easy to collect. Semisupervised learning addresses this problem by using large quantities of unlabeled data with labeled data to build better learning algorithms. In this paper, we use the manifold regularization approach to formulate the semisupervised learning problem where a regularization framework which balances a tradeoff between loss and penalty is established. We investigate different implementations of the loss function and identify the methods which have the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that can fit the entire path of solutions for every value of the hyperparameter. Its computational complexity after preprocessing is quadratic only in the number of labeled examples rather than the total number of labeled and unlabeled examples.

  13. Regularizations: different recipes for identical situations

    International Nuclear Information System (INIS)

    Gambin, E.; Lobo, C.O.; Battistel, O.A.

    2004-03-01

We present a discussion where the choice of the regularization procedure and the routing for the internal lines momenta are put at the same level of arbitrariness in the analysis of Ward identities involving simple and well-known problems in QFT. They are the complex self-interacting scalar field and two simple models where the SVV and AVV processes are pertinent. We show that, in all these problems, the conditions for preservation of the symmetry relations are expressed in terms of the same combination of divergent Feynman integrals, which are evaluated in the context of a very general calculational strategy concerning the manipulations and calculations involving divergences. Within the adopted strategy, all the arbitrariness intrinsic to the problem is still maintained in the final results and, consequently, a perfect map can be obtained with the corresponding results of the traditional regularization techniques. We show that, when we require a universal interpretation for the arbitrariness involved, in order to get consistency with all stated physical constraints, a strong condition is imposed for regularizations which automatically eliminates the ambiguities associated with the routing of the internal lines momenta of loops. The conclusion is clean and sound: the association between ambiguities and unavoidable symmetry violations in Ward identities cannot be maintained if a unique recipe is required for identical situations in the evaluation of divergent physical amplitudes. (author)

  14. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    Science.gov (United States)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
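One classical instance of a parameterized non-convex penalty whose scalar denoising objective remains convex is the minimax-concave (MC) penalty, whose proximal operator is the firm threshold (our illustration of the general idea, not the thesis code; the cutoff `mu` controls the non-convexity):

```python
import numpy as np

def soft(y, lam):
    # Soft threshold: prox of the l1 norm; shrinks every nonzero value
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def firm(y, lam, mu):
    """Firm threshold: prox of the MC penalty with threshold lam and
    cutoff mu > lam.  Values above mu pass through unshrunk, avoiding
    the amplitude bias of soft thresholding, while the scalar objective
    0.5*(y-x)^2 + penalty stays convex."""
    a = np.abs(y)
    return np.where(a <= lam, 0.0,
           np.where(a <= mu, np.sign(y) * mu * (a - lam) / (mu - lam), y))

y = np.array([0.3, 1.2, 5.0])
print(soft(y, 1.0))       # large value biased: 5.0 -> 4.0
print(firm(y, 1.0, 2.0))  # large value preserved: 5.0 -> 5.0
```

This captures the trade-off the thesis targets: strong sparsity from the non-convex penalty, a unique global minimum from the convexity constraint.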

  15. Sparsity regularization for parameter identification problems

    International Nuclear Information System (INIS)

    Jin, Bangti; Maass, Peter

    2012-01-01

    The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems over the last years, and Tikhonov functionals with ℓ p -penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓ p sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some

  16. HANPP Collection: Human Appropriation of Net Primary Productivity as a Percentage of Net Primary Productivity

    Data.gov (United States)

    National Aeronautics and Space Administration — The Human Appropriation of Net Primary Productivity (HANPP) as a Percentage of Net Primary Product (NPP) portion of the HANPP Collection represents a map identifying...

  17. 12 CFR Appendix A to Part 707 - Annual Percentage Yield Calculation

    Science.gov (United States)

    2010-01-01

    ... percentage yield calculations for account disclosures and advertisements, while Part II discusses annual... number of days that would occur for any actual sequence of that many calendar months. If credit unions...
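The general formula of the appendix is APY = 100[(1 + Interest/Principal)^(365/Days in term) − 1]; a short sketch with hypothetical account figures (the dollar amounts are our own, not from the regulation):

```python
def annual_percentage_yield(principal, interest, days_in_term):
    """General APY formula from Appendix A to 12 CFR Part 707:
    APY = 100 * [(1 + Interest/Principal)**(365/Days in term) - 1]."""
    return 100.0 * ((1.0 + interest / principal) ** (365.0 / days_in_term) - 1.0)

# Hypothetical example: $1,000 earning $61.68 over a 365-day term
print(round(annual_percentage_yield(1000.0, 61.68, 365), 2))  # 6.17
```

The 365/days exponent annualizes the yield, which is why shorter terms with the same dollar interest disclose a higher APY.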

  18. Weight Percentage of Calcium Carbonate for 17 Equatorial Pacific Cores from Brown University

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Weight percentages of calcium carbonate in this file were compiled by J. Farrell and W. L. Prell of Brown University for 17 equatorial Pacific Ocean sediment cores....

  19. Owners of nuclear power plants: Percentage ownership of commercial nuclear power plants by utility companies

    International Nuclear Information System (INIS)

    Wood, R.S.

    1987-08-01

The following list indicates percentage ownership of commercial nuclear power plants by utility companies as of June 1, 1987. The list includes all plants licensed to operate, under construction, docketed for NRC safety and environmental reviews, or under NRC antitrust review. It does not include those plants announced but not yet under review or those plants formally canceled. In many cases, ownership may be in the process of changing as a result of altered financial conditions, changed power needs, and other reasons. However, this list reflects only those ownership percentages of which the NRC has been formally notified. Part I lists plants alphabetically with their associated applicants/licensees and percentage ownership. Part II lists applicants/licensees alphabetically with their associated plants and percentage ownership. Part I also indicates which plants have received operating licenses (OLs).

  20. Learning Sparse Visual Representations with Leaky Capped Norm Regularizers

    OpenAIRE

    Wangni, Jianqiao; Lin, Dahua

    2017-01-01

Sparsity-inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of ℓ1 regularization, in this paper we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly than those above, and therefore imposes strong sparsity and...

  1. Temporal regularity of the environment drives time perception

    OpenAIRE

    van Rijn, H; Rhodes, D; Di Luca, M

    2016-01-01

It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...

  2. Regular-, irregular-, and pseudo-character processing in Chinese: The regularity effect in normal adult readers

    Directory of Open Access Journals (Sweden)

    Dustin Kai Yan Lau

    2014-03-01

Full Text Available Background Unlike alphabetic languages, Chinese uses a logographic script. However, many characters' phonetic radicals have the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions, resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of its phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement in Chinese) were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1 or between subject

  3. Effect of Gene and Physical Activity Interaction on Trunk Fat Percentage Among the Newfoundland Population

    Directory of Open Access Journals (Sweden)

    Anthony Payne

    2014-01-01

Full Text Available Objective To explore the effect of FTO gene and physical activity interaction on trunk fat percentage. Design and Methods Subjects are 3,004 individuals from Newfoundland and Labrador whose trunk fat percentage and physical activity were recorded, and who were genotyped for 11 single-nucleotide polymorphisms (SNPs) in the FTO gene. Subjects were stratified by gender. Multiple tests and multiple regressions were used to analyze the effects of physical activity, variants of FTO, age, and their interactions on trunk fat percentage. Dietary information and other environmental factors were not considered. Results Higher levels of physical activity tend to reduce trunk fat percentage in all individuals. Furthermore, in males, rs9939609 and rs1421085 were significant (α = 0.05) in explaining central body fat, but no SNPs were significant in females. For highly active males, trunk fat percentage varied significantly between variants of rs9939609 and rs1421085, but there was no significant effect among individuals with low activity. The other SNPs examined were not significant in explaining trunk fat percentage. Conclusions Homozygous male carriers of non-obesity-risk alleles at rs9939609 and rs1421085 will have a significant reduction in central body fat from physical activity, in contrast to homozygous male carriers of the obesity-risk alleles. The additive effect of these SNPs is found in males with high physical activity only.

  4. Convergence and fluctuations of Regularized Tyler estimators

    KAUST Repository

    Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim

    2015-01-01

This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem posed by the use of RTEs in practice is the question of setting the regularization parameter p. While a high value of p is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results exist concerning the regime of n going to infinity with N fixed, even though the investigation of this assumption has usually predated the analysis of the most difficult case of N and n large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the parameter.
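The estimator under study can be sketched as a fixed-point iteration (our illustration in the Chen-Wiesel-Hero form, with assumed shrinkage weight and trace normalization):

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=50):
    """Fixed-point iteration for a regularized Tyler estimator:
        B <- (1-rho)*(N/n) * sum_i x_i x_i^T / (x_i^T S^{-1} x_i) + rho*I
        S <- N * B / tr(B)
    X is n x N (rows are samples); rho in (0, 1] is the regularization."""
    n, N = X.shape
    S = np.eye(N)
    for _ in range(n_iter):
        Sinv = np.linalg.inv(S)
        w = np.einsum('ij,jk,ik->i', X, Sinv, X)        # x_i^T S^{-1} x_i
        B = (1 - rho) * (N / n) * (X / w[:, None]).T @ X + rho * np.eye(N)
        S = N * B / np.trace(B)                          # fix the scale
    return S

rng = np.random.default_rng(2)
C = np.array([[2.0, 0.8], [0.8, 1.0]])                   # true covariance
X = rng.standard_normal((500, 2)) @ np.linalg.cholesky(C).T
S = regularized_tyler(X, rho=0.1)
print(np.round(S, 2))  # shape close to C up to scale, shrunk toward I
```

Because each sample is normalized by its Mahalanobis norm, the estimate depends only on sample directions, which is the source of the robustness to heavy tails and outliers; `rho` interpolates toward the well-conditioned identity.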

  5. Convergence and fluctuations of Regularized Tyler estimators

    KAUST Repository

    Kammoun, Abla

    2015-10-26

    This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and, second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major difficulty in the practical use of RTEs is the setting of the regularization parameter p. While a high value of p is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results exist for the regime where n goes to infinity with N fixed, even though the investigation of this regime has usually predated the analysis of the more difficult case of large N and n. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the regularization parameter.

  6. The use of regularization in inferential measurements

    International Nuclear Information System (INIS)

    Hines, J. Wesley; Gribok, Andrei V.; Attieh, Ibrahim; Uhrig, Robert E.

    1999-01-01

    Inferential sensing is the prediction of a plant variable through the use of correlated plant variables. A correct prediction of the variable can be used to monitor sensors for drift or other failures, making periodic instrument calibrations unnecessary. This move from periodic to condition-based maintenance can reduce costs and increase the reliability of the instrument. Having accurate, reliable measurements is important for signals that may impact safety or profitability. This paper investigates how collinearity adversely affects inferential sensing by making the results inconsistent and unrepeatable, and presents regularization as a potential solution. (author)
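    As a toy illustration of how regularization counters collinearity in an inferential model, consider ridge (Tikhonov) regression on two nearly identical predictor signals. The setup below is hypothetical and not taken from the paper; it only demonstrates the mechanism.

    ```python
    import numpy as np

    def ridge_fit(X, y, lam):
        """Ridge estimate: (X^T X + lam * I)^{-1} X^T y.

        With nearly collinear predictors, X^T X is ill-conditioned and the
        ordinary least-squares coefficients become erratic and unrepeatable;
        the lam * I term bounds the inverse and stabilizes the inferred model.
        """
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    ```

    With lam = 0 this reduces to ordinary least squares; even a modest lam keeps the coefficients of two redundant sensors moderate while the predictions remain accurate.
    
    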

  7. Regularization ambiguities in loop quantum gravity

    International Nuclear Information System (INIS)

    Perez, Alejandro

    2006-01-01

    One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation which is free of ultraviolet divergences. However, ambiguities associated with the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of a UV problem, that is, the existence of well-behaved regularization of the constraints, is intimately linked with the ambiguities arising in the quantum theory. Among these ambiguities is the one associated with the SU(2) unitary representation used in the diffeomorphism covariant 'point-splitting' regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and, here, it is referred to as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory that conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity, exhibiting the existence of spurious solutions for higher representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions, due to the difficulties associated with the definition of the physical inner product, it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we find

  8. Effort variation regularization in sound field reproduction

    DEFF Research Database (Denmark)

    Stefanakis, Nick; Jacobsen, Finn; Sarris, Ioannis

    2010-01-01

    In this paper, active control is used in order to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions with the reproduction sources holding a certain relation within their complex strengths......), and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, improving thus the reproduction accuracy...

  9. New regularities in mass spectra of hadrons

    International Nuclear Information System (INIS)

    Kajdalov, A.B.

    1989-01-01

    The properties of bosonic and baryonic Regge trajectories for hadrons composed of light quarks are considered. Experimental data agree with the existence of daughter trajectories consistent with string models. It is pointed out that the parity doubling for baryonic trajectories, observed experimentally, is not understood in the existing quark models. The mass spectrum of bosons and baryons points to an approximate supersymmetry in the mass region M>1 GeV. These regularities indicate a high degree of symmetry in the dynamics in the confinement region. 8 refs.; 5 figs

  10. Total-variation regularization with bound constraints

    International Nuclear Information System (INIS)

    Chartrand, Rick; Wohlberg, Brendt

    2009-01-01

    We present a new algorithm for bound-constrained total-variation (TV) regularization that, in comparison with its predecessors, is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
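    A toy 1-D analogue of bound-constrained TV denoising can be sketched with a smoothed TV gradient and a projection (clipping) step that enforces the bounds separately from the TV update. The paper's splitting algorithm is different and far more efficient; this sketch only illustrates the objective and the idea of decoupling the constraints.

    ```python
    import numpy as np

    def tv_denoise_bounded(f, lam=0.3, eps=1e-2, step=0.05, n_iter=1000):
        """Toy 1-D TV denoising with bound constraints via projected gradient.

        Minimizes 0.5*||u - f||^2 + lam * sum_i sqrt((u_{i+1}-u_i)^2 + eps)
        over u in [0, 1].  The clipping step enforces the bounds on its own,
        decoupled from the (smoothed) TV descent step.
        """
        u = f.clip(0.0, 1.0)
        for _ in range(n_iter):
            d = np.diff(u)
            w = d / np.sqrt(d * d + eps)   # derivative of the smoothed |Du|
            grad_tv = np.zeros_like(u)
            grad_tv[:-1] -= w              # each difference touches two samples
            grad_tv[1:] += w
            u = u - step * ((u - f) + lam * grad_tv)
            u = u.clip(0.0, 1.0)           # project onto the bound constraints
        return u
    ```

    On a noisy step signal this flattens the plateaus while keeping the jump, which is the qualitative behavior TV regularization is chosen for.
    
    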

  11. Bayesian regularization of diffusion tensor images

    DEFF Research Database (Denmark)

    Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif

    2007-01-01

    Diffusion tensor imaging (DTI) is a powerful tool in the study of the course of nerve fibre bundles in the human brain. Using DTI, the local fibre orientation in each image voxel can be described by a diffusion tensor which is constructed from local measurements of diffusion coefficients along...... several directions. The measured diffusion coefficients and thereby the diffusion tensors are subject to noise, leading to possibly flawed representations of the three dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing...

  12. Indefinite metric and regularization of electrodynamics

    International Nuclear Information System (INIS)

    Gaudin, M.

    1984-06-01

    The invariant regularization of Pauli and Villars in quantum electrodynamics can be considered as deriving from a local and causal Lagrangian theory for spin-1/2 bosons, by introducing an indefinite metric and a condition on the allowed states similar to the Lorentz condition. One consequence is the asymptotic freedom of the photon propagator. We present a calculation of the effective charge to fourth order in the coupling as a function of the auxiliary masses, the theory avoiding all mass divergences to this order [fr

  13. Strategies for regular segmented reductions on GPU

    DEFF Research Database (Denmark)

    Larsen, Rasmus Wriedt; Henriksen, Troels

    2017-01-01

    We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain input...... is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...
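    In NumPy terms, a *regular* segmented reduction, where every segment has the same length, is just a reshape followed by an axis-wise reduction, which is precisely the structural regularity a GPU implementation can exploit; the irregular case needs a general primitive such as reduceat. This sketch is illustrative and is not the Futhark implementation.

    ```python
    import numpy as np

    def segmented_sum_regular(values, seg_len):
        """Regular segmented sum: all segments share one length, so the
        reduction collapses to a reshape plus a sum along one axis."""
        return values.reshape(-1, seg_len).sum(axis=1)

    def segmented_sum_irregular(values, offsets):
        """General counterpart for unequal segments, given start offsets."""
        return np.add.reduceat(values, offsets)
    ```

    The regular form exposes a dense 2-D layout, which is why a GPU kernel can choose its parallelization strategy (one thread, one warp, or one block per segment) purely from the segment length.
    
    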

  14. Emotion regulation deficits in regular marijuana users.

    Science.gov (United States)

    Zimmermann, Kaeli; Walz, Christina; Derckx, Raissa T; Kendrick, Keith M; Weber, Bernd; Dore, Bruce; Ochsner, Kevin N; Hurlemann, René; Becker, Benjamin

    2017-08-01

    Effective regulation of negative affective states has been associated with mental health. Impaired regulation of negative affect represents a risk factor for dysfunctional coping mechanisms such as drug use and thus could contribute to the initiation and development of problematic substance use. This study investigated behavioral and neural indices of emotion regulation in regular marijuana users (n = 23) and demographically matched nonusing controls (n = 20) by means of an fMRI cognitive emotion regulation (reappraisal) paradigm. Relative to nonusing controls, marijuana users demonstrated increased neural activity in a bilateral frontal network comprising precentral, middle cingulate, and supplementary motor regions during reappraisal of negative affect (P marijuana users relative to controls. Together, the present findings could reflect an unsuccessful attempt of compensatory recruitment of additional neural resources in the context of disrupted amygdala-prefrontal interaction during volitional emotion regulation in marijuana users. As such, impaired volitional regulation of negative affect might represent a consequence of, or risk factor for, regular marijuana use. Hum Brain Mapp 38:4270-4279, 2017. © 2017 Wiley Periodicals, Inc.

  15. Efficient multidimensional regularization for Volterra series estimation

    Science.gov (United States)

    Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan

    2018-05-01

    This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time-invariant systems. To avoid excessive memory needs in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios, varying from a simple Finite Impulse Response (FIR) model to a 3rd-degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
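    The regularization idea carries over from linear impulse-response estimation. As a minimal sketch of the first-order (FIR) case only, with an assumed diagonal/correlated ("DC") kernel prior encoding smoothness and exponential decay (the paper's multidimensional Volterra extension is not attempted here):

    ```python
    import numpy as np

    def dc_kernel(n, c=1.0, lam=0.9, rho=0.5):
        """DC prior covariance for an impulse response:
        P[i, j] = c * lam**((i + j) / 2) * rho**|i - j|
        (decaying along the diagonal, correlated across neighboring taps)."""
        i = np.arange(n)
        return (c * lam ** ((i[:, None] + i[None, :]) / 2)
                  * rho ** np.abs(i[:, None] - i[None, :]))

    def regularized_fir(u, y, n_taps, sigma2=0.1):
        """Regularized FIR estimate g = (Phi^T Phi + sigma2 * P^{-1})^{-1} Phi^T y."""
        N = len(u)
        # regressor matrix of delayed inputs (zero initial conditions assumed)
        Phi = np.column_stack([np.r_[np.zeros(k), u[: N - k]]
                               for k in range(n_taps)])
        P = dc_kernel(n_taps)
        return np.linalg.solve(Phi.T @ Phi + sigma2 * np.linalg.inv(P), Phi.T @ y)
    ```

    For Volterra series the same construction applies with multidimensional kernels and correspondingly structured priors, which is where the paper's efficient multidimensional regularization comes in.
    
    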

  16. Supporting Regularized Logistic Regression Privately and Efficiently

    Science.gov (United States)

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
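    For reference, the underlying model being protected is plain L2-regularized logistic regression. A minimal centralized sketch follows; the paper's contribution is training this model *securely across institutions*, which this sketch does not attempt.

    ```python
    import numpy as np

    def fit_logreg_l2(X, y, lam=1.0, lr=0.1, n_iter=2000):
        """L2-regularized logistic regression by gradient descent.

        Minimizes mean log-loss + (lam/2)*||w||^2 for labels y in {0, 1}.
        Plain centralized training, for illustration only.
        """
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ w))    # predicted probabilities
            grad = X.T @ (p - y) / n + lam * w  # gradient of penalized loss
            w -= lr * grad
        return w
    ```

    In the multi-institution setting described in the abstract, each site would hold a shard of (X, y), and cryptographic protocols would protect the per-site gradient contributions; the statistical model itself is unchanged.
    
    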

  17. Supporting Regularized Logistic Regression Privately and Efficiently.

    Science.gov (United States)

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.

  18. Multiple graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-10-01

    Non-negative matrix factorization (NMF) has been widely used as a data representation method based on components. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) has been proposed by Cai et al. by constructing an affinity graph and searching for a matrix factorization that respects graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a GrNMF variant, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters, inspired by ensemble manifold regularization. Factorization matrices and the linear combination coefficients of the graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, thus resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.
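    A single-graph GrNMF with the standard multiplicative updates can be sketched as follows, following Cai et al.'s formulation described above; the paper's MultiGrNMF additionally learns a linear combination of several graphs, which this sketch omits.

    ```python
    import numpy as np

    def gnmf(X, A, k, lam=0.1, n_iter=500, seed=0):
        """Graph-regularized NMF (single graph) with multiplicative updates.

        X : (m, n) nonnegative data matrix, columns are samples
        A : (n, n) symmetric nonnegative affinity graph over the samples
        Minimizes ||X - W H||_F^2 + lam * Tr(H L H^T) with L = D - A,
        where D is the degree matrix of A.
        """
        rng = np.random.default_rng(seed)
        m, n = X.shape
        W = rng.random((m, k)) + 0.1
        H = rng.random((k, n)) + 0.1
        D = np.diag(A.sum(axis=1))
        eps = 1e-12  # guards against division by zero
        for _ in range(n_iter):
            W *= (X @ H.T) / (W @ H @ H.T + eps)
            H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
        return W, H
    ```

    The lam * H @ A term pulls the representations of graph-connected samples together, which is how the affinity graph shapes the factorization.
    
    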

  19. Accelerating Large Data Analysis By Exploiting Regularities

    Science.gov (United States)

    Moran, Patrick J.; Ellsworth, David

    2003-01-01

    We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical to Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.

  20. Supporting Regularized Logistic Regression Privately and Efficiently.

    Directory of Open Access Journals (Sweden)

    Wenfa Li

    Full Text Available As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.

  1. Multiview Hessian regularization for image annotation.

    Science.gov (United States)

    Liu, Weifeng; Tao, Dacheng

    2013-07-01

    The rapid development of computer hardware and Internet technology makes large-scale data-dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) has therefore received intensive attention in recent years and has been successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smooths the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, it is observed that LR biases the classification function toward a constant function, which possibly results in poor generalization. In addition, LR is developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address the above two problems in LR-based image annotation. In particular, mHR optimally combines multiple HR, each of which is obtained from a particular view of instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.

  2. EIT image reconstruction with four dimensional regularization.

    Science.gov (United States)

    Dai, Tao; Soleimani, Manuchehr; Adler, Andy

    2008-09-01

    Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on the body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although data are actually highly correlated, especially in high-speed EIT systems. This paper proposes a 4-D EIT image reconstruction method for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and the 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector, which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on the temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance in comparison to simpler image models.
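    The augmented formulation can be illustrated with a linearized toy reconstruction that stacks several frames and adds a first-difference temporal penalty. The matrix sizes, priors, and penalty here are hypothetical; the paper's actual augmented regularization matrix encodes richer spatial and temporal correlation models.

    ```python
    import numpy as np

    def temporal_tikhonov(J, Y, lam=1e-2, gamma=1.0):
        """Joint reconstruction of T frames with spatial + temporal penalties.

        J   : (m, p) linearized sensitivity (Jacobian), shared across frames
        Y   : (m, T) measurements for T consecutive frames
        Solves  min  sum_t ||J x_t - y_t||^2 + lam * sum_t ||x_t||^2
                     + gamma * sum_t ||x_t - x_{t-1}||^2
        by assembling one block system over the concatenated frames.
        """
        m, p = J.shape
        T = Y.shape[1]
        # block-diagonal normal equations for the per-frame terms
        A = np.kron(np.eye(T), J.T @ J + lam * np.eye(p))
        # temporal first-difference penalty couples adjacent frames
        Dt = np.diff(np.eye(T), axis=0)            # (T-1, T)
        A += gamma * np.kron(Dt.T @ Dt, np.eye(p))
        b = (J.T @ Y).T.reshape(-1)                # stacked J^T y_t, frame-major
        return np.linalg.solve(A, b).reshape(T, p).T
    ```

    Setting gamma = 0 recovers independent frame-by-frame Tikhonov reconstruction, so gamma plays the role of the temporal factor weighing inter-frame correlation.
    
    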

  3. Accretion onto some well-known regular black holes

    International Nuclear Information System (INIS)

    Jawad, Abdul; Shahzad, M.U.

    2016-01-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes based on the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, as well as the Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of the radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  4. Accretion onto some well-known regular black holes

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul; Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)

    2016-03-15

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes based on the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, as well as the Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of the radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  5. Accretion onto some well-known regular black holes

    Science.gov (United States)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes based on the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, as well as the Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of the radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.

  6. Studies on the Effect of Type and Solarization Period on Germination Percentage of Four Weed Species

    Directory of Open Access Journals (Sweden)

    J. Rostam

    2011-01-01

    Full Text Available Abstract In order to study the effects of soil solarization on weed control, an experiment with a factorial arrangement in a randomized complete block design with four replications was conducted in a fallow farm in Daregaz in 2008. Factors included solarization duration (0, 2, 4, and 6 weeks) and soil moisture content (dry and moist). The soil seed bank was sampled (at two depths, 0-10 and 10-20 cm) prior to the experiment and immediately after applying the treatments, and the germination percentage of weed species was determined. Results of this study showed that seed germination percentage at 10 cm soil depth was influenced by soil moisture, solarization, and their interactions, while at 20 cm soil depth only the solarization period affected weed seed germination. Germination percentage in moist soil was less than that in dry soil. Seed germination percentage declined further with increasing solarization duration, so that the greatest decline was obtained after 6 weeks of solarization. Solarization decreased germination percentage in moist soil more than in dry soil. Overall, the results of this experiment indicated that solarization of moist soil for 6 weeks was the most effective treatment in controlling common lambsquarters (Chenopodium album), common purslane (Portulaca oleracea), redroot pigweed (Amaranthus retroflexus), and wild mustard (Sinapis arvensis), while solarization of dry soil for 2 weeks was the least effective treatment for weed control. Keywords: Solarization, Soil moisture, Seed bank

  7. The percentage of macrophage numbers in rat model of sciatic nerve crush injury

    Directory of Open Access Journals (Sweden)

    Satrio Wicaksono

    2016-02-01

    Full Text Available ABSTRACT Excessive accumulation of macrophages in sciatic nerve fascicles inhibits regeneration of peripheral nerves. The aim of this study was to determine the percentage of macrophages inside and outside of the fascicles at the proximal segment, at the site of injury, and at the distal segment in a rat model of sciatic nerve crush injury. Thirty male 3-month-old Wistar rats weighing 200-230 g were divided into a sham-operation group and a crush injury group. Termination was performed on days 3, 7, and 14 after crush injury. Immunohistochemical examination was done using anti-CD68 antibody. Counting of immunopositive and immunonegative cells was done on three representative fields for the extrafascicular and intrafascicular areas of the proximal, injury, and distal segments. The data were presented as the percentage of immunopositive cells. The percentage of macrophages was significantly increased in the crush injury group compared to the sham-operated group in all segments of the peripheral nerves. While the percentage of macrophages outside the fascicles in all segments of the sciatic nerve, and within the fascicles in the proximal segment, reached its peak on day 3, the percentage of macrophages within the fascicles at the site of injury and in the distal segment reached its peak later, at day 7. In conclusion, accumulation of macrophages outside the nerve fascicles occurs at the beginning of the injury, followed later by the accumulation of macrophages within the nerve fascicles.

  8. Quantitative Analysis of the Effect of Iterative Reconstruction Using a Phantom: Determining the Appropriate Blending Percentage

    Science.gov (United States)

    Kim, Hyun Gi; Lee, Young Han; Choi, Jin-Young; Park, Mi-Suk; Kim, Myeong-Jin; Kim, Ki Whang

    2015-01-01

    Purpose To investigate the optimal blending percentage of adaptive statistical iterative reconstruction (ASIR) at a reduced radiation dose while preserving a degree of image quality and texture similar to that of standard-dose computed tomography (CT). Materials and Methods The CT performance phantom was scanned with standard and dose-reduction protocols, including reduced mAs or kVp. Image quality parameters including noise, spatial resolution, and low-contrast resolution, as well as image texture, were quantitatively evaluated after applying various blending percentages of ASIR. The optimal blending percentage of ASIR that preserved image quality and texture compared to standard-dose CT was investigated for each radiation dose reduction protocol. Results As the percentage of ASIR increased, noise and spatial resolution decreased, whereas low-contrast resolution increased. In the texture analysis, an increasing percentage of ASIR resulted in an increase of angular second moment, inverse difference moment, and correlation, and in a decrease of contrast and entropy. The 20% and 40% dose reduction protocols with 20% and 40% ASIR blending, respectively, resulted in optimal image quality with preservation of the image texture. Conclusion Blending 40% ASIR with the 40% reduced tube-current protocol can maximize radiation dose reduction while preserving adequate image quality and texture. PMID:25510772

  9. The menstrual cycle regularization following D-chiro-inositol treatment in PCOS women: a retrospective study.

    Science.gov (United States)

    La Marca, Antonio; Grisendi, Valentina; Dondi, Giulia; Sighinolfi, Giovanna; Cianci, Antonio

    2015-01-01

    Polycystic ovary syndrome is characterized by irregular cycles, hyperandrogenism, polycystic ovaries at ultrasound, and insulin resistance. The effectiveness of D-chiro-inositol (DCI) treatment in improving insulin resistance in PCOS patients has been confirmed in several reports. The objective of this study was to retrospectively analyze the effect of DCI on menstrual cycle regularity in PCOS women. This was a retrospective study of patients with irregular cycles who were treated with DCI. Of all PCOS women admitted to our centre, 47 were treated with DCI and had complete medical charts. The percentage of women reporting regular menstrual cycles significantly increased with increasing duration of DCI treatment (24% and 51.6% at a mean of 6 and 15 months of treatment, respectively). Serum AMH levels and indexes of insulin resistance significantly decreased during the treatment. Low AMH levels, a high HOMA index, and the presence of oligomenorrhea at the first visit were the independent predictors of obtaining a regular menstrual cycle with DCI. In conclusion, the use of DCI is associated with clinical benefits for many women affected by PCOS, including improvement in insulin resistance and menstrual cycle regularity. Responders to the treatment may be identified on the basis of menstrual irregularity and hormonal or metabolic markers.

  10. Evaluation of electron mobility in InSb quantum wells by means of percentage-impact

    International Nuclear Information System (INIS)

    Mishima, T. D.; Edirisooriya, M.; Santos, M. B.

    2014-01-01

    In order to quantitatively analyze the contribution of each scattering factor to the total carrier mobility, we use a new, convenient figure-of-merit named the percentage impact. The mobility limit due to a scattering factor, which is widely used to summarize a scattering analysis, has its own advantages; however, a mobility limit alone is not well suited to the above purpose. A comprehensive understanding of how the various scattering factors contribute to the total carrier mobility can be obtained by evaluating their percentage impacts, which can be straightforwardly calculated from the mobility limits and the total mobility. Our percentage-impact analysis shows that threading dislocation is one of the dominant scattering factors for electron transport in InSb quantum wells at room temperature.
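    The record does not spell out the formula, but under Matthiessen's rule (1/μ_total = Σᵢ 1/μᵢ) a natural reading of "percentage impact" is each factor's share of the total inverse mobility, Pᵢ = 100·μ_total/μᵢ, which sums to 100% across factors. A sketch under that assumption, with hypothetical mobility limits (not measured values):

    ```python
    # Percentage-impact sketch under Matthiessen's rule (an assumption here):
    #   1/mu_total = sum_i 1/mu_i,   P_i = 100 * mu_total / mu_i  (sums to 100%).
    # The mobility limits below are illustrative numbers, not data from the paper.

    mobility_limits = {                 # cm^2/(V*s), hypothetical
        "threading_dislocation": 60_000,
        "polar_optical_phonon": 80_000,
        "interface_roughness": 300_000,
    }

    inverse_total = sum(1.0 / mu for mu in mobility_limits.values())
    mu_total = 1.0 / inverse_total

    impacts = {name: 100.0 * mu_total / mu for name, mu in mobility_limits.items()}

    for name, p in sorted(impacts.items(), key=lambda kv: -kv[1]):
        print(f"{name:>22}: {p:5.1f} %")
    print(f"total mobility: {mu_total:.0f} cm^2/(V*s)")
    ```

    Unlike raw mobility limits, the percentage impacts of all factors add up to 100%, which makes their relative importance directly comparable.
    
    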

  11. Laplacian embedded regression for scalable manifold regularization.

    Science.gov (United States)

    Chen, Lin; Tsang, Ivor W; Xu, Dong

    2012-06-01

    Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real

  12. Soil Carbon Mapping in Low Relief Areas with Combined Land Use Types and Percentages

    Science.gov (United States)

    Liu, Y. L.; Wu, Z. H.; Chen, Y. Y.; Wang, B. Z.

    2018-05-01

    Accurate mapping of soil carbon in low relief areas is a great challenge because of the limitations of the conventional "soil-landscape" model. Efforts have been made to integrate land use information into the modelling and mapping of soil organic carbon (SOC), but the spatial context has been ignored. With 256 topsoil samples collected from Jianghan Plain, we aim to (i) explore the land-use dependency of SOC via one-way ANOVA; (ii) investigate the "spillover effect" of land use on SOC content; (iii) examine the feasibility of land use types and percentages (obtained with a 200-meter buffer) for soil mapping via regression kriging (RK) models. Results showed that the SOC of paddy fields was higher than that of woodlands and irrigated lands. The land use type could explain 20.5 % of the variation in SOC, and this value increased to 24.7 % when the land use percentages were considered. SOC was positively correlated with the percentage of water area and irrigation canals. Further analysis indicated that the SOC of irrigated lands was significantly correlated with the percentage of water area and irrigation canals, while paddy fields and woodlands did not show similar trends. The RK model that combined land use types and percentages outperformed the other models, with the lowest RMSEC (5.644 g/kg) and RMSEP (6.229 g/kg) and the highest R2C (0.193) and R2P (0.197). In conclusion, land use types and percentages serve as efficient indicators for SOC mapping in plain areas. Additionally, irrigation facilities contributed to farmland SOC sequestration, especially in irrigated lands.

  13. ABOUT FEW APPROACHES TO COMMERCIAL BANK PERCENTAGE POLICY CONSTRUCTION IN CREDITING POPULATION

    Directory of Open Access Journals (Sweden)

    A.A. Kuklin

    2007-06-01

    Full Text Available In this article we consider some aspects of the development of the banking sector in the Russian Federation and the Sverdlovsk region, and several principles for constructing the interest-rate policy of a credit organization. We also describe interest-rate calculation methods that depend on the currency toolkit, and the results obtained from applying these methods to the development of consumer lending. In addition, we make proposals for increasing the efficiency of interest-rate policy management, for decreasing the level of overdue credit debt, and for refining forecasts of consumer lending development in the Sverdlovsk region.

  14. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
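    The three metrics can be written down directly. The definitions below follow the forms commonly used in this literature (MAPE; the median log accuracy ratio, median of ln(predicted/observed); and the median symmetric accuracy, 100·(exp(median|ln Q|) − 1)); the toy data is chosen to expose MAPE's asymmetry between over- and under-prediction:

    ```python
    import math
    from statistics import median

    def mape(obs, pred):
        """Mean absolute percentage error, in percent."""
        return 100.0 * sum(abs((p - o) / o) for o, p in zip(obs, pred)) / len(obs)

    def median_log_accuracy_ratio(obs, pred):
        """Bias measure MdLQ: median of ln(predicted/observed); 0 = unbiased."""
        return median(math.log(p / o) for o, p in zip(obs, pred))

    def median_symmetric_accuracy(obs, pred):
        """Accuracy in percent, symmetric under over- and under-prediction."""
        mdalq = median(abs(math.log(p / o)) for o, p in zip(obs, pred))
        return 100.0 * (math.exp(mdalq) - 1.0)

    # A model that alternately over- and under-predicts by exactly a factor of 2:
    obs = [1.0, 2.0, 4.0, 8.0]
    pred = [2.0, 1.0, 8.0, 4.0]

    print(mape(obs, pred))                       # 75.0: over-predictions cost
                                                 # 100%, under-predictions only 50%
    print(median_log_accuracy_ratio(obs, pred))  # 0.0: no systematic bias
    print(round(median_symmetric_accuracy(obs, pred), 6))  # 100.0: typical
                                                           # factor-of-2 error
    ```

    MAPE reports 75% and would shrink if the model systematically under-predicted, while the log-ratio measures treat a factor-of-2 error the same in either direction — the drawback and remedy the report describes.
    
    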

  15. Constrained least squares regularization in PET

    International Nuclear Information System (INIS)

    Choudhury, K.R.; O'Sullivan, F.O.

    1996-01-01

    Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood estimators, at a fraction of the computational effort.

  16. Regularities of radiorace formation in yeasts

    International Nuclear Information System (INIS)

    Korogodin, V.I.; Bliznik, K.M.; Kapul'tsevich, Yu.G.; Petin, V.G.; Akademiya Meditsinskikh Nauk SSSR, Obninsk. Nauchno-Issledovatel'skij Inst. Meditsinskoj Radiologii)

    1977-01-01

    Two strains of diploid yeast, namely Saccharomyces ellipsoides Megri 139-B, isolated under natural conditions, and Saccharomyces cerevisiae 5a x 3Bα, heterozygous for the genes ade 1 and ade 2, were exposed to γ-quanta of 60 Co. The content of saltant cells forming colonies with changed morphology, of nonviable cells, of respiration-mutant cells, and of recombinant cells for the genes ade 1 and ade 2 was determined. A certain regularity was revealed in the distribution of the four cell types among the colonies: the higher the content of cells of any one type, the higher the content of cells carrying other hereditary changes.

  17. Regularization destriping of remote sensing imagery

    Science.gov (United States)

    Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle

    2017-07-01

    We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method by giving weights spatially to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set which represents the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.

  18. The Regularity of Optimal Irrigation Patterns

    Science.gov (United States)

    Morel, Jean-Michel; Santambrogio, Filippo

    2010-02-01

    A branched structure is observable in draining and irrigation systems, in electric power supply systems, and in natural objects like blood vessels, river basins or trees. Recent approaches to these networks derive their branched structure from an energy functional whose essential feature is to favor wide routes. Given a flow s in a river, a road, a tube or a wire, the transportation cost per unit length is supposed in these models to be proportional to s^α with 0 < α < 1. Here we consider the case where the irrigated measure is the Lebesgue density on a smooth open set and the irrigating measure is a single source. In that case we prove that all branches of optimal irrigation trees satisfy an elliptic equation and that their curvature is a bounded measure. In consequence all branching points in the network have a tangent cone made of a finite number of segments, and all other points have a tangent. An explicit counterexample disproves these regularity properties for non-Lebesgue irrigated measures.

  19. Singular tachyon kinks from regular profiles

    International Nuclear Information System (INIS)

    Copeland, E.J.; Saffin, P.M.; Steer, D.A.

    2003-01-01

    We demonstrate how Sen's singular kink solution of the Born-Infeld tachyon action can be constructed by taking the appropriate limit of initially regular profiles. It is shown that the order in which different limits are taken plays an important role in determining whether or not such a solution is obtained for a wide class of potentials. Indeed, by introducing a small parameter into the action, we are able to circumvent the results of a recent paper which derived two conditions on the asymptotic tachyon potential such that the singular kink could be recovered in the large amplitude limit of periodic solutions. We show that this is explained by the non-commuting nature of two limits, and that Sen's solution is recovered if the order of the limits is chosen appropriately.

  20. Two-pass greedy regular expression parsing

    DEFF Research Database (Denmark)

    Grathwohl, Niels Bjørn Bugge; Henglein, Fritz; Nielsen, Lasse

    2013-01-01

    We present new algorithms for producing greedy parses for regular expressions (REs) in a semi-streaming fashion. Our lean-log algorithm executes in time O(mn) for REs of size m and input strings of size n and outputs a compact bit-coded parse tree representation. It improves on previous algorithms...... by: operating in only 2 passes; using only O(m) words of random-access memory (independent of n); requiring only kn bits of sequentially written and read log storage, where k ... and not requiring it to be stored at all. Previous RE parsing algorithms do not scale linearly with input size, or require substantially more log storage and employ 3 passes where the first consists of reversing the input, or do not or are not known to produce a greedy parse. The performance of our unoptimized C...

  1. Discriminative Elastic-Net Regularized Linear Regression.

    Science.gov (United States)

    Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen

    2017-03-01

    In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations for final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB code for our methods is available at http://www.yongxu.org/lunwen.html.
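    The paper's ENLR solver (elastic-net regularization of singular values with relaxed targets) is not reproduced here; a generic elastic-net regression — squared loss plus L1 and L2 penalties, solved by proximal gradient descent — gives the flavor of the regularizer on synthetic data:

    ```python
    import numpy as np

    # Generic elastic-net regression sketch (NOT the paper's ENLR model):
    #   minimize 0.5*||y - X w||^2 + l1*||w||_1 + 0.5*l2*||w||^2
    # solved by proximal gradient descent; soft-thresholding handles the L1 term.

    def elastic_net(X, y, l1=0.1, l2=0.1, n_iter=500):
        n, d = X.shape
        w = np.zeros(d)
        step = 1.0 / (np.linalg.norm(X, 2) ** 2 + l2)   # 1 / Lipschitz constant
        for _ in range(n_iter):
            grad = X.T @ (X @ w - y) + l2 * w           # gradient of smooth part
            z = w - step * grad
            w = np.sign(z) * np.maximum(np.abs(z) - step * l1, 0.0)  # prox of L1
        return w

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 10))
    w_true = np.zeros(10)
    w_true[:3] = [2.0, -1.5, 1.0]                       # sparse ground truth
    y = X @ w_true + 0.01 * rng.standard_normal(100)

    w_hat = elastic_net(X, y, l1=0.5, l2=0.1)
    print(np.round(w_hat, 2))   # large weights on the first three features only
    ```

    The L1 term drives irrelevant coefficients to exactly zero while the L2 term keeps the problem well conditioned — the same compactness/stability trade-off the ENLR framework exploits for its projection matrix.
    
    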

  2. Regularization of Instantaneous Frequency Attribute Computations

    Science.gov (United States)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

    We compare two different methods of computation of a temporally local frequency: 1) a stabilized instantaneous frequency using the theory of the analytic signal; 2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell Transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. "Time Frequency Analysis: Theory and Applications." USA: Prentice Hall (1995). Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
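    The unregularized analytic-signal estimate that the first method builds on can be sketched with an FFT-based Hilbert transform; the roughness-penalized (regularized) version the abstract describes is not implemented here, and the chirp signal is illustrative:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Instantaneous frequency from the analytic signal (Taner-style estimate,
    # without the abstract's regularization): f(t) = (1/2*pi) * d(phi)/dt,
    # where phi is the phase of the analytic signal x + i*H[x].

    fs = 1000.0                                  # sampling rate, Hz
    t = np.arange(0, 1.0, 1.0 / fs)
    x = np.cos(2 * np.pi * (50 * t + 30 * t**2))  # chirp: 50 Hz -> 110 Hz

    analytic = hilbert(x)                        # FFT-based analytic signal
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) / (2.0 * np.pi) * fs

    # Interior samples track the true instantaneous frequency 50 + 60*t:
    mid = inst_freq[100:900]
    true_mid = 50 + 60 * t[100:900]
    print(float(np.max(np.abs(mid - true_mid))))  # small away from the edges
    ```

    The naive phase derivative is accurate on this clean mono-component signal but becomes erratic near edges and in noise — which is exactly why the stabilized/regularized variants compared in the abstract are needed.
    
    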

  3. Source rock indication from the heavy mineral weight percentages, central Tamil Nadu, India

    Digital Repository Service at National Institute of Oceanography (India)

    Rajamanickam, G.V.; Chandrasekaran, R.; Manickaraj, D.S.; Gujar, A.R.; Loveson, V.J.; Chaturvedi, S.K.; Chandrasekar, N.; Mahesh, R.

    From December 2003 to December 2005, beach sand samples were collected at regular intervals. During this period, the 26th December 2004 tsunami enabled us to analyze its impact on the beach sediments, particularly heavy...

  4. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.; Franek, M.; Schonlieb, C.-B.

    2012-01-01

    for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations

  5. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in the place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method over regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.

  6. Dimensional regularization and analytical continuation at finite temperature

    International Nuclear Information System (INIS)

    Chen Xiangjun; Liu Lianshou

    1998-01-01

    The relationship between dimensional regularization and analytical continuation of infrared divergent integrals at finite temperature is discussed and a method of regularization of infrared divergent integrals and infrared divergent sums is given

  7. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded

  8. Regular Generalized Star Star closed sets in Bitopological Spaces

    OpenAIRE

    K. Kannan; D. Narasimhan; K. Chandrasekhara Rao; R. Ravikumar

    2011-01-01

    The aim of this paper is to introduce the concepts of τ1τ2-regular generalized star star closed sets and τ1τ2-regular generalized star star open sets, and to study their basic properties in bitopological spaces.

  9. Exclusion of children with intellectual disabilities from regular ...

    African Journals Online (AJOL)

    The study investigated why teachers exclude children with intellectual disability (ID) from regular classrooms in Nigeria. Participants were 169 regular teachers randomly selected from Oyo and Ogun states. A questionnaire was used to collect data. Results revealed that 57.4% of regular teachers could not cope with children with ID ...

  10. 39 CFR 6.1 - Regular meetings, annual meeting.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...

  11. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    Science.gov (United States)

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  12. 5 CFR 551.421 - Regular working hours.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Regular working hours. 551.421 Section... Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...

  13. 20 CFR 226.35 - Deductions from regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Deductions from regular annuity rate. 226.35... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...

  14. 20 CFR 226.34 - Divorced spouse regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Divorced spouse regular annuity rate. 226.34... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...

  15. 20 CFR 226.14 - Employee regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...

  16. The remaining percentage of 32P after burning of sulphur tablet containing 32P

    International Nuclear Information System (INIS)

    Ke Weiqing

    1991-01-01

    Three types of sulphur tablet containing 32 P were made artificially. The remaining percentage of 32 P after burning is 98.1 ± 1.3% for the 1st and 2nd types and 97.2 ± 2.8% for the 3rd type.

  17. Relation Between Bitumen Content and Percentage Air Voids in Semi Dense Bituminous Concrete

    Science.gov (United States)

    Panda, R. P.; Das, Sudhanshu Sekhar; Sahoo, P. K.

    2018-06-01

    Hot mix asphalt (HMA) is a heterogeneous mix of aggregate, mineral filler, bitumen, additives and air voids. Researchers have indicated that the durability of HMA is sensitive to the actual bitumen content and the percentage of air voids. This paper aims at establishing the relationship between the bitumen content and the percentage of air voids in Semi Dense Bituminous Concrete (SDBC) using Viscosity Grade-30 (VG-30) bitumen. In total, 54 samples were collected for the formulation and validation of the relationship, and it was observed that the percentage of air voids increases with a decrease in actual bitumen content and vice versa. A minor increase in the percentage of air voids beyond the designed air voids in the Marshall method of design is required for better performance, indicating a need to reduce the codal provision of minimum bitumen content for SDBC as specified in the Specification for Road & Bridges (Fourth Revision) published by the Indian Roads Congress, 2001. The study shows the possibility of reducing the designed minimum bitumen content for SDBC from the codal provision by 0.2% by weight with VG-30 grade bitumen.

  18. 13 CFR 120.210 - What percentage of a loan may SBA guarantee?

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false What percentage of a loan may SBA guarantee? 120.210 Section 120.210 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS... percent, except as otherwise authorized by law. [61 FR 3235, Jan. 31, 1996, as amended at 68 FR 51680, Aug...

  19. The dependence of percentage depth dose on the source-to-skin ...

    African Journals Online (AJOL)

    The variation of percentage depth dose (PDD) with source-to-skin distance (SSD) for kilovoltage X-rays used in radiotherapy has been investigated. Based on physical parameters of photon fluence, absorption and scatter during interaction of radiation with tissue, a mathematical model was developed to predict the PDDs at ...
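    The abstract's own model is not given in this record, but the classical first-order account of how PDD changes with SSD is the Mayneord F factor, which corrects for inverse-square divergence while ignoring changes in scatter. A sketch with illustrative numbers (not data from the study):

    ```python
    # Mayneord F factor (textbook first-order correction, not the paper's model):
    #   F = ((SSD2 + dm) / (SSD1 + dm))^2 * ((SSD1 + d) / (SSD2 + d))^2
    # where d is the depth of interest and dm the depth of maximum dose.
    # It converts a PDD measured at SSD1 into an estimate at SSD2, ignoring
    # the change in scatter conditions.

    def mayneord_factor(ssd1, ssd2, d, dm):
        """Ratio converting PDD at SSD1 to PDD at SSD2 for depth d, dose max at dm."""
        return ((ssd2 + dm) / (ssd1 + dm)) ** 2 * ((ssd1 + d) / (ssd2 + d)) ** 2

    # Illustrative example: PDD of 60% at 10 cm depth, dm = 0.5 cm,
    # moving the source from SSD 80 cm to SSD 100 cm:
    pdd_80 = 60.0
    f = mayneord_factor(80.0, 100.0, d=10.0, dm=0.5)
    print(round(pdd_80 * f, 1))   # -> 62.6: PDD increases with larger SSD
    ```

    The increase of PDD with SSD shown here is the geometric (inverse-square) part of the dependence; a full model such as the one in the abstract must also account for scatter, which matters particularly at kilovoltage energies.
    
    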

  20. 26 CFR 1.410(b)-5 - Average benefit percentage test.

    Science.gov (United States)

    2010-04-01

    ... benefit percentages may be determined on the basis of any definition of compensation that satisfies § 1... underlying definition of compensation that satisfies section 414(s). Except as otherwise specifically... definitions of section 414(s) compensation in the determination of rates; (B) Use of different definitions of...

  1. 29 CFR 778.503 - Pseudo “percentage bonuses.”

    Science.gov (United States)

    2010-07-01

    ... such a scheme is artificially low, and the difference between the wages paid at the hourly rate and the... Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL... part, a true bonus based on a percentage of total wages—both straight time and overtime wages—satisfies...

  2. 7 CFR 929.49 - Marketable quantity, allotment percentage, and annual allotment.

    Science.gov (United States)

    2010-01-01

    ... AGRICULTURE CRANBERRIES GROWN IN STATES OF MASSACHUSETTS, RHODE ISLAND, CONNECTICUT, NEW JERSEY, WISCONSIN, MICHIGAN, MINNESOTA, OREGON, WASHINGTON, AND LONG ISLAND IN THE STATE OF NEW YORK Order Regulating Handling... history, established pursuant to § 929.48. Such allotment percentage shall be established by the Secretary...

  3. Brief Report: On the Concordance Percentages for Autistic Spectrum Disorder of Twins

    Science.gov (United States)

    Bohm, Henry V.; Stewart, Melbourne G.

    2009-01-01

    In the development of genetic theories of Autistic Spectrum Disorder (ASD) various characteristics of monozygotic (MZ) and dizygotic (DZ) twins are often considered. This paper sets forth a possible refinement in the interpretation of the MZ twin concordance percentages for ASD underlying such genetic theories, and, drawing the consequences from…

  4. Limitations of the relative standard deviation of win percentages for measuring competitive balance in sports leagues

    OpenAIRE

    P. Dorian Owen

    2009-01-01

    The relative standard deviation of win percentages, the most widely used measure of within-season competitive balance, has an upper bound which is very sensitive to variation in the numbers of teams and games played. Taking into account this upper bound provides additional insight into comparisons of competitive balance across leagues or over time.
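    The measure in question is the Noll-Scully-type relative standard deviation, RSD = SD(win%)/(0.5/√G), where 0.5/√G is the standard deviation of an ideally balanced (coin-flip) league with G games per team. Its attainable maximum can be illustrated with a perfectly ordered single round robin — one maximally unbalanced schedule — showing how the bound moves with league size, which is the sensitivity the abstract highlights:

    ```python
    import math

    # RSD = SD(win%) / (0.5 / sqrt(G)); the "perfectly ordered" round robin
    # below (team k beats every lower-ranked team) is used only to illustrate
    # how the attainable maximum grows with the number of teams and games.

    def rsd(win_pcts, games_per_team):
        n = len(win_pcts)
        mean = sum(win_pcts) / n
        sd = math.sqrt(sum((w - mean) ** 2 for w in win_pcts) / n)
        return sd / (0.5 / math.sqrt(games_per_team))

    for n_teams in (4, 10, 20, 30):
        g = n_teams - 1                          # single round robin
        win_pcts = [k / g for k in range(n_teams)]   # win% = rank / (n-1)
        print(n_teams, round(rsd(win_pcts, g), 3))
    ```

    For this schedule the maximum works out to √((N+1)/3), so the same "completely unbalanced" league scores a much larger RSD with 30 teams than with 4 — comparing raw RSD values across leagues of different sizes is therefore misleading unless the bound is taken into account.
    
    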

  5. Increased percentage of Th17 cells in peritoneal fluid is associated with severity of endometriosis.

    Science.gov (United States)

    Gogacz, Marek; Winkler, Izabela; Bojarska-Junak, Agnieszka; Tabarkiewicz, Jacek; Semczuk, Andrzej; Rechberger, Tomasz; Adamiak, Aneta

    2016-09-01

    Th17 cells are a newly discovered T helper lymphocyte subpopulation producing interleukin IL-17. Th17 cells are present in blood and peritoneal fluid (PF) at different stages of endometriosis. We aimed to establish their potential importance in the pathogenesis and clinical features of the disease. The percentage of Th17 cells among T helper lymphocytes was determined in the PF and peripheral blood (PB) of patients with endometriosis and in the control group by flow cytometry using the monoclonal antibodies anti-CD-4-FITC, anti-CD-3-PE/Cy5, and anti-IL-17A-PE. The Th17 percentage was increased in PF in comparison with PB both in endometriotic patients and in the control group. In severe endometriosis, the percentage of Th17 cells in PF was higher than in early (I/II stage) endometriosis. A positive correlation between the percentage of Th17 cells in PF and the white blood cell count in PB was found in patients with endometriosis. Targeting the activity of PF Th17 cells may have an influence on the proliferation of ectopic tissue and the clinical manifestations of the disease. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. 78 FR 32991 - Medicaid Program; Increased Federal Medical Assistance Percentage Changes Under the Affordable...

    Science.gov (United States)

    2013-06-03

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare & Medicaid Services 42 CFR Part 433 [CMS-2327-CN] RIN 0938-AR38 Medicaid Program; Increased Federal Medical Assistance Percentage Changes Under the Affordable Care Act of 2010; Correction AGENCY: Centers for Medicare & Medicaid Services (CMS...

  7. Marathon performance in relation to body fat percentage and training indices in recreational male runners

    Directory of Open Access Journals (Sweden)

    Tanda G

    2013-05-01

    Full Text Available Giovanni Tanda,1 Beat Knechtle2,3; 1DIME, Università degli Studi di Genova, Genova, Italy; 2Gesundheitszentrum St Gallen, St Gallen, Switzerland; 3Institute of General Practice and Health Services Research, University of Zurich, Zurich, Switzerland. Background: The purpose of this study was to investigate the effect of anthropometric characteristics and training indices on marathon race times in recreational male marathoners. Methods: Training and anthropometric characteristics were collected for a large cohort of recreational male runners (n = 126) participating in the Basel marathon in Switzerland between 2010 and 2011. Results: Among the parameters investigated, marathon performance time was found to be affected by mean running speed and the mean weekly distance run during the training period prior to the race and by body fat percentage. The effect of body fat percentage became significant as it exceeded a certain limiting value; for a relatively low body fat percentage, marathon performance time correlated only with training indices. Conclusion: Marathon race time may be predicted (r = 0.81) for recreational male runners by the following equation: marathon race time (minutes) = 11.03 + 98.46 exp(−0.0053 mean weekly training distance [km/week]) + 0.387 mean training pace (sec/km) + 0.1 exp(0.23 body fat percentage [%]). The marathon race time results were valid over a range of 165–266 minutes. Keywords: endurance, exercise, anthropometry

  8. Marathon performance in relation to body fat percentage and training indices in recreational male runners.

    Science.gov (United States)

    Tanda, Giovanni; Knechtle, Beat

    2013-01-01

    The purpose of this study was to investigate the effect of anthropometric characteristics and training indices on marathon race times in recreational male marathoners. Training and anthropometric characteristics were collected for a large cohort of recreational male runners (n = 126) participating in the Basel marathon in Switzerland between 2010 and 2011. Among the parameters investigated, marathon performance time was found to be affected by mean running speed and the mean weekly distance run during the training period prior to the race and by body fat percentage. The effect of body fat percentage became significant as it exceeded a certain limiting value; for a relatively low body fat percentage, marathon performance time correlated only with training indices. Marathon race time may be predicted (r = 0.81) for recreational male runners by the following equation: marathon race time (minutes) = 11.03 + 98.46 exp(-0.0053 mean weekly training distance [km/week]) + 0.387 mean training pace (sec/km) + 0.1 exp(0.23 body fat percentage [%]). The marathon race time results were valid over a range of 165-266 minutes.
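    The prediction equation above can be evaluated directly. A minimal sketch in Python, assuming the units given in the abstract (km/week for training distance, sec/km for pace, percent for body fat); the sample inputs are illustrative, not from the study:

    ```python
    import math

    def predicted_marathon_time(weekly_km: float, pace_sec_per_km: float,
                                body_fat_pct: float) -> float:
        """Predicted marathon race time in minutes (Tanda & Knechtle, r = 0.81)."""
        return (11.03
                + 98.46 * math.exp(-0.0053 * weekly_km)
                + 0.387 * pace_sec_per_km
                + 0.1 * math.exp(0.23 * body_fat_pct))

    # e.g. 70 km/week at 300 sec/km (5:00 min/km pace) with 15% body fat
    t = predicted_marathon_time(70, 300, 15)
    print(round(t, 1))  # roughly 198 minutes, inside the 165-266 min validity range
    ```

    Note the exponential terms: extra weekly mileage has diminishing returns, while body fat only penalizes the prediction noticeably once it exceeds a threshold, matching the authors' observation.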

  9. The Role of Monocyte Percentage in Osteoporosis in Male Rheumatic Diseases.

    Science.gov (United States)

    Su, Yu-Jih; Chen, Chao Tung; Tsai, Nai-Wen; Huang, Chih-Cheng; Wang, Hung-Chen; Kung, Chia-Te; Lin, Wei-Che; Cheng, Ben-Chung; Su, Chih-Min; Hsiao, Sheng-Yuan; Lu, Cheng-Hsien

    2017-11-01

    Osteoporosis is easily overlooked in male patients, especially in the field of rheumatic diseases, which are mostly prevalent in female patients, and its link to pathogenesis is still poorly understood. Attenuated monocyte apoptosis, found in a transcriptome-wide expression study, illustrates the role of monocytes in osteoporosis. This study tested the hypothesis that the monocyte percentage among leukocytes could be a biomarker of osteoporosis in rheumatic diseases. Eighty-seven males with rheumatic diseases were evaluated in rheumatology outpatient clinics for bone mineral density (BMD) and surrogate markers, such as routine peripheral blood parameters and autoantibodies. Of the 87 patients included in this study, only 15 met the criteria for a diagnosis of osteoporosis. Both age and monocyte percentage remained independently associated with the presence of osteoporosis. Steroid dose (equivalent prednisolone dose) was negatively associated with BMD of the hip area, and platelet counts were negatively associated with BMD and T score of the spine area. Besides age, monocyte percentage is thus independently associated with osteoporosis in male rheumatic diseases. In male rheumatic disease patients with a higher monocyte percentage, aged over 50 years in this study, a BMD study should be considered in order to reduce the risk of osteoporosis-related fractures.

  10. New loci for body fat percentage reveal link between adiposity and cardiometabolic disease risk

    DEFF Research Database (Denmark)

    Lu, Yingchang; Day, Felix R; Gustafsson, Stefan

    2016-01-01

    To increase our understanding of the genetic basis of adiposity and its links to cardiometabolic disease risk, we conducted a genome-wide association meta-analysis of body fat percentage (BF%) in up to 100,716 individuals. Twelve loci reached genome-wide significance (P<5 × 10(-8)), of which eigh...

  11. Method for quantifying percentage wood failure in block-shear specimens by a laser scanning profilometer

    Science.gov (United States)

    C. T. Scott; R. Hernandez; C. Frihart; R. Gleisner; T. Tice

    2005-01-01

    A new method for quantifying percentage wood failure of an adhesively bonded block-shear specimen has been developed. This method incorporates a laser displacement gage with an automated two-axis positioning system that functions as a highly sensitive profilometer. The failed specimen is continuously scanned across its width to obtain a surface failure profile. The...

  12. 45 CFR 305.33 - Determination of applicable percentages based on performance levels.

    Science.gov (United States)

    2010-10-01

    ..., DEPARTMENT OF HEALTH AND HUMAN SERVICES PROGRAM PERFORMANCE MEASURES, STANDARDS, FINANCIAL INCENTIVES, AND PENALTIES § 305.33 Determination of applicable percentages based on performance levels. (a) A State's... performance levels. 305.33 Section 305.33 Public Welfare Regulations Relating to Public Welfare OFFICE OF...

  13. Validation of Field Methods to Assess Body Fat Percentage in Elite Youth Soccer Players.

    Science.gov (United States)

    Munguia-Izquierdo, Diego; Suarez-Arrones, Luis; Di Salvo, Valter; Paredes-Hernandez, Victor; Alcazar, Julian; Ara, Ignacio; Kreider, Richard; Mendez-Villanueva, Alberto

    2018-05-01

    This study determined the most effective field method for quantifying body fat percentage in male elite youth soccer players and developed prediction equations based on anthropometric variables. Forty-four male elite-standard youth soccer players aged 16.3-18.0 years underwent body fat percentage assessments, including bioelectrical impedance analysis and the calculation of various skinfold-based prediction equations. Dual X-ray absorptiometry provided a criterion measure of body fat percentage. Correlation coefficients, bias, limits of agreement, and differences were used as validity measures, and regression analyses were used to develop soccer-specific prediction equations. The equations from Sarria et al. (1998) and Durnin & Rahaman (1967) reached very large correlations and the lowest biases, and they reached neither the practically worthwhile difference nor the substantial difference between methods. The new youth soccer-specific skinfold equation included a combination of triceps and supraspinale skinfolds. None of the practical methods compared in this study are adequate for estimating body fat percentage in male elite youth soccer players, except for the equations from Sarria et al. (1998) and Durnin & Rahaman (1967). The new youth soccer-specific equation calculated in this investigation is the only field method specifically developed and validated in elite male players, and it shows potentially good predictive power. © Georg Thieme Verlag KG Stuttgart · New York.

  14. Percentage of Protected Area Amounts within each Watershed Boundary for the Conterminous US

    Science.gov (United States)

    Abstract: This dataset uses spatial information from the Watershed Boundary Dataset (WBD, March 2011) and the Protected Areas Database of the United States (PAD-US Version 1.0). The resulting data layer, with percentages of protected areas by category, was created using the ATtI...

  15. Percentage and function of CD4+CD25+ regulatory T cells in patients with hyperthyroidism

    Science.gov (United States)

    Jiang, Ting-Jun; Cao, Xue-Liang; Luan, Sha; Cui, Wan-Hui; Qiu, Si-Huang; Wang, Yi-Chao; Zhao, Chang-Jiu; Fu, Peng

    2018-01-01

    The current study observed the percentage of peripheral blood (PB) CD4+CD25+ regulatory T cells (Tregs) and the influence of CD4+CD25+ Tregs on the proliferation of naïve CD4 T cells in patients with hyperthyroidism. Furthermore, preliminary discussions are presented on the action mechanism of CD4+CD25+ Tregs in hyperthyroidism attacks. The present study identified that, compared with the percentage of PB CD4+CD25+ Tregs in healthy control subjects, no significant changes were observed in the percentage of PB CD4+CD25+ Tregs in patients with hyperthyroidism (P>0.05). In patients with hyperthyroidism, CD4+CD25+ Tregs exhibited significantly reduced inhibition of the proliferation of naïve CD4 T cells and decreased secretion capacity on the cytokines of CD4 T cells, compared with those of healthy control subjects (P<0.05). Following treatment, hyperthyroidism was significantly improved (P<0.05). Compared with the percentage of PB CD4+CD25+ Tregs before treatment, no significant changes were observed in the percentage of PB CD4+CD25+ Tregs in hyperthyroidism patients following treatment (P>0.05). In the patients with hyperthyroidism, following treatment, CD4+CD25+ Tregs exhibited significantly increased inhibition of the proliferation of naïve CD4 T cells and increased secretion capacity of CD4 T cell cytokines, compared with those of the patients with hyperthyroidism prior to treatment (P<0.05). The function of CD4+CD25+ Tregs may thus be impaired in hyperthyroidism, and its non-proportional decrease may be closely associated with the occurrence and progression of hyperthyroidism. PMID:29207121

  16. PERCENTAGE OF VIABLE SPERMATOZOA COLLECTED FROM THE EPIDIDYMES OF DEATH LOCAL DOG

    Directory of Open Access Journals (Sweden)

    I Nyoman Sulabda

    2012-11-01

    Full Text Available The purpose of this study was to determine the effect of postmortem time on the percentage of live epididymal sperm from postmortem dog caudae epididymides. A total of 9 dogs were used and divided into three groups: T0 was the control group, T1 was 3 hours postmortem and T2 was 6 hours postmortem. This way, samples were obtained at different times postmortem. Sperm were extracted from the caudae epididymides by means of cuts. The results showed that the percentages of live sperm were 67.16 ± 5.67 (T0), 46.33 ± 5.60 (T1) and 24.00 ± 4.35 (T2), respectively. We could appreciate that the percentage of live sperm was affected by postmortem time; there was a significant decrease in live sperm recovered from the epididymides postmortem (P<0.01). In conclusion, epididymal sperm from dogs undergo a decrease in the percentage of live cells, but it can stay acceptable within many hours postmortem. We interpreted these data to indicate that it may still be possible to obtain viable spermatozoa many hours later.

  17. 39 CFR 3010.23 - Calculation of percentage change in rates.

    Science.gov (United States)

    2010-07-01

    ... DOMINANT PRODUCTS Rules for Applying the Price Cap § 3010.23 Calculation of percentage change in rates. (a... Postal Service billing determinants. The Postal Service shall make reasonable adjustments to the billing determinants to account for the effects of classification changes such as the introduction, deletion, or...

  18. 13 CFR 108.1840 - Computation of NMVC Company's Capital Impairment Percentage.

    Science.gov (United States)

    2010-01-01

    ... Capital Impairment Percentage. 108.1840 Section 108.1840 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW MARKETS VENTURE CAPITAL ("NMVC") PROGRAM NMVC Company's Noncompliance With Terms of Leverage Computation of NMVC Company's Capital Impairment § 108.1840 Computation of NMVC Company's Capital Impairment...

  19. 26 CFR 1.42-8 - Election of appropriate percentage month.

    Science.gov (United States)

    2010-04-01

    ... Section 1.42-8 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY INCOME TAX INCOME TAXES Credits Against Tax § 1.42-8 Election of appropriate percentage month. (a) Election under section... previously placed in service under section 42(e). (5) Amount allocated. The housing credit dollar amount...

  20. Relation Between Bitumen Content and Percentage Air Voids in Semi Dense Bituminous Concrete

    Science.gov (United States)

    Panda, R. P.; Das, Sudhanshu Sekhar; Sahoo, P. K.

    2018-02-01

    Hot mix asphalt (HMA) is a heterogeneous mix of aggregate, mineral filler, bitumen, additives and air voids. Researchers have indicated that the durability of HMA is sensitive to the actual bitumen content and the percentage of air voids. This paper aims at establishing the relationship between bitumen content and percentage air voids in Semi Dense Bituminous Concrete (SDBC) using Viscosity Grade-30 (VG-30) bitumen. A total of 54 samples were collected for the formulation and validation of the relationship, and it was observed that the percentage of air voids increases with a decrease in actual bitumen content and vice versa. A minor increase in percentage air voids beyond the designed air voids in the Marshall method of design is required for better performance, indicating a need to reduce the codal provision of minimum bitumen content for SDBC as specified in the Specification for Road & Bridges (Fourth Revision) published by the Indian Roads Congress, 2001. The study shows the possibility of reducing the designed minimum bitumen content from the codal provision for SDBC by 0.2% by weight with VG-30 grade bitumen.
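    The inverse relation the authors report follows directly from the standard Marshall air-voids calculation, Va = 100 (Gmm − Gmb) / Gmm. A minimal sketch; the specific-gravity values below are illustrative, not taken from the paper:

    ```python
    def percent_air_voids(gmm: float, gmb: float) -> float:
        """Percent air voids Va in a compacted HMA specimen.

        Gmm: theoretical maximum specific gravity of the loose mix.
        Gmb: bulk specific gravity of the compacted specimen.
        """
        if gmb > gmm:
            raise ValueError("bulk specific gravity cannot exceed the theoretical maximum")
        return 100.0 * (gmm - gmb) / gmm

    # Adding bitumen fills voids, lowering Gmm slightly and raising Gmb,
    # so richer mixes show fewer air voids, as the paper observes.
    print(percent_air_voids(2.50, 2.40))  # leaner mix, more voids
    print(percent_air_voids(2.48, 2.42))  # richer mix, fewer voids
    ```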

  1. Total and Lower Extremity Lean Mass Percentage Positively Correlates With Jump Performance.

    Science.gov (United States)

    Stephenson, Mitchell L; Smith, Derek T; Heinbaugh, Erika M; Moynes, Rebecca C; Rockey, Shawn S; Thomas, Joi J; Dai, Boyi

    2015-08-01

    Strength and power have been identified as valuable components of both athletic performance and daily function. A major component of strength and power is muscle mass, which can be assessed with dual-energy X-ray absorptiometry (DXA). The primary purpose of this study was to quantify the relationship between total body lean mass percentage (TBLM%) and lower extremity lean mass percentage (LELM%) and lower extremity force/power production during a countermovement jump (CMJ) in a general population. Researchers performed a DXA analysis on 40 younger participants aged 18-35 years, 28 middle-aged participants aged 36-55 years, and 34 older participants aged 56-75 years. Participants performed 3 CMJ on force platforms. Correlations revealed significant and strong positive relationships between TBLM% and LELM% and CMJ normalized peak vertical ground reaction force; jump performance increased with lean mass percentages. The findings have implications for including DXA-assessed lean mass percentage as a component for evaluating lower extremity strength and power. A paired DXA analysis and CMJ jump test may be useful for identifying neuromuscular deficits that limit performance.

  2. A low-power CMOS integrated sensor for CO2 detection in the percentage range

    NARCIS (Netherlands)

    Humbert, A.; Tuerlings, B.J.; Hoofman, R.J.O.M.; Tan, Z.; Gravesteijn, D.J.; Pertijs, M.A.P.; Bastiaansen, C.W.M.; Soccol, D.

    2013-01-01

    Within the Catrene project PASTEUR, a low-cost, low-power capacitive carbon dioxide sensor has been developed for tracking CO2 concentration in the percentage range. This paper describes this sensor, which operates at room temperature where it exhibits short response times as well as reversible

  3. Annual Percentage Rate and Annual Effective Rate: Resolving Confusion in Intermediate Accounting Textbooks

    Science.gov (United States)

    Vicknair, David; Wright, Jeffrey

    2015-01-01

    Evidence of confusion in intermediate accounting textbooks regarding the annual percentage rate (APR) and annual effective rate (AER) is presented. The APR and AER are briefly discussed in the context of a note payable, and correct formulas for computing each are provided. Representative examples of the types of confusion that we found are presented…
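    The distinction the authors address reduces to one compounding identity: the AER is the annually compounded rate implied by a nominal APR compounded m times per year. A minimal sketch; the 12% rate and monthly compounding are illustrative, not from the article:

    ```python
    def aer_from_apr(apr: float, periods_per_year: int) -> float:
        """Annual effective rate implied by a nominal APR:
        AER = (1 + APR/m)**m - 1, with m compounding periods per year."""
        return (1 + apr / periods_per_year) ** periods_per_year - 1

    # A 12% APR compounded monthly is effectively more than 12% per year:
    print(round(aer_from_apr(0.12, 12), 6))  # 0.126825
    ```

    Conflating the two (e.g. discounting a note payable at APR/m per period but calling the result an "effective annual" rate) is precisely the kind of textbook error the article documents.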

  4. The percentage of bacterial genes on leading versus lagging strands is influenced by multiple balancing forces

    Science.gov (United States)

    Mao, Xizeng; Zhang, Han; Yin, Yanbin; Xu, Ying

    2012-01-01

    The majority of bacterial genes are located on the leading strand, and the percentage of such genes has a large variation across different bacteria. Although some explanations have been proposed, these are at most partial explanations as they cover only small percentages of the genes and do not even consider the ones biased toward the lagging strand. We have carried out a computational study on 725 bacterial genomes, aiming to elucidate other factors that may have influenced the strand location of genes in a bacterium. Our analyses suggest that (i) genes of some functional categories such as ribosome have higher preferences to be on the leading strands; (ii) genes of some functional categories such as transcription factor have higher preferences on the lagging strands; (iii) there is a balancing force that tends to keep genes from all moving to the leading and more efficient strand and (iv) the percentage of leading-strand genes in a bacterium can be accurately explained based on the numbers of genes in the functional categories outlined in (i) and (ii), genome size and gene density, indicating that these numbers implicitly contain the information about the percentage of genes on the leading versus lagging strand in a genome. PMID:22735706

  5. Use of enzymes in diets with different percentages of added fat for broilers

    Directory of Open Access Journals (Sweden)

    F.G.P. Costa

    2013-06-01

    Full Text Available We assessed the extent to which the removal of the fat source, and consequently its compounds, such as linoleic acid, can affect the performance of broilers. We used 600 male Cobb 500 day-old chicks. The birds were distributed in a completely randomized experimental design, with five treatments and six replicates of 20 birds each. The treatments were: (T1) positive control diet (PC), which met the nutritional needs; (T2) negative control diet (CN), with a reduction of 100 kcal/kg and low linoleic acid content; (T3) negative control diet reformulated for low linoleic acid content with a combination of Quantum phytase XT and Econase XT 25 (BAL + QFit-Eco); (T4) negative control diet reformulated, with the percentage of linoleic acid adjusted to an intermediate value between those of the PC and CN diets, using a combination of Quantum phytase XT and Econase XT 25 (IAL + QFit-Eco); and (T5) negative control diet reformulated, with the percentage of linoleic acid adjusted to a value similar to that of the positive control diet and joint use of Quantum phytase XT and Econase XT 25 (AAL + QFit-Eco). The joint use of Quantum phytase and Econase promoted improvement in the performance of broilers from 1 to 21 days. The greatest weight gain was obtained with diets containing greater percentages of total fat and linoleic acid. Dietary supplementation with enzymes resulted in higher levels of calcium in the tibia, whatever the percentage of linoleic acid studied.

  6. Correlation of Leukocyte Count and Percentage of Segmented Neutrophils with Pathohistological Findings of Appendix in Children

    Directory of Open Access Journals (Sweden)

    Marko Baskovic

    2018-01-01

    Full Text Available Background: Appendicitis is the most common indication for an emergency operation in childhood. Although no laboratory value has high sensitivity and specificity for the diagnosis of appendicitis, the leukocyte count and the percentage of segmented neutrophils are most commonly used. The aim of this study was to determine whether there is a statistically significant correlation between the leukocyte count and the percentage of segmented neutrophils and the pathohistological finding of the appendix in children. Materials and Methods: We retrospectively analyzed data from the period from 1 January 2016 to 31 December 2016. The analysis covered 211 patients. Spearman's correlation coefficient (rs) was calculated. We determined the specificity and sensitivity of the leukocyte count and the percentage of segmented neutrophils used in the calculation of the Alvarado and Pediatric Appendicitis scores. Results: The results show that the correlation between leukocyte count and the pathohistological findings is weak (rs = 0.29, p = 3.61×10−8), while there is no meaningful correlation between the percentage of segmented neutrophils and pathohistological findings (rs = 0.18, p = 7.08×10−5). The sensitivity of the leukocyte count is 93% and the specificity is 30%, while the sensitivity of the percentage of segmented neutrophils is 71% and the specificity is 50%. ROC analysis for leukocytes shows an area under the curve of 0.648, and for segmented neutrophils of 0.574. Conclusion: Given the correlation results obtained, the clinical experience of physicians will still play one of the leading roles in diagnosing acute appendicitis in children.
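    The sensitivity and specificity figures reported above come from a standard 2×2 confusion table. A minimal sketch; the counts below are hypothetical illustrations, not the study's actual data:

    ```python
    def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
        """Sensitivity = TP/(TP+FN): fraction of true appendicitis cases flagged.
        Specificity = TN/(TN+FP): fraction of negative cases correctly cleared."""
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical counts for an elevated-leukocyte cutoff:
    sens, spec = sensitivity_specificity(tp=130, fn=10, tn=21, fp=50)
    print(f"{sens:.2f} {spec:.2f}")  # 0.93 0.30
    ```

    A high-sensitivity/low-specificity pattern like this means the test rarely misses appendicitis but flags many children who do not have it, which is why the authors conclude that clinical judgment remains central.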

  7. [Relationship between hypertension and percentage of body fat, in children of Anhui province].

    Science.gov (United States)

    Tao, R W; Wan, Y H; Zhang, H; Wang, Y F; Wang, B; Xu, L; Zuo, A Z; Tong, S L; Tao, F B

    2016-02-01

    To study the prevalence of hypertension among children in Anhui province and to analyze its association with the percentage of body fat. A total of 8 890 children aged 7-17 years were tested for blood pressure and skinfold thickness in Anhui province. Hypertension in children was diagnosed with reference to the Chinese guidelines for the management of hypertension (revised in 2010). The percentage of body fat was calculated from skinfold thickness using specific formulas. Statistical analysis included the t test and the χ(2) test, while logistic regression was used to analyze the relationship between the percentage of body fat and hypertension in children and adolescents. In total, 8 890 subjects aged 7-17 years were recruited in Anhui province. The prevalence of hypertension in children aged 7-17 was 13.6%, with a total of 1 210 hypertensive children. There were significant differences in the prevalence of hypertension between urban and rural children: the prevalence among both boys and girls in urban areas was higher than in rural areas (χ(2) values were 36.36, 7.79, 42.10 and 13.77, respectively, and P<0.05). The differences in the percentage of body fat between boys and girls from rural or urban areas were both significant. OR values for the boys and girls of groups P(40)- and P(60)- were 1.65 and 1.75, respectively; risks of hypertension in boys and girls showed significant increases beyond P(40) and P(60), respectively. The prevalence of hypertension in both boys and girls increased along with the percentage of body fat. Since a higher percentage of body fat could increase the risk of hypertension, reduction of body fat content would be beneficial to the prevention and control of hypertension in children.

  8. Percentage tumor necrosis following chemotherapy in neuroblastoma correlates with MYCN status but not survival.

    Science.gov (United States)

    Bomken, Simon; Davies, Beverley; Chong, Leeai; Cole, Michael; Wood, Katrina M; McDermott, Michael; Tweddle, Deborah A

    2011-03-01

    The percentage of chemotherapy-induced necrosis in primary tumors corresponds with outcome in several childhood malignancies, including high-risk metastatic diseases. In this retrospective pilot study, the authors assessed the importance of postchemotherapy necrosis in high-risk neuroblastoma with a histological and case notes review of surgically resected specimens. The authors reviewed all available histology of 31 high-risk neuroblastoma cases treated with COJEC (dose intensive etoposide and vincristine with either cyclophosphamide, cisplatin or carboplatin) or OPEC/OJEC (etoposide, vincristine and cyclophosphamide with alternating cisplatin [OPEC] or carboplatin [OJEC]) induction chemotherapy in 2 Children's Cancer & Leukaemia Group (CCLG) pediatric oncology centers. The percentage of postchemotherapy necrosis was assessed and compared with MYCN amplification status and overall survival. The median percentage of postchemotherapy tumor necrosis was 60%. MYCN status was available for 28 cases, of which 12 were amplified (43%). Survival in cases with ≥ 60% necrosis or ≥ 90% necrosis was not better than those with less necrosis, nor was percentage necrosis associated with survival using Cox regression. However, MYCN-amplified tumors showed a higher percentage of necrosis than non-MYCN-amplified tumors, 71.3% versus 37.2% (P = .006). This effect was not related to prechemotherapy necrosis and did not confer improved overall survival. Postchemotherapy tumor necrosis is higher in patients with MYCN amplification. In this study, postchemotherapy necrosis did not correlate with overall survival and should not lead to modification of postoperative treatment. However, these findings need to be confirmed in a larger prospective study of children with high-risk neuroblastoma.

  9. Android and gynoid fat percentages and serum lipid levels in United States adults.

    Science.gov (United States)

    Min, Kyoung-Bok; Min, Jin-Young

    2015-03-01

    Accumulating evidence suggests that fat distribution is a better predictor of cardiovascular disease than body mass index (BMI). The aim of this study was to investigate the association of android and gynoid fat percentages with lipid profiles to determine whether android and/or gynoid fat percentages are associated with serum lipid levels. A population-based cross-sectional study. Five thousand six hundred and ninety-six adults (20 years and older) who participated in the National Health and Nutrition Examination Survey 2003-2006. The regional body composition in the android and gynoid regions was defined by dual energy X-ray absorptiometry (DXA). The estimation of lipid risk profiles included total cholesterol, high-density lipoprotein (HDL) -cholesterol, low-density lipoprotein (LDL) -cholesterol and triglycerides (TG). Regardless of gender, android and gynoid body fat percentages were positively and significantly correlated with BMI and waist circumference. After adjustment for age, ethnicity, education, smoking, alcohol consumption, dyslipidaemia and BMI, increases in android fat percentage were significantly associated with total cholesterol, TG and HDL cholesterol in males, and total cholesterol, HDL cholesterol and LDL cholesterol in females. The gynoid fat percentages showed a positive correlation with total cholesterol in males, whereas gynoid fat accumulation in females showed a favourable association with TG and HDL cholesterol. The observed associations differed according to ethnic groups. Our results suggest that regional fat distribution in the android and gynoid regions have different effects on lipid profiles, and that fat in the android region, rather than the gynoid region, may be an important factor in determining the risk of cardiovascular disease. © 2014 John Wiley & Sons Ltd.

  10. Accreting fluids onto regular black holes via Hamiltonian approach

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)

    2017-08-15

    We investigate the accretion of test fluids onto regular black holes such as Kehagias-Sfetsos black holes and regular black holes with a Dagum distribution function. We analyze the accretion process when different test fluids fall onto these regular black holes. The accreting fluid is classified through the equation of state according to the features of the regular black holes. The behavior of the fluid flow and the existence of sonic points are examined for these regular black holes. It is noted that the three-velocity depends on the critical points and the equation of state parameter on the phase space. (orig.)

  11. On the regularized fermionic projector of the vacuum

    Science.gov (United States)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  12. On the regularized fermionic projector of the vacuum

    International Nuclear Information System (INIS)

    Finster, Felix

    2008-01-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  13. MRI reconstruction with joint global regularization and transform learning.

    Science.gov (United States)

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for the Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms to the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to the algorithms which use either of the patchwise transform learning or global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.
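    The mixed objective described above combines a k-space data-fidelity term, a patchwise transform-sparsity penalty, and a global regularizer. A minimal sketch of evaluating such a cost; the operators, patch size and weights are illustrative placeholders, not the paper's exact formulation:

    ```python
    import numpy as np

    def joint_cost(x, y, mask, W, lam_patch=0.1, lam_global=0.05, patch=8):
        """x: image estimate, y: undersampled k-space data, mask: sampling mask,
        W: learned sparsifying transform applied to vectorized patches."""
        # Data fidelity in k-space (Cartesian undersampling)
        residual = mask * np.fft.fft2(x, norm="ortho") - y
        fidelity = np.linalg.norm(residual) ** 2
        # Patchwise term: l1 norm of transform-coded non-overlapping patches
        h, w = x.shape
        sparsity = 0.0
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                p = x[i:i + patch, j:j + patch].reshape(-1)
                sparsity += np.abs(W @ p).sum()
        # Global term: anisotropic total variation over the whole image
        tv = np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()
        return fidelity + lam_patch * sparsity + lam_global * tv
    ```

    In a reconstruction loop this cost would be decreased by alternating sparse coding of the patches, an update of the learned transform W, and a data-consistency step, which is the general structure of transform-learning MRI methods.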

  14. External perforated Solar Screens for daylighting in residential desert buildings: Identification of minimum perforation percentages

    KAUST Repository

    Sherif, Ahmed

    2012-06-01

    The desert climate is endowed with clear sky conditions, providing an excellent opportunity for optimum utilization of natural light in daylighting building indoor spaces. However, the sunny conditions of desert skies, in countries like Egypt and Saudi Arabia, result in the admittance of direct solar radiation, which leads to thermal discomfort and the incidence of undesired glare. One type of shading system that is used to permit daylight while controlling solar penetration is the "Solar Screen". Very little research work has addressed the different design aspects of external Solar Screens and their influence on daylighting performance, especially in desert conditions, although these screens proved their effectiveness in controlling solar radiation in traditional buildings throughout history. This paper reports on the outcomes of an investigation that studied the influence of the perforation percentage of Solar Screens on daylighting performance in a typical residential living room of a building in a desert location. The objective was to identify the minimum perforation percentage of screen openings that provides adequate illuminance levels in design-specific cases and all year round. The research work was divided into three stages. Stage one focused on the analysis of daylighting illuminance levels on specific dates and times, while the second stage built on the results of the first stage and addressed year-round performance using Dynamic Daylight Performance Metrics (DDPMs). The third stage addressed the possibility of the incidence of glare in specific cases where illuminance levels were found to be very high at some points during the analysis of the first stage.
    The research examined the daylighting performance in an indoor space with a number of assumed fixed experimentation parameters that were chosen to represent the principal features of a typical residential living room located in a desert environment setting. Stage one experiments demonstrated that the screens fulfilled the

  15. Manifold Regularized Experimental Design for Active Learning.

    Science.gov (United States)

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to reduce the labeling effort of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches select the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel active learning method called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the samples selected to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
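The abstract above does not give MRED's equations, but the general idea of batch selection on a data manifold can be sketched by combining a graph-Laplacian-smoothed kernel with sequential transductive experimental design. Everything below (function names, the k-NN graph construction, the `lam`/`mu` parameters, the deflation rule) is an illustrative assumption, not MRED's actual formulation:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0))

def select_batch(X, n_select, gamma=1.0, lam=0.1, mu=0.1, k=5):
    """Greedy batch selection on a manifold-smoothed kernel (sketch only)."""
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    # k-NN similarity graph; its Laplacian L = D - W encodes the manifold
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(-K[i])[1:k + 1]
        W[i, nn] = K[i, nn]
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W
    # Smooth the kernel along the manifold, then select samples greedily
    # in the style of sequential transductive experimental design
    Keff = K @ np.linalg.inv(np.eye(n) + lam * L @ K)
    chosen = []
    for _ in range(n_select):
        scores = np.sum(Keff**2, axis=0) / (np.diag(Keff) + mu)
        scores[chosen] = -np.inf          # never pick a sample twice
        i = int(np.argmax(scores))
        chosen.append(i)
        # Deflate: remove the subspace explained by the chosen sample
        Keff = Keff - np.outer(Keff[:, i], Keff[i, :]) / (Keff[i, i] + mu)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
batch = select_batch(X, n_select=5)   # indices of 5 samples to label at once
```

The whole batch is chosen before any retraining happens, which is the practical advantage the abstract attributes to labeling multiple samples at one time.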

  16. Regularization of the Coulomb scattering problem

    International Nuclear Information System (INIS)

    Baryshevskii, V.G.; Feranchuk, I.D.; Kats, P.B.

    2004-01-01

    The exact solution of the Schroedinger equation for the Coulomb potential is used within the scope of both stationary and time-dependent scattering theories in order to find the parameters which determine the regularization of the Rutherford cross section when the scattering angle tends to zero but the distance r from the center remains finite. The angular distribution of the particles scattered in the Coulomb field is studied at a rather large but finite distance r from the center. It is shown that the standard asymptotic representation of the wave functions is inapplicable when small scattering angles are considered. The unitarity property of the scattering matrix is analyzed and the 'optical' theorem for this case is discussed. The total and transport cross sections for scattering of a particle by the Coulomb center prove to be finite and are calculated in analytical form. It is shown that the effects under consideration can be important for the observed characteristics of transport processes in semiconductors, which are determined by electron and hole scattering by the field of charged impurity centers

  17. Color correction optimization with hue regularization

    Science.gov (United States)

    Zhang, Heng; Liu, Huaping; Quan, Shuxue

    2011-01-01

    Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
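The basic optimization the abstract describes, a color correction matrix fit by linear regression, can be sketched in a few lines. One simple way to bias the fit toward memory colors (the paper's "hue regularization" is not specified in the abstract, so the weighted-least-squares form, the patch values and the weights below are all illustrative assumptions):

```python
import numpy as np

# Device RGB measurements (rows) and target colour values: made-up data.
src = np.array([[0.9, 0.1, 0.1],    # reddish patch
                [0.1, 0.8, 0.2],    # green (grass-like)
                [0.2, 0.3, 0.9],    # blue (sky-like)
                [0.8, 0.7, 0.6],    # skin-like
                [0.5, 0.5, 0.5]])   # neutral gray
dst = np.array([[0.85, 0.15, 0.12],
                [0.15, 0.75, 0.22],
                [0.22, 0.32, 0.85],
                [0.82, 0.68, 0.58],
                [0.50, 0.50, 0.50]])

# Plain least-squares colour-correction matrix M, so that src @ M ~= dst
M_ls, *_ = np.linalg.lstsq(src, dst, rcond=None)

# Regularized variant: weight the "memory colour" rows (grass, sky, skin)
# more heavily so their reproduction error dominates the fit.
w = np.array([1.0, 3.0, 3.0, 3.0, 1.0])   # assumed weights
Wh = np.sqrt(w)[:, None]
M_reg, *_ = np.linalg.lstsq(src * Wh, dst * Wh, rcond=None)
```

By construction the weighted fit can only reduce the combined error on the up-weighted memory-color patches relative to the plain fit, at the cost of slightly larger error elsewhere, which is exactly the preference trade-off the abstract argues for.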

  18. Wave dynamics of regular and chaotic rays

    International Nuclear Information System (INIS)

    McDonald, S.W.

    1983-09-01

    In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space

  19. Regularities and irregularities in order flow data

    Science.gov (United States)

    Theissen, Martin; Krause, Sebastian M.; Guhr, Thomas

    2017-11-01

    We identify and analyze statistical regularities and irregularities in the recent order flow of different NASDAQ stocks, focusing on the positions where orders are placed in the order book. This includes limit orders being placed outside of the spread, inside the spread and (effective) market orders. Based on the pairwise comparison of the order flow of different stocks, we perform a clustering of stocks into groups with similar behavior. This is useful to assess systemic aspects of stock price dynamics. We find that limit order placement inside the spread is strongly determined by the dynamics of the spread size. Most orders, however, arrive outside of the spread. While for some stocks order placement on or next to the quotes is dominating, deeper price levels are more important for other stocks. As market orders are usually adjusted to the quote volume, the impact of market orders depends on the order book structure, which we find to be quite diverse among the analyzed stocks as a result of the way limit order placement takes place.

  20. Library search with regular reflectance IR spectra

    International Nuclear Information System (INIS)

    Staat, H.; Korte, E.H.; Lampen, P.

    1989-01-01

    In-situ characterisation of coatings and other surface layers is generally favourable, and a prerequisite for precious items such as art objects. In infrared spectroscopy, only reflection techniques are applicable here. However, for attenuated total reflection (ATR) it is difficult to obtain the necessary optical contact between the crystal and the sample when the latter is not perfectly plane or flexible. The measurement of diffuse reflectance demands a scattering sample, and usually the reflectance is very poor. Therefore in most cases one is left with regular reflectance. Such spectra consist of dispersion-like features instead of bands, impeding their interpretation in the way the analyst is used to. Furthermore, for computer search in common spectral libraries compiled from transmittance or absorbance spectra, a transformation of the reflectance spectra is needed. The correct conversion is based on the Kramers-Kronig transformation. This somewhat time-consuming procedure can be speeded up by using appropriate approximations. A coarser conversion may be obtained from the first derivative of the reflectance spectrum, which resembles the second derivative of a transmittance spectrum. The resulting distorted spectra can still be used successfully for searches in peak table libraries. Experiences with both transformations are presented. (author)
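Both conversions mentioned above can be sketched numerically. The Kramers-Kronig phase of a reflectance spectrum is, up to edge effects, a Hilbert transform of ½ ln R, and the coarse shortcut is just the first derivative of R. The Lorentz-oscillator test spectrum below is synthetic, and a real implementation would extrapolate R beyond the measured window before transforming:

```python
import numpy as np

def hilbert_im(x):
    """Hilbert transform via FFT (imaginary part of the analytic signal)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h).imag

# Synthetic regular-reflectance spectrum from a single Lorentz oscillator
w = np.linspace(0.2, 3.0, 2048)              # frequency axis (arb. units)
eps = 1.0 + 0.8 / (1.0**2 - w**2 - 0.1j * w) # dielectric function
nc = np.sqrt(eps)                             # complex refractive index
r = (nc - 1) / (nc + 1)
R = np.abs(r)**2                              # "measured" reflectance

# Kramers-Kronig estimate of the reflection phase from ln R
phase = -hilbert_im(0.5 * np.log(R))

# Coarser approximation from the abstract: the first derivative of R,
# which resembles a second-derivative transmittance spectrum.
dR = np.gradient(R, w)
```

With the phase in hand, n and k (and hence an absorbance-like spectrum for library search) follow from inverting the Fresnel relation for r.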

  1. Regularities of praseodymium oxide dissolution in acids

    International Nuclear Information System (INIS)

    Savin, V.D.; Elyutin, A.V.; Mikhajlova, N.P.; Eremenko, Z.V.; Opolchenova, N.L.

    1989-01-01

    The regularities of Pr2O3, Pr2O5 and Pr(OH)3 interaction with inorganic acids are studied. The pH of the solution and the oxidation-reduction potential, recorded at 20±1 deg C, are the working parameters of the studies. It is found that the amount of each oxide dissolved increases in the series of acids nitric, hydrochloric and sulfuric; for hydrochloric and sulfuric acid it increases in the series of oxides Pr2O3, Pr2O5 and Pr(OH)3. It is noted that Pr2O5 has a high, positive oxidation-reduction potential over the whole dissolution range. A low positive redox potential during dissolution belongs to Pr(OH)3, while in the case of Pr2O3 dissolution the redox potential is negative. Schemes of the dissolution processes, which do not agree with classical assumptions, are presented

  2. Regular expressions compiler and some applications

    International Nuclear Information System (INIS)

    Saldana A, H.

    1978-01-01

    We deal with the high-level programming language of a Regular Expressions Compiler (REC). The first chapter is an introduction in which the history of the REC development and the problems related to its numerous applications are described. The syntactic and semantic rules as well as the language features are discussed just after the introduction. Concerning the applications, one adaptation is given as an example for solving numerical problems and another for data manipulation. The last chapter is an exposition of ideas and techniques about compiler construction. Examples of the adaptation to numerical problems show applications to education, vector analysis, quantum mechanics, physics, mathematics and other sciences. The rudiments of an operating system for a minicomputer are the example of the adaptation to symbolic data manipulation. REC is a programming language that could be applied to solve problems in almost any human activity. Handling of computer graphics, control equipment, research on languages, microprocessors and general research are some of the fields in which this programming language can be applied and developed. (author)

  3. Sparsity-regularized HMAX for visual recognition.

    Directory of Open Access Journals (Sweden)

    Xiaolin Hu

    Full Text Available About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address global structure of images in every layer, sparse HMAX addresses local to global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.

  4. Quantum implications of a scale invariant regularization

    Science.gov (United States)

    Ghilencea, D. M.

    2018-04-01

    We study scale invariance at the quantum level in a perturbative approach. For a scale-invariant classical theory, the scalar potential is computed at three-loop level while keeping this symmetry manifest. Spontaneous scale symmetry breaking is transmitted at the quantum level to the visible sector (of ϕ) by the associated Goldstone mode (dilaton σ), which enables a scale-invariant regularization and whose vacuum expectation value ⟨σ⟩ generates the subtraction scale (μ). While the hidden (σ) and visible (ϕ) sectors are classically decoupled in d = 4 due to an enhanced Poincaré symmetry, they interact through (a series of) evanescent couplings ∝ ε, dictated by the scale invariance of the action in d = 4 − 2ε. At the quantum level, these couplings generate new corrections to the potential, as scale-invariant nonpolynomial effective operators ϕ^(2n+4)/σ^(2n). These are comparable in size to "standard" loop corrections and are important for values of ϕ close to ⟨σ⟩. For n = 1, 2, the beta functions of their coefficients are computed at three loops. In the IR limit, dilaton fluctuations decouple, the effective operators are suppressed by large ⟨σ⟩, and the effective potential becomes that of a renormalizable theory with explicit scale symmetry breaking by the DR scheme (of μ = constant).

  5. Regularities development of entrepreneurial structures in regions

    Directory of Open Access Journals (Sweden)

    Julia Semenovna Pinkovetskaya

    2012-12-01

    Full Text Available We consider regularities and tendencies for three types of entrepreneurial structures: small enterprises, medium enterprises and individual entrepreneurs. The aim of the research was to confirm the possibility of describing indicators of aggregates of entrepreneurial structures with normal distribution density functions. The author's methodological approach is presented, together with the density functions constructed for the main indicators of various objects: the Russian Federation, its regions, and aggregates of entrepreneurial structures specialized in certain forms of economic activity. Logical and statistical analysis shows that all the developed functions are of high quality and approximate the original data well. In general, the proposed methodological approach is versatile and can be used in further studies of aggregates of entrepreneurial structures. The results can be applied to a wide range of problems, such as justifying the personnel and financial resources needed at the federal, regional and municipal levels, as well as forming plans and forecasts for the development of entrepreneurship and the improvement of this sector of the economy.
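The core technique above, fitting a normal density to a regional indicator and checking it against the empirical distribution, is a one-screen exercise. The indicator, region count and parameter values below are synthetic stand-ins, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical indicator, e.g. small enterprises per 1000 inhabitants,
# across 83 regions (synthetic data, for illustration only).
x = rng.normal(loc=12.0, scale=3.0, size=83)

# Maximum-likelihood fit of a normal distribution
mu, sigma = x.mean(), x.std()

def normal_pdf(t, mu, sigma):
    return np.exp(-0.5 * ((t - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

# Quality check: compare the empirical histogram with the fitted density
counts, edges = np.histogram(x, bins=8, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
fitted = normal_pdf(centers, mu, sigma)
```

A formal goodness-of-fit test (e.g. chi-square on the binned counts) would replace the visual comparison in an actual study.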

  6. Epithelium percentage estimation facilitates epithelial quantitative protein measurement in tissue specimens.

    Science.gov (United States)

    Chen, Jing; Toghi Eshghi, Shadi; Bova, George Steven; Li, Qing Kay; Li, Xingde; Zhang, Hui

    2013-12-01

    The rapid advancement of high-throughput tools for quantitative measurement of proteins has demonstrated the potential for the identification of proteins associated with cancer. However, quantitative results on cancer tissue specimens are usually confounded by tissue heterogeneity; e.g. regions with cancer usually have significantly higher epithelium content yet lower stromal content. It is therefore necessary to develop a tool to facilitate the interpretation of the results of protein measurements in tissue specimens. Epithelial cell adhesion molecule (EpCAM) and cathepsin L (CTSL) are two epithelial proteins whose expression in normal and tumorous prostate tissues was confirmed by measuring staining intensity with immunohistochemical (IHC) staining. The expression of these proteins was measured by ELISA in protein extracts from OCT-embedded frozen prostate tissues. To eliminate the influence of tissue heterogeneity on epithelial protein quantification by ELISA, a color-based segmentation method was developed in-house to estimate epithelium content using H&E histology slides from the same prostate tissues, and the estimated epithelium percentage was used to normalize the ELISA results. The epithelium content of the same slides was also estimated by a pathologist and used to normalize the ELISA results, and the computer-based results were compared with the pathologist's reading. We found that both EpCAM and CTSL levels, as measured by the ELISA assays themselves, were greatly affected by the epithelium content of the tissue specimens. Without adjusting for epithelium percentage, both EpCAM and CTSL levels appeared significantly higher in tumor tissues than in normal tissues, with p values less than 0.001. However, after normalization by the epithelium percentage, the ELISA measurements of both EpCAM and CTSL were in agreement with the IHC staining results, showing a significant increase only in EpCAM, with no difference in CTSL expression in cancer tissues.
These results
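The normalization step the abstract describes is simply dividing each ELISA measurement by the estimated epithelium fraction of the same specimen. The toy numbers below are invented to show how an apparent tumor/normal difference can vanish once tissue composition is accounted for; they are not values from the paper:

```python
# Hypothetical ELISA readings (arbitrary units per mg total protein) and
# epithelium percentages estimated from the matching H&E slides.
samples = [
    {"id": "tumor-1",  "epcam_elisa": 7.5, "epithelium_pct": 75.0},
    {"id": "normal-1", "epcam_elisa": 2.5, "epithelium_pct": 25.0},
]

for s in samples:
    # Normalize to per-unit-epithelium expression so that differences in
    # tissue composition do not masquerade as expression differences.
    s["epcam_norm"] = s["epcam_elisa"] / (s["epithelium_pct"] / 100.0)

# Raw readings differ threefold, yet both normalize to the same
# per-epithelium level, so the apparent difference disappears.
```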

  7. Analysis the percentage frequency to estimate Volleyball´s performance

    Directory of Open Access Journals (Sweden)

    Santiago Calero-Morales

    2012-12-01

    Full Text Available The study analyzes percentage frequency as a mathematical model used to estimate international volleyball performance. Based on a description of its practical objectives, the paper shows positive and negative characteristics of the equation that affect decision making by the coach. Three studies were conducted: a population of 42 men's youth-category volleyball games; 10 of the 13 games of the final round of the 2006 FIVB World League; and the ranked players of the qualifying phase of the 2006 World League, with a population of 48 games. The investigation determined that percentage frequency is a simple computational model that yields a value perfectly isolated from reality, but does not correctly model all the variables that significantly influence final performance, creating false interpretations of reality.
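The percentage-frequency model the abstract critiques is just each outcome's share of the total observed actions, which makes its limitation easy to see in code. The counts below are invented for illustration, not data from the study:

```python
# Hypothetical attack outcomes for one player (not data from the study)
outcomes = {"point": 18, "continuity": 22, "error": 10}

total = sum(outcomes.values())
pct = {k: 100.0 * v / total for k, v in outcomes.items()}
# pct == {"point": 36.0, "continuity": 44.0, "error": 20.0}
# Each figure is a single isolated share: nothing in it reflects opponent
# quality, score situation, or rotation, which is the abstract's objection.
```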

  8. Dietary lecithin improves dressing percentage and decreases chewiness in the longissimus muscle in finisher gilts.

    Science.gov (United States)

    Akit, H; Collins, C L; Fahri, F T; Hung, A T; D'Souza, D N; Leury, B J; Dunshea, F R

    2014-03-01

    The influence of dietary lecithin at doses of 0, 4, 20 or 80 g/kg, fed to finisher gilts for six weeks prior to slaughter, on growth performance, carcass quality and pork quality was investigated. M. longissimus lumborum (loin) was removed from 36 pig carcasses at 24 h post-mortem for Warner-Bratzler shear force, compression, collagen content and colour analyses. Dietary lecithin increased dressing percentage (P=0.009). Pork chewiness and collagen content were decreased by dietary lecithin (P<0.05). Lecithin had no effect on shear force, cohesiveness or hardness (P>0.05, respectively). Dietary lecithin reduced loin muscle L* values and increased a* values (P<0.05). Dietary lecithin improved dressing percentage and resulted in less chewy and less pale pork. © 2013.

  9. Effect of embryo culture media on percentage of males at birth.

    Science.gov (United States)

    Zhu, Jinliang; Zhuang, Xinjie; Chen, Lixue; Liu, Ping; Qiao, Jie

    2015-05-01

    Does embryo culture medium influence the percentage of males at birth? The percentage of males delivered after ICSI cycles using G5™ medium was statistically significantly higher than after cycles where Global, G5™ PLUS, and Quinn's Advantage Media were used. Male and female embryos have different physiologies during preimplantation development. Manipulating the energy substrate and adding growth factors have a differential impact on the development of male and female embryos. This was a retrospective analysis of the percentage of males at birth, and included 4411 singletons born from fresh embryo transfer cycles between January 2011 and August 2013 at the Center for Reproductive Medicine of Third Hospital, Peking University. Only singleton gestations were included. Participants were excluded if preimplantation genetic diagnosis, donor oocytes or donor sperm were used. The database between January 2011 and August 2013 was searched by unique medical record number; all patients were present in the database with only one cycle. Demographics, cycle characteristics and the percentage of male babies in the four culture media groups were compared with analysis of variance or χ2 tests. Multivariable logistic regression was done to determine the association between sex at birth and culture media after adjusting for other confounding factors, including parental age, parental BMI, type of infertility, parity, number of embryos transferred, number of early gestational sacs, cycles with testicular sperm aspiration (TESA)/percutaneous epididymal sperm aspiration (PESA)/testicular sperm extraction (TESE), number of oocytes retrieved, cycles with blastocyst transfers, and gestational age within the ICSI group. Within the IVF group, the percentage of males at birth for G5™, Global, Quinn's and G5™ PLUS media was comparable (P > 0.05); however, within the ICSI group, the percentage of male babies in cycles using G5™ (56.1%) was statistically significantly higher than
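The χ2 comparison of male percentages across the four media can be sketched with a hand-rolled contingency-table test. The counts below are invented for illustration; they are not the study's data:

```python
import numpy as np

# Hypothetical male/female counts per culture medium (NOT the study's data)
#                   males, females
table = np.array([[112,  88],    # G5
                  [ 98, 102],    # Global
                  [101,  99],    # Quinn's
                  [ 95, 105]])   # G5 PLUS

# Expected counts under independence of sex and medium
row = table.sum(axis=1, keepdims=True)
col = table.sum(axis=0, keepdims=True)
expected = row * col / table.sum()

# Pearson chi-square statistic and its degrees of freedom
chi2 = float(((table - expected)**2 / expected).sum())
dof = (table.shape[0] - 1) * (table.shape[1] - 1)   # (4-1)*(2-1) = 3
```

Comparing `chi2` against the χ2 distribution with `dof` degrees of freedom (e.g. via `scipy.stats.chi2.sf`) gives the p-value; the study additionally adjusts for confounders with multivariable logistic regression.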

  10. 20 CFR 225.42 - Notice of the percentage amount of a cost-of-living increase.

    Science.gov (United States)

    2010-04-01

    ... THE RAILROAD RETIREMENT ACT PRIMARY INSURANCE AMOUNT DETERMINATIONS Cost-of-Living Increases § 225.42 Notice of the percentage amount of a cost-of-living increase. The percentage amount of the cost-of-living... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Notice of the percentage amount of a cost-of...

  11. The application of the percentage change calculation in the context of inflation in Mathematical Literacy

    OpenAIRE

    Sarah Bansilal

    2017-01-01

    The school subject Mathematical Literacy requires application of mathematics procedures in various contextual settings, but not much is known about the ways in which students engage with contextual settings such as inflation. This qualitative study was conducted with in-service Mathematical Literacy teachers in South Africa with the purpose of exploring the extent to which the teachers recognised the contextual constraints involved in applying the percentage change calculation to the inflatio...

  12. The application of the percentage change calculation in the context of inflation in Mathematical Literacy

    Directory of Open Access Journals (Sweden)

    Sarah Bansilal

    2017-07-01

    Full Text Available The school subject Mathematical Literacy requires application of mathematics procedures in various contextual settings, but not much is known about the ways in which students engage with contextual settings such as inflation. This qualitative study was conducted with in-service Mathematical Literacy teachers in South Africa with the purpose of exploring the extent to which the teachers recognised the contextual constraints involved in applying the percentage change calculation to the inflation context. The written responses of the 406 Mathematical Literacy teachers were scrutinised to identify their interpretations of the contextual constraints involved in applying the percentage change procedure to the context of inflation. The item required the application of two successive percentage change operations (corresponding to the inflation rates for the 2 years). Of the 406 responses that were analysed, 260 (65%) were unable to take account of all the contextual constraints. There were 108 teachers who reduced the procedure to a one-step calculation, while 64 teachers interpreted the context as a percentage decrease scenario. A large number of teachers (162) struggled with the interpretation of the role of the year, k, in the relationship between the quantities. The findings indicate that engagement with and understanding of the concept of inflation is dependent on a synthesis of the contextual constraints into the mathematical procedures. This article provides some insights into the struggles with making sense of the contextual nature of inflation, which is an area that has received little attention in mathematics education studies. The teachers' struggles likely mirror learners' struggles and hence the research applies in a similar way to learners.
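The two misconceptions the abstract reports (collapsing two successive percentage changes into one step, and reading inflation as a decrease) can be made concrete with a worked example. The price and inflation rates below are assumed for illustration; the item's actual figures are not given in the abstract:

```python
# Price of an item after two successive years of inflation
p0 = 100.0
r1, r2 = 0.06, 0.05        # assumed: 6% in year 1, 5% in year 2

# Correct: apply the two percentage changes successively (they compound)
correct = p0 * (1 + r1) * (1 + r2)     # ~111.30

# Misconception 1: reduce the procedure to a one-step calculation
one_step = p0 * (1 + r1 + r2)          # ~111.00, understates the price

# Misconception 2: treat inflation as a percentage decrease
decrease = p0 * (1 - r1) * (1 - r2)    # ~89.30, wrong direction entirely
```

The gap between `correct` and `one_step` grows with the rates and with the number of years, which is why recognising the successive (compounding) structure is one of the contextual constraints of the item.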

  13. Infants with Down syndrome: percentage and age for acquisition of gross motor skills.

    Science.gov (United States)

    Pereira, Karina; Basso, Renata Pedrolongo; Lindquist, Ana Raquel Rodrigues; da Silva, Louise Gracelli Pereira; Tudella, Eloisa

    2013-03-01

    The literature is bereft of information about the age at which infants with Down syndrome (DS) acquire motor skills and the percentage of infants that do so by the age of 12 months. Therefore, it is necessary to identify the difference in age, in relation to typical infants, at which motor skills were acquired and the percentage of infants with DS that acquire them in the first year of life. Infants with DS (N=20) and typical infants (N=25), both aged between 3 and 12 months, were evaluated monthly using the AIMS. In the prone position, a difference of up to 3 months was found for the acquisition of the 3rd to 16th skill. There was a difference in the percentage of infants with DS who acquired the 10th to 21st skill (from 71% to 7%). In the supine position, a difference of up to one month was found from the 3rd to 7th skill; however, 100% were able to perform these skills. In the sitting position, a difference of 1-4 months was found from the 1st to 12th skill, ranging from 69% to 29% from the 9th to 12th. In the upright position, the difference was 2-3 months from the 3rd to 8th skill. Only 13% acquired the 8th skill and no other skill was acquired up to the age of 12 months. The more complex the skills the greater the difference in age between typical infants and those with DS and the lower the percentage of DS individuals who performed the skills in the prone, sitting and upright positions. None of the DS infants were able to stand without support. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Profile of Cardiorespiratory Fitness, Flexibility and Fat Percentage of Junior High School Students in Jatinangor

    Directory of Open Access Journals (Sweden)

    Gemuruh Putra Akbar

    2017-06-01

    Full Text Available Background: Obesity and physical inactivity are major risk factors for non-communicable disease and global mortality in adolescents. Lack of physical activity leads to poor physical fitness, measured by cardiorespiratory fitness (maximum oxygen volume, VO2 max) and other components such as flexibility. The study aimed to describe VO2 max, flexibility and fat percentage among junior high school students in Jatinangor. Methods: The study was a descriptive observational study using descriptive analysis. VO2 max was assessed using the Astrand-Ryhming step test, flexibility was measured using the flexometer sit-and-reach test, and fat percentage was determined using a bioelectrical impedance analysis scale. The subjects were junior high school students in the 1st, 2nd and 3rd grades of Jatinangor Junior High School, selected by a stratified sampling method. The study was conducted from September to October 2013. Results: The 110 student subjects consisted of males (n=52) and females (n=58). The VO2 max values were in the good and above category for 57.69% of males (50.37 ± 9.80 ml/kg/min) and 60.34% of females (37.66 ± 7.03 ml/kg/min). Flexibility for both males and females was within the excellent category (67.31%, 26.56 ± 7.14 cm and 67.24%, 27.29 ± 6.64 cm, respectively). The fat percentage in females was within the healthy category (67.24%, 25.28 ± 6.85%), while in males it was within the underfat category (48.08%, 11.66 ± 5.83%). Conclusions: The majority of VO2 max and flexibility values in both males and females were good. Fat percentages were good in female students, while in male students they were below the normal range. DOI: 10.15850/amj.v4n2.1085

  15. Effects of Aloe vera on dressing percentage and haemato-biochemidal parameters of broiler chickens

    Directory of Open Access Journals (Sweden)

    Jagmohan Singh

    2013-09-01

    Full Text Available Aim: To evaluate the effects of Aloe vera on dressing percentage and hemato-biochemical parameters of broiler chickens. Materials and Methods: A total of 90 chicks were used in this study. They were randomly allocated into 3 treatment groups. Fresh Aloe vera leaf juice (ALJ) was prepared and administered to test group T3 at the rate of 20 g/L in drinking water daily. The study was carried out for 42 days. Dressing percentage and hemato-biochemical parameters were recorded at the end of the experiment. Results: The group given Aloe vera (T3) showed a numerically higher dressing percentage than the control group (T1) and the drug control group (T2). It also showed significant (P<0.05) differences in some parameters, while no significant (P>0.05) differences were observed in the other parameters among the treatment groups. Conclusion: Aloe vera has potential as a growth promoter in broiler chicks and its growth-promoting effects are comparable to those of an antibiotic growth promoter (AGP).

  16. The Effects of Water Parameters on Monthly Seagrass Percentage Cover in Lawas, East Malaysia

    Science.gov (United States)

    Ahmad-Kamil, E. I.; Ramli, R.; Jaaman, S. A.; Bali, J.; Al-Obaidi, J. R.

    2013-01-01

    Seagrass is a valuable marine ecosystem engineer. However, seagrass populations are declining worldwide. The lack of seagrass research in Malaysia raises questions about the status of seagrasses in the country. The seagrasses in Lawas, which are part of the coral-mangrove-seagrass complex, have never been studied in detail. In this study, we examine whether monthly changes of the seagrass population in Lawas occurred. Data on estimates of seagrass percentage cover and water physicochemical parameters (pH, turbidity, salinity, temperature, and dissolved oxygen) were measured at 84 sampling stations established within the study area from June 2009 to May 2010. Meteorological data such as total rainfall, air temperature, and the Southern Oscillation Index were also investigated. Our results showed that (i) the monthly changes of seagrass percentage cover are significant, (ii) the changes correlated significantly with turbidity measurements, and (iii) weather changes affected the seagrass populations. Our study indicates that seagrass percentage cover increased during the El Niño period. These results suggest that natural disturbances such as weather changes affect seagrass populations. Evaluation of land usage and measurements of other water physicochemical parameters (such as heavy metals, pesticides, and nutrients) should be considered to assess the health of the seagrass ecosystem at the study area. PMID:24163635

  17. Lack of association of ghrelin precursor gene variants and percentage body fat or serum lipid profiles.

    Science.gov (United States)

    Martin, Glynn R; Loredo, J C; Sun, Guang

    2008-04-01

    Ghrelin has been recognized for its involvement in food intake, control of energy homeostasis, and lipid metabolism. However, the roles of genetic variations in the ghrelin precursor gene (GHRL) in body composition and serum lipids are not clear in humans. Our study investigated five single-nucleotide polymorphisms (SNPs) within GHRL to determine their relationship with body fat percentage (BF), trunk fat percentage (TF), lower body (legs) fat percentage (LF), and serum lipids in 1,464 subjects recruited from the genetically homogeneous population of Newfoundland and Labrador (NL), Canada. Serum glucose, insulin, total cholesterol, high-density lipoprotein-cholesterol, low-density lipoprotein-cholesterol, and triglycerides were determined. The five SNPs, rs35684 (A/G: a transition substitution in exon 1), rs4684677 (A/T: a missense mutation), rs2075356 (C/T: intron), rs26802 (G/T: intron), and rs26311 (A/G: near the 3' untranslated region) of GHRL, were genotyped using TaqMan validated or functionally tested SNP genotyping assays. Our study found no significant evidence of an allele or genotype association between any of the variant sites and body composition or serum lipids. Furthermore, haplotype frequencies were not found to be significantly different between lean and obese subjects. In summary, the results of our study do not support a significant role for genetic variations in GHRL in the differences of body fat and serum lipid profiles in the NL population.

  18. Effect of percentage of low plastic fines on the unsaturated shear strength of compacted gravel soil

    Directory of Open Access Journals (Sweden)

    Kamal Mohamed Hafez Ismail Ibrahim

    2015-06-01

    Full Text Available Low plastic fines in gravel soils affect their unsaturated shear strength through the contribution of matric suction that arises in the micro- and macro-pores found within and between aggregates. The shear strength of five different types of prepared gravel soils is measured and compared with a theoretical model (Fredlund et al., 1978) for predicting the unsaturated shear strength. The results are consistent to a great extent, except in the case of dry clayey gravel soil. It is also found that, on inundation of gravel soils containing more than 12% plastic fines, a considerable reduction in both the strength and the stiffness modulus occurs. This 12% figure is close to the accepted 15% fines threshold given by ASTM D4318 (American Society for Testing and Materials). The angle of internal friction that arises due to matric suction decreases with increasing degree of saturation of the soil. The hysteresis of some tested gravel soils is measured and found to increase with increasing percentage of fines.
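    The Fredlund et al. (1978) model referred to above is the extended Mohr-Coulomb criterion, in which matric suction contributes a cohesion-like term. A minimal sketch, with all numerical values hypothetical:

```python
import math

def unsaturated_shear_strength(c_eff, sigma_n, u_a, u_w, phi_eff_deg, phi_b_deg):
    """Extended Mohr-Coulomb criterion (Fredlund et al., 1978):
    tau = c' + (sigma_n - u_a) * tan(phi') + (u_a - u_w) * tan(phi_b),
    where (sigma_n - u_a) is the net normal stress and (u_a - u_w) the matric suction."""
    net_stress = sigma_n - u_a
    matric_suction = u_a - u_w
    return (c_eff
            + net_stress * math.tan(math.radians(phi_eff_deg))
            + matric_suction * math.tan(math.radians(phi_b_deg)))

# Hypothetical inputs (stresses in kPa, angles in degrees):
tau = unsaturated_shear_strength(c_eff=5.0, sigma_n=100.0, u_a=0.0, u_w=-50.0,
                                 phi_eff_deg=35.0, phi_b_deg=15.0)
```

    As the degree of saturation rises, suction (u_a - u_w) shrinks and the suction term vanishes, consistent with the reduction in strength on inundation reported above.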

  19. Influence of the recycled material percentage on the rheological behaviour of HDPE for injection moulding process.

    Science.gov (United States)

    Javierre, C; Clavería, I; Ponz, L; Aísa, J; Fernández, A

    2007-01-01

    The amount of polymer material wasted during thermoplastic injection moulding is very high. It comes both from the feed system of the part and from parts needed to set up the mould, as well as from scrap generated along the process due to quality problems. These residues are managed through polymer recycling, which allows the materials to be reused in the injection manufacturing process. Recycling mills convert the parts into small pieces that are used as feed material for injection, mixing the recycled feedstock in different percentages with raw material. This mixture of raw and recycled material modifies material properties according to the percentage of recycled material introduced. Some of the properties affected are those related to rheological behaviour, which strongly conditions the subsequent injection moulding process. This paper analyzes the rheological behaviour of material with different percentages of recycled content by means of a capillary rheometer, and evaluates the influence of the resulting viscosity curves on the injection moulding process, where small variations in parameters related to rheological behaviour, such as pressure or clamping force, can be critical to the viability and cost of the parts manufactured by injection moulding.

  20. TRANSIENT LUNAR PHENOMENA: REGULARITY AND REALITY

    International Nuclear Information System (INIS)

    Crotts, Arlin P. S.

    2009-01-01

    Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if the determining factors involve humans rather than phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ∼50% of reports originate from near Aristarchus, ∼16% from Plato, ∼6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a 'feature' as defined). The consistency of TLP counts for these features indicates that ∼80% of these reports may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi; reports from these sites begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for them. In a companion paper, we compare the spatial distribution of robust TLP sites to that of transient outgassing (seen by Apollo and Lunar Prospector instruments). To high confidence, robust TLP sites correlate strongly with sites of lunar outgassing, further arguing for the reality of TLPs.

  1. Elementary Particle Spectroscopy in Regular Solid Rewrite

    International Nuclear Information System (INIS)

    Trell, Erik

    2008-01-01

    The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it ''is the likely keystone of a fundamental computational foundation'' also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)xO(5) of Lie algebra SU(3). 
The cubical eigenspace and eigenelements are the parental state and frame, and the other solids form a range of transition matrix elements and portions adapting to the spherical root vector symmetries, thereby reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated-octahedron nano-composition of the electron which piecemeal enters into molecular structures or compressed to each

  2. Regularization of plurisubharmonic functions with a net of good points

    OpenAIRE

    Li, Long

    2017-01-01

    The purpose of this article is to present a new regularization technique for quasi-plurisubharmonic functions on a compact Kaehler manifold. The idea is to regularize the function on local coordinate balls first, and then glue the pieces together. As a result, all the higher-order terms in the complex Hessian of this regularization vanish at the center of each coordinate ball, and the centers eventually build a delta-net of the manifold.

  3. Dimensional regularization in position space and a forest formula for regularized Epstein-Glaser renormalization

    International Nuclear Information System (INIS)

    Keller, Kai Johannes

    2010-04-01

    The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmermann. A relation to the alternative approach to renormalization theory using Hopf algebras is established. (orig.)

  4. Dimensional regularization in position space and a forest formula for regularized Epstein-Glaser renormalization

    Energy Technology Data Exchange (ETDEWEB)

    Keller, Kai Johannes

    2010-04-15

    The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmermann. A relation to the alternative approach to renormalization theory using Hopf algebras is established. (orig.)

  5. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher-order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular-grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes along selected vertical lines. For each reconstructed image, as well as for the ground-truth image, conductivity changes along the selected left and right vertical lines are plotted. In these plots, GT stands for ground truth, TV for the total variation method, and TGV for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also shown.
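    The TV-vs-staircase trade-off discussed above can be illustrated in one dimension. The following is a minimal sketch, not the paper's FEM-based EIT solver: it denoises a piecewise-constant "conductivity" profile by gradient descent on a smoothed (epsilon-regularized) TV objective, with all parameter values chosen for illustration only:

```python
import numpy as np

def tv_denoise_1d(y, lam=1.0, eps=1e-2, lr=0.02, iters=2000):
    """Minimize 0.5*||x - y||^2 + lam * sum(sqrt(diff(x)^2 + eps))
    by plain gradient descent; eps smooths the non-differentiable TV term."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)                  # forward differences
        w = d / np.sqrt(d**2 + eps)     # derivative of the smoothed |d|
        g = x - y                       # data-fidelity gradient
        g[:-1] -= lam * w               # adjoint of the difference operator
        g[1:] += lam * w
        x -= lr * g
    return x

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant profile
noisy = truth + 0.2 * rng.standard_normal(truth.size)
denoised = tv_denoise_1d(noisy)
```

    TV rewards exactly such piecewise-constant solutions, which is why it staircases smooth profiles; TGV adds a higher-order term so that piecewise-linear solutions are also cheap.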

  6. Estimated percentage of typhoid fever in adult pakistani population (TAP) study

    International Nuclear Information System (INIS)

    Mehboob, F.; Arshad, A.; Firdous, S.; Ahmed, S.; Rehma, S.

    2013-01-01

    Typhoid fever is a serious infection with high morbidity and mortality in untreated cases. It is one of the most common infections in developing countries due to various factors involving hygiene and sanitation. Objective: To determine the estimated percentage of typhoid fever in the Pakistani population and to find the antibiotics commonly prescribed for the disease. Material and Methods: This cross-sectional study was conducted on 1036 patients, selected from forty-five general practitioner clinics, between June and October 2010. Patients > 18 years of age with a > 3-day history of fever (> 100 degrees F) and a high index of suspicion for typhoid fever were tested using Typhidot kits, and positive cases were recruited for monitoring response to treatment. Febrile patients with a clear-cut history of urinary or respiratory infection, hypovolemic shock, or hepatobiliary disease were excluded and not tested by Typhidot kit. The antibiotics prescribed to the study population by various general practitioners were noted. Data were analysed on SPSS. Results were expressed in percentages and proportions. Results: In total, 1036 patients were recruited. The Typhidot test was negative in 63.9% and positive in 36.1% of patients, with the highest percentages of positive cases in Karachi, Rawalpindi and Hyderabad. The maximum number of cases was reported in the summer season, especially from June to August. Most of the patients were between 19 and 39 years of age. The commonest antibiotics prescribed were Ofloxacin, Ciprofloxacin and Levofloxacin. Conclusion: Typhoid fever is a very common infection in Pakistan, caused by Salmonella typhi, which is transmitted among humans through the faeco-oral route. The disease can be controlled not only by antibiotics such as fluoroquinolones but also by patient education, improvement in hygiene and sanitation, safe supply of clean drinking water, and prophylactic vaccination. 
However, timely diagnosis and appropriate management with proper antibiotics is the key.

  7. Estimated percentage of typhoid fever in adult pakistani population (TAP) study

    Energy Technology Data Exchange (ETDEWEB)

    Mehboob, F.; Arshad, A.; Firdous, S.; Ahmed, S.; Rehma, S. [Mayo Hospital, Lahore (Pakistan). Dept. of Medicine

    2013-01-15

    Typhoid fever is a serious infection with high morbidity and mortality in untreated cases. It is one of the most common infections in developing countries due to various factors involving hygiene and sanitation. Objective: To determine the estimated percentage of typhoid fever in the Pakistani population and to find the antibiotics commonly prescribed for the disease. Material and Methods: This cross-sectional study was conducted on 1036 patients, selected from forty-five general practitioner clinics, between June and October 2010. Patients > 18 years of age with a > 3-day history of fever (> 100 degrees F) and a high index of suspicion for typhoid fever were tested using Typhidot kits, and positive cases were recruited for monitoring response to treatment. Febrile patients with a clear-cut history of urinary or respiratory infection, hypovolemic shock, or hepatobiliary disease were excluded and not tested by Typhidot kit. The antibiotics prescribed to the study population by various general practitioners were noted. Data were analysed on SPSS. Results were expressed in percentages and proportions. Results: In total, 1036 patients were recruited. The Typhidot test was negative in 63.9% and positive in 36.1% of patients, with the highest percentages of positive cases in Karachi, Rawalpindi and Hyderabad. The maximum number of cases was reported in the summer season, especially from June to August. Most of the patients were between 19 and 39 years of age. The commonest antibiotics prescribed were Ofloxacin, Ciprofloxacin and Levofloxacin. Conclusion: Typhoid fever is a very common infection in Pakistan, caused by Salmonella typhi, which is transmitted among humans through the faeco-oral route. The disease can be controlled not only by antibiotics such as fluoroquinolones but also by patient education, improvement in hygiene and sanitation, safe supply of clean drinking water, and prophylactic vaccination. 
However, timely diagnosis and appropriate management with proper antibiotics is the key.

  8. Investigating the influence of infill percentage on the mechanical properties of fused deposition modelled ABS parts

    Directory of Open Access Journals (Sweden)

    Kenny Álvarez

    2016-09-01

    Full Text Available 3D printing is a manufacturing process that is usually used for modeling and prototyping. One of the most popular printing techniques is fused deposition modeling (FDM), which is based on adding melted material layer by layer. Although FDM has several advantages with respect to other manufacturing techniques, several problems have to be faced. When setting the printing options, several parameters have to be taken into account, such as temperature, speed, infill percentage, etc. Selecting these parameters is often a great challenge for the user, and is generally solved by experience without considering the influence of parameter variations on the mechanical properties of the printed parts. This article analyzes the influence of the infill percentage on the mechanical properties of ABS (Acrylonitrile Butadiene Styrene) printed parts. In order to characterize this influence, test specimens for tensile strength and Charpy tests were printed with a Makerbot Replicator 2X printer, in which the infill percentage was varied while the rest of the printing parameters were kept constant. Three different results were analyzed for these tests: tensile strength, impact resistance, and effective printing time. Results showed that the maximum tensile force (1438 N) and tensile stress (34.57 MPa) were obtained with 100% infill. The maximum impact resistance, 1.55 J, was also obtained with 100% infill. In terms of effective printing time, results showed that printing with an infill range between 50% and 98% is not recommended, since the effective printing time is higher than with 100% infill while the tensile strength and impact resistance are smaller. In addition, comparing the results of our analysis with results from other authors, it can be concluded that the printer type and plastic roll significantly influence the mechanical properties of ABS parts.

  9. RPE vs. Percentage 1RM Loading in Periodized Programs Matched for Sets and Repetitions

    Science.gov (United States)

    Helms, Eric R.; Byrnes, Ryan K.; Cooke, Daniel M.; Haischer, Michael H.; Carzoli, Joseph P.; Johnson, Trevor K.; Cross, Matthew R.; Cronin, John B.; Storey, Adam G.; Zourdos, Michael C.

    2018-01-01

    Purpose: To investigate differences between rating of perceived exertion (RPE) and percentage one-repetition maximum (1RM) load assignment in resistance-trained males (19–35 years) performing protocols with matched sets and repetitions, differentiated by load assignment. Methods: Participants performed squats then bench press 3×/week in a daily undulating format over 8 weeks. Participants were counterbalanced by pre-test 1RM and then assigned to a percentage-1RM group (1RMG, n = 11), with loads assigned via percentage of 1RM, or an RPE group (RPEG, n = 10), with participant-selected loads to reach target RPE ranges. Ultrasonography determined pre- and post-test pectoralis muscle thickness (PMT) and vastus lateralis muscle thickness at 50% (VLMT50) and 70% (VLMT70) femur length. Results: Bench press (1RMG +9.64 ± 5.36; RPEG +10.70 ± 3.30 kg), squat (1RMG +13.91 ± 5.89; RPEG +17.05 ± 5.44 kg) and their combined-total 1RMs (1RMG +23.55 ± 10.38; RPEG +27.75 ± 7.94 kg) increased (p < 0.05), with no significant differences between groups (p > 0.05). Magnitude-based inferences revealed 79, 57, and 72% chances of a mean small effect size (ES) advantage for squat (ES 90% confidence limits (CL) = 0.50 ± 0.63), bench press (ES 90% CL = 0.28 ± 0.73), and combined-total (ES 90% CL = 0.48 ± 0.68), respectively, in RPEG. There were 4, 14, and 6% chances that 1RMG had a strength advantage of the same magnitude, and 18, 29, and 22% chances, respectively, of trivial differences between groups. Conclusions: Both loading types are effective. However, RPE-based loading may provide a small 1RM strength advantage in a majority of individuals. PMID:29628895

  10. VARIOUS FACTORS AFFECTING DRESSING PERCENTAGE OF COMMERCIALLY CULTURED CYPRINID FISH IN CARP FISH PONDS IN SERBIA

    Directory of Open Access Journals (Sweden)

    Todor Marković

    2012-12-01

    Full Text Available The aim of this study was to determine the carcass yield of all categories of cyprinid fish reared in ponds in Serbia. Samples of two- and three-year old carp, two-year old silver carp and grass carp were taken in the winter from a pond where production is organized in a semi-intensive system. The three-year old carp was sampled from two ponds. In one case, it fed on barley, maize and wheat in the proportions 40:30:30, while in the second case it fed on complete diet mixtures. Also, samples of two-year old carp were taken from ponds where they fed on complete feed mixture. Dressing percentage was most favourable in common carp (67%), followed by silver carp (62%), and was lowest in grass carp (60%) (p<0.01). The best yield (66%) was obtained in two-year old carp, followed by one-year old carp (64%), and the worst yield was determined in three-year old carp (58%) (p<0.01). Carcass yield was better in two-year old carp fed on pelleted feed (68%) than in carp of the same age fed on grains (66%) (p<0.01). The dressing percentage measured in three-year old carp reared in the semi-intensive system was 56%, versus 59% in three-year old carp fed on pelleted complete feed mixture. Fish species, age, husbandry system and diet showed a significant effect on carcass yield. The highest dressing percentage and fillet weight were noted in two-year old carp fed on complete feed, as a result of the lower weight of internal organs and associated fat. The obtained results may be helpful in creating the best strategy for the selection of raw fish for fish processing.
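    Dressing percentage, the quantity compared throughout this and the following records, is simply the dressed-carcass weight expressed as a share of live weight. A minimal sketch with hypothetical weights (the study reports only the resulting percentages, not individual weights):

```python
def dressing_percentage(carcass_weight_g, live_weight_g):
    """Dressed-carcass weight as a percentage of live body weight."""
    return 100.0 * carcass_weight_g / live_weight_g

# Hypothetical two-year old common carp: 2040 g live weight, 1367 g dressed carcass,
# giving roughly the 67% reported above for common carp.
dp = dressing_percentage(carcass_weight_g=1367, live_weight_g=2040)
```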

  11. Optimal Adaptive Statistical Iterative Reconstruction Percentage in Dual-energy Monochromatic CT Portal Venography.

    Science.gov (United States)

    Zhao, Liqin; Winklhofer, Sebastian; Yang, Zhenghan; Wang, Keyang; He, Wen

    2016-03-01

    The aim of this article was to study the influence of different adaptive statistical iterative reconstruction (ASIR) percentages on the image quality of dual-energy computed tomography (DECT) portal venography in portal hypertension patients. DECT scans of 40 patients with cirrhosis (mean age, 56 years) at the portal venous phase were retrospectively analyzed. Monochromatic images at 60 and 70 keV were reconstructed with four ASIR percentages: 0%, 30%, 50%, and 70%. Computed tomography (CT) numbers of the portal veins (PVs), liver parenchyma, and subcutaneous fat tissue in the abdomen were measured. The standard deviation within the region of interest of the liver parenchyma was interpreted as the objective image noise (IN). The contrast-noise ratio (CNR) between PV and liver parenchyma was calculated. The diagnostic acceptability (DA) and sharpness of PV margins were rated using a 5-point score. The IN, CNR, DA, and sharpness of PV were compared among the eight groups with different keV + ASIR combinations. The IN, CNR, DA, and sharpness of PV of the different keV + ASIR groups were all statistically different (P < 0.05). The highest and lowest CNR were obtained with the 60 keV + 70% ASIR and 70 keV + 0% ASIR (filtered back-projection [FBP]) combinations, respectively, whereas the largest and smallest objective IN were obtained with the 60 keV + 0% ASIR (FBP) and 70 keV + 70% ASIR combinations. The highest DA and sharpness values of PV were obtained at 50% ASIR for 60 keV. An optimal ASIR percentage (50%) combined with an appropriate monochromatic energy level (60 keV) provides the highest DA in portal venography imaging, whereas for the higher monochromatic energy (70 keV) images, 30% ASIR provides the highest image quality, with less IN than 60 keV with 50% ASIR. Copyright © 2015 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
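    The CNR defined above (contrast between PV and liver CT numbers, divided by the liver-ROI standard deviation used as objective image noise) can be sketched as follows; the ROI values are simulated and purely hypothetical, not the study's data:

```python
import numpy as np

def contrast_noise_ratio(roi_vessel, roi_background):
    """CNR = (mean_vessel - mean_background) / SD_background, where the SD of
    the background (liver parenchyma) ROI serves as the objective image noise."""
    noise = np.std(roi_background)
    return (np.mean(roi_vessel) - np.mean(roi_background)) / noise

rng = np.random.default_rng(1)
pv_roi = 180 + 10 * rng.standard_normal(500)     # portal vein pixels, HU (hypothetical)
liver_roi = 110 + 12 * rng.standard_normal(500)  # liver parenchyma pixels, HU (hypothetical)
cnr = contrast_noise_ratio(pv_roi, liver_roi)
```

    This makes the reported trade-off concrete: lowering keV raises vessel-to-liver contrast (the numerator), while raising the ASIR percentage lowers the noise (the denominator); both push CNR up.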

  12. Effect of protease and duration of fattening period on dressing percentage of broiler chickens

    OpenAIRE

    Dosković Vladimir; Bogosavljević-Bošković Snežana; Perić Lidija; Lukić Miloš; Škrbić Zdenka; Rakonjac Simeon; Petričević Veselin

    2016-01-01

    This study evaluates the effect of different crude protein levels in broiler diets supplemented with 0.2% and 0.3% protease enzyme (Ronozyme Pro Act) on dressed carcass weight and dressing percentage during two fattening periods (49 and 63 days). The fast-growing strain Cobb 500 was used. At the end of the fattening trial, i.e. at 49 and 63 days, 10 male and 10 female birds were randomly sacrificed from each experimental group to determine body weights and c...

  13. Effect of dietary protease supplementation and sex on dressing percentage and body conformation in broilers

    OpenAIRE

    Dosković Vladimir; Bogosavljević-Bošković Snežana; Lukić Miloš; Škrbić Zdenka; Rakonjac Simeon; Petričević Veselin

    2016-01-01

    This paper presents research results on the effect of protease on the dressing percentage of conventionally dressed carcass and body conformation in broiler chickens. Broiler diet was supplemented with 0.2% protease (group E-I) and 0.3% protease (group E-II), and protein content in the feed was reduced by 4% (E-I) and 6% (E-II) through a decrease in soybean meal content. Fast-growing Cobb 500 broilers were used for a 63-day fattening trial. Body conformatio...

  14. Association Between Hospitals Caring for a Disproportionately High Percentage of Minority Trauma Patients and Increased Mortality

    Science.gov (United States)

    Haider, Adil H.; Ong’uti, Sharon; Efron, David T.; Oyetunji, Tolulope A.; Crandall, Marie L.; Scott, Valerie K.; Haut, Elliott R.; Schneider, Eric B.; Powe, Neil R.; Cooper, Lisa A.; Cornwell, Edward E.

    2012-01-01

    Objective To determine whether there are increased odds of mortality among trauma patients treated at hospitals with higher proportions of minority patients (i.e., black and Hispanic patients combined). Design Hospitals were categorized on the basis of the percentage of minority patients admitted with trauma. The adjusted odds of in-hospital mortality were compared between hospitals with less than 25% minority patients (the reference group), hospitals with 25% to 50% minority patients, and hospitals with more than 50% minority patients. Multivariate logistic regression (with generalized linear modeling and a cluster-correlated robust estimate of variance) was used to control for multiple patient and injury severity characteristics. Setting A total of 434 hospitals in the National Trauma Data Bank. Participants Patients aged 18 to 64 years whose medical records were included in the National Trauma Data Bank for the years 2007 and 2008, with an Injury Severity Score of 9 or greater, and who were white, black, or Hispanic. Main Outcome Measures Crude mortality and adjusted odds of in-hospital mortality. Results A total of 311,568 patients were examined. Hospitals in which the percentage of minority patients was more than 50% also had younger patients, fewer female patients, more patients with penetrating trauma, and the highest crude mortality. After adjustment for potential confounders, patients treated at hospitals in which the percentage of minority patients was 25% to 50%, and at hospitals in which it was more than 50%, demonstrated increased odds of death (adjusted odds ratio, 1.16 [95% confidence interval, 1.01–1.34] and adjusted odds ratio, 1.37 [95% confidence interval, 1.16–1.61], respectively), compared with the reference group. This disparity increased further on subset analysis of patients with a blunt injury. Uninsured patients had significantly increased odds of mortality within
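    The odds-ratio logic underlying results like those above can be sketched for the crude (unadjusted) case from a 2x2 mortality table, using a Wald confidence interval on the log odds ratio. This is only illustrative: the counts below are hypothetical, and the study's adjusted odds ratios came from multivariate logistic regression with cluster-robust variance, not from this simple calculation:

```python
import math

def odds_ratio_ci(deaths_exposed, surv_exposed, deaths_ref, surv_ref, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 mortality table."""
    or_ = (deaths_exposed * surv_ref) / (deaths_ref * surv_exposed)
    # Standard error of log(OR) is sqrt of the summed reciprocal cell counts.
    se_log = math.sqrt(1 / deaths_exposed + 1 / surv_exposed
                       + 1 / deaths_ref + 1 / surv_ref)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: high-minority-share hospitals vs the reference group.
or_, lo, hi = odds_ratio_ci(deaths_exposed=300, surv_exposed=4700,
                            deaths_ref=250, surv_ref=5250)
```

    An interval whose lower bound stays above 1 (as in the study's 1.16–1.61) indicates significantly increased odds of death relative to the reference hospitals.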

  15. Metabolic syndrome: Differences for Asian Americans is in their percentage of body fat

    Directory of Open Access Journals (Sweden)

    Patricia Alpert

    2016-09-01

    Full Text Available Asian Americans are not frequently thought of as being obese or overweight, yet some Asian American subgroups have a disproportionate risk for cardiovascular disease and type 2 diabetes mellitus. Although the standardized body mass index (BMI) assessment is an adequate tool for reporting secular prevalence trends for overweight/obesity across populations, it falls short in accuracy when assessing Asian Americans. In recent years more has been written about the re-evaluation of BMI cut points for normal-weight, overweight, or obese Asian Americans. Additionally, the waist circumference norm was modified to indicate that a smaller waist size is a risk for metabolic syndrome. The purpose of this paper is to provide an overview of the research literature on BMI and percentage of body fat as they relate to health risk for metabolic syndrome in Asian American subgroups. Three databases were used to identify articles for this review: Google Scholar, CINAHL, and PubMed. Seven hundred twenty-six articles were initially identified as meeting the criteria; 690 articles were eliminated after a review of the article titles revealed the content did not meet the focus of this review. Of the remaining articles, 19 were eliminated after a review of the abstracts indicated they were meta-analyses, review articles, or case studies. The remaining 18 articles were included in this review. Three common themes emerged. (1) Differences in BMI and body fat percentage are evident between Asian Americans and other ethnic groups. (2) Differences in the percentage of body fat exist between Asian American subgroups, and between Asian Americans and Asian immigrants. (3) There are differences in disease-development end points when comparing Asian American subgroups and Asian immigrant subgroups. Compared to other ethnic groups, there are differences in body fat distribution and body fat percentage, as well as in the BMI thresholds relevant to metabolic syndrome. 
There are also differences between Asian

  16. [Prognosis and percentage of employment after the surgery in Marfan syndrome].

    Science.gov (United States)

    Adachi, H; Kawahito, K; Yamaguchi, A; Murata, S; Ino, T

    2002-07-01

    The percentage of employment among Marfan patients after the Bentall procedure was studied. Eighteen of 20 patients (90%) returned to their daily life and are working well after the surgery. Seven patients (35%) needed a second operation due to enlargement of the false lumen during the follow-up period. Fatal cardiovascular events occurred in the families of 7 patients (35%) in our series. Careful follow-up, adequate selection of medical and surgical treatment including a second operation, and medical examination of their families are important to maintain a good quality of life in Marfan patients.

  17. Regular Breakfast and Blood Lead Levels among Preschool Children

    Directory of Open Access Journals (Sweden)

    Needleman Herbert

    2011-04-01

    Full Text Available Abstract Background Previous studies have shown that fasting increases lead absorption in the gastrointestinal tract of adults. Regular meals/snacks are recommended as a nutritional intervention for lead poisoning in children, but epidemiological evidence of links between fasting and blood lead levels (B-Pb) is rare. The purpose of this study was to examine the association between eating a regular breakfast and B-Pb among children using data from the China Jintan Child Cohort Study. Methods Parents completed a questionnaire regarding children's breakfast-eating habit (regular or not), demographics, and food frequency. Whole blood samples were collected from 1,344 children for the measurement of B-Pb and micronutrients (iron, copper, zinc, calcium, and magnesium). B-Pb and other measures were compared between children with and without regular breakfast. Linear regression modeling was used to evaluate the association between regular breakfast and log-transformed B-Pb. The association between regular breakfast and risk of lead poisoning (B-Pb ≥ 10 μg/dL) was examined using logistic regression modeling. Results Median B-Pb among children who ate breakfast regularly and those who did not were 6.1 μg/dL and 7.2 μg/dL, respectively. Eating breakfast was also associated with greater zinc blood levels. Adjusting for other relevant factors, the linear regression model revealed that eating breakfast regularly was significantly associated with lower B-Pb (beta = -0.10 units of log-transformed B-Pb compared with children who did not eat breakfast regularly, p = 0.02). Conclusion The present study provides some initial human data supporting the notion that eating a regular breakfast might reduce B-Pb in young children. To our knowledge, this is the first human study exploring the association between breakfast frequency and B-Pb in young children.
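    The regression logic above (log-transformed B-Pb regressed on a regular-breakfast indicator) can be sketched with ordinary least squares on synthetic data. This is not the study's dataset or full covariate-adjusted model; the data are simulated so that the true indicator coefficient matches the reported beta of -0.10:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
breakfast = rng.integers(0, 2, n)          # 1 = eats breakfast regularly (simulated)
true_beta = -0.10                          # effect on log(B-Pb), as reported
log_bpb = np.log(7.0) + true_beta * breakfast + 0.3 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), breakfast])   # intercept + breakfast indicator
beta, *_ = np.linalg.lstsq(X, log_bpb, rcond=None)
# beta[1] estimates the change in log(B-Pb) for regular-breakfast children;
# exp(beta[1]) is the multiplicative change in geometric-mean B-Pb.
```

    Because the outcome is log-transformed, a coefficient of -0.10 corresponds to roughly a 10% lower geometric-mean blood lead level (exp(-0.10) ≈ 0.90), consistent with the 6.1 vs 7.2 μg/dL medians reported.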

  18. Effect of feed texture on growth performance, dressing percentage and organ weight of broilers

    International Nuclear Information System (INIS)

    Mahmood, S.; Altaf, H.; Hassan, M.M.U.

    2013-01-01

    Comparative efficacy of two important forms of feed, mash and crumbles, fed alone or in combination (mash-crumbles), was studied on growth performance, dressing percentage and organ weight of broilers. One hundred twenty broiler chicks were used in the present study and were fed mash, crumbles or mash-crumbles feed from 0 to 6 weeks of age. Four treatments, designated as A, B, C and D, were used in this experiment. Chicks in group A were kept on mash feeding, serving as control, and those in group B were offered crumbles. Group C was fed mash from day-old to two weeks and crumbles from three to six weeks of age, while group D was offered mash from day-old to four weeks and then crumbles for the next two weeks. The results of the experiment showed that different forms of feed exhibited a significant (P<0.05) effect on overall weight gain and feed conversion ratio (FCR) of the broilers, whereas feed consumption, dressing percentage and relative weights of liver, heart, gizzard, spleen, pancreas, intestine and abdominal fat pad of the birds remained unaffected by feed form. The broilers maintained on crumbles throughout the experimental period yielded more profit than the other treatment groups. (author)

  19. Change in donor profile influenced the percentage of organs transplanted from multiple organ donors.

    Science.gov (United States)

    Meers, C; Van Raemdonck, D; Van Gelder, F; Van Hees, D; Desschans, B; De Roey, J; Vanhaecke, J; Pirenne, J

    2009-03-01

    We hypothesized that the change in donor profile over the years influenced the percentage of transplantations. We reviewed medical records for all multiple-organ donors (MODs) within our network. The percentage of transplanted organs was compared between 1991-1992 (A) and 2006-2007 (B). In period A, 156 potential MODs were identified compared with 278 in period B. Fifteen potential donors (10%) in period A and 114 (41%) in period B were rejected because they were medically not suitable (40% vs 75%) or there was no family consent (60% vs 25%). Of the remaining effective MODs (141 in period A and 164 in period B), mean (SD) age was 34 (5) years vs 49 (17) years. The percentage of organs transplanted in periods A vs B was: kidneys, 97% vs 79%; livers, 64% vs 85%; hearts, 60% vs 26%; lungs, 7% vs 35%; and pancreas, 6% vs 13%. A proportion of organs (17%) was not transplanted, mainly because of medical contraindications. The MOD profile changed to older age, fewer traumatic brain deaths, and longer ventilation time. We transplanted more livers, lungs, and pancreases but fewer kidneys and hearts.

  20. Hydraulic conductivity in response to exchangeable sodium percentage and solution salt concentration

    Directory of Open Access Journals (Sweden)

    Jefferson Luiz de Aguiar Paes

    2014-10-01

    Full Text Available Hydraulic conductivity is determined in laboratory assays to estimate the flow of water in saturated soils. However, the results of this analysis, when using distilled or deionized water, may not correspond to field conditions in soils with high concentrations of soluble salts. This study therefore set out to determine the hydraulic conductivity in laboratory conditions using solutions of different electrical conductivities in six soils representative of the State of Pernambuco, with the exchangeable sodium percentage adjusted in the range of 5-30%. The results showed an increase in hydraulic conductivity with both decreasing exchangeable sodium percentage and increasing electrical conductivity in the solution. The response to the treatments was more pronounced in soils with higher proportion of more active clays. Determination of hydraulic conductivity in laboratory is routinely performed with deionized or distilled water. However, in salt affected soils, these determinations should be carried out using solutions of electrical conductivity different from 0 dS m-1, with values close to those determined in the saturation extracts.

  1. Lower percentage of CD8+ T cells in peripheral blood of patients with sporotrichosis.

    Science.gov (United States)

    Zhu, Mingji; Xu, Yaqin; An, Lin; Jiang, Jinlan; Zhang, Xu; Jiang, Rihua

    2016-07-01

    To characterize the peripheral immunity and immune response of patients with sporotrichosis, in this study we determined the lymphocyte subsets in the peripheral blood of Chinese patients with sporotrichosis. In this retrospective study, peripheral blood was collected from 69 sporotrichosis patients (37 with the fixed cutaneous form; 32 lymphocutaneous) and 66 healthy controls. Lymphocyte subsets were analyzed using flow cytometry. Compared to controls, the percentage of CD8+ T cells was lower in sporotrichosis patients. The percentage of CD8+ T cells in peripheral blood tended to become lower with disease duration and disease severity, although the difference was not statistically significant either between acute, subacute and chronic patients or between fixed cutaneous and lymphocutaneous patients. Our data indicate that the decrease of CD8+ T cells in the peripheral blood of patients with sporotrichosis is associated with disease severity, although the difference was not statistically significant for either duration or clinical form of the disease. Combining antifungal agents and immunomodulators in patients with long disease duration and lymphocutaneous disease may be more beneficial than antifungal monotherapy. Copyright © 2016. Published by Elsevier Inc.

  2. Comparison of methodologies in determining bone marrow fat percentage under different environmental conditions.

    Science.gov (United States)

    Murden, David; Hunnam, Jaimie; De Groef, Bert; Rawlin, Grant; McCowan, Christina

    2017-01-01

    The use of bone marrow fat percentage has been recommended in assessing body condition at the time of death in wild and domestic ruminants, but few studies have looked at the effects of time and exposure on animal bone marrow. We investigated the utility of bone marrow fat extraction as a tool for establishing antemortem body condition in postmortem specimens from sheep and cattle, particularly after exposure to high heat, and compared different techniques of fat extraction for this purpose. Femora were collected from healthy and "skinny" sheep and cattle. The bones were either frozen or subjected to 40°C heat; heated bones were either wrapped in plastic to minimize desiccation or were left unwrapped. Marrow fat percentage was determined at different time intervals by oven-drying, or by solvent extraction using hexane in manual equipment or a Soxhlet apparatus. Extraction was performed, where possible, on both wet and dried tissue. Multiple samples were tested from each bone. Bone marrow fat analysis using a manual, hexane-based extraction technique was found to be a moderately sensitive method of assessing antemortem body condition of cattle up to 6 d after death. Multiple replicates should be analyzed where possible. Samples from "skinny" sheep showed a different response to heat from those of "healthy" sheep; "skinny" samples were so reduced in quantity by day 6 (the first sampling day) that no individual testing could be performed. Further work is required to understand the response of sheep marrow.

  3. Ombuds’ corner: Is the number of cases involving women related to their percentage in an organization?

    CERN Multimedia

    Vincent Vuillemin

    2013-01-01

    Over the past two years, the Ombuds has seen double the number of cases involving women staff members compared to those involving men, relative to their populations. Two questions can thus be asked: is that a general phenomenon also seen in other organizations? Or is it related to the under-representation of women, namely, is this a common situation in organizations with fewer women than men? If so, the Ombuds should notice different statistics in organizations where the numbers of women and men are comparable.   To answer these questions, several annual reports from international organizations have been analysed. The names of these organizations are kept confidential, as the reports are not public. Figure: relation between the percentage of cases involving women and their percentage in an organization; the circled data point is CERN's. The results can be seen in this graph, limited solely by the number of organizations for which such data is available. Note that if the ...

  4. Mathematical model for body fat percentage of children with cerebral palsy

    Directory of Open Access Journals (Sweden)

    Eduardo Borba Neves

    Full Text Available Abstract Introduction The aim of this study was to develop a specific mathematical model to estimate the body fat percentage (BF%) of children with cerebral palsy, based on a Brazilian population of patients with this condition. Method This is a descriptive cross-sectional study. The study included 63 Caucasian children with cerebral palsy, both males and females, aged between three and ten years. Participants were assessed for functional motor impairment using the Gross Motor Function Classification System (GMFCS), dual-energy x-ray absorptiometry (DXA) and skinfold thickness. Total body mass (TBM) and skinfold thicknesses from the triceps (Tr), biceps (Bi), suprailiac (Si), medium thigh (Th), abdominal (Ab), medial calf (Ca) and subscapular (Se) sites were collected. Fat mass (FM) was estimated by dual-energy x-ray absorptiometry (gold standard). Results The model was built from multivariate linear regression; FM was set as the dependent variable and the other anthropometric variables, age and sex were set as independent variables. The final model was established as BF% = ((0.433 × TBM + 0.063 × Th + 0.167 × Si − 6.768) ÷ TBM) × 100; the R2 value was 0.950, adjusted R2 = 0.948 and the standard error of estimate was 1.039 kg. Conclusion This method was shown to be valid to estimate the body fat percentage of children with cerebral palsy. Also, the measurement of skinfolds on both sides of the body showed good results in this modelling.
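The reported regression model can be evaluated directly. A minimal sketch, assuming TBM is in kilograms and the thigh (Th) and suprailiac (Si) skinfolds are in millimetres, as in the abstract; the function name and example values are illustrative:

```python
def body_fat_percentage(tbm_kg: float, thigh_mm: float, suprailiac_mm: float) -> float:
    """BF% from the reported model:
    BF% = ((0.433*TBM + 0.063*Th + 0.167*Si - 6.768) / TBM) * 100,
    i.e. estimated fat mass (kg) divided by total body mass, as a percentage."""
    fat_mass_kg = 0.433 * tbm_kg + 0.063 * thigh_mm + 0.167 * suprailiac_mm - 6.768
    return fat_mass_kg / tbm_kg * 100

# Hypothetical child: 20 kg, 10 mm thigh and 8 mm suprailiac skinfolds
print(round(body_fat_percentage(20.0, 10.0, 8.0), 2))  # → 19.29
```

The inner expression is the fat-mass estimate in kilograms (hence the reported standard error of 1.039 kg); dividing by TBM converts it to a percentage.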

  5. Influence of serum percentage on the behavior of Wharton's jelly mesenchymal stem cells in culture.

    Science.gov (United States)

    Harmouch, C; El-Omar, R; Labrude, P; Decot, V; Menu, P; Kerdjoudj, H

    2013-01-01

    Mesenchymal stem cells (MSCs) are multipotent cells able to differentiate into several lineages, with valuable applications in regenerative medicine. MSC differentiation is highly dependent on the physicochemical properties of the culture substrate, on cell density and on culture medium composition. In this study, we assessed the influence of fetal bovine serum (FBS) level on the behavior of Wharton's jelly (WJ)-MSCs seeded on polyelectrolyte multilayer films (PEMF) made of four bilayers of poly-allylamine hydrochloride (PAH) as polycation and poly-styrene sulfonate (PSS) as polyanion. MSCs isolated from WJ by the explant method were amplified until the third passage. Their phenotypic characterization was performed by flow cytometry analyses. MSCs were seeded on PEMF in endothelial growth medium-2 (EGM-2) supplemented with either 5% or 2% FBS. Cell behavior was monitored for 20 days by optical microscopy and immunofluorescence. For up to 2 weeks on glass slides, no difference was observed whatever the FBS percentage. Then, with 5% FBS, MSCs formed three-dimensional spheroids with a nuclear aggregate on PSS/PAH after 20 days of culture, whereas with 2% FBS these spheroids did not appear and the cells, grown in 2D, conserved their fibroblast-like morphology. The decrease of the FBS percentage from 5% to 2% thus prevents 3D cell spheroid formation on PAH/PSS. Such results could guide bioengineering towards building 2D structures such as cell layers, or 3D structures, by increasing the osteogenic or chondrogenic differentiation potential of MSCs.

  6. Effect on physical properties of laterite soil with difference percentage of sodium bentonite

    Science.gov (United States)

    Kasim, Nur Aisyah; Azmi, Nor Azizah Che; Mukri, Mazidah; Noor, Siti Nur Aishah Mohd

    2017-08-01

    This research was carried out to characterize the physical properties of laterite soil containing different percentages of sodium bentonite. Lateritic soils usually develop in tropical and other regions with a similarly hot and humid climate, where heavy rainfall, warm temperatures and good drainage lead to the formation of thick horizons of reddish lateritic soil profiles rich in iron and aluminium. When sodium predominates, a large amount of water can be absorbed in the interlayer, resulting in the remarkable swelling properties observed with hydrating sodium bentonite. Several basic physical property tests were conducted in this research: specific gravity, pH, sieve analysis, hydrometer, shrinkage limit and Atterberg limits. The tests were conducted with 0%, 5%, 10%, 15% and 20% of sodium bentonite, and each test was repeated three times for accuracy. From these tests, the reaction of the soil's characteristics to sodium bentonite can be determined, and therefore the best percentage of sodium bentonite admixture for laterite soil can be identified. The outcomes of this study are positive, indicating the potential of sodium bentonite to improve laterite soil particles.

  7. Carcass percentage and quality of broilers given a ration containing probiotics and prebiotics

    Directory of Open Access Journals (Sweden)

    Muhammad Daud

    2007-10-01

    Full Text Available Probiotics are feed additives in the form of live microorganisms that balance the microorganism population in the digestive tract, while prebiotics are feed substances which are not digested and selectively improve the growth and activity of useful microbes in the large intestine. The objectives of this research were to study the carcass percentage and carcass quality of broilers given a ration containing probiotics and prebiotics. Four hundred eighty day-old chicks of the Arbor Acres broiler strain were divided into four dietary treatments with three replications (40 birds/replicate). The rations used were: R1 = basal ration + 0.01% antibiotics (zinc bacitracin), R2 = basal ration + 0.2% probiotics (Bacillus spp), R3 = basal ration + 0.2% probiotics + 0.5% prebiotics, and R4 = basal ration + 0.5% prebiotics (katuk leaves). The variables observed were: carcass percentage; fat content in the abdomen, liver and carcass; and cholesterol content in the liver, carcass and blood serum. The results showed that the carcass quality of broilers receiving probiotics and prebiotics, either independently or in combination, was significantly (P<0.05) different. The fat content of liver and thigh, and breast cholesterol, of R3 were significantly (P<0.05) lower than those of the control (R1). It is concluded that the combination of probiotics and prebiotics was able to decrease carcass fat and cholesterol content at six weeks of age.

  8. The Effect of Regular-Season Rest on Playoff Performance Among Players in the National Basketball Association.

    Science.gov (United States)

    Belk, John W; Marshall, Hayden A; McCarty, Eric C; Kraeutler, Matthew J

    2017-10-01

    There has been speculation that rest during the regular season for players in the National Basketball Association (NBA) improves player performance in the postseason. To determine whether there is a correlation between the amount of regular-season rest among NBA players and playoff performance and injury risk in the same season. Cohort study; Level of evidence, 3. The Basketball Reference and Pro Sports Transactions archives were searched from the 2005 to 2015 seasons. Data were collected on players who missed fewer than 5 regular-season games because of rest (group A) and 5 to 9 regular-season games because of rest (group B) during each season. Inclusion criteria consisted of players who played a minimum of 20 minutes per game and made the playoffs that season. Players were excluded if they missed ≥10 games because of rest or suspension or missed ≥20 games in a season for any reason. Matched pairs were formed between the groups based on the following criteria: position, mean age at the start of the season within 2 years, regular-season minutes per game within 5 minutes, same playoff seeding, and player efficiency rating (PER) within 2 points. The following data from the playoffs were collected and compared between matched pairs at each position (point guard, shooting guard, forward/center): points per game, assists per game, PER, true shooting percentage, blocks, steals, and number of playoff games missed because of injury. A total of 811 players met the inclusion and exclusion criteria (group A: n = 744 players; group B: n = 67 players). Among all eligible players, 27 matched pairs were formed. Within these matched pairs, players in group B missed significantly more regular-season games because of rest than players in group A (6.0 games vs 1.3 games, respectively). Rest during the NBA regular season does not improve playoff performance or affect the injury risk during the playoffs in the same season.

  9. Chimeric mitochondrial peptides from contiguous regular and swinger RNA.

    Science.gov (United States)

    Seligmann, Hervé

    2016-01-01

    Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein-coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcriptions detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than the previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in the results. Chimeric peptides are 200× rarer than swinger peptides (3/100,000 versus 6/1,000). Among 186 peptides with >8 residues for each of the regular and swinger parts, the regular parts of eleven chimeric peptides correspond to six among the thirteen recognized mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. The present results strengthen the hypothesis that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.
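The bijective exchange rules described above are simple to illustrate. A minimal sketch using the two example rules quoted in the abstract (A ↔ C symmetric; A → C → G → A asymmetric); the function name and sample sequence are illustrative only:

```python
def swinger(seq: str, rule: dict) -> str:
    """Apply a bijective nucleotide-exchange rule position by position;
    bases not named by the rule are left unchanged."""
    return "".join(rule.get(base, base) for base in seq)

# Symmetric exchange A <-> C (one of the nine X <-> Y rules)
sym = {"A": "C", "C": "A"}
# Asymmetric exchange A -> C -> G -> A (one of the fourteen X -> Y -> Z -> X rules)
asym = {"A": "C", "C": "G", "G": "A"}

print(swinger("ACGT", sym))   # → CAGT
print(swinger("ACGT", asym))  # → CGAT
```

Because each rule is a bijection, applying a symmetric rule twice, or an asymmetric three-cycle three times, recovers the original sequence.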

  10. Turán type inequalities for regular Coulomb wave functions

    OpenAIRE

    Baricz, Árpád

    2015-01-01

    Turán, Mitrinović-Adamović and Wilker type inequalities are deduced for regular Coulomb wave functions. The proofs are based on a Mittag-Leffler expansion for the regular Coulomb wave function, which may be of independent interest. Moreover, some complete monotonicity results concerning the Coulomb zeta functions and some interlacing properties of the zeros of Coulomb wave functions are given.

  11. Regularization and Complexity Control in Feed-forward Networks

    OpenAIRE

    Bishop, C. M.

    1995-01-01

    In this paper we consider four alternative approaches to complexity control in feed-forward networks based respectively on architecture selection, regularization, early stopping, and training with noise. We show that there are close similarities between these approaches and we argue that, for most practical applications, the technique of regularization should be the method of choice.

  12. Optimal Embeddings of Distance Regular Graphs into Euclidean Spaces

    NARCIS (Netherlands)

    F. Vallentin (Frank)

    2008-01-01

    In this paper we give a lower bound for the least distortion embedding of a distance-regular graph into Euclidean space. We use the lower bound for finding the least distortion for Hamming graphs, Johnson graphs, and all strongly regular graphs. Our technique involves semidefinite programming.

  13. Degree-regular triangulations of torus and Klein bottle

    Indian Academy of Sciences (India)

    Proceedings – Mathematical Sciences, Volume 115, Issue 3 ... A triangulation of a connected closed surface is called degree-regular if each of its vertices has the same degree. ... In [5], Datta and Nilakantan have classified all the degree-regular triangulations of closed surfaces on at most 11 vertices.

  14. Adaptive Regularization of Neural Networks Using Conjugate Gradient

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Andersen et al. (1997) and Larsen et al. (1996, 1997) suggested a regularization scheme which iteratively adapts regularization parameters by minimizing validation error using simple gradient descent. In this contribution we present an improved algorithm based on the conjugate gradient technique. Numerical experiments with feedforward neural networks successfully demonstrate improved generalization ability and lower computational cost.

  15. Strictly-regular number system and data structures

    DEFF Research Database (Denmark)

    Elmasry, Amr Ahmed Abd Elmoneim; Jensen, Claus; Katajainen, Jyrki

    2010-01-01

    We introduce a new number system that we call the strictly-regular system, which efficiently supports the operations: digit-increment, digit-decrement, cut, concatenate, and add. Compared to other number systems, the strictly-regular system has distinguishable properties. It is superior to the re...

  16. Inclusion Professional Development Model and Regular Middle School Educators

    Science.gov (United States)

    Royster, Otelia; Reglin, Gary L.; Losike-Sedimo, Nonofo

    2014-01-01

    The purpose of this study was to determine the impact of a professional development model on regular education middle school teachers' knowledge of best practices for teaching inclusive classes and attitudes toward teaching these classes. There were 19 regular education teachers who taught the core subjects. Findings for Research Question 1…

  17. The equivalence problem for LL- and LR-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus; Gecsec, F.

    It will be shown that the equivalence problem for LL-regular grammars is decidable. Apart from extending the known result for LL(k) grammar equivalence to LL-regular grammar equivalence, we obtain an alternative proof of the decidability of LL(k) equivalence. The equivalence problem for LL-regular ...

  18. The Effects of Regular Exercise on the Physical Fitness Levels

    Science.gov (United States)

    Kirandi, Ozlem

    2016-01-01

    The purpose of the present research is to investigate the effects of regular exercise on the physical fitness levels of sedentary individuals. A total of 65 sedentary male individuals between the ages of 19 and 45, who had never exercised regularly in their lives, participated in the present research. Of these participants, 35 wanted to be…

  19. Regular perturbations in a vector space with indefinite metric

    International Nuclear Information System (INIS)

    Chiang, C.C.

    1975-08-01

    The Klein space is discussed in connection with practical applications. Some lemmas are presented which are to be used for the discussion of regular self-adjoint operators. The criteria for the regularity of perturbed operators are given. (U.S.)

  20. Pairing renormalization and regularization within the local density approximation

    International Nuclear Information System (INIS)

    Borycki, P.J.; Dobaczewski, J.; Nazarewicz, W.; Stoitsov, M.V.

    2006-01-01

    We discuss methods used in mean-field theories to treat pairing correlations within the local density approximation. Pairing renormalization and regularization procedures are compared in spherical and deformed nuclei. Both prescriptions give fairly similar results, although the theoretical motivation, simplicity, and stability of the regularization procedure make it a method of choice for future applications

  1. Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears

    Science.gov (United States)

    Chen, Sau-Chin; Hu, Jon-Fan

    2015-01-01

    Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…

  2. Regularity conditions of the field on a toroidal magnetic surface

    International Nuclear Information System (INIS)

    Bouligand, M.

    1985-06-01

    We show that a vector field B which is derived from an analytic canonical potential on an ordinary toroidal surface is regular on this surface when the potential satisfies an elliptic equation (owing to the conservative field) subject to certain conditions of regularity of its coefficients.

  3. 47 CFR 76.614 - Cable television system regular monitoring.

    Science.gov (United States)

    2010-10-01

    ...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... in these bands of 20 uV/m or greater at a distance of 3 meters. During regular monitoring, any leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in the...

  4. Analysis of regularized Navier-Stokes equations, 2

    Science.gov (United States)

    Ou, Yuh-Roung; Sritharan, S. S.

    1989-01-01

    A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions was studied. Local as well as global invariant manifolds were found. Regularity properties of these manifolds are analyzed.

  5. 20 CFR 226.33 - Spouse regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Spouse regular annuity rate. 226.33 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.33 Spouse regular annuity rate. The final tier I and tier II rates, from §§ 226.30 and 226.32, are...

  6. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
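The abstract does not spell out the regularized estimator, but the general ridge idea it describes can be sketched: when path coefficients are estimated from correlations among nearly collinear predictors, adding a ridge term λI stabilizes the matrix inverse. A generic, hypothetical illustration (not the authors' exact PLSc estimator; the function name, λ value and correlation matrices are assumptions):

```python
import numpy as np

def ridge_path_coefficients(R_xx: np.ndarray, r_xy: np.ndarray, lam: float = 0.1) -> np.ndarray:
    """Ridge-type estimate beta = (R_xx + lam*I)^{-1} r_xy, where R_xx holds
    correlations among predictors and r_xy their correlations with the outcome.
    The lam*I term keeps the solve stable under multicollinearity."""
    p = R_xx.shape[0]
    return np.linalg.solve(R_xx + lam * np.eye(p), r_xy)

# Two nearly collinear predictors (inter-correlation 0.99)
R_xx = np.array([[1.0, 0.99], [0.99, 1.0]])
r_xy = np.array([0.60, 0.59])
print(ridge_path_coefficients(R_xx, r_xy, lam=0.1))
```

With λ = 0, R_xx here is almost singular and the coefficients blow up; a small ridge penalty trades a little bias for much lower variance, which is the power/accuracy trade-off the simulation study evaluates.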

  7. Optimal behaviour can violate the principle of regularity.

    Science.gov (United States)

    Trimmer, Pete C

    2013-07-22

    Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled as irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory--based on axioms including transitivity, regularity and the independence of irrelevant alternatives--is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision.

  8. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  9. Laplacian manifold regularization method for fluorescence molecular tomography

    Science.gov (United States)

    He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei

    2017-04-01

    Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods allow for utilizing the sparsity nature of the target distribution. However, in addition to sparsity, the spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient projection-resolved Laplacian manifold regularization method for the joint model performed better than the comparative ℓ1-minimization algorithm in both spatial aggregation and location accuracy.

  10. RPE vs. Percentage 1RM Loading in Periodized Programs Matched for Sets and Repetitions

    Directory of Open Access Journals (Sweden)

    Eric R. Helms

    2018-03-01

    Full Text Available Purpose: To investigate differences between rating of perceived exertion (RPE) and percentage one-repetition maximum (1RM) load assignment in resistance-trained males (19–35 years) performing protocols with matched sets and repetitions, differentiated by load assignment. Methods: Participants performed squats then bench press 3×/week in a daily undulating format over 8 weeks. Participants were counterbalanced by pre-test 1RM, then assigned to a percentage-1RM group (1RMG, n = 11; load assignment via percentage 1RMs) or an RPE group (RPEG, n = 10; participant-selected loads to reach target RPE ranges). Ultrasonography determined pre- and post-test pectoralis muscle thickness (PMT) and vastus lateralis muscle thickness at 50% (VLMT50) and 70% (VLMT70) of femur length. Results: Bench press (1RMG +9.64 ± 5.36; RPEG +10.70 ± 3.30 kg), squat (1RMG +13.91 ± 5.89; RPEG +17.05 ± 5.44 kg) and their combined-total 1RMs (1RMG +23.55 ± 10.38; RPEG +27.75 ± 7.94 kg) increased (p < 0.05) in both groups, as did PMT (1RMG +1.59 ± 1.33; RPEG +1.90 ± 1.91 mm), VLMT50 (1RMG +2.13 ± 1.95; RPEG +1.85 ± 1.97 mm) and VLMT70 (1RMG +2.40 ± 2.22; RPEG +2.31 ± 2.27 mm). Between-group differences were non-significant (p > 0.05). Magnitude-based inferences revealed 79, 57, and 72% chances of a mean small effect size (ES) advantage for squat (ES 90% confidence limits (CL) = 0.50 ± 0.63), bench press (ES 90% CL = 0.28 ± 0.73), and combined total (ES 90% CL = 0.48 ± 0.68), respectively, in RPEG. There were 4, 14, and 6% chances that 1RMG had a strength advantage of the same magnitude, and 18, 29, and 22% chances, respectively, of trivial differences between groups. Conclusions: Both loading types are effective. However, RPE-based loading may provide a small 1RM strength advantage in a majority of individuals.

  11. Percentage-based Author Contribution Index: a universal measure of author contribution to scientific articles.

    Science.gov (United States)

    Boyer, Stéphane; Ikeda, Takayoshi; Lefort, Marie-Caroline; Malumbres-Olarte, Jagoba; Schmidt, Jason M

    2017-01-01

    Deciphering the amount of work provided by different co-authors of a scientific paper has been a recurrent problem in science. Despite the myriad of metrics available, the scientific community still largely relies on the position in the list of authors to evaluate contributions, a metric that attributes subjective and unfounded credit to co-authors. We propose an easy-to-apply, universally comparable and fair metric to measure and report co-authors' contributions in the scientific literature. The proposed Author Contribution Index (ACI) is based on contribution percentages provided by the authors, preferably at the time of submission. Researchers can use ACI to compare the contributions of different authors, describe the contribution profile of a particular researcher, or analyse how contribution changes through time. We provide such an analysis based on contribution percentages provided by 97 scientists from the field of ecology who voluntarily responded to an online anonymous survey. ACI is simple to understand and to implement because it is based solely on percentage contributions and the number of co-authors. It provides a continuous score that reflects the contribution of one author as compared to the average contribution of all other authors. For example, ACI(i) = 3 means that author i contributed three times more than what the other authors contributed on average. Our analysis comprised 836 papers published in 2014-2016 and revealed patterns of ACI values that relate to career advancement. There are many examples of author contribution indices that have been proposed, but none has really been adopted by scientific journals. Many of the proposed solutions are either too complicated, not accurate enough, or not comparable across articles, authors and disciplines. The author contribution index presented here addresses these three major issues and has the potential to contribute to more transparency in the science literature. If adopted by scientific journals, it
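
The abstract defines ACI only through an example (ACI(i) = 3 means author i contributed three times the average of the other authors). One formula consistent with that reading, treated here as an assumption rather than the paper's exact definition, is ACI(i) = c_i · (n − 1) / (100 − c_i), where c_i is author i's percentage contribution and n the number of co-authors:

```python
def aci(contributions):
    """Author Contribution Index for each author.

    contributions: percentage contributions summing to 100.
    ACI(i) compares author i's share with the average share of the
    other co-authors: ACI(i) = c_i * (n - 1) / (100 - c_i).
    (Assumed reading of the abstract's ACI(i) = 3 example.)
    """
    n = len(contributions)
    total = sum(contributions)
    assert abs(total - 100) < 1e-9, "percentages must sum to 100"
    return [c * (n - 1) / (total - c) for c in contributions]

# Three authors: the first did half the work.
print(aci([50, 30, 20]))  # first author's ACI = 50 * 2 / 50 = 2.0
```

Equal contributions give every author an ACI of exactly 1, which matches the abstract's description of the score as relative to the average contribution of all other authors.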

  12. Compressive characteristics of closed-cell aluminum foams with different percentages of Er element

    Directory of Open Access Journals (Sweden)

    Wei-min Zhao

    2016-01-01

    Full Text Available In the present study, closed-cell aluminum foams with different percentages of erbium (Er element were successfully prepared. The distribution and existence form of erbium (Er element and its effect on the compressive properties of the foams were investigated. Results show that Er uniformly distributes in the cell walls in the forms of Al3Er intermetallic compound and Al-Er solid solutions. Compared with commercially pure aluminum foam, Er-containing foams possess higher micro-hardness, compressive strength and energy absorption capacity due to solid solution strengthening and second phase strengthening effects. Additionally, the amount of Er element should be controlled in the range of 0.10wt.%-0.50wt.% in order to obtain a good combination of compressive strength and energy absorption properties.

  13. Relationship between percentage of body fat and anthropometric indicators in individuals attending a gym

    Directory of Open Access Journals (Sweden)

    T. Grossl

    2010-01-01

    Full Text Available The aim of this study was to investigate the relationship between percentage of body fat (% BF and anthropometric indicators in individuals attending a gym. Four hundred and thirty eight individuals, 195 men and 243 women, from 18 to 50 years of age took part in this study. The % BF was estimated by the skinfold method. The following anthropometric indicators were assessed: waist circumference, abdomen circumference (AC, waist-to-hip ratio, body mass index (BMI and waist-height ratio. Linear Pearson correlation and simple linear regression analysis were used to investigate the relationship between variables. For women, BMI strongly correlated with % BF (r = .73, whereas for males, AC showed high correlation with % BF (r = .73. With varying degrees of magnitude, there were significant correlations between all of the anthropometric indicators analyzed and % BF.

  14. Percentage of vestibular dysfunction in 361 elderly citizens responding to a newspaper advertisement

    DEFF Research Database (Denmark)

    Brandt, Michael Smærup; Grönvall, Erik; Mørch, Marianne Metz

    Brandt M, Grönvall E, Henriksen JJ, Larsen SB, Læssøe U, Mørch MM, Damsgaard EM. Introduction: Elderly patients with vestibular dysfunction have an eight-fold increased risk of falling compared ... advertisement. Method: To recruit elderly citizens with dizziness we advertised in a local newspaper. A telephone interview with the respondents was done by a physiotherapist (PT). If the PT concluded that the reason for the dizziness could be vestibular dysfunction, the citizen was invited to further ... Department, Aarhus University Hospital. Results: 361 elderly citizens responded to the advertisement. 8 patients had alcohol problems, 14 had significantly impaired vision, 42 had evidence of orthostatic hypotension, 49 didn't want to participate, 50 had evidence of Benign Paroxysmal Positional Vertigo (BPPV) ...

  15. Verification of performance of the power percentage channel for the TRIGA Mark III reactor

    International Nuclear Information System (INIS)

    Paredes G, L.C.

    1991-10-01

    It was found that the response of the power percentage channel is correct, based on the positive results of the independent tests carried out on the gamma ionization chamber and the electronics associated with this channel. For the gamma chamber, it was verified that the appropriate operating voltage is 800 V and that, during steady-state operation at 1 MW for 2 h, maximum variations of 3% were observed. It was also determined that the degradation in sensitivity to gamma radiation is 10.24%, since this chamber has not been replaced since the reactor entered operation on November 8, 1968; its short-term replacement will therefore be considered owing to the burnup it presents. Regarding the electronics of the channel, the chamber response was simulated over intervals of 6 h, and in the 4 cases analyzed the channel response was linear. (Author)

  16. Ultrasonic characterization of GRC with high percentage of fly ash substitution.

    Science.gov (United States)

    Genovés, V; Gosálbez, J; Miralles, R; Bonilla, M; Payá, J

    2015-07-01

    New applications of non-destructive techniques (NDT) with ultrasonic tests (attenuation and velocity measured by means of ultrasonic frequency sweeps) have been developed for the characterization of fibre-reinforced cementitious composites. In line with new lines of research on glass-fibre reinforced cement (GRC) matrix modification, two similar GRC composites with high percentages of fly ash and different water/binder ratios were studied. Conventional techniques were used to confirm their low Ca(OH)2 content (thermogravimetry), fibre integrity (scanning electron microscopy), low porosity (mercury intrusion porosimetry) and good mechanical properties (compression and four-point bending tests). Ultrasound frequency sweeps allowed the estimation of attenuation and pulse velocity as functions of frequency. This ultrasonic characterization correlated successfully with the conventional techniques. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Unit Price and Cost Estimation Equations through Items Percentage of Construction Works in a Desert Area

    Directory of Open Access Journals (Sweden)

    Kadhim Raheem

    2015-02-01

    Full Text Available This research covers different aspects of the cost-estimating process for construction work in a desert area. The inherent difficulties that accompany cost estimation of construction works in a desert environment in a developing country stem from the limited information available, resource scarcity, a low level of skilled workers, the prevailing severe weather conditions and many other factors, which make a fair, reliable and accurate estimate difficult. This study presents unit prices for estimating cost in the preliminary phase of a project. The estimates are supported by mathematical equations developed from historical data on maintenance and on new construction of managerial and school projects. The research has also determined the percentage weight of project items in such a remote environment. Estimation equations suitable for remote areas have been formulated, and a procedure for unit-price calculation is presented.

  18. Strenuous exercise decreases the percentage of type 1 T cells in the circulation

    DEFF Research Database (Denmark)

    Steensberg, A; Toft, A D; Bruunsgaard, H

    2001-01-01

    Prolonged strenuous exercise is followed by a temporary functional immune impairment. Low numbers of CD4+ T helper (Th) and CD8+ T cytotoxic (Tc) cells are found in the circulation. These cells can be divided according to their cytokine profile into type 1 (Th1 and Tc1) cells, which produce interferon-gamma and interleukin (IL)-2, and type 2 (Th2 and Tc2) cells, which produce IL-4. The question addressed in the present study was whether exercise affected the relative balance between the circulating levels of these cytokine-producing T cells. Nine male runners performed treadmill running for 2.5 h at 75% of maximal oxygen consumption. The intracellular expression of cytokines was detected following stimulation with ionomycin and phorbol 12-myristate 13-acetate in blood obtained before, during, and after exercise. The percentage of type 1 T cells in the circulation was suppressed at the end of exercise and 2 h after ...

  19. Detection of erythropoietin misuse by the Athlete Biological Passport combined with reticulocyte percentage

    DEFF Research Database (Denmark)

    Bejder, Jacob; Aachmann-Andersen, Niels Jacob; Bonne, Thomas Christian

    2016-01-01

    The sensitivity of the adaptive model of the Athlete Biological Passport (ABP) and reticulocyte percentage (ret%) in the detection of recombinant human erythropoietin (rHuEPO) misuse was evaluated using both a long-term normal-dose and a brief high-dose treatment regime. Sixteen subjects received ... initiation. The ABP based on haemoglobin concentration ([Hb]) and the OFF-hr score ([Hb] - 60×√ret%) yielded atypical profiles following both normal-dose and high-dose treatment (0%, 31%, 13% vs. 21%, 33%, 20% at days 4, 11, and 25 after normal and high dose, respectively). ... will present an atypical ABP profile. Including ret% as a stand-alone parameter improves the sensitivity two-fold.
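
The OFF-hr score quoted in the abstract, [Hb] − 60×√ret%, is simple enough to compute directly. The sketch below assumes [Hb] is expressed in g/L, as is conventional for this score; the input values are illustrative, not study data:

```python
import math

def off_hr(hb_g_per_l, ret_pct):
    """OFF-hr score as quoted in the abstract: [Hb] - 60 * sqrt(ret%).

    hb_g_per_l: haemoglobin concentration (assumed g/L).
    ret_pct:    reticulocyte percentage.
    """
    return hb_g_per_l - 60.0 * math.sqrt(ret_pct)

# Hypothetical values: suppressed reticulocytes after an rHuEPO course
# raise the OFF-hr score at a given haemoglobin concentration.
print(off_hr(150.0, 1.0))   # 150 - 60*1.0 = 90.0
print(off_hr(150.0, 0.25))  # 150 - 60*0.5 = 120.0
```

The square root makes the score sensitive to reticulocyte suppression, which is why pairing it with ret% as a stand-alone parameter can improve sensitivity.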

  20. Determination of electron clinical spectra from percentage depth dose (PDD) curves by classical simulated annealing method

    International Nuclear Information System (INIS)

    Visbal, Jorge H. Wilches; Costa, Alessandro M.

    2016-01-01

    The percentage depth dose (PDD) of electron beams represents an important item of data in radiation therapy since it describes their dosimetric properties. Using an accurate transport theory, or the Monte Carlo method, obvious differences have been shown between the dose distribution of the electron beams of a clinical accelerator in a water phantom and the dose distribution, in water, of monoenergetic electrons at the nominal energy of the accelerator. In radiotherapy, the electron spectrum should therefore be considered to improve the accuracy of dose calculation, since the shape of the PDD curve depends on the way radiation particles deposit their energy in the patient/phantom, that is, on the spectrum. There are three principal approaches to obtaining electron energy spectra from the central-axis PDD: the Monte Carlo method, direct measurement, and inverse reconstruction. In this work the simulated annealing method is presented as a practical, reliable and simple approach to inverse reconstruction and an optimal alternative to the other options. (author)

  1. A Data Matrix Method for Improving the Quantification of Element Percentages of SEM/EDX Analysis

    Science.gov (United States)

    Lane, John

    2009-01-01

    A simple 2D M × N matrix approach involving sample preparation enables the microanalyst to peer below the noise floor of element percentages reported by SEM/EDX (scanning electron microscopy/energy dispersive x-ray) analysis, thus yielding more meaningful data. Using the example of a 2 × 3 sample set, there are M = 2 concentration levels of the original mix under test: 10 percent ilmenite (90 percent silica) and 20 percent ilmenite (80 percent silica). For each of these M samples, N = 3 separate SEM/EDX samples were drawn. In this test, ilmenite is the element of interest. By plotting the linear trend of each M sample's known concentration versus the average of its N samples, a much higher resolution of elemental analysis can be performed. The resulting trend also shows how the noise is affecting the data, and at what point (for smaller concentrations) it becomes impractical to try to extract any further useful data.
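
The averaging-and-trend step described above can be sketched as a small calibration fit. The EDX readings below are hypothetical stand-ins (the abstract does not report the raw numbers); with M = 2 known concentrations the line through the replicate means passes through both averaged points exactly:

```python
import numpy as np

# Hypothetical EDX readings (wt%) for M = 2 known ilmenite concentrations,
# N = 3 replicate measurements each; values are illustrative only.
known = np.array([10.0, 20.0])                  # true mix concentrations
readings = np.array([[8.9, 10.4, 9.6],          # replicates at 10 %
                     [19.1, 20.8, 19.8]])       # replicates at 20 %

means = readings.mean(axis=1)                   # average the N replicates
slope, intercept = np.polyfit(known, means, 1)  # linear trend of the M points

def calibrate(raw_reading):
    """Map a raw EDX reading back onto the known-concentration axis."""
    return (raw_reading - intercept) / slope
```

The scatter of the replicates around the fitted line is exactly the noise the method exposes; as concentrations shrink toward the noise floor, that scatter dominates and calibration stops being useful.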

  2. Changes in Wine Aroma Composition According to Botrytized Berry Percentage: A Preliminary Study on Amarone Wine

    Directory of Open Access Journals (Sweden)

    Bruno Fedrizzi

    2011-01-01

    Full Text Available The aim of this study is to evaluate the impact of Botrytis cinerea, a noble rot, on the aroma components of Amarone, a dry red wine produced from withered grapes. A comparative analysis of wines obtained from manually selected healthy and botrytized grapes was performed. Aroma analysis revealed that most compounds varied significantly according to the percentage of botrytized berries utilized. Botrytized wines contained fewer fatty acids and more fruity acetates than healthy wines. A positive correlation between the content of N-(3-methylbutyl)acetamide, sherry lactone and an unidentified compound and the level of fungal infection was also observed. The results indicate that noble rot can significantly modify important aroma components of Amarone wine.

  3. A novel body circumferences-based estimation of percentage body fat.

    Science.gov (United States)

    Lahav, Yair; Epstein, Yoram; Kedem, Ron; Schermann, Haggai

    2018-03-01

    Anthropometric measures of body composition are often used for rapid and cost-effective estimation of percentage body fat (%BF) in field research, serial measurements and screening. Our aim was to develop a validated estimate of %BF for the general population, based on simple body circumference measures. The study cohort consisted of two consecutive samples of health club members, designated as 'development' (n = 476; 61% men, 39% women) and 'validation' (n = 224; 50% men, 50% women) groups. All subjects underwent anthropometric measurements as part of their registration to a health club. A dual-energy X-ray absorptiometry (DEXA) scan was used as the 'gold standard' estimate of %BF. Linear regressions were used to construct the predictive equation (%BFcal). Bland-Altman statistics, Lin concordance coefficients and the percentage of subjects falling within 5% of the %BF estimate by DEXA were used to evaluate the accuracy and precision of the equation. The variance inflation factor was used to check multicollinearity. Two distinct equations were developed for men and women: %BFcal (men) = 10.1 - 0.239H + 0.8A - 0.5N; %BFcal (women) = 19.2 - 0.239H + 0.8A - 0.5N (H, height; A, abdomen; N, neck, all in cm). Bland-Altman differences were randomly distributed and showed no fixed bias. Lin concordance coefficients of %BFcal were 0.89 in men and 0.86 in women. About 79.5% of %BF predictions in both sexes were within ±5% of the DEXA value. The Durnin-Womersley skinfold equation was less accurate than %BFcal for predicting %BF in our study group. We conclude that %BFcal offers the advantage of obtaining a reliable estimate of %BF from simple measurements that require no sophisticated tools and only minimal prior training and experience.
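
The two sex-specific equations quoted in the abstract differ only in their intercept, so they can be wrapped in a single helper. The example input values are illustrative, not taken from the study:

```python
def bf_cal(sex, height_cm, abdomen_cm, neck_cm):
    """Percentage body fat from the circumference equations quoted in the
    abstract: %BF = k - 0.239*H + 0.8*A - 0.5*N, with k = 10.1 for men
    and k = 19.2 for women (H = height, A = abdomen, N = neck, all in cm).
    """
    k = 10.1 if sex == "male" else 19.2
    return k - 0.239 * height_cm + 0.8 * abdomen_cm - 0.5 * neck_cm

# Illustrative (hypothetical) measurements, not study data:
print(round(bf_cal("male", 180.0, 90.0, 38.0), 1))  # 10.1 - 43.02 + 72 - 19 ≈ 20.1
```

For identical circumferences, the female estimate exceeds the male one by a constant 9.1 percentage points, which is exactly the difference between the two intercepts.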

  4. Effect of Natural Sand Percentages on Fatigue Life of Asphalt Concrete Mixture

    Directory of Open Access Journals (Sweden)

    Nahla Yassub Ahmed

    2016-03-01

    Full Text Available The design of a flexible pavement requires knowledge of the material properties, which are characterized by stiffness and fatigue resistance. Fatigue resistance relates the number of load cycles to failure to the strain level applied to the asphalt mixture. The main objective of this research is to evaluate the fatigue life of asphalt mixtures using two types of fine aggregate at different percentages. Two types of fine aggregate were used: natural sand (desert sand) and crushed sand. The crushed sand was replaced by natural (desert) sand at different percentages (0%, 25%, 75% and 100%) by weight of the sand fraction (passing sieve No. 8 and retained on sieve No. 200), with one type of binder (40/50 penetration) from the Al-Daurah refinery. Beam samples were tested in the four-point bending fatigue test in controlled-strain mode (250, 500 and 750 microstrain) at a loading frequency of 5 Hz and a testing temperature of 20°C according to AASHTO T321. The experimental work showed that fatigue life (Nf) and initial flexural stiffness increased as the controlled strain decreased. Acceptable fatigue life at 750 microstrain was obtained with asphalt concrete mixtures containing 100% crushed sand as well as with asphalt concrete containing 25% natural sand. The asphalt concrete containing 100% and 75% natural sand exhibited high fatigue life at the low strain level (250 microstrain). The main conclusion of this study is that the best proportion of natural sand to add to an asphalt concrete mixture falls within the range of 0% to 25% by weight of the fraction passing the No. 8 and retained on the No. 200 sieve.

  5. [Percentage of uric acid calculus and its metabolic character in Dongjiang River valley].

    Science.gov (United States)

    Chong, Hong-Heng; An, Geng

    2009-02-15

    The aim was to study the percentage of uric acid calculi among uroliths and their metabolic characteristics in the Dongjiang River valley. The chemical composition of 290 urinary stones was analyzed by infrared (IR) spectroscopy to study changes in the proportion of uric acid calculi. Uric acid calculus patients and healthy people were studied; personal characteristics and dietary habits were collected, and conditional logistic regression was used to analyse the dietary risk factors for uric acid calculi. Patients with uric acid calculi, patients with calcium oxalate calculi and people without urinary calculi underwent metabolic evaluation, and the results for uric acid calculus patients were compared with the other two groups to analyse the relation between the formation of uric acid calculi and metabolic factors. Uric acid calculi were found in 53 cases (18.3%). The multiple logistic regression analysis suggested that low daily water intake and eating more salted and animal food and fewer vegetables were very closely associated with uric acid calculi. Compared with calcium oxalate patients, urine volume, pH, urine calcium and urine oxalic acid were lower, while uric acid was higher. Compared with people without urinary calculi, pH, urine oxalic acid and citric acid were lower, while uric acid and urine calcium were higher; blood potassium and magnesium were lower. The percentage of uric acid stones has increased markedly. Low daily water intake, eating salted food, eating more animal food, fewer vegetables, daily orange juice intake and eating seafood are the main dietary risk factors for the formation of uric acid calculi. Urine volume, pH, citric acid, urine calcium, urine uric acid and blood sodium, potassium, magnesium, calcium and uric acid significantly influence the formation of uric acid stones.

  6. Using saturation water percentage data to predict mechanical composition of soils

    International Nuclear Information System (INIS)

    Mbagwu, J.S.C.; Okafor, D.O.

    1995-04-01

    One hundred and sixty-six soil samples representing eleven textural classes and having wide variations in organic matter (OM) contents and other physico-chemical properties were collected from different locations in southeastern Nigeria to study the relationship between mechanical composition and saturation water percentage (SP). The objective was to develop a prediction model for silt + clay (SC) and clay (C) contents of these soils using the SP values. The magnitude of the correlation coefficients (r) between SC or C and SP was dependent on the amount of organic matter (OM) present in the soils. For soils with ≤ 1.00% OM, the correlation (r) between SC and SP was 0.9659 (p ≤ 0.001) and that between C and SP was 0.9539 (p ≤ 0.001). For soils with ≥ 2.00% OM, the 'r' values were generally low, varying between 0.5320 and 0.2665 for SC and 0.6008 and 0.3000 for C. The best-fit regression models for predicting SC and C were developed with soils having ≤ 1.00% OM. An independent data set from 25 soil samples collected from other parts of the study area was used to test the predictive ability of the best-fit models. These models predicted SC and C accurately in soils having between 0.28 and 1.10% OM, but poorly in soils having between 1.31 and 3.91% OM. These results show that the use of saturation water percentage to predict the mechanical composition of soils is most reliable for soils with low (≤ 1.00%) OM contents. (author). 18 refs, 2 figs, 5 tabs

  7. Three regularities of recognition memory: the role of bias.

    Science.gov (United States)

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.

  8. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
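
The core computational object above, the general-form Tikhonov solution, can be sketched directly; the grid search over training data below is a deliberately simplified stand-in for the paper's empirical Bayes risk minimization framework, and all problem data are synthetic assumptions:

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """General-form Tikhonov solution: argmin ||Ax - b||^2 + lam^2 ||Lx||^2,
    computed via the regularized normal equations."""
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

# "Learn" lambda from training data by minimizing the error against a known
# training solution over a grid -- a toy version of empirical risk minimization.
rng = np.random.default_rng(1)
n = 40
A = rng.standard_normal((n, n)) / np.sqrt(n)
L = np.eye(n) - np.eye(n, k=1)                     # first-difference operator
x_true = np.cumsum(rng.standard_normal(n)) * 0.1   # smooth-ish training signal
b = A @ x_true + 0.01 * rng.standard_normal(n)

grid = [10.0**k for k in range(-6, 1)]
errs = [np.linalg.norm(tikhonov(A, b, L, lam) - x_true) for lam in grid]
best = grid[int(np.argmin(errs))]
```

The paper's contribution is doing this selection efficiently (e.g. via simultaneous diagonalization and learned filters) rather than by brute-force re-solves, but the objective being minimized is the same average training error.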

  9. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    Science.gov (United States)

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while large numbers of unlabeled examples are available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  10. 26 CFR 1.613-7 - Application of percentage depletion rates provided in section 613(b) to certain taxable years...

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 7 2010-04-01 Application of percentage depletion rates... TAXES (CONTINUED) Natural Resources § 1.613-7 Application of percentage depletion rates provided in... depletion rate specified in section 613 in respect of any mineral property (within the meaning of the 1939...

  11. 20 CFR Appendix Vi to Subpart C of... - Percentage of Automatic Increases in Primary Insurance Amounts Since 1978

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 Percentage of Automatic Increases in Primary... ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. VI Appendix VI to Subpart C of Part 404—Percentage of Automatic Increases in...

  12. 24 CFR 1000.238 - What percentage of the IHBG funds can be used for administrative and planning expenses?

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 What percentage of the IHBG funds can be used for administrative and planning expenses? 1000.238 Section 1000.238 Housing and Urban... ACTIVITIES Indian Housing Plan (IHP) § 1000.238 What percentage of the IHBG funds can be used for...

  13. SURFACE ELECTROMYOGRAPHY OF MASSETER AND TEMPORAL MUSCLES WITH USE PERCENTAGE WHILE CHEWING ON CANDIDATES FOR GASTROPLASTY.

    Science.gov (United States)

    Santos, Andréa Cavalcante Dos; Silva, Carlos Antonio Bruno da

    Surface electromyography identifies changes in the electrical potential of the muscles during each contraction. The percentage of use is a way of normalizing values that enables comparison between groups. The aim was to analyze the electrical activity and the percentage of use of the masseter and temporal muscles during chewing in candidates for gastric bypass. A Miotool 200/400 surface electromyograph (Miotec(r), Porto Alegre/RS, Brazil) integrated with Miograph 2.0 software was used, involving patients between 20-40 years old. Data were collected on the electrical activity, simultaneously and in pairs, of the temporal and masseter muscle groups at rest, at maximum intercuspation and during the chewing of previously classified food. 39 patients (59 women) were enrolled, mean age 27.1±5.7 years. The percentage of use was concentrated, for the temporal muscle, in the 11-20 range: in females, n = 11 (47.82%) on the left side and 15 (65.21%) on the right side; in males, nine (56.25%) on the left and 12 (75.00%) on the right. For the masseter, also in the 11-20 range: in females, n = 10 (43.48%) on the left side and 11 (47.83%) on the right side; in males, nine (56.25%) on the left and eight (50.00%) on the right. 40-50% of the sample showed electrical activity in the masseter and temporal muscles with variable values; after conversion to percentage of use, which facilitates comparison of electrical activity between groups, the percentage of use of muscle fibers involved values of 11-20%, a range that may be taken as a reference for the group studied. Gender was not a differentiating variable.

  14. Closedness type regularity conditions in convex optimization and beyond

    Directory of Open Access Journals (Sweden)

    Sorin-Mihai Grad

    2016-09-01

    Full Text Available The closedness type regularity conditions have proven during the last decade to be viable alternatives to their more restrictive interiority type counterparts, in both convex optimization and different areas where it was successfully applied. In this review article we de- and reconstruct some closedness type regularity conditions formulated by means of epigraphs and subdifferentials, respectively, for general optimization problems in order to stress that they arise naturally when dealing with such problems. The results are then specialized for constrained and unconstrained convex optimization problems. We also hint towards other classes of optimization problems where closedness type regularity conditions were successfully employed and discuss other possible applications of them.

  15. Capped Lp approximations for the composite L0 regularization problem

    OpenAIRE

    Li, Qia; Zhang, Na

    2017-01-01

    The composite L0 function serves as a sparse regularizer in many applications. The algorithmic difficulty caused by the composite L0 regularization (the L0 norm composed with a linear mapping) is usually bypassed through approximating the L0 norm. We consider in this paper capped Lp approximations with $p>0$ for the composite L0 regularization problem. For each $p>0$, the capped Lp function converges to the L0 norm pointwisely as the approximation parameter tends to infinity. We point out tha...

  16. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces, showing that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
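
    A hedged sketch of the multiscale-kernel idea: sum Gaussian kernels at several widths and plug the resulting kernel into a regularized least-squares fit. Kernel ridge regression stands in here for the paper's ranking estimator, and the scales and regularization parameter are illustrative assumptions:

```python
import numpy as np

# Multiscale kernel = sum of Gaussian kernels at several widths (assumed
# construction); kernel ridge regression stands in for the ranking estimator.

def multiscale_kernel(X, Y, scales=(0.1, 0.3, 1.0)):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return sum(np.exp(-d2 / (2.0 * s * s)) for s in scales)

X = np.linspace(0, 1, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0])

lam = 0.01                                            # regularization parameter
K = multiscale_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # representer-theorem coefficients
pred = K @ alpha
print(float(np.max(np.abs(pred - y))))                # small in-sample residual
```

By the representer theorem the regularized solution is a kernel expansion over the training points, which is why a single linear solve suffices in this sketch.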

  17. Likelihood ratio decisions in memory: three implied regularities.

    Science.gov (United States)

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
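
    For the equal-variance Gaussian signal-detection model, the likelihood-ratio decision axis the abstract refers to has a simple closed form; the sketch below uses assumed parameters (d' = 1, unit variance), not values from the study:

```python
from math import exp

# Hedged sketch: likelihood-ratio decision axis for equal-variance Gaussian
# signal detection. Old items ~ N(d', 1), new items ~ N(0, 1); the ratio of
# the two densities simplifies to exp(d' * (x - d'/2)). d' = 1 is assumed.

def likelihood_ratio(x, d_prime=1.0):
    return exp(d_prime * (x - d_prime / 2.0))

# Respond "old" when the ratio exceeds 1, i.e. when x > d'/2.
print(likelihood_ratio(1.0) > 1.0)   # True: strength 1.0 is past the neutral point
print(likelihood_ratio(0.5))         # 1.0 exactly at the neutral point x = d'/2
```

Deciding on the likelihood-ratio axis rather than on raw strength is what drives the symmetry properties (such as the mirror effect) discussed in the abstract.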

  18. Fluctuations of quantum fields via zeta function regularization

    International Nuclear Information System (INIS)

    Cognola, Guido; Zerbini, Sergio; Elizalde, Emilio

    2002-01-01

    Explicit expressions for the expectation values and the variances of some observables, which are bilinear quantities in the quantum fields on a D-dimensional manifold, are derived making use of zeta function regularization. It is found that the variance, related to the second functional variation of the effective action, requires a further regularization and that the relative regularized variance turns out to be 2/N, where N is the number of the fields, thus being independent of the dimension D. Some illustrative examples are worked through. The issue of the stress tensor is also briefly addressed.

  19. Low-Rank Matrix Factorization With Adaptive Graph Regularizer.

    Science.gov (United States)

    Lu, Gui-Fu; Wang, Yong; Zou, Jian

    2016-05-01

    In this paper, we present a novel low-rank matrix factorization algorithm with adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms the state-of-the-art low-rank matrix factorization methods.

  20. Regularization theory for ill-posed problems selected topics

    CERN Document Server

    Lu, Shuai

    2013-01-01

    This monograph is a valuable contribution to the highly topical and extremely productive field of regularisation methods for inverse and ill-posed problems. The author is an internationally outstanding and accepted mathematician in this field. In his book he offers a well-balanced mixture of basic and innovative aspects. He demonstrates new, differentiated viewpoints, and important examples for applications. The book demonstrates the current developments in the field of regularization theory, such as multiparameter regularization and regularization in learning theory. The book is written for graduates and PhDs

  1. A regulatory adjustment process for the determination of the optimal percentage requirement in an electricity market with Tradable Green Certificates

    International Nuclear Information System (INIS)

    Currier, Kevin M.

    2013-01-01

    A system of Tradable Green Certificates (TGCs) is a market-based subsidy scheme designed to promote electricity generation from renewable energy sources such as wind power. Under a TGC system, the principal policy instrument is the “percentage requirement,” which stipulates the percentage of total electricity production (“green” plus “black”) that must be obtained from renewable sources. In this paper, we propose a regulatory adjustment process that a regulator can employ to determine the socially optimal percentage requirement, explicitly accounting for environmental damages resulting from black electricity generation. - Highlights: • A Tradable Green Certificate (TGC) system promotes energy production from renewable sources. • We consider an electricity oligopoly operated under a TGC system. • Welfare analysis must account for damages from “black” electricity production. • We characterize the welfare maximizing (optimal) “percentage requirement.” • We present a regulatory adjustment process that computes the optimal percentage requirement iteratively
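
    A purely illustrative sketch of such an iterative adjustment process: the regulator nudges the percentage requirement alpha in the direction of the welfare gradient until it settles at the maximizer. The welfare function W below is a toy concave stand-in, not the paper's oligopoly model:

```python
# Hypothetical sketch of an iterative regulatory adjustment of the percentage
# requirement alpha. W(alpha) is a toy concave welfare function (surplus net of
# environmental damages from black generation); it is illustrative only.

def adjust(W, alpha0=0.1, step=0.01, h=1e-5, iters=5000):
    alpha = alpha0
    for _ in range(iters):
        grad = (W(alpha + h) - W(alpha - h)) / (2 * h)   # numerical welfare gradient
        alpha = min(1.0, max(0.0, alpha + step * grad))  # keep requirement in [0, 1]
    return alpha

W = lambda a: -(a - 0.35) ** 2  # toy welfare, maximized at a 35% requirement
print(round(adjust(W), 3))      # 0.35
```

Gradient-following on a concave welfare function converges to the unique maximizer, which is the "socially optimal percentage requirement" role such a process plays in the paper's setting.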

  2. Periodontal disease and percentage of calories from fat using national data.

    Science.gov (United States)

    Hamasaki, T; Kitamura, M; Kawashita, Y; Ando, Y; Saito, T

    2017-02-01

    The association between periodontal disease and nutrient intake was examined using linked data from the National Health and Nutrition Survey, the Comprehensive Survey of Living Conditions and the Survey of Dental Diseases, all from 2005. There has been increasing focus on the importance of nutritional factors in disease in recent years, but very few studies in Japan have looked at the association between periodontal disease and nutrients. Therefore, in the present study we investigated factors associated with periodontal disease, particularly in terms of nutrient intake. Data from 3043 individuals ≥ 20 years of age (the original study sample comprised 4873 individuals, but those younger than 20 years of age and pregnant women were excluded from the present study) were compiled from linked responses to these three surveys. Permission to use the data was obtained from the Lifestyle-Related Diseases Control General Affairs Division of the Ministry of Health, Labor, and Welfare, Japan. Information including basic attributes, family structure, economic status, physical condition, lifestyle habits, diet, dental habits, blood data, intake of foods (including food categories) and nutrient-related information was obtained from the linked data. The individual maximum Community Periodontal Index (CPI) was used as an index of periodontal disease. Subjects were divided, according to maximum CPI, into groups with CPI = 0-2 or CPI = 3-4, and associations between CPI and basic attributes, family structure, economic status, physical condition, lifestyle habits, diet, blood data and food intake were analyzed. Multivariate analysis revealed that the percentage of calories from fat was the nutrient factor associated with periodontal disease.

  3. On the theory of drainage area for regular and non-regular points

    Science.gov (United States)

    Bonetti, S.; Bragg, A. D.; Porporato, A.

    2018-03-01

    The drainage area is an important, non-local property of a landscape, which controls surface and subsurface hydrological fluxes. Its role in numerous ecohydrological and geomorphological applications has given rise to several numerical methods for its computation. However, its theoretical analysis has lagged behind. Only recently, an analytical definition for the specific catchment area was proposed (Gallant & Hutchinson. 2011 Water Resour. Res. 47, W05535. (doi:10.1029/2009WR008540)), with the derivation of a differential equation whose validity is limited to regular points of the watershed. Here, we show that such a differential equation can be derived from a continuity equation (Chen et al. 2014 Geomorphology 219, 68-86. (doi:10.1016/j.geomorph.2014.04.037)) and extend the theory to critical and singular points both by applying Gauss's theorem and by means of a dynamical systems approach to define basins of attraction of local surface minima. Simple analytical examples as well as applications to more complex topographic surfaces are examined. The theoretical description of topographic features and properties, such as the drainage area, channel lines and watershed divides, can be broadly adopted to develop and test the numerical algorithms currently used in digital terrain analysis for the computation of the drainage area, as well as for the theoretical analysis of landscape evolution and stability.

  4. Self- and rater-assessed effectiveness of "thinking-aloud" and "regular" morning report to intensify young physicians' clinical skills.

    Science.gov (United States)

    Hsu, Hui-Chi; Lee, Fa-Yauh; Yang, Ying-Ying; Tsao, Yen-Po; Lee, Wen-Shin; Chuang, Chiao-Lin; Chang, Ching-Chih; Huang, Chia-Chang; Huang, Chin-Chou; Ho, Shung-Tai

    2015-09-01

    This study compared the effects of the "thinking aloud" (TA) morning report (MR), which is characterized by sequential and interactive case discussion by all participants, with "regular" MR for the clinical skill training of young physicians. Between February 2011 and February 2014, young physicians (including postgraduate year-1 (PGY1) residents, interns, and clerks) from our hospital were sequentially enrolled and followed for 3 months. The self- and rater-assessed educational values of the two MR models for building up the clinical skills of young physicians were compared. The junior (intern and clerk) attendees had higher self-assessed educational value scores and reported higher post-training application frequency of skills trained by TA MR compared with the senior (PGY1 resident) attendees. Higher averages and percentages of increased overall rater-assessed OSCE scores were noted among the regular MR senior attendees and TA MR junior attendees than in their corresponding control groups (regular MR junior attendees and TA MR senior attendees). Interestingly, regular MRs provided additional beneficial effects for establishing the "professionalism, consulting skills and organization efficiency" aspects of clinical skills of senior/junior attendees. Moreover, senior and junior attendees benefited the most by participating in seven sessions of regular MR and TA MR each month, respectively. TA MR effectively trains junior attendees in basic clinical skills, whereas regular MR enhances senior attendees' "work reports, professionalism, organizational efficiency, and skills in dealing with controversial and professional issues." Undoubtedly, elements of the two MR models should be integrated to ensure patient safety and good discipline among young physicians. Copyright © 2015. Published by Elsevier Taiwan.

  5. Automatic Constraint Detection for 2D Layout Regularization.

    Science.gov (United States)

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
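
    A toy sketch of the detect-then-enforce idea in one dimension: cluster nearly equal coordinates (a crude stand-in for the paper's automatic constraint detection) and snap each cluster to its least-squares value. The tolerance and coordinates are hypothetical:

```python
import numpy as np

# Toy sketch of constraint detection + enforcement for 1D layout coordinates
# (e.g. left edges of element bounding boxes). Tolerance-based clustering
# stands in for the paper's automatic detection; enforcing an equality
# constraint by least squares reduces to snapping a cluster to its mean.

def regularize_positions(coords, tol=5.0):
    x = np.asarray(coords, dtype=float)
    out = x.copy()
    order = np.argsort(x)
    cluster = [order[0]]
    for i in order[1:]:
        if x[i] - x[cluster[-1]] <= tol:      # detected alignment constraint
            cluster.append(i)
        else:
            out[cluster] = x[cluster].mean()  # enforce: snap to common coordinate
            cluster = [i]
    out[cluster] = x[cluster].mean()
    return out

print(regularize_positions([10, 12, 11, 40, 41]))  # three edges snap to 11, two to 40.5
```

The full method replaces this greedy clustering with optimization-driven constraint detection and solves a quadratic program over all alignment, size, and distance constraints at once.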

  6. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu; Pourahmadi, Mohsen; Maadooliat, Mehdi

    2014-01-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both

  7. A Regularized Algorithm for the Proximal Split Feasibility Problem

    Directory of Open Access Journals (Sweden)

    Zhangsong Yao

    2014-01-01

    Full Text Available The proximal split feasibility problem has been studied. A regularized method is presented for solving it, and a strong convergence theorem is given.

  8. Anaemia in Patients with Diabetes Mellitus attending regular ...

    African Journals Online (AJOL)

    Anaemia in Patients with Diabetes Mellitus attending regular Diabetic ... Nigerian Journal of Health and Biomedical Sciences ... some patients may omit important food items in their daily diet for fear of increasing their blood sugar level.

  9. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong

    2015-09-18

    In this paper, we address the problem of constraint detection for layout regularization. As layout we consider a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user-created content, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate the layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. We evaluate the proposed framework on a variety of input layouts from different applications; the results demonstrate that our method has superior performance to the state of the art.

  10. Body composition, disordered eating and menstrual regularity in a ...

    African Journals Online (AJOL)

    Body composition, disordered eating and menstrual regularity in a group of South African ... between body composition and disordered eating in irregular vs normal menstruating athletes. ... measured by air displacement plethysmography.

  11. A new approach to nonlinear constrained Tikhonov regularization

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti

    2011-01-01

    operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a

  12. Supporting primary school teachers in differentiating in the regular classroom

    NARCIS (Netherlands)

    Eysink, Tessa H.S.; Hulsbeek, Manon; Gijlers, Hannie

    Many primary school teachers experience difficulties in effectively differentiating in the regular classroom. This study investigated the effect of the STIP-approach on teachers' differentiation activities and self-efficacy, and children's learning outcomes and instructional value. Teachers using

  13. Lavrentiev regularization method for nonlinear ill-posed problems

    International Nuclear Information System (INIS)

    Kinh, Nguyen Van

    2002-10-01

    In this paper we shall be concerned with the Lavrentiev regularization method to reconstruct solutions x_0 of nonlinear ill-posed problems F(x) = y_0, where instead of y_0 noisy data y_δ ∈ X with ||y_δ - y_0|| ≤ δ are given and F: X → X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method, solutions x_α^δ are obtained by solving the singularly perturbed nonlinear operator equation F(x) + α(x - x*) = y_δ with some initial guess x*. Assuming certain conditions concerning the operator F and the smoothness of the element x* - x_0, we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter α has been chosen properly. (author)
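
    A toy numerical illustration of the scheme: solve F(x) + α(x − x*) = y_δ for a monotone (hence accretive) F on R², with α tied to the noise level. The specific F, the noise, and the choice α = √δ are illustrative assumptions, not the paper's setting:

```python
import numpy as np

# Toy Lavrentiev regularization: solve F(x) + alpha*(x - x_star) = y_delta
# for a strictly monotone F on R^2. F, delta and alpha = sqrt(delta) are
# illustrative choices, not taken from the paper.

def F(x):
    return x + x ** 3  # strictly monotone, hence accretive on R^n

def lavrentiev(y_delta, alpha, x_star, iters=100):
    x = x_star.copy()
    for _ in range(iters):  # Newton iteration on G(x) = F(x) + alpha*(x - x_star) - y_delta
        G = F(x) + alpha * (x - x_star) - y_delta
        Jdiag = 1.0 + 3.0 * x ** 2 + alpha   # Jacobian of G is diagonal for this F
        x = x - G / Jdiag
    return x

x0 = np.array([0.5, -0.2])                # exact solution of F(x) = y_0
delta = 1e-3
y_delta = F(x0) + delta                   # noisy data with ||y_delta - y_0|| ~ delta
x_reg = lavrentiev(y_delta, alpha=np.sqrt(delta), x_star=np.zeros(2))
print(np.linalg.norm(x_reg - x0) < 0.05)  # True: regularized solution near x0
```

The added term α(x − x*) makes the perturbed equation well-posed for any α > 0; the art, as the abstract notes, is choosing α relative to δ so that the reconstruction error is order optimal.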

  14. Regularized plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin; Dai, Wei; Schuster, Gerard T.

    2013-01-01

    A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity

  15. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong; Nan, Liangliang; Yan, Dongming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2015-01-01

    plans or images, such as floor plans and facade images, and for the improvement of user created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing

  16. Quantifying the percentage of methane formation via acetoclastic and syntrophic acetate oxidation pathways in anaerobic digesters.

    Science.gov (United States)

    Jiang, Ying; Banks, Charles; Zhang, Yue; Heaven, Sonia; Longhurst, Philip

    2018-01-01

    Ammonia concentration is one of the key factors influencing the methanogenic community composition and dominant methanogenic pathway in anaerobic digesters. This study adopted a radiolabelling technique using [2-¹⁴C]acetate to investigate the relationship between total ammonia nitrogen (TAN) and the methanogenic pathway. The radiolabelling experiments determined the ratio of ¹⁴CO₂ to ¹⁴CH₄ in the biogas, which was used to quantitatively determine the percentage of CH₄ derived from the acetoclastic and syntrophic acetate oxidation routes, respectively. This technique was performed on a selection of mesophilic digesters representing samples of low to high TAN concentrations (0.2-11.1 g kg⁻¹ wet weight). In high-TAN digesters, the ratio between ¹⁴CO₂ and ¹⁴CH₄ was in the range 2.1-3.0, indicating that 68-75% of methane was produced via the hydrogenotrophic route; whereas in low-ammonia samples the ratio was 0.1-0.3, indicating that 9-23% of methane was produced by the hydrogenotrophic route. These findings have been confirmed further by phylogenetic studies. Copyright © 2017. Published by Elsevier Ltd.
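
    The quoted percentages are consistent with a simple label balance on the methyl carbon (my reading of the record, not spelled out in the abstract): if R is the measured ¹⁴CO₂/¹⁴CH₄ ratio, the share of labelled methyl carbon that was oxidized, i.e. the hydrogenotrophic share, is R/(1+R):

```python
# Label balance for [2-14C]acetate: fraction oxidized = R / (1 + R),
# where R is the measured 14CO2/14CH4 ratio in the biogas. This reading
# reproduces the ranges quoted in the record.

def hydrogenotrophic_share(ratio):
    return ratio / (1.0 + ratio)

print(round(100 * hydrogenotrophic_share(2.1)))  # 68 (high-TAN, lower end)
print(round(100 * hydrogenotrophic_share(3.0)))  # 75 (high-TAN, upper end)
print(round(100 * hydrogenotrophic_share(0.1)))  # 9  (low-TAN, lower end)
```

Ratios of 2.1-3.0 map to 68-75% and 0.1-0.3 to 9-23%, matching the figures in the abstract.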

  17. The percentage of coral reef cover in Saonek Kecil Island, Raja Ampat, West Papua

    Science.gov (United States)

    Wiguna, D. A.; Masithah, E. D.; Manan, A.

    2018-04-01

    The Raja Ampat archipelago is located in the heart of the world's coral triangle, the center of the richest tropical marine biodiversity in the world. Saonek Kecil Island lies close to Waisai Harbour (±2 km by sea route); an uninhabited island this close to harbour activities potentially suffers coral reef damage. This research used the Line Intercept Transect (LIT) method, which calculates the length of each colony growth form (life form) of coral reef along a 50-metre line transect laid parallel to the coastline at each station, to obtain percentage cover data, the diversity index, the uniformity index, and the dominance index. The percentage cover of coral reef in the waters of Saonek Kecil Island reaches 68.80% - 79.30%, which falls into the good to very good category according to the reef-damage criteria in the decision of the State Minister for the Environment number 4 of 2001. The diversity index (H') was 0.487 - 0.675 (medium-high), the uniformity index (J) 0.437 - 0.606 (medium-high), and the dominance index (C) 0.338 - 0.502 (medium-high).
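
    The LIT percentage-cover computation reduces to a one-liner: total intercept length of live coral under the line divided by the transect length. The intercept values below are hypothetical:

```python
# Sketch of the Line Intercept Transect (LIT) percent-cover computation:
# percent cover = total live-coral intercept length / transect length * 100.
# The intercept lengths below are hypothetical example values.

def percent_cover(intercepts_cm, transect_cm=5000):  # 50 m transect, in cm
    return 100.0 * sum(intercepts_cm) / transect_cm

live_coral_cm = [320, 150, 410, 275, 95]  # colony intercepts under the line
print(percent_cover(live_coral_cm))       # 25.0
```

The same computation per life-form category yields the category breakdown from which the diversity, uniformity and dominance indices are then derived.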

  18. Analysis of the percentage voids of test and field specimens using computerized tomography

    International Nuclear Information System (INIS)

    Braz, D.; Lopes, R.T.; Motta, L.M.G. da

    1999-01-01

    Computerized tomography has been an excellent tool for the analysis of asphalt mixtures, because it allows comparison of the quality and integrity of test and field specimens. It was used to detect and follow the evolution of cracks when these mixtures were submitted to fatigue tests, and also to help interpret the distribution of stresses and strains that occur under the several types of loading imposed on the mixtures. Comparing the mean values of percentage voids obtained from tomographic images with the design values, it can be observed that the values of test and field specimens for the wearing course are closer to the design values than those of the binder. The wearing course specimens always present a quite homogeneous distribution of aggregate and voids over the whole profile of the sample, while the binder specimens show an accentuated differentiation of the same factors at the several heights of the sample. These considerations should therefore be taken into account when choosing a slice for tomography.

  19. A study of percentage body fat in children via dual energy X-ray absorptiometry (DEXA)

    International Nuclear Information System (INIS)

    Kawano, Shoji; Yagi, Shinichi; Fujino, Mitsuyoshi; Tanaka, Hiroyuki; Morita, Tetsuro; Fukunaga, Masao

    1994-01-01

    Percentage body fat was measured using dual energy X-ray absorptiometry (DEXA), bioelectrical impedance analysis (BIA) and skinfold calipers in 26 children (nine in the obesity group, 12 in the healthy group and five in the steroid-treated group). Mean percent body fat did not differ significantly between methods across all subjects, or within the healthy and steroid-treated groups. However, mean percent body fat measured with skinfold calipers was higher in the obesity group than with the other two methods, and all obesity-group measurements by DEXA were higher than those by BIA. There were high correlations among the percent body fat values obtained by each technique. According to the analysis of mean regional percent fat, the percent fat of the legs was the highest in the healthy and steroid-treated groups, while there was no regional difference in the obesity group. It should be possible to classify each case in the obesity group into upper-segment or lower-segment obesity by DEXA. (author)

  20. What kind of Relationship is Between Body Mass Index and Body Fat Percentage?

    Science.gov (United States)

    Kupusinac, Aleksandar; Stokić, Edita; Sukić, Enes; Rankov, Olivera; Katić, Andrea

    2017-01-01

    Although body mass index (BMI) and body fat percentage (BF%) are well known as indicators of nutritional status, there are insufficient data on whether the relationship between them is linear or not. Appropriate linear and quadratic formulas are available to predict BF% from age, gender and BMI. On the other hand, our previous research has shown that an artificial neural network (ANN) is a more accurate method for that purpose. The aim of this study is to analyze the relationship between BMI and BF% by using an ANN and a big dataset (3058 persons). Our results show that this relationship is rather quadratic than linear for both genders and all age groups. Comparing genders, the quadratic relationship is more pronounced in women, while the linear relationship is more pronounced in men. Additionally, our results show that the quadratic relationship is more pronounced in old than in young and middle-aged men, and slightly more pronounced in young and middle-aged than in old women.
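
    A toy illustration of the linear-vs-quadratic question on synthetic data (not the study's dataset): fit both polynomial models of BF% on BMI and compare in-sample R². Since the models are nested, the quadratic fit can never be worse in-sample; distinguishing them properly requires held-out data or a significance test on the quadratic coefficient:

```python
import numpy as np

# Synthetic data with a mildly quadratic ground truth plus noise; the
# coefficients are invented for illustration, not estimated from the study.
rng = np.random.default_rng(0)
bmi = rng.uniform(18, 40, 300)
bf = 1.4 * bmi - 0.012 * bmi ** 2 + rng.normal(0, 1.5, 300)

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

lin = np.polyval(np.polyfit(bmi, bf, 1), bmi)    # degree-1 least-squares fit
quad = np.polyval(np.polyfit(bmi, bf, 2), bmi)   # degree-2 least-squares fit
print(r2(bf, quad) >= r2(bf, lin))               # True: nested models
```

An ANN, as used in the study, generalizes this comparison by fitting the BMI-BF% curve without committing to a polynomial degree in advance.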

  1. Reference values for serum ferritin and percentage of transferrin saturation in Korean children and adolescents.

    Science.gov (United States)

    Oh, Hea Lin; Lee, Jun Ah; Kim, Dong Ho; Lim, Jung Sub

    2018-03-01

    Ferritin reference values vary by age, gender, and ethnicity. We aimed to determine reference values of serum ferritin (SF) and the percentage of transferrin saturation (TSAT) for Korean children and adolescents. We analyzed data from 2,487 participants (1,311 males and 1,176 females) aged 10-20 years from the Korea National Health and Nutrition Examination Survey (2010-2012). We calculated age- and gender-stratified means and percentile values for SF and TSAT. We first plotted mean SF and TSAT by gender and according to age. In males, mean SF tended to be relatively constant among participants aged 10 to 14 years, with an upward trend thereafter. Mean SF trended downward among female participants until the age of 15 years and remained constant thereafter. Thus, significant gender differences in ferritin exist from the age of 14 years. High levels of SF were associated with obesity, and lower SF levels were associated with anemia and menarche status. We established reference values of SF and TSAT according to age and gender. The reference values for SF calculated in this study can be used to test the association between SF values and other defined diseases in Korean children and adolescents.

  2. Novel equations to predict body fat percentage of Brazilian professional soccer players: A case study

    Directory of Open Access Journals (Sweden)

    Luiz Fernando Novack

    2014-12-01

    Full Text Available This study analyzed classical mathematical models, and developed novel ones, to predict body fat percentage (%BF) in professional soccer players from the southern Brazilian region using skinfold thickness measurements. Skinfolds of thirty-one male professional soccer players (age 21.48 ± 3.38 years, body mass 79.05 ± 9.48 kg and height 181.97 ± 8.11 cm) were introduced into eight mathematical models from the literature for the prediction of %BF; these results were then compared to dual-energy X-ray absorptiometry (DXA). The classical equations were able to account for 65% to 79% of the variation of %BF in DXA. Statistical differences between most of the classical equations (seven of the eight) and DXA were found, arguing against their widespread use in this population. We developed three new equations for the prediction of %BF using skinfolds from the axilla, abdomen, thigh and calf. These equations accounted for 86.5% of the variation in %BF obtained with DXA.

  3. Age, gender, and percentage of circulating osteoprogenitor (COP) cells: The COP Study.

    Science.gov (United States)

    Gunawardene, Piumali; Al Saedi, Ahmed; Singh, Lakshman; Bermeo, Sandra; Vogrin, Sara; Phu, Steven; Suriyaarachchi, Pushpa; Pignolo, Robert J; Duque, Gustavo

    2017-10-01

    Circulating osteoprogenitor (COP) cells are blood-borne cells which express a variety of osteoblastic markers and are able to form bone nodules in vivo. Whereas a high percentage of COP cells (%COP) is associated with vascular calcification, low %COP has been associated with disability and frailty. However, the reference range of %COP in age- and gender-matched populations, and the age-related changes in %COP, remain unknown. A cross-sectional study was undertaken in 144 healthy volunteers in Western Sydney (20-90 years old, 10 male and 10 female subjects per decade). %COP was quantified by flow cytometry, with high inter- and intra-rater reliability. On average, %COP in this healthy population was 0.42%. There was no significant difference in %COP among the age groups, nor with gender, weight, height or BMI. In addition, we identified a normal reference range of %COP of 0.1-3.8%. In conclusion, besides identifying steady levels of COP cells with age, we also identified a normal reference range of %COP, which could be used in future studies of musculoskeletal diseases in older populations. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. [EVALUATION OF THE BODY ADIPOSITY INDEX IN PREDICTING PERCENTAGE BODY FAT AMONG COLOMBIAN ADULTS].

    Science.gov (United States)

    González-Ruíz, Katherine; Correa-Bautista, Jorge Enrique; Ramírez-Vélez, Robinson

    2015-07-01

    The body adiposity index (BAI) is a new, simple method for predicting body fat percentage (BF%) via an equation involving only hip circumference and height. Up to now, few studies have evaluated the performance of BAI in determining excess fat in Colombians. The aim of this study was to evaluate the usefulness of BAI as a predictor of body fat among Colombian adults. This was a cross-sectional study carried out in a sample of 204 males from the education sector in Bogotá, Colombia. BAI was calculated with the equation reported by Bergman et al. BF% determined by tetrapolar bioimpedance analysis (BIA) was used as the reference measure of adiposity. Bland-Altman analysis was used to assess the agreement between the two methods, BAI and BIA. Associations between anthropometric measures of adiposity were investigated by Pearson correlation analysis. In the general population, BAI overestimates BF% (mean difference: 12.5% [95% CI = -4.04% to -21.02%]), mainly at lower levels of adiposity (mean difference: 10.2 ± 3.3). Significant correlations were found between BAI and all measurements, the strongest being a moderate correlation with BF% (r = 0.777). BAI thus overestimates BF% in Colombian adults and tends to provide overestimated values as BF% decreases. Therefore, this method can be a useful tool to predict BF% in Colombian adults, although it has some limitations. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
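
    For reference, the Bergman et al. BAI equation is hip circumference (cm) divided by height (m) raised to the power 1.5, minus 18; a quick sketch:

```python
# BAI per Bergman et al.: hip circumference (cm) / height (m)^1.5 - 18.
# The example measurements below are hypothetical.

def bai(hip_cm, height_m):
    return hip_cm / height_m ** 1.5 - 18.0

print(round(bai(100, 1.75), 1))  # 25.2 for a 100 cm hip, 1.75 m tall adult
```

Because only hip circumference and height enter the formula, BAI needs no scale, which is its main practical appeal over BMI-based estimates.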

  5. Neural network and wavelet average framing percentage energy for atrial fibrillation classification.

    Science.gov (United States)

    Daqrouq, K; Alkhateeb, A; Ajour, M N; Morfeq, A

    2014-03-01

    ECG signals are an important source of information in the diagnosis of atrial conduction pathology. Nevertheless, diagnosis by visual inspection is a difficult task. This work introduces a novel wavelet feature extraction method for atrial fibrillation derived from the average framing percentage energy (AFE) of terminal wavelet packet transform (WPT) subsignals. A probabilistic neural network (PNN) is used for classification. The presented method is shown to be a potentially effective discriminator in an automated diagnostic process. ECG signals taken from the MIT-BIH database are used to classify different arrhythmias together with normal ECG. Several published methods were investigated for comparison, and the best recognition rate was obtained with AFE: the classification achieved an accuracy of 97.92%. The presented system was also analyzed in an additive white Gaussian noise (AWGN) environment, with accuracies of 55.14% at 0 dB and 92.53% at 5 dB. It was concluded that the proposed approach to automating classification is worth pursuing with larger samples to validate and extend the present study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
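
    A hedged sketch of an average-framing-percentage-energy feature: frame the signal, compute each sub-band's percentage of the frame energy, and average over frames. FFT bands stand in here for the terminal WPT subsignals, and the framing scheme is an assumption, not the paper's exact recipe:

```python
import numpy as np

# Hedged AFE sketch: per-frame sub-band percentage energies, averaged over
# frames. FFT bands are a stand-in for terminal wavelet packet subsignals.

def band_percentage_energy(frame, n_bands=4):
    spec = np.abs(np.fft.rfft(frame)) ** 2                    # frame power spectrum
    total = spec.sum()
    return [100.0 * b.sum() / total for b in np.array_split(spec, n_bands)]

def afe(signal, n_frames=8, n_bands=4):
    frames = np.array_split(np.asarray(signal, float), n_frames)
    return np.mean([band_percentage_energy(f, n_bands) for f in frames], axis=0)

x = np.sin(2 * np.pi * 10 * np.arange(1024) / 1024)  # 10-cycle test tone
feats = afe(x)
print(feats[0] == feats.max())  # True: the low band dominates for a low tone
```

The resulting fixed-length feature vector (one entry per band) is the kind of compact descriptor that can be fed to a PNN classifier.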

  6. Influence of diethyl maleate in irradiated mice survival and related to percentages of serum proteins

    International Nuclear Information System (INIS)

    Bernardes, E.; Mastro, N.L. del

    1990-01-01

    The use of radiomodifying drugs that alter radiation effects, protecting or sensitizing cells and organisms, is of great interest in tumor radiotherapy. Glutathione (GSH) can be described as the major endogenous radioprotector, and diethyl maleate (DEM) is a drug able to deplete intracellular GSH. This work aims at establishing the radiomodifying competence of DEM administered in two different vehicles, peanut oil and aqueous ethanolic solution, by the analysis of mouse survival curves as well as the relative percentages of serum proteins. Groups of animals were injected intraperitoneally with 0.3 ml of 418 and 150 μM DEM, respectively, in each of the two vehicles one hour before receiving an acute 60 Co dose of 9 Gy. The survival of mice was followed for 30 days, and electrophoretic profiles of serum proteins were obtained 1, 3 and 7 days after irradiation. The results showed that the action of DEM on mouse radiosensitivity depends on the vehicle used, although both media showed a radiomodifying action. (author)

  7. A low percentage of autologous serum can replace bovine serum to engineer human nasal cartilage

    Directory of Open Access Journals (Sweden)

    F Wolf

    2008-02-01

    Full Text Available For the generation of cell-based therapeutic products, it would be preferable to avoid the use of animal-derived components. Our study thus aimed at investigating the possibility of replacing foetal bovine serum (FBS) with autologous serum (AS) for the engineering of cartilage grafts using expanded human nasal chondrocytes (HNC). HNC isolated from 7 donors were expanded in medium containing 10% FBS or AS at different concentrations (2%, 5% and 10%) and cultured in pellets using serum-free medium or in Hyaff®-11 meshes using medium containing FBS or AS. Tissue forming capacity was assessed histologically (Safranin O), immunohistochemically (type II collagen) and biochemically (glycosaminoglycans [GAG] and DNA). Differences among experimental groups were assessed by Mann-Whitney tests. HNC expanded under the different serum conditions proliferated at comparable rates and generated cartilaginous pellets with similar histological appearance and amounts of GAG. Tissues generated by HNC from different donors cultured in Hyaff®-11 had variable quality, but the accumulated GAG amounts were comparable among the different serum conditions. Staining intensity for collagen type II was consistent with GAG deposition. Among the different serum conditions tested, the use of 2% AS resulted in the lowest variability in the GAG contents of generated tissues. In conclusion, a low percentage of AS can replace FBS both during the expansion and differentiation of HNC and reduce the variability in the quality of the resulting engineered cartilage tissues.
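The Mann-Whitney comparison used for the group differences above rests on the U statistic, which can be computed from rank sums. A minimal numpy-only sketch (midranks for ties; real analyses would also compute a p-value, which is omitted here):

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic via rank sums; returns min(U1, U2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    combined = np.concatenate([x, y])
    order = combined.argsort()
    ranks = np.empty(len(combined))
    ranks[order] = np.arange(1, len(combined) + 1)
    for v in np.unique(combined):        # assign midranks to tied values
        mask = combined == v
        ranks[mask] = ranks[mask].mean()
    r1 = ranks[:len(x)].sum()            # rank sum of the first sample
    u1 = r1 - len(x) * (len(x) + 1) / 2
    u2 = len(x) * len(y) - u1
    return min(u1, u2)
```

For completely separated samples U = 0, the most extreme value; interleaved samples give larger U.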

  8. Regularization method for solving the inverse scattering problem

    International Nuclear Information System (INIS)

    Denisov, A.M.; Krylov, A.S.

    1985-01-01

    The inverse scattering problem for the radial Schroedinger equation, which consists of determining the potential from the scattering phase, is considered. The problem of potential restoration from a phase specified with fixed error over a finite range is solved by the regularization method based on minimization of Tikhonov's smoothing functional. The regularization method is used to solve the problem of neutron-proton potential restoration from the scattering phases. The determined potentials are given in the table
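The core of Tikhonov regularization can be illustrated on a generic discrete ill-posed problem: minimize ||Ax - b||² + α||x||², whose closed form is x = (AᵀA + αI)⁻¹Aᵀb. This is a sketch of the zeroth-order method on a synthetic system, not the paper's Schroedinger-equation application:

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Zeroth-order Tikhonov solution of min ||Ax - b||^2 + alpha ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Ill-conditioned toy problem with noisy data (illustrative only)
rng = np.random.default_rng(2)
A = np.vander(np.linspace(0, 1, 40), 12, increasing=True)  # nearly dependent columns
x_true = rng.standard_normal(12)
b = A @ x_true + 1e-3 * rng.standard_normal(40)
x_reg = tikhonov_solve(A, b, alpha=1e-6)
```

Increasing α trades data fit for a smaller (smoother) solution norm, which is the stabilizing effect exploited when the phase data carry a fixed error.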

  9. Viscous Regularization of the Euler Equations and Entropy Principles

    KAUST Repository

    Guermond, Jean-Luc

    2014-03-11

    This paper investigates a general class of viscous regularizations of the compressible Euler equations. A unique regularization is identified that is compatible with all the generalized entropies, à la [Harten et al., SIAM J. Numer. Anal., 35 (1998), pp. 2117-2127], and satisfies the minimum entropy principle. A connection with a recently proposed phenomenological model by [H. Brenner, Phys. A, 370 (2006), pp. 190-224] is made. © 2014 Society for Industrial and Applied Mathematics.

  10. Dimensional versus lattice regularization within Luescher's Yang Mills theory

    International Nuclear Information System (INIS)

    Diekmann, B.; Langer, M.; Schuette, D.

    1993-01-01

    It is pointed out that the coefficients of Luescher's effective model space Hamiltonian, which is based upon dimensional regularization techniques, can be reproduced by applying folded diagram perturbation theory to the Kogut-Susskind Hamiltonian and by performing a lattice continuum limit (keeping the volume fixed). Alternative cutoff regularizations of the Hamiltonian are in general inconsistent, the critical point being the correct prediction of Luescher's tadpole coefficient, which is formally quadratically divergent and has to become a well-defined (negative) number. (orig.)

  11. Left regular bands of groups of left quotients

    International Nuclear Information System (INIS)

    El-Qallali, A.

    1988-10-01

    A semigroup S which has a left regular band of groups as a semigroup of left quotients is shown to be a left regular band of right reversible cancellative semigroups. An alternative characterization is provided by using spinned products. These results are applied to the case where S is a superabundant semigroup whose set of idempotents forms a left normal band. (author). 13 refs

  12. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential regularities.

  13. Estimation of the global regularity of a multifractional Brownian motion

    DEFF Research Database (Denmark)

    Lebovits, Joachim; Podolskij, Mark

    This paper presents a new estimator of the global regularity index of a multifractional Brownian motion. Our estimation method is based upon a ratio statistic, which compares the realized global quadratic variation of a multifractional Brownian motion at two different frequencies. We show that a logarithmic transformation of this statistic converges in probability to the minimum of the Hurst functional parameter, which is, under weak assumptions, identical to the global regularity index of the path.
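The ratio idea can be sketched concretely. For fractional Brownian motion with Hurst index H, the realized quadratic variation over increments of size 2/n versus 1/n scales like 2^(2H-1), so a log transform of the ratio recovers H. This is a simplified numpy illustration in the spirit of the paper's estimator (the paper treats the multifractional case and a functional Hurst parameter, not this constant-H sketch):

```python
import numpy as np

def global_hurst_ratio(path):
    """Estimate the Hurst/regularity index from realized quadratic
    variations at two frequencies: H = (1 + log2(QV_coarse/QV_fine)) / 2."""
    fine = np.sum(np.diff(path) ** 2)         # increments at full frequency
    coarse = np.sum(np.diff(path[::2]) ** 2)  # increments at half frequency
    return 0.5 * (1.0 + np.log2(coarse / fine))

# Sanity check on standard Brownian motion, whose Hurst index is 0.5
rng = np.random.default_rng(3)
bm = np.cumsum(rng.standard_normal(100_000)) / np.sqrt(100_000)
h_hat = global_hurst_ratio(bm)
```

The scale of the path cancels in the ratio, so no normalization of the data is needed.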

  14. Regularization of the quantum field theory of charges and monopoles

    International Nuclear Information System (INIS)

    Panagiotakopoulos, C.

    1981-09-01

    A gauge-invariant regularization procedure for quantum field theories of electric and magnetic charges, based on Zwanziger's local formulation, is proposed. The bare regularized full Green's functions of gauge-invariant operators are shown to be Lorentz invariant. As a consequence, the finite Green's functions that might result after any reasonable subtraction, if such a subtraction can be found, would also be Lorentz invariant. (author)

  15. Borderline personality disorder and regularly drinking alcohol before sex.

    Science.gov (United States)

    Thompson, Ronald G; Eaton, Nicholas R; Hu, Mei-Chen; Hasin, Deborah S

    2017-07-01

    Drinking alcohol before sex increases the likelihood of engaging in unprotected intercourse, having multiple sexual partners and becoming infected with sexually transmitted infections. Borderline personality disorder (BPD), a complex psychiatric disorder characterised by pervasive instability in emotional regulation, self-image, interpersonal relationships and impulse control, is associated with substance use disorders and sexual risk behaviours. However, no study has examined the relationship between BPD and drinking alcohol before sex in the USA. This study examined the association between BPD and regularly drinking before sex in a nationally representative adult sample. Participants were 17 491 sexually active drinkers from Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions. Logistic regression models estimated effects of BPD diagnosis, specific borderline diagnostic criteria and BPD criterion count on the likelihood of regularly (mostly or always) drinking alcohol before sex, adjusted for controls. Borderline personality disorder diagnosis doubled the odds of regularly drinking before sex [adjusted odds ratio (AOR) = 2.26; confidence interval (CI) = 1.63, 3.14]. Of nine diagnostic criteria, impulsivity in areas that are self-damaging remained a significant predictor of regularly drinking before sex (AOR = 1.82; CI = 1.42, 2.35). The odds of regularly drinking before sex increased by 20% for each endorsed criterion (AOR = 1.20; CI = 1.14, 1.27). DISCUSSION AND CONCLUSIONS: This is the first study to examine the relationship between BPD and regularly drinking alcohol before sex in the USA. Substance misuse treatment should assess regularly drinking before sex, particularly among patients with BPD, and BPD treatment should assess risk at the intersection of impulsivity, sexual behaviour and substance use. [Thompson Jr RG, Eaton NR, Hu M-C, Hasin DS Borderline personality disorder and regularly drinking alcohol
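The adjusted odds ratios above come from logistic regression: the AOR for a predictor is exp of its fitted coefficient. A minimal numpy-only sketch on synthetic data, where a binary predictor is engineered to double the odds (true coefficient ln 2), loosely mimicking how an AOR of ~2 for BPD diagnosis would arise; this is not the survey's model, which adjusts for many controls:

```python
import numpy as np

def fit_logistic(X, y, lr=1.0, n_iter=500):
    """Plain gradient-descent logistic regression; exp(coefficient)
    is the odds ratio for that predictor."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)   # mean log-loss gradient step
    return w

rng = np.random.default_rng(4)
n = 20_000
predictor = rng.integers(0, 2, n).astype(float)       # e.g. diagnosis yes/no
logit = -1.0 + np.log(2.0) * predictor                # true OR = 2
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
X = np.column_stack([np.ones(n), predictor])          # intercept + predictor
odds_ratio = np.exp(fit_logistic(X, y)[1])
```

With the doubling built in, the recovered odds ratio lands near 2, illustrating the AOR interpretation.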

  16. The Impact of Computerization on Regular Employment (Japanese)

    OpenAIRE

    SUNADA Mitsuru; HIGUCHI Yoshio; ABE Masahiro

    2004-01-01

    This paper uses micro data from the Basic Survey of Japanese Business Structure and Activity to analyze the effects of companies' introduction of information and telecommunications technology on employment structures, especially regular versus non-regular employment. Firstly, examination of trends in the ratio of part-time workers recorded in the Basic Survey shows that part-time worker ratios in manufacturing firms are rising slightly, but that companies with a high proportion of part-timers...

  17. Analytic regularization of the Yukawa model at finite temperature

    International Nuclear Information System (INIS)

    Malbouisson, A.P.C.; Svaiter, N.F.; Svaiter, B.F.

    1996-07-01

    The one-loop fermionic contribution to the scalar effective potential in the temperature-dependent Yukawa model is analysed. In order to regularize the model, a mix of dimensional and analytic regularization procedures is used. A general expression for the fermionic contribution in arbitrary spacetime dimension is found; it is also found that in D = 3 this contribution is finite. (author). 19 refs

  18. The relationship between lifestyle regularity and subjective sleep quality

    Science.gov (United States)

    Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.

    2003-01-01

    In previous work we have developed a diary instrument, the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity, and a questionnaire instrument, the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed: there was a significant correlation (rho = -0.4), with subjects with higher levels of lifestyle regularity reporting fewer sleep problems. This relationship was also supported by a categorical analysis, where the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.
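The reported rho is Spearman's rank correlation: the Pearson correlation of the ranks of the two scores. A minimal sketch (no tie handling, so it assumes untied scores; the SRM/PSQI values below are hypothetical):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    def rank(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        return r
    rx, ry = rank(np.asarray(x)), rank(np.asarray(y))
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical scores: higher SRM (more regular) paired with lower PSQI (better sleep)
srm = [55, 60, 42, 70, 48]
psqi = [6, 4, 9, 2, 7]
rho_demo = spearman_rho(srm, psqi)
```

A negative rho, as in the study, indicates that more regular lifestyles go with fewer reported sleep problems.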

  19. Regularity criteria for incompressible magnetohydrodynamics equations in three dimensions

    International Nuclear Information System (INIS)

    Lin, Hongxia; Du, Lili

    2013-01-01

    In this paper, we give some new global regularity criteria for three-dimensional incompressible magnetohydrodynamics (MHD) equations. More precisely, we provide some sufficient conditions in terms of the derivatives of the velocity or pressure, for the global regularity of strong solutions to 3D incompressible MHD equations in the whole space, as well as for periodic boundary conditions. Moreover, the regularity criterion involving three of the nine components of the velocity gradient tensor is also obtained. The main results generalize the recent work by Cao and Wu (2010 Two regularity criteria for the 3D MHD equations J. Diff. Eqns 248 2263–74) and the analysis in part is based on the works by Cao C and Titi E (2008 Regularity criteria for the three-dimensional Navier–Stokes equations Indiana Univ. Math. J. 57 2643–61; 2011 Global regularity criterion for the 3D Navier–Stokes equations involving one entry of the velocity gradient tensor Arch. Rational Mech. Anal. 202 919–32) for 3D incompressible Navier–Stokes equations. (paper)

  20. Geostatistical regularization operators for geophysical inverse problems on irregular meshes

    Science.gov (United States)

    Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. O. A.

    2018-05-01

    Irregular meshes allow complicated subsurface structures to be included in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial, and different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are defined using only the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach for calculating geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results than the anisotropic smoothness constraints.
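One common way to turn a covariance model into a regularization operator via eigendecomposition is W = V diag(1/sqrt(lambda)) Vᵀ, so that WᵀW = C⁻¹ and the penalty ||Wm||² weights model roughness by the inverse covariance. This is a generic sketch under that assumption (exponential covariance, synthetic cell centers), not necessarily the exact construction of the paper:

```python
import numpy as np

def geostatistical_operator(centers, corr_len):
    """Covariance matrix C from an exponential correlation model over
    (possibly irregular) cell centers, and an operator W with W^T W = C^{-1},
    built by eigendecomposition of C."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    C = np.exp(-d / corr_len)               # exponential correlation model
    lam, V = np.linalg.eigh(C)              # C is symmetric positive definite
    W = V @ np.diag(1.0 / np.sqrt(lam)) @ V.T
    return C, W

# Cell centers of a small synthetic irregular 2-D mesh
rng = np.random.default_rng(5)
centers = rng.random((20, 2))
C, W = geostatistical_operator(centers, corr_len=0.3)
```

A longer correlation length couples more distant cells, which is how geological structure (e.g. layering, with anisotropic distances) would enter the operator.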

  1. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig

    2017-10-18

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2 -regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
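The method's end point is an ℓ2-regularized least-squares solution with a data-driven regularizer. The selection mechanism can be illustrated with an oracle sweep on a synthetic ill-conditioned problem: since the true vector is known here, the MSE of each candidate lambda is computable directly, showing that an intermediate value beats both extremes. Note this sweep is only an illustration; BPR itself derives lambda from the perturbation-norm constraint combined with the MSE criterion.

```python
import numpy as np

def ridge(A, b, lam):
    """l2-regularized least squares: x = (A^T A + lam I)^{-1} A^T b."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

rng = np.random.default_rng(6)
A = rng.standard_normal((50, 20)) @ np.diag(np.logspace(0, -3, 20))  # decaying singular values
x_true = rng.standard_normal(20)
b = A @ x_true + 0.01 * rng.standard_normal(50)

lams = np.logspace(-8, 2, 60)
mses = [np.mean((ridge(A, b, l) - x_true) ** 2) for l in lams]   # oracle MSE per lambda
best_lam = lams[int(np.argmin(mses))]
```

Too small a lambda lets noise blow up along the weak singular directions; too large a lambda shrinks the solution toward zero; the practical problem BPR addresses is locating the sweet spot without access to x_true.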

  2. Near-Regular Structure Discovery Using Linear Programming

    KAUST Repository

    Huang, Qixing

    2014-06-02

    Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.

  3. The Correlation of Sonographic Finding of Fatty Liver with Hematologic Examination and Body Fat Percentage

    International Nuclear Information System (INIS)

    Cheon, Hae Kyung; Lee, Tae Yong; Kim, Young Ran

    2009-01-01

    Ultrasonography has been used as a basic examination in medical check-ups for the prevention and diagnosis of diseases. Even people who have no particular subjective symptoms can have a variety of diseases; in particular, fatty liver is found in many cases. In this study, we examined 3582 persons aged 15 to 81 and observed that 1390 had fatty liver while 2192 were normal. We classified the grade of fatty liver and compared the subjects' lifestyles with the results of liver function tests and BMI. The results are as follows. The proportion of subjects with fatty liver was 38.8%; the rates for males and females were 46.2% and 24.2%, respectively. On the correlation among fatty liver, body mass index and body fat, the average values of body mass index and body fat were significantly higher in the fatty liver group than in the normal group. The influence of related factors on fatty liver was, in decreasing order, age, body mass index, triglyceride, ALT, body fat, sex, HDL-cholesterol, LDL-cholesterol, and GGT. The results of ultrasonography carried out for regular health check-ups indicate that 38.8% of those diagnosed as being in normal condition could nevertheless have fatty liver and the possibility of other diseases. Therefore, when hematologic examination reveals problems related to liver function or lipids, or in follow-up studies of the relation between body fat and dietary preference, alcohol consumption and exercise, ultrasonography is definitely useful for the prevention and treatment of diseases.

  4. The Correlation of Sonographic Finding of Fatty Liver with Hematologic Examination and Body Fat Percentage

    Energy Technology Data Exchange (ETDEWEB)

    Cheon, Hae Kyung [Dept. of Radiology, Sun General Hospital, Daejeon (Korea, Republic of); Lee, Tae Yong; Kim, Young Ran [Dept. of Preventive Medicine and Public Health, College of Medicine, Chungnam National University, Daejeon (Korea, Republic of)

    2009-12-15

    Ultrasonography has been used as a basic examination in medical check-ups for the prevention and diagnosis of diseases. Even people who have no particular subjective symptoms can have a variety of diseases; in particular, fatty liver is found in many cases. In this study, we examined 3582 persons aged 15 to 81 and observed that 1390 had fatty liver while 2192 were normal. We classified the grade of fatty liver and compared the subjects' lifestyles with the results of liver function tests and BMI. The results are as follows. The proportion of subjects with fatty liver was 38.8%; the rates for males and females were 46.2% and 24.2%, respectively. On the correlation among fatty liver, body mass index and body fat, the average values of body mass index and body fat were significantly higher in the fatty liver group than in the normal group. The influence of related factors on fatty liver was, in decreasing order, age, body mass index, triglyceride, ALT, body fat, sex, HDL-cholesterol, LDL-cholesterol, and GGT. The results of ultrasonography carried out for regular health check-ups indicate that 38.8% of those diagnosed as being in normal condition could nevertheless have fatty liver and the possibility of other diseases. Therefore, when hematologic examination reveals problems related to liver function or lipids, or in follow-up studies of the relation between body fat and dietary preference, alcohol consumption and exercise, ultrasonography is definitely useful for the prevention and treatment of diseases.

  5. An estimation of the percentage of dose in intraoral radiology exams using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Bonzoumet, S.P.J.; Braz, D.; Lopes, R.T.; Anjos, M.J.; Universidade do Estado do Rio de Janeiro; Padilha, Lucas

    2005-01-01

    In this work we used the EGS4 code in a simulated study of dose percentage in intraoral examinations over the energy range of 10 to 140 keV. The simulation was carried out on a model consisting of different geometries (cheek, tooth and mouth cavity) under an X-ray beam at normal incidence on the surface of the various simulated materials. It was observed that for energies below 30 keV most of the energy is deposited in the cheek. At 30 keV there is a point of maximum radiation absorption in the tooth (approximately 60% of the incident radiation energy is deposited in the tooth) relative to the other simulated materials; at this energy there is thus a better contrast in the radiographic image of the tooth and a smaller dose to the cheek. At 40 keV the energy deposited in the tooth is roughly equal to the energy transmitted (to the radiographic film or buccal cavity), causing degradation of the radiographic image and/or a higher dose in the oral cavity. For energies above 40 keV, the amount of energy transmitted (to the oral cavity and/or radiographic film) is higher than the energy deposited in the other materials, i.e. it only contributes to increasing the dose in the regions close to the oral cavity and to degradation of the radiographic image. These results can provide important information for radiological procedures in dentistry, where image quality is a factor relevant to dental evaluation needs as well as to reducing the dose to the oral cavity.
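The energy-dependent split between cheek absorption, tooth absorption, and transmission can be illustrated with a toy 1-D Monte Carlo: each photon travels an exponentially distributed optical depth and is absorbed in whichever layer it stops in. This is a deliberately crude sketch (no scattering or secondary particles, unlike EGS4, and the attenuation coefficients and thicknesses below are hypothetical):

```python
import numpy as np

def mc_dose_fraction(mu_cheek, mu_tooth, t_cheek, t_tooth, n=100_000, seed=7):
    """Toy 1-D Monte Carlo: photons cross a cheek layer then a tooth layer.
    Returns the fractions absorbed in cheek, absorbed in tooth, and transmitted."""
    rng = np.random.default_rng(seed)
    tau = rng.exponential(1.0, n)          # optical depth travelled before interaction
    tau_cheek = mu_cheek * t_cheek         # optical thickness of the cheek layer
    tau_tooth = mu_tooth * t_tooth         # optical thickness of the tooth layer
    in_cheek = tau < tau_cheek
    in_tooth = (~in_cheek) & (tau < tau_cheek + tau_tooth)
    through = ~(in_cheek | in_tooth)
    return in_cheek.mean(), in_tooth.mean(), through.mean()

# Low energy: large attenuation coefficients (hypothetical, cm^-1), cheek dominates
cheek_lo, tooth_lo, out_lo = mc_dose_fraction(4.0, 8.0, 0.5, 0.8)
# Higher energy: smaller coefficients, more transmission to film/oral cavity
cheek_hi, tooth_hi, out_hi = mc_dose_fraction(0.3, 0.6, 0.5, 0.8)
```

The qualitative trend matches the abstract: at low energy almost everything stops in the cheek, while at higher energy an increasing fraction passes through to the film and oral cavity.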

  6. Bosch Reactor Development for High Percentage Oxygen Recovery from Carbon Dioxide

    Science.gov (United States)

    Howard, David; Abney, Morgan

    2015-01-01

    This Next Generation Life Support project entails the development and demonstration of Bosch reaction technologies to improve oxygen recovery from metabolically generated carbon dioxide. A primary focus was placed on alternate carbon formation reactor concepts to improve useful catalyst life for space vehicle applications, and to make use of in situ catalyst resources for non-terrestrial surface missions. Current state-of-the-art oxygen recovery systems onboard the International Space Station are able to effectively recover approximately 45 percent of the oxygen consumed by humans and exhausted in the form of carbon dioxide (CO2). Excess CO2 is vented overboard and the oxygen contained in the molecules is lost. For long-duration missions beyond the reach of Earth resupply, it will be necessary to recover greater amounts of the constituents, such as oxygen, that are necessary for sustaining life. Bosch technologies theoretically recover 100 percent of the oxygen from CO2, producing pure carbon as the sole waste product. Challenges with this technology revolve around the carbon product fouling catalyst materials, drastically limiting catalyst life. This project successfully demonstrated techniques to extend catalyst surface area exposure times to improve catalyst life for vehicle applications, and demonstrated the use of Martian and lunar regolith as viable catalyst materials for surface missions. The Bosch process generates carbon nanotube formation within the regolith, which has been shown to improve the mechanical properties of building materials. Production of bricks from post-reaction regolith for building and radiation shielding applications was also explored.

  7. Gender- and Gestational Age-Specific Body Fat Percentage at Birth.

    LENUS (Irish Health Repository)

    Hawkes, Colin P

    2011-08-08

    Background: There is increasing evidence that in utero growth has both immediate and far-reaching influence on health. Birth weight and length are used as surrogate measures of in utero growth. However, these measures poorly reflect neonatal adiposity. Air-displacement plethysmography has been validated for the measurement of body fat in the neonatal population. Objective: The goal of this study was to establish normal reference values of percentage body fat (%BF) in infants during the first 4 days of life. Methods: As part of a large population-based birth cohort study, fat mass, fat-free mass, and %BF were measured within the first 4 days of life using air-displacement plethysmography. Infants were grouped into gestational age and gender categories. Results: Of the 786 enrolled infants, fat mass, fat-free mass, and %BF were measured in 743 (94.5%) infants within the first 4 days of life. %BF increased significantly with gestational age. Mean (SD) %BF at 36 to 37 weeks' gestation was 8.9% (3.5%); at 38 to 39 weeks' gestation, 10.3% (4%); and at 40 to 41 weeks' gestation, 11.2% (4.3%) (P < .001). Female infants had significantly higher mean (SD) %BF than male infants at 38 to 39 weeks' gestation (11.1% [3.9%] vs 9.8% [3.9%]; P = .012) and at 40 to 41 weeks' gestation (12.5% [4.4%] vs 10% [3.9%]; P < .001). Gender- and gestational age-specific centiles were calculated, and a normative table was generated for reference. Conclusion: %BF at birth is influenced by gestational age and gender. We generated accurate %BF centiles from a large population-based cohort.
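Group-specific reference centiles of the kind tabulated above amount to computing percentiles of the measurement within each gender-by-gestational-age category. A minimal sketch (the group means and SDs below echo the reported values, but the data are synthetic, not the cohort's):

```python
import numpy as np

def group_centiles(values, groups, centiles=(3, 10, 25, 50, 75, 90, 97)):
    """Reference centiles of a measurement (e.g. %BF) within each group
    (e.g. a gender x gestational-age category)."""
    values, groups = np.asarray(values), np.asarray(groups)
    return {g: np.percentile(values[groups == g], centiles)
            for g in np.unique(groups)}

# Synthetic %BF samples for two gestational-age categories
rng = np.random.default_rng(8)
bf = np.concatenate([rng.normal(8.9, 3.5, 200),     # 36-37 wk
                     rng.normal(11.2, 4.3, 200)])   # 40-41 wk
ga = np.array(['36-37'] * 200 + ['40-41'] * 200)
table = group_centiles(bf, ga)
```

Real centile charts typically smooth across gestational age (e.g. LMS-type methods) rather than using raw per-group percentiles as here.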

  8. The Percentage of Body Fat in Children and the Level of their Motor Skills.

    Science.gov (United States)

    Prskalo, Ivan; Badrić, Marko; Kunješić, Mateja

    2015-07-01

    The aim of this study was to determine the prevalence of overweight and obesity among primary education pupils and to identify differences in motor skills between normal-weight, overweight and obese pupils. A partial aim was to determine differences in the motor status of girls and boys and in their anthropometric characteristics (body mass index, body fat percentage). The study was conducted in two primary schools in Zagreb, Ivan Goran Kovačić and Davorin Trstenjak. A total of 333 pupils aged 7-11 were measured (178 boys and 155 girls). Four anthropometric and seven motor variables were used to analyze differences in the motor abilities of children. Within each gender, children were divided into three groups based on their body fat measures. We established a statistically significant difference in motor abilities between groups of subjects in three subsamples (1st-2nd class girls and 3rd-4th class boys and girls). Children with normal weight have better results in explosive strength, coordination, and static strength of arm and shoulder than children who are overweight or obese. The differences are not observed in motor variables where body weight is not a requisite for efficient execution of movement. Differences in motor skills by gender showed that boys are better in coordination, speed of simple movements, and explosive and repetitive strength, while girls are better in flexibility. In conclusion, this study confirmed the existence of differences in the development of motor skills in children with normal body weight compared to children who are overweight or obese. These findings show that excessive body weight has negative repercussions on motor performance.

  9. Clustering, randomness, and regularity in cloud fields. 4: Stratocumulus cloud fields

    Science.gov (United States)

    Lee, J.; Chou, J.; Weger, R. C.; Welch, R. M.

    1994-01-01

    To complete the analysis of the spatial distribution of boundary layer cloudiness, the present study focuses on nine stratocumulus Landsat scenes. The results indicate many similarities between stratocumulus and cumulus spatial distributions. Most notably, at full spatial resolution all scenes exhibit a decidedly clustered distribution. The strength of the clustering signal decreases with increasing cloud size; the clusters themselves consist of a few clouds (less than 10), occupy a small percentage of the cloud field area (less than 5%), contain between 20% and 60% of the cloud field population, and are randomly located within the scene. In contrast, stratocumulus in almost every respect are more strongly clustered than are cumulus cloud fields. For instance, stratocumulus clusters contain more clouds per cluster, occupy a larger percentage of the total area, and have a larger percentage of clouds participating in clusters than the corresponding cumulus examples. To investigate clustering at intermediate spatial scales, the local dimensionality statistic is introduced. Results obtained from this statistic provide the first direct evidence for regularity among large (more than 900 m in diameter) clouds in stratocumulus and cumulus cloud fields, in support of the inhibition hypothesis of Ramirez and Bras (1990). Also, the size compensated point-to-cloud cumulative distribution function statistic is found to be necessary to obtain a consistent description of stratocumulus cloud distributions. A hypothesis regarding the underlying physical mechanisms responsible for cloud clustering is presented. It is suggested that cloud clusters often arise from 4 to 10 triggering events localized within regions less than 2 km in diameter and randomly distributed within the cloud field. As the size of the cloud surpasses the scale of the triggering region, the clustering signal weakens and the larger cloud locations become more random.

  11. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); School of Life Sciences and Technology, Xidian University, Xi'an 710071 (China)]

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. To address these problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data-fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires computation of the residual and regularized solution norms. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used

  12. The neural substrates of impaired finger tapping regularity after stroke.

    Science.gov (United States)

    Calautti, Cinzia; Jones, P Simon; Guincestre, Jean-Yves; Naccarato, Marcello; Sharma, Nikhil; Day, Diana J; Carpenter, T Adrian; Warburton, Elizabeth A; Baron, Jean-Claude

    2010-03-01

    Not only finger tapping speed, but also tapping regularity can be impaired after stroke, contributing to reduced dexterity. The neural substrates of impaired tapping regularity after stroke are unknown. Previous work suggests damage to the dorsal premotor cortex (PMd) and prefrontal cortex (PFCx) affects externally-cued hand movement. We tested the hypothesis that these two areas are involved in impaired post-stroke tapping regularity. In 19 right-handed patients (15 men/4 women; age 45-80 years; purely subcortical in 16) partially to fully recovered from hemiparetic stroke, tri-axial accelerometric quantitative assessment of tapping regularity and BOLD fMRI were obtained during fixed-rate auditory-cued index-thumb tapping, in a single session 10-230 days after stroke. A strong random-effect correlation between tapping regularity index and fMRI signal was found in contralesional PMd such that the worse the regularity the stronger the activation. A significant correlation in the opposite direction was also present within contralesional PFCx. Both correlations were maintained if maximal index tapping speed, degree of paresis and time since stroke were added as potential confounds. Thus, the contralesional PMd and PFCx appear to be involved in the impaired ability of stroke patients to fingertap in pace with external cues. The findings for PMd are consistent with repetitive TMS investigations in stroke suggesting a role for this area in affected-hand movement timing. The inverse relationship with tapping regularity observed for the PFCx and the PMd suggests these two anatomically-connected areas negatively co-operate. These findings have implications for understanding the disruption and reorganization of the motor systems after stroke. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  13. Geographically Weighted Regression Model with Kernel Bisquare and Tricube Weighted Function on Poverty Percentage Data in Central Java Province

    Science.gov (United States)

    Nugroho, N. F. T. A.; Slamet, I.

    2018-05-01

    Poverty is a socio-economic condition of a person or group of people who cannot fulfil their basic needs to maintain and develop a dignified life. This problem has still not been solved completely in Central Java Province. Currently, the percentage of poverty in Central Java is 13.32%, which is higher than the national poverty rate of 11.13%. In this research, data on the percentage of poor people in Central Java Province have been analyzed through geographically weighted regression (GWR). The aim of this research is therefore to model poverty percentage data in Central Java Province using GWR with bisquare and tricube kernel weighted functions. As a result, we obtained GWR models with bisquare and tricube kernel weighted functions for poverty percentage data in Central Java Province. From the GWR models, there are three categories of regions, each influenced by a different set of significant factors.
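The bisquare and tricube kernels named in the record have standard closed forms. The following sketch (illustrative only; the function names are hypothetical and this is not code from the study) computes geographic weights and a single local weighted least-squares fit, the building block of GWR:

```python
import numpy as np

def bisquare(d, b):
    """Bisquare kernel: w = (1 - (d/b)^2)^2 for d < b, else 0."""
    return np.where(d < b, (1.0 - (d / b) ** 2) ** 2, 0.0)

def tricube(d, b):
    """Tricube kernel: w = (1 - (d/b)^3)^3 for d < b, else 0."""
    return np.where(d < b, (1.0 - (d / b) ** 3) ** 3, 0.0)

def gwr_fit_at(point, coords, X, y, bandwidth, kernel=bisquare):
    """Weighted least squares at one regression location (one GWR fit)."""
    d = np.linalg.norm(coords - point, axis=1)   # distances to all observations
    W = np.diag(kernel(d, bandwidth))            # geographic weights
    # beta(point) = (X^T W X)^(-1) X^T W y
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Repeating `gwr_fit_at` at every location yields the spatially varying coefficient surfaces that GWR reports.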

  14. Effect of dietary protein level and length of fattening period on dressing percentage and carcass conformation in broiler chickens

    OpenAIRE

    Dosković, Vladimir; Bogosavljević-Bošković, Snežana; Škrbić, Zdenka; Đoković, Radojica; Rakonjac, Simeon; Petričević, Veselin

    2017-01-01

    This study analyses the effect of different protein levels in broiler feeds (supplemented with protease) and different lengths of fattening period on some parameters related to dressed carcass quality. Medium-growing Master Gris broiler chickens were used in a fattening trial lasting 63 days. At slaughter, dressing percentages and abdominal fat percentages were determined based on traditionally dressed carcass weights and abdominal fat weights of broilers at 49 and 63 days, and conformation i...

  15. Prediction of whole-body fat percentage and visceral adipose tissue mass from five anthropometric variables.

    Directory of Open Access Journals (Sweden)

    Michelle G Swainson

    Full Text Available The conventional measurement of obesity utilises the body mass index (BMI) criterion. Although there are benefits to this method, there is concern that not all individuals at risk of obesity-associated medical conditions are being identified. Whole-body fat percentage (%FM), and specifically visceral adipose tissue (VAT) mass, are correlated with and potentially implicated in disease trajectories, but are not fully accounted for through BMI evaluation. The aims of this study were (a) to compare five anthropometric predictors of %FM and VAT mass, and (b) to explore new cut-points for the best of these predictors to improve the characterisation of obesity. BMI, waist circumference (WC), waist-to-hip ratio (WHR), waist-to-height ratio (WHtR) and waist/height^0.5 (WHT.5R) were measured and calculated for 81 adults (40 women, 41 men; mean (SD) age: 38.4 (17.5) years; 94% Caucasian). Total body dual energy X-ray absorptiometry with Corescan (GE Lunar iDXA, Encore version 15.0) was also performed to quantify %FM and VAT mass. Linear regression analysis, stratified by sex, was applied to predict both %FM and VAT mass for each anthropometric variable. Within each sex, we used information theoretic methods (Akaike Information Criterion; AIC) to compare models. For the best anthropometric predictor, we derived tentative cut-points for classifying individuals as obese (>25% FM for men or >35% FM for women, or > highest tertile for VAT mass). The best predictor of both %FM and VAT mass in men and women was WHtR. Derived cut-points for predicting whole body obesity were 0.53 in men and 0.54 in women. The cut-point for predicting visceral obesity was 0.59 in both sexes. In the absence of more objective measures of central obesity and adiposity, WHtR is a suitable proxy measure in both women and men. The proposed DXA-%FM and VAT mass cut-offs require validation in larger studies, but offer potential for improvement of obesity characterisation and the identification of individuals
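The reported cut-points translate directly into a screening rule. A minimal sketch (helper names are hypothetical; the values 0.53, 0.54 and 0.59 are taken from the abstract):

```python
def whtr(waist_cm, height_cm):
    """Waist-to-height ratio."""
    return waist_cm / height_cm

# Cut-points reported in the abstract (derived against DXA %FM / VAT tertiles)
WHOLE_BODY_CUT = {"men": 0.53, "women": 0.54}
VISCERAL_CUT = 0.59  # same in both sexes

def classify(waist_cm, height_cm, sex):
    """Flag whole-body and visceral obesity from WHtR alone."""
    r = whtr(waist_cm, height_cm)
    return {
        "whtr": round(r, 3),
        "whole_body_obese": r > WHOLE_BODY_CUT[sex],
        "viscerally_obese": r > VISCERAL_CUT,
    }
```

For example, a man with a 100 cm waist and 180 cm height has WHtR ≈ 0.556, above the whole-body cut-point but below the visceral one.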

  16. Comparison of percentage excess weight loss after laparoscopic sleeve gastrectomy and laparoscopic adjustable gastric banding

    Science.gov (United States)

    Bobowicz, Maciej; Lech, Paweł; Orłowski, Michał; Siczewski, Wiaczesław; Pawlak, Maciej; Świetlik, Dariusz; Witzling, Mieczysław; Michalik, Maciej

    2014-01-01

    Introduction Laparoscopic sleeve gastrectomy (LSG) and laparoscopic adjustable gastric banding (LAGB) are acceptable options for primary bariatric procedures in patients with body mass index (BMI) 35–55 kg/m2. Aim The aim of this study is to compare the effects of these two bariatric procedures 6, 12 and 24 months after surgery. Material and methods Two hundred and two patients were included: 72 LSG and 130 LAGB patients. The average age was 38.8 ±11.9 and 39.4 ±10.4 years in the LSG and LAGB groups, with initial BMI of 44.1 kg/m2 and 45.2 kg/m2, p = NS. Results The mean percentage of excess weight loss (%EWL) at 6 months for LSG vs. LAGB was 36.3% vs. 30.1% (p = 0.01) and at 12 months was 43.8% vs. 34.6% (p = 0.005). The greatest difference in the mean %EWL at 12 months was observed in patients with initial BMI of 40–49.9 kg/m2, in favor of LSG (47.5% vs. 35.6%; p = 0.01). Two years after surgery there was no advantage of LSG, and in the subgroup of patients with BMI 50–55 kg/m2 there was a trend in favor of LAGB (57.2% vs. 30%; p = 0.07). The multiple regression model of independent variables (age, gender, initial BMI and the presence of comorbidities) proved insignificant in predicting the best outcome in terms of %EWL for either operative modality. None of these factors in the logistic regression model could determine the type of surgery that should be used in particular patients. Conclusions During the first 2 years after surgery, the best results were obtained in women with lower BMI undergoing LSG surgery. The LSG provides greater %EWL after a shorter period of time, though the difference decreases over time. PMID:25337157
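The abstract does not define %EWL. Under the common bariatric convention that excess weight is measured against an ideal weight at BMI 25 (an assumption here, not stated in the record), it can be computed as:

```python
def ideal_weight_kg(height_m, ideal_bmi=25.0):
    # Ideal body weight taken at BMI 25: a common bariatric convention,
    # assumed here rather than taken from the study.
    return ideal_bmi * height_m ** 2

def percent_ewl(initial_kg, current_kg, height_m):
    """Percentage of excess weight loss: weight lost / initial excess weight."""
    ideal = ideal_weight_kg(height_m)
    return 100.0 * (initial_kg - current_kg) / (initial_kg - ideal)
```

A patient 1.70 m tall starting at 122.25 kg (50 kg of excess weight) who loses 25 kg has a %EWL of 50%.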

  17. Reducing errors in the GRACE gravity solutions using regularization

    Science.gov (United States)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem onto a problem about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4
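The L-curve idea can be sketched at toy scale (the study uses Lanczos bidiagonalization precisely to avoid this cost at GRACE scale; the problem below is a small synthetic stand-in, not GRACE data):

```python
import numpy as np

# Synthetic ill-posed problem standing in for the gravity inversion
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -6, n)               # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)  # noisy data

lams = 10.0 ** np.linspace(-8, 1, 200)
log_rho, log_eta = [], []
for lam in lams:
    # Tikhonov solution via the filtered SVD: x = V diag(s/(s^2+lam^2)) U^T b
    filt = s / (s ** 2 + lam ** 2)
    x_lam = V @ (filt * (U.T @ b))
    log_rho.append(np.log(np.linalg.norm(A @ x_lam - b)))   # residual norm
    log_eta.append(np.log(np.linalg.norm(x_lam)))           # solution norm

# L-curve corner: maximum curvature of the (log residual, log norm) curve
r, e = np.array(log_rho), np.array(log_eta)
dr, de = np.gradient(r), np.gradient(e)
kappa = (dr * np.gradient(de) - np.gradient(dr) * de) / (dr ** 2 + de ** 2) ** 1.5
lam_corner = lams[int(np.argmax(kappa))]
```

The corner balances the two competing norms; too little regularization inflates the solution norm, too much inflates the residual.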

  18. Rotating Hayward’s regular black hole as particle accelerator

    International Nuclear Information System (INIS)

    Amir, Muhammed; Ghosh, Sushant G.

    2015-01-01

    Recently, Bañados, Silk and West (BSW) demonstrated that the extremal Kerr black hole can act as a particle accelerator with arbitrarily high center-of-mass energy (E_CM) when the collision takes place near the horizon. The rotating Hayward's regular black hole, apart from mass (M) and angular momentum (a), has a new parameter g (g>0 is a constant) that provides a deviation from the Kerr black hole. We demonstrate that for each g, with M=1, there exist critical a_E and r_H^E, which correspond to a regular extremal black hole with degenerate horizons; a_E decreases whereas r_H^E increases with increasing g. For a < a_E, the solution describes a regular non-extremal black hole with outer and inner horizons. We apply the BSW process to the rotating Hayward's regular black hole, for different g, and demonstrate numerically that the E_CM diverges in the vicinity of the horizon for the extremal cases, thereby suggesting that a rotating regular black hole can also act as a particle accelerator and thus in turn provide a suitable framework for Planck-scale physics. For the non-extremal case, there always exists a finite upper bound for the E_CM, which increases with the deviation parameter g.
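For reference, the mass function commonly used for the Hayward geometry (assuming the abstract's g is the usual Hayward parameter) and the horizon condition whose degenerate root defines the extremal values a_E and r_H^E:

```latex
m(r) = \frac{M r^{3}}{r^{3} + g^{3}}, \qquad
\Delta(r) \equiv r^{2} + a^{2} - 2\,m(r)\,r = 0 .
```

Horizons are the positive roots of \Delta(r); the extremal case corresponds to a double root, recovering the Kerr horizons as g \to 0.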

  19. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

    Full Text Available Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.
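The ridge-type remedy can be illustrated on a synthetic near-collinear regression (a generic sketch of ridge estimation, not the authors' PLSc estimator):

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimate: beta = (X'X + lam*I)^(-1) X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Two nearly collinear predictors, mimicking strong latent-variable correlation
rng = np.random.default_rng(1)
n = 100
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)        # almost a copy of x1
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.standard_normal(n)     # true coefficients (1, 1)

beta_ols = ridge(X, y, 0.0)     # unregularized: individual coefficients unstable
beta_ridge = ridge(X, y, 1.0)   # ridge: both coefficients pulled near 1
```

The sum of the two coefficients is well determined either way, but only the ridge estimate recovers the individual coefficients reliably.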

  20. Method of transferring regular shaped vessel into cell

    International Nuclear Information System (INIS)

    Murai, Tsunehiko.

    1997-01-01

    The present invention concerns a method of transferring regular shaped vessels from a non-contaminated area to a contaminated cell. A passage hole for allowing the regular shaped vessels to pass in the longitudinal direction is formed in a partitioning wall at the bottom of the contaminated cell. A plurality of regular shaped vessels are stacked in multiple stages in a vertical direction from the non-contaminated area below the passage hole, and are passed through while being urged upward, transferred successively into the contaminated cell. As a result, since they are transferred while the passage hole is kept substantially closed by the regular shaped vessels, radiation rays and contaminated materials are prevented from escaping from the contaminated cell to the non-contaminated area. Since there is no need to open and close an isolation door frequently, transfer workability can be improved remarkably. In addition, since a sealing member for sealing the gap between the regular shaped vessel passing through the passage hole and the partitioning wall at the bottom is disposed in the passage hole, contaminated materials in the contaminated cell can be prevented from escaping through the gap to the non-contaminated area. (N.H.)

  1. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
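The core of such an ADMM scheme for a sparsity-promoting regularizer can be sketched as follows; here a plain l1 penalty on the coefficients stands in for the curvelet frame (an illustrative simplification, not the paper's implementation):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(A, b, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Q = A.T @ A + rho * np.eye(n)       # system matrix reused every iteration
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(Q, Atb + rho * (z - u))   # quadratic x-update
        z = soft(x + u, lam / rho)                    # sparsity-promoting z-update
        u = u + x - z                                 # dual update
    return z
```

The alternation separates the data-fidelity solve from the (cheap, elementwise) sparsity step, which is why ADMM suits frame-based regularizers.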

  2. On the MSE Performance and Optimization of Regularized Problems

    KAUST Repository

    Alrashdi, Ayed

    2016-11-01

    The amount of data that has been measured, transmitted/received, and stored in recent years has dramatically increased. So, today, we are in the world of big data. Fortunately, in many applications, we can take advantage of possible structures and patterns in the data to overcome the curse of dimensionality. The most well known structures include sparsity, low-rankness, and block sparsity. These structures arise in a wide range of applications such as machine learning, medical imaging, signal processing, social networks and computer vision. This has also led to a specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured. The structure can be captured by a regularizer function. This gives rise to a potential interest in regularized inverse problems, where the process of reconstructing the structured signal can be modeled as a regularized problem. This thesis particularly focuses on finding the optimal regularization parameter for such problems, such as ridge regression, LASSO, square-root LASSO and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT) that has been used recently to precisely predict performance errors.
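What an "MSE-optimal regularization parameter" means can be seen in a brute-force oracle experiment (a sketch with synthetic data; the thesis derives the optimum analytically via CGMT rather than by grid search against the known signal):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 80, 120                       # underdetermined: regularization is essential
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = rng.standard_normal(n)
y = A @ x0 + 0.3 * rng.standard_normal(m)

def ridge_solution(lam):
    # min ||A x - y||^2 + lam*||x||^2, solved in closed form
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

lams = 10.0 ** np.linspace(-3, 2, 60)
mses = np.array([np.mean((ridge_solution(l) - x0) ** 2) for l in lams])
lam_star = lams[int(np.argmin(mses))]     # oracle MSE-optimal parameter
```

The MSE curve is high at both extremes (under- and over-regularization) with a minimum in between; CGMT-style analysis predicts that curve, and hence lam_star, without access to x0.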

  3. The effect of insulin resistance and exercise on the percentage of CD16(+) monocyte subset in obese individuals.

    Science.gov (United States)

    de Matos, Mariana A; Duarte, Tamiris C; Ottone, Vinícius de O; Sampaio, Pâmela F da M; Costa, Karine B; de Oliveira, Marcos F Andrade; Moseley, Pope L; Schneider, Suzanne M; Coimbra, Cândido C; Brito-Melo, Gustavo E A; Magalhães, Flávio de C; Amorim, Fabiano T; Rocha-Vieira, Etel

    2016-06-01

    Obesity is a low-grade chronic inflammation condition, and macrophages, and possibly monocytes, are involved in the pathological outcomes of obesity. Physical exercise is a low-cost strategy to prevent and treat obesity, probably because of its anti-inflammatory action. We evaluated the percentage of CD16(-) and CD16(+) monocyte subsets in obese insulin-resistant individuals and the effect of an exercise bout on the percentage of these cells. Twenty-seven volunteers were divided into three experimental groups: lean insulin sensitive, obese insulin sensitive and obese insulin resistant. Venous blood samples collected before and 1 h after an aerobic exercise session on a cycle ergometer were used for determination of monocyte subsets by flow cytometry. Insulin-resistant obese individuals have a higher percentage of CD16(+) monocytes (14.8 ± 2.4%) than the lean group (10.0 ± 1.3%). A positive correlation of the percentage of CD16(+) monocytes with body mass index and fasting plasma insulin levels was found. One bout of moderate exercise reduced the percentage of CD16(+) monocytes by 10% in all the groups evaluated. Also, the absolute monocyte count, as well as all other leukocyte populations, in lean and obese individuals, increased after exercise. This fact may partially account for the observed reduction in the percentage of CD16(+) cells in response to exercise. Insulin-resistant, but not insulin-sensitive obese individuals, have an increased percentage of CD16(+) monocytes that can be slightly modulated by a single bout of moderate aerobic exercise. These findings may be clinically relevant to the population studied, considering the involvement of CD16(+) monocytes in the pathophysiology of obesity. Copyright © 2016 John Wiley & Sons, Ltd. Obesity is now considered to be an inflammatory condition associated with many pathological consequences, including insulin resistance. It is proposed that insulin resistance contributes to the aggravation of the

  4. The relationship between synchronization and percolation for regular networks

    Science.gov (United States)

    Li, Zhe; Ren, Tao; Xu, Yanjie; Jin, Jianyu

    2018-02-01

    Synchronization and percolation are two essential phenomena in complex dynamical networks. They have been studied widely, but previously treated as unrelated. In this paper, the relationship between synchronization and percolation is revealed for regular networks. First, we discover a bridge between synchronization and percolation by using the eigenvalues of the Laplacian matrix to describe synchronizability and the eigenvalues of the adjacency matrix to describe the percolation threshold. Then, we propose a method to find this relationship for regular networks based on the topology of the network. In particular, if the degree distribution of the network is subject to a delta function, we show that only the eigenvalues of the adjacency matrix need to be calculated. Finally, several examples are provided to demonstrate how to apply the proposed method to discover the relationship between synchronization and percolation for regular networks.
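The spectral bridge described above can be checked numerically for a small regular network (an illustrative sketch; the percolation-threshold estimate p_c ≈ 1/λ_max(A) is one common spectral approximation, assumed here rather than taken from the paper):

```python
import numpy as np

def ring_lattice(n, k):
    """Regular ring: each node linked to its k nearest neighbours on each side."""
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(1, k + 1):
            A[i, (i + j) % n] = A[i, (i - j) % n] = 1
    return A

A = ring_lattice(100, 3)                 # a 6-regular network
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian

lap_eigs = np.sort(np.linalg.eigvalsh(L))
adj_eigs = np.sort(np.linalg.eigvalsh(A))

# Synchronizability: eigenratio lambda_N / lambda_2 of the Laplacian (smaller = better)
eigenratio = lap_eigs[-1] / lap_eigs[1]
# Spectral percolation-threshold estimate: inverse spectral radius of the adjacency matrix
p_c = 1.0 / adj_eigs[-1]
```

For a k-regular graph the adjacency spectral radius equals the degree, so here p_c = 1/6; this is the degree-distribution shortcut the abstract alludes to.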

  5. Variational analysis of regular mappings theory and applications

    CERN Document Server

    Ioffe, Alexander D

    2017-01-01

    This monograph offers the first systematic account of (metric) regularity theory in variational analysis. It presents new developments alongside classical results and demonstrates the power of the theory through applications to various problems in analysis and optimization theory. The origins of metric regularity theory can be traced back to a series of fundamental ideas and results of nonlinear functional analysis and global analysis centered around problems of existence and stability of solutions of nonlinear equations. In variational analysis, regularity theory goes far beyond the classical setting and is also concerned with non-differentiable and multi-valued operators. The present volume explores all basic aspects of the theory, from the most general problems for mappings between metric spaces to those connected with fairly concrete and important classes of operators acting in Banach and finite dimensional spaces. Written by a leading expert in the field, the book covers new and powerful techniques, whic...

  6. Variational regularization of 3D data experiments with Matlab

    CERN Document Server

    Montegranario, Hebert

    2014-01-01

    Variational Regularization of 3D Data provides an introduction to variational methods for data modelling and its application in computer vision. In this book, the authors identify interpolation as an inverse problem that can be solved by Tikhonov regularization. The proposed solutions are generalizations of one-dimensional splines, applicable to n-dimensional data and the central idea is that these splines can be obtained by regularization theory using a trade-off between the fidelity of the data and smoothness properties.As a foundation, the authors present a comprehensive guide to the necessary fundamentals of functional analysis and variational calculus, as well as splines. The implementation and numerical experiments are illustrated using MATLAB®. The book also includes the necessary theoretical background for approximation methods and some details of the computer implementation of the algorithms. A working knowledge of multivariable calculus and basic vector and matrix methods should serve as an adequat...
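The fidelity-smoothness trade-off at the heart of Tikhonov regularization can be shown in one dimension (the book works with n-dimensional splines in MATLAB; this is a minimal NumPy analogue, not code from the book):

```python
import numpy as np

def smooth(y, lam):
    """Discrete Tikhonov smoothing:
       min_f ||f - y||^2 + lam * ||D2 f||^2,
    where D2 is the second-difference operator (a 1D smoothing-spline analogue)."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)            # (n-2) x n second differences
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)
```

Small lam tracks the noisy data closely; large lam enforces smoothness at the cost of fidelity, which is exactly the trade-off the text describes.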

  7. Wavelet domain image restoration with adaptive edge-preserving regularization.

    Science.gov (United States)

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
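Although the paper's adaptive, orientation-varying scheme is more sophisticated, the basic wavelet-domain mechanism (shrink detail coefficients, keep edges) can be sketched with a one-level Haar transform (a generic illustration, not the authors' half-quadratic algorithm):

```python
import numpy as np

def haar_fwd(x):
    """One level of the orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (smooth) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients (edges/jumps)
    return a, d

def haar_inv(a, d):
    """Inverse of haar_fwd (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, t):
    """Shrink only the detail coefficients: jumps survive, small noise is suppressed."""
    a, d = haar_fwd(x)
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)   # soft-threshold the details
    return haar_inv(a, d)
```

Because a discontinuity concentrates into a few large detail coefficients, thresholding removes noise while largely preserving the edge, which is the property the regularizer exploits.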

  8. Breast ultrasound tomography with total-variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lianjie [Los Alamos National Laboratory]; Li, Cuiping [Karmanos Cancer Institute]; Duric, Neb [Karmanos Cancer Institute]

    2009-01-01

    Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers has been designed and built at Karmanos Cancer Institute, which acquires both reflection and transmission ultrasound signals. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate the applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients, and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.
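The edge-preserving behaviour of TV versus quadratic regularization can be illustrated with a 1D smoothed-TV denoiser (a generic gradient-descent sketch, not the authors' tomography algorithm; step size and smoothing eps are illustrative choices):

```python
import numpy as np

def tv_denoise(y, lam, eps=1e-3, step=0.05, iters=2000):
    """Gradient descent on a smoothed total-variation objective:
       0.5*||f - y||^2 + lam * sum_i sqrt((f[i+1] - f[i])^2 + eps).
    Unlike quadratic (Tikhonov) smoothing, the TV penalty preserves sharp edges."""
    f = y.astype(float).copy()
    for _ in range(iters):
        g = np.diff(f)                          # forward differences
        w = g / np.sqrt(g ** 2 + eps)           # derivative of the smoothed |.|
        # gradient of the penalty w.r.t. f (negative discrete divergence of w)
        grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        f -= step * ((f - y) + lam * grad_tv)
    return f
```

On a noisy step signal, TV flattens the plateaus while keeping the jump sharp; a quadratic penalty would instead blur the jump.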

  9. Manufacture of Regularly Shaped Sol-Gel Pellets

    Science.gov (United States)

    Leventis, Nicholas; Johnston, James C.; Kinder, James D.

    2006-01-01

    An extrusion batch process for manufacturing regularly shaped sol-gel pellets has been devised as an improved alternative to a spray process that yields irregularly shaped pellets. The aspect ratio of regularly shaped pellets can be controlled more easily, while regularly shaped pellets pack more efficiently. In the extrusion process, a wet gel is pushed out of a mold and chopped repetitively into short, cylindrical pieces as it emerges from the mold. The pieces are collected and can be either (1) dried at ambient pressure to xerogel, (2) solvent exchanged and dried under ambient pressure to ambigels, or (3) supercritically dried to aerogel. Advantageously, the extruded pellets can be dropped directly in a cross-linking bath, where they develop a conformal polymer coating around the skeletal framework of the wet gel via reaction with the cross linker. These pellets can be dried to mechanically robust X-Aerogel.

  10. Gamma regularization based reconstruction for low dose CT

    International Nuclear Information System (INIS)

    Zhang, Junfeng; Chen, Yang; Hu, Yining; Luo, Limin; Shu, Huazhong; Li, Bicao; Liu, Jin; Coatrieux, Jean-Louis

    2015-01-01

    Reducing the radiation dose in computed tomography is today a major concern in radiology. Low-dose computed tomography (LDCT) offers a sound way to deal with this problem. However, more severe noise is observed in the reconstructed CT images under low-dose scan protocols (e.g. lowered tube current or voltage values). In this paper we propose a Gamma-regularization-based algorithm for LDCT image reconstruction. This solution is flexible and provides a good balance between the regularizations based on the l0-norm and the l1-norm. We evaluate the proposed approach using projection data from simulated phantoms and scanned Catphan phantoms. Qualitative and quantitative results show that the Gamma-regularization-based reconstruction performs better in both edge preservation and noise suppression when compared with other norms. (paper)
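    The idea of a penalty that interpolates between the l0- and l1-norms can be illustrated with a simple saturating function. The form below, rho(t) = 1 - exp(-|t|/gamma), is a generic stand-in chosen for illustration, not the paper's actual Gamma penalty: it behaves like a scaled l1 penalty near zero and approaches the l0 indicator of a nonzero entry for large |t|, with gamma tuning the transition.

```python
import numpy as np

def rho(t, gamma):
    """Saturating penalty: ~|t|/gamma near zero (l1-like),
    ~1 for large |t| (l0-like). Illustrative stand-in only."""
    return 1.0 - np.exp(-np.abs(t) / gamma)

gamma = 0.5
small, large = 0.01, 10.0
# near zero: rho(t) ~ |t|/gamma, i.e. a (scaled) l1 penalty
print(abs(rho(small, gamma) - small / gamma) < 1e-3)   # True
# far from zero: rho(t) ~ 1, i.e. the l0 indicator of a nonzero entry
print(abs(rho(large, gamma) - 1.0) < 1e-6)             # True
```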

  11. Further investigation on "A multiplicative regularization for force reconstruction"

    Science.gov (United States)

    Aucejo, M.; De Smet, O.

    2018-05-01

    We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.
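    The automatic-adjustment idea can be sketched with a quadratic regularizer: minimizing the product ||Ax - b||^2 * R(x) leads to a Tikhonov-type subproblem whose parameter is recomputed from the current iterate, so no regularization parameter has to be chosen by hand. This is a minimal sketch of the principle with R(x) = ||x||^2 on a synthetic least-squares problem, not the authors' force-reconstruction algorithm (which uses different norms and measured vibration data).

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 20
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.05 * rng.standard_normal(m)

# Multiplicative regularization: minimize ||Ax - b||^2 * R(x) with
# R(x) = ||x||^2. The stationarity condition is a Tikhonov problem whose
# parameter lambda = ||Ax - b||^2 / R(x) is updated at each iteration.
x = np.linalg.lstsq(A, b, rcond=None)[0]      # initial guess
for _ in range(20):
    lam = np.sum((A @ x - b) ** 2) / np.sum(x ** 2)
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error: {rel_err:.3f}")
```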

  12. Structural characterization of the packings of granular regular polygons.

    Science.gov (United States)

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to eleven edges under gravity. The effects of shape and friction on the packing structures are investigated via various structural parameters, including the packing fraction, the radial distribution function, the coordination number, Voronoi tessellation, and bond-orientational order. We find that the packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by increasing the number of edges and decreasing friction. The changes in packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and of the local configurations. In particular, the free areas of the Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. These quantitative analyses establish a clearer picture of the packings of regular polygons.
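    One geometric ingredient behind the edge-number trend is easy to check: the fraction of its circumscribed circle that a regular polygon fills grows monotonically from the triangle to the hendecagon. The sketch below computes this with the shoelace formula; it is illustrative only, since the packing fractions in the record come from full gravity simulations of many interacting particles.

```python
import numpy as np

def regular_polygon(n_edges, r=1.0):
    """Vertices of a regular n-gon with circumradius r, centred at the origin."""
    th = 2 * np.pi * np.arange(n_edges) / n_edges
    return np.stack([r * np.cos(th), r * np.sin(th)], axis=1)

def shoelace_area(v):
    """Polygon area via the shoelace formula."""
    x, y = v[:, 0], v[:, 1]
    return 0.5 * abs(np.sum(x * np.roll(y, -1) - y * np.roll(x, -1)))

# Fill fraction of the circumscribed circle, for 3 to 11 edges.
fills = {n: shoelace_area(regular_polygon(n)) / np.pi for n in range(3, 12)}
print({n: round(f, 3) for n, f in fills.items()})
```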

  13. Manifestly scale-invariant regularization and quantum effective operators

    CERN Document Server

    Ghilencea, D.M.

    2016-01-01

    Scale invariant theories are often used to address the hierarchy problem; however, the regularization of their quantum corrections introduces a dimensionful coupling (dimensional regularization) or scale (Pauli-Villars, etc.) which breaks this symmetry explicitly. We show how to avoid this problem and study the implications of a manifestly scale invariant regularization in (classical) scale invariant theories. We use a dilaton-dependent subtraction function $\mu(\sigma)$ which, after spontaneous breaking of scale symmetry, generates the usual DR subtraction scale $\mu(\langle\sigma\rangle)$. One consequence is that "evanescent" interactions generated by scale invariance of the action in $d=4-2\epsilon$ (but vanishing in $d=4$) give rise to new, finite quantum corrections. We find a (finite) correction $\Delta U(\phi,\sigma)$ to the one-loop scalar potential for $\phi$ and $\sigma$, beyond the Coleman-Weinberg term. $\Delta U$ is due to an evanescent correction ($\propto\epsilon$) to the field-dependent masses (of...

  14. Low-rank matrix approximation with manifold regularization.

    Science.gov (United States)

    Zhang, Zhenyue; Zhao, Keke

    2013-07-01

    This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization into the factorization. In contrast to graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal, closed-form solutions. A direct algorithm (for data with a small number of points) and an alternating iterative algorithm with inexact inner iteration (for large-scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.
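    The idea of coupling a low-rank fit with a manifold (graph Laplacian) penalty can be sketched with a simple alternating scheme. This is not the paper's closed-form solver, and the chain graph below is an arbitrary stand-in for a data-derived neighborhood graph; it only shows the shape of the objective ||X - V U^T||_F^2 + lam * tr(V^T L V).

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_features, k = 60, 10, 3
X = rng.standard_normal((n_samples, k)) @ rng.standard_normal((k, n_features))
X += 0.01 * rng.standard_normal(X.shape)      # nearly rank-k data

# Chain-graph Laplacian over the samples (stand-in manifold structure).
W = np.zeros((n_samples, n_samples))
for i in range(n_samples - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

# Alternate: ridge-type update for V, orthogonal Procrustes update for U.
lam = 0.05
U = np.linalg.qr(rng.standard_normal((n_features, k)))[0]
eye_n = np.eye(n_samples)
for _ in range(30):
    V = np.linalg.solve(eye_n + lam * L, X @ U)       # (I + lam L) V = X U
    P, _, Qt = np.linalg.svd(X.T @ V, full_matrices=False)
    U = P @ Qt                                        # nearest orthonormal factor

rel = np.linalg.norm(X - V @ U.T) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel:.3f}")
```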

  15. Selecting protein families for environmental features based on manifold regularization.

    Science.gov (United States)

    Jiang, Xingpeng; Xu, Weiwei; Park, E K; Li, Guangrong

    2014-06-01

    Recently, statistical and machine-learning methods have been developed to identify functional or taxonomic features associated with environmental conditions or physiological status. Proteins (or other functional and taxonomic entities) that are important to environmental features can potentially be used as biosensors. A major challenge is understanding how the distribution of protein and gene functions reflects the adaptation of microbial communities across environments and host habitats. In this paper, we propose a novel regularization method for linear regression to address this challenge. The approach is inspired by locally linear embedding (LLE), and we call it manifold-constrained regularization for linear regression (McRe). The regularization procedure also has the potential to be used in solving other linear systems. We demonstrate the efficiency and performance of the approach on both simulated and real data.
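    Manifold-constrained linear regression of this kind can be sketched as ridge regression plus a graph-Laplacian penalty on the fitted values f = Xw. Note that McRe builds its constraint from LLE-style reconstruction weights, whereas the sketch below uses a plain chain graph over the samples as a stand-in; alpha and lam are illustrative hyperparameters.

```python
import numpy as np

def manifold_ridge(X, y, L, alpha=0.1, lam=1e-3):
    """Minimize ||Xw - y||^2 + alpha * (Xw)^T L (Xw) + lam * ||w||^2.
    Normal equations: (X^T X + alpha X^T L X + lam I) w = X^T y."""
    d = X.shape[1]
    A = X.T @ X + alpha * X.T @ L @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(3)
n, d = 80, 5
X = rng.standard_normal((n, d))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.05 * rng.standard_normal(n)

# Chain-graph Laplacian over samples as a stand-in manifold structure.
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

w = manifold_ridge(X, y, L, alpha=0.01)
rel = np.linalg.norm(w - w_true) / np.linalg.norm(w_true)
print(f"relative coefficient error: {rel:.3f}")
```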

  16. Processing SPARQL queries with regular expressions in RDF databases

    Science.gov (United States)

    2011-01-01

    Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL, the W3C-recommended query language for RDF databases, has become an important language for querying bioinformatics knowledge bases. Moreover, due to the diversity of users’ requests for extracting information from the RDF data as well as the lack of users’ knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL queries. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework to existing query optimizers. 3) We build a prototype of the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns. PMID:21489225

  17. Processing SPARQL queries with regular expressions in RDF databases

    Directory of Open Access Journals (Sweden)

    Cho Hune

    2011-03-01

    Full Text Available Abstract Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL, the W3C-recommended query language for RDF databases, has become an important language for querying bioinformatics knowledge bases. Moreover, due to the diversity of users’ requests for extracting information from the RDF data as well as the lack of users’ knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL queries. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework to existing query optimizers. 3) We build a prototype of the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.

  18. Processing SPARQL queries with regular expressions in RDF databases.

    Science.gov (United States)

    Lee, Jinsoo; Pham, Minh-Duc; Lee, Jihwan; Han, Wook-Shin; Cho, Hune; Yu, Hwanjo; Lee, Jeong-Hoon

    2011-03-29

    As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL, the W3C-recommended query language for RDF databases, has become an important language for querying bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from the RDF data as well as the lack of users' knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. In this paper, we propose a novel framework for supporting regular expression processing in SPARQL queries. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework to existing query optimizers. 3) We build a prototype of the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.
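    A FILTER regex clause in SPARQL restricts bindings to values matching a regular expression. The toy Python scan below mimics that semantics over a hypothetical in-memory triple list (the IRIs and labels are made up for illustration); a real engine such as the framework described above evaluates the pattern inside the database rather than by a client-side scan, which is where its order-of-magnitude speedups come from.

```python
import re

# Hypothetical RDF-like triples (subject, predicate, object).
triples = [
    ("uniprot:P0A7G6", "rdfs:label", "RecA kinase-like protein"),
    ("uniprot:P69905", "rdfs:label", "Hemoglobin subunit alpha"),
    ("uniprot:P00533", "rdfs:label", "Epidermal growth factor receptor kinase"),
]

# SPARQL analogue of the scan below:
#   SELECT ?s WHERE { ?s rdfs:label ?name . FILTER regex(?name, "kinase") }
pattern = re.compile("kinase")
hits = [s for s, p, o in triples if p == "rdfs:label" and pattern.search(o)]
print(hits)  # ['uniprot:P0A7G6', 'uniprot:P00533']
```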

  19. REGULARIZED D-BAR METHOD FOR THE INVERSE CONDUCTIVITY PROBLEM

    DEFF Research Database (Denmark)

    Knudsen, Kim; Lassas, Matti; Mueller, Jennifer

    2009-01-01

    A strategy for regularizing the inversion procedure for the two-dimensional D-bar reconstruction algorithm based on the global uniqueness proof of Nachman [Ann. Math. 143 (1996)] for the ill-posed inverse conductivity problem is presented. The strategy utilizes truncation of the boundary integral...... the convergence of the reconstructed conductivity to the true conductivity as the noise level tends to zero. The results provide a link between two traditions of inverse problems research: theory of regularization and inversion methods based on complex geometrical optics. Also, the procedure is a novel...

  20. Arithmetic properties of $\ell$-regular overpartition pairs

    OpenAIRE

    Naika, Megadahalli Sidda Mahadeva; Shivashankar, Chandrappa

    2017-01-01

    In this paper, we investigate the arithmetic properties of $\ell$-regular overpartition pairs. Let $\overline{B}_{\ell}(n)$ denote the number of $\ell$-regular overpartition pairs of $n$. We prove a number of Ramanujan-like congruences and infinite families of congruences modulo 3, 8, 16, 36, 48, 96 for $\overline{B}_3(n)$ and modulo 3, 16, 64, 96 for $\overline{B}_4(n)$. For example, we find that for all nonnegative integers $\alpha$ and $n$, $\overline{B}_{3}(3^{\alpha}(3n+2))\equiv ...