Regularization of Instantaneous Frequency Attribute Computations
Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.
2014-12-01
We compare two different methods of computing a temporally local frequency: (1) a stabilized instantaneous frequency based on the theory of the analytic signal, and (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications of this work is the discrimination between blast events and earthquakes.
Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33.
Cohen, Leon. "Time Frequency Analysis: Theory and Applications." USA: Prentice Hall, 1995.
Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425.
Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
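The stabilized instantaneous-frequency computation described in this abstract can be sketched in a few lines of Python. The eps-stabilized division below stands in for the regularized diagonal division the abstract mentions; it is an illustrative choice, not the authors' exact scheme.

```python
import numpy as np
from scipy.signal import hilbert

def stabilized_instantaneous_frequency(trace, dt, eps=1e-3):
    """Instantaneous frequency from the analytic signal, with a stabilized
    division to avoid blow-ups where the signal envelope is small."""
    z = hilbert(trace)                    # analytic signal: x + i*H[x]
    dz = np.gradient(z, dt)
    # f(t) = (1/2pi) d(phase)/dt = Im(z' conj(z)) / (2pi |z|^2)
    num = np.imag(dz * np.conj(z))
    den = np.abs(z) ** 2
    # regularize the division by the (diagonal) envelope-squared term
    return num / (den + eps * den.max()) / (2.0 * np.pi)

dt = 0.001                                 # 1 ms sampling
t = np.arange(0.0, 1.0, dt)
f_inst = stabilized_instantaneous_frequency(np.sin(2 * np.pi * 20.0 * t), dt)
print(round(float(np.median(f_inst)), 1))  # close to the true 20 Hz
```

The median is used because the phase derivative is unreliable near the trace edges, which is exactly the kind of instability the regularization in the abstract is meant to control.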
Bejaei, M; Wiseman, K; Cheng, K M
2015-01-01
Consumers' interest in specialty eggs appears to be growing in Europe and North America. The objective of this research was to develop logistic regression models that utilise purchaser attributes and demographics to predict the probability of a consumer purchasing a specific type of table egg including regular (white and brown), non-caged (free-run, free-range and organic) or nutrient-enhanced eggs. These purchase prediction models, together with the purchasers' attributes, can be used to assess market opportunities of different egg types specifically in British Columbia (BC). An online survey was used to gather data for the models. A total of 702 completed questionnaires were submitted by BC residents. Selected independent variables were included in the logistic regressions to develop models for different egg types, predicting the probability of a consumer purchasing a specific type of table egg. The variables used in the model accounted for 54% and 49% of variances in the purchase of regular and non-caged eggs, respectively. Research results indicate that consumers of different egg types exhibit a set of unique and statistically significant characteristics and/or demographics. For example, consumers of regular eggs were less educated, older, price-sensitive, major chain store buyers, and store flyer users, and had lower awareness about different types of eggs and less concern regarding animal welfare issues. However, most of the non-caged egg consumers were less concerned about price, had higher awareness about different types of table eggs, purchased their eggs from local/organic grocery stores, farm gates or farmers markets, and they were more concerned about care and feeding of hens compared to consumers of other egg types.
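A minimal sketch of the kind of purchase-prediction model described above, using scikit-learn logistic regression. The feature set, coefficients, and data below are synthetic inventions for illustration; they do not come from the survey.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 702  # matches the survey sample size; everything else is hypothetical
X = np.column_stack([
    rng.integers(18, 80, n),   # age
    rng.integers(1, 6, n),     # education level (1-5)
    rng.integers(1, 6, n),     # price sensitivity (1-5)
    rng.integers(0, 2, n),     # shops mainly at major chain stores (0/1)
]).astype(float)

# synthetic rule loosely mirroring the reported profile of regular-egg buyers
logit = (0.03 * (X[:, 0] - 45) - 0.4 * (X[:, 1] - 3)
         + 0.5 * (X[:, 2] - 3) + 0.8 * X[:, 3])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
# older, less educated, price-sensitive chain-store shopper
p = model.predict_proba([[65.0, 2.0, 5.0, 1.0]])[0, 1]
print(p > 0.5)
```

The fitted model returns a purchase probability per consumer profile, which is how such models support the market-opportunity assessment the abstract describes.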
DEFF Research Database (Denmark)
Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.
1994-01-01
Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory for iterative estimation of weight decay parameters. The basic idea is to do a gradient descent...
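The truncated abstract concerns tuning weight-decay (L2) parameters from data. The snippet below illustrates the simpler validation-error version of that idea on a linear model: the closed-form ridge solve stands in for a neural network, and the grid search stands in for the authors' asymptotic-sampling update rule.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic linear problem with noise
X = rng.normal(size=(200, 30))
w_true = np.zeros(30)
w_true[:5] = 1.0
y = X @ w_true + 0.5 * rng.normal(size=200)
Xtr, ytr, Xval, yval = X[:120], y[:120], X[120:], y[120:]

def ridge(X, y, alpha):
    """Closed-form ridge (weight-decay) solution."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# choose the weight-decay parameter by validation error
alphas = np.logspace(-3, 3, 25)
errs = [np.mean((Xval @ ridge(Xtr, ytr, a) - yval) ** 2) for a in alphas]
best = float(alphas[int(np.argmin(errs))])
print(best)
```

For a neural network the same loop applies, except each candidate weight decay requires retraining (or, as in the paper, an iterative update that avoids the full grid).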
UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA
Directory of Open Access Journals (Sweden)
IONIŢĂ Elena
2015-06-01
This paper proposes a presentation of the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces, several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. Modeling and unfolding of the Platonic and Archimedean polyhedra is done using the 3dsMAX program. This paper is intended as an example of descriptive geometry applications.
Coordinate-invariant regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-01-01
A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc
International Nuclear Information System (INIS)
Weiler, T.
1981-01-01
An overview is presented of the attributes of gluons, deducible from experimental data. Particular attention is given to the photon-gluon fusion model of charm leptoproduction. The agreement with QCD and theoretical prejudice is qualitatively good.
Liu, Ziqi; Smola, Alexander J.; Soska, Kyle; Wang, Yu-Xiang; Zheng, Qinghua; Zhou, Jun
2016-01-01
In this paper we describe an algorithm for estimating the provenance of hacks on websites. That is, given properties of sites and the temporal occurrence of attacks, we are able to attribute individual attacks to joint causes and vulnerabilities, as well as estimating the evolution of these vulnerabilities over time. Specifically, we use hazard regression with a time-varying additive hazard function parameterized in a generalized linear form. The activation coefficients on each feature are co...
van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime
2016-01-01
This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN' [Brouwer, A.E., Cohen, A.M., Neumaier, ...]
Nijholt, Antinus
1980-01-01
Culik II and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular ...
Regular Expression Pocket Reference
Stubblebine, Tony
2007-01-01
This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular expressions...
Architectural patterns and quality attributes interaction
Me, G.; Calero Munoz, C.; Lago, P.; Muccini, H.
2016-01-01
Architectural patterns and styles represent common solutions to recurrent problems. They encompass architectural knowledge about how to achieve holistic system quality. The relation between patterns (or styles) and quality attributes has been regularly addressed in the literature. However, there is ...
Regularization by External Variables
DEFF Research Database (Denmark)
Bossolini, Elena; Edwards, R.; Glendinning, P. A.
2016-01-01
Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization...
Goyvaerts, Jan
2009-01-01
This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a
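As a flavor of the kind of task such a cookbook covers, here is a small self-contained Python example (any resemblance to the book's actual recipes is incidental):

```python
import re

# pull ISO-style dates out of free text, then normalize the separators
log = "build 2010-07-01 ok; retry 2010/12/07 failed"
dates = re.findall(r"\b(\d{4})[-/](\d{2})[-/](\d{2})\b", log)
normalized = ["-".join(d) for d in dates]
print(normalized)   # ['2010-07-01', '2010-12-07']
```

Capturing groups make `findall` return the date parts as tuples, which is what lets the join rebuild each date with a uniform separator regardless of how it was written.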
Regularities of Multifractal Measures
Indian Academy of Sciences (India)
First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...
Stochastic analytic regularization
International Nuclear Information System (INIS)
Alfaro, J.
1984-07-01
Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)
Regular expression containment
DEFF Research Database (Denmark)
Henglein, Fritz; Nielsen, Lasse
2011-01-01
We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: Containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We...
Supersymmetric dimensional regularization
International Nuclear Information System (INIS)
Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.
1980-01-01
There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) First do all algebra exactly as in D = 4; (2) Then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher order loops are also discussed.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
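The core quantity of the MCC framework, sample correntropy, is easy to illustrate. The sketch below (with an arbitrary kernel width, chosen for illustration) shows why maximizing it is robust to an outlying label: a single large error saturates the Gaussian kernel instead of dominating the objective.

```python
import numpy as np

def correntropy(a, b, sigma=1.0):
    """Sample correntropy: mean Gaussian-kernel similarity between paired
    values. Large errors saturate the kernel, bounding outlier influence."""
    return float(np.mean(np.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))))

y_true    = np.array([1.0, 1.0, -1.0, -1.0, 1.0])
y_clean   = np.array([0.9, 1.1, -0.8, -1.2, 0.9])
y_outlier = np.array([0.9, 1.1, -0.8, -1.2, -9.0])  # one wildly wrong value

# the clean predictions score higher; the outlier costs at most 1/n
print(correntropy(y_true, y_clean) > correntropy(y_true, y_outlier))  # True
```

Under a squared loss the outlier would contribute (1 - (-9))^2 = 100 to the objective; under correntropy it contributes a term bounded between 0 and 1, which is the robustness property the abstract exploits.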
Manifold Regularized Reinforcement Learning.
Li, Hongliang; Liu, Derong; Wang, Ding
2018-04-01
This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.
Diverse Regular Employees and Non-regular Employment (Japanese)
MORISHIMA Motohiro
2011-01-01
Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...
Sparse structure regularized ranking
Wang, Jim Jing-Yan
2014-04-17
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
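The sparse self-representation step can be sketched with an off-the-shelf lasso solver. The data and the alpha value below are invented for illustration, and unlike the paper this sketch does not optimize the coefficients jointly with the ranking scores.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 10))            # 40 objects, 10-dim features

def sparse_codes(X, alpha=0.1):
    """Code each object as a sparse linear combination of all the others."""
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        # columns of the design matrix are the other objects' feature vectors
        lasso = Lasso(alpha=alpha, max_iter=10000).fit(X[idx].T, X[i])
        W[i, idx] = lasso.coef_
    return W

W = sparse_codes(X)
# the l1 penalty zeroes out most coefficients, so row i of W names only a
# few "similar" objects, and those entries can regularize ranking score i
print(W.shape, float(np.mean(W == 0.0)) > 0.5)
```

Each row of W is the similarity measure the abstract describes: object i's score is pulled toward the scores of the few objects that reconstruct it.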
'Regular' and 'emergency' repair
International Nuclear Information System (INIS)
Luchnik, N.V.
1975-01-01
Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)
Regularization of divergent integrals
Felder, Giovanni; Kazhdan, David
2016-01-01
We study the Hadamard finite part of divergent integrals of differential forms with singularities on submanifolds. We give formulae for the dependence of the finite part on the choice of regularization and express them in terms of a suitable local residue map. The cases where the submanifold is a complex hypersurface in a complex manifold and where it is a boundary component of a manifold with boundary, arising in string perturbation theory, are treated in more detail.
Regularizing portfolio optimization
International Nuclear Information System (INIS)
Still, Susanne; Kondor, Imre
2010-01-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
Regular Single Valued Neutrosophic Hypergraphs
Directory of Open Access Journals (Sweden)
Muhammad Aslam Malik
2016-12-01
In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.
The geometry of continuum regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-03-01
This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations
DEFF Research Database (Denmark)
Nørtoft, Kamilla; Nordentoft, Helle Merete
... professionals' meetings with patients and relatives. In the paper we draw data from focus group discussions with interdisciplinary groups of health care professionals working in the area of care for older people. The video narratives used to initiate discussions are developed through ethnographic fieldwork in the homes of older people and in pedagogical institutions targeting older people. In the paper we look at the potentials and challenges in working with ethnographic video narratives as a pedagogical tool. Our findings indicate that the use of video narratives has the potential to expose the diversity ... focus on their own professional discipline and its tasks, 2) stimulates collaborative learning when they discuss their different interpretations of the ethnographic video narratives and achieve a deeper understanding of each other's work and their clients' lifeworlds, which might lead to a better...
Annotation of Regular Polysemy
DEFF Research Database (Denmark)
Martinez Alonso, Hector
Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words ... and metonymic. We have conducted an analysis in English, Danish and Spanish. Later on, we have tried to replicate the human judgments by means of unsupervised and semi-supervised sense prediction. The automatic sense-prediction systems have been unable to find empiric evidence for the underspecified sense, even...
Regularity of Minimal Surfaces
Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht
2010-01-01
"Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is ...
Regularities of radiation heredity
International Nuclear Information System (INIS)
Skakov, M.K.; Melikhov, V.D.
2001-01-01
Regularities of radiation heredity in metals and alloys are analyzed. It is concluded that irradiation causes thermodynamically irreversible changes in the structure of materials. Possible ways are offered for the transmittance of radiation effects through high-temperature transformations in the materials. The phenomenon of radiation heredity may be turned to practical use to control the structure of liquid metal and, respectively, the structure of the ingot via preliminary radiation treatment of the charge. Concentration microheterogeneities in the material defect structure induced by preliminary irradiation represent the genetic factor of radiation heredity.
Effective field theory dimensional regularization
International Nuclear Information System (INIS)
Lehmann, Dirk; Prezeau, Gary
2002-01-01
A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.
2010-12-07
FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...
Selection of regularization parameter for l1-regularized damage detection
Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing
2018-06-01
The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
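The first of the two selection strategies (make the residual norm and the solution norm simultaneously small) can be sketched with a lasso sweep. The "corner" criterion below, minimizing the joint normalized norm, is one simple stand-in for the paper's actual rule, and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
A = rng.normal(size=(50, 100))                 # measurement matrix
x_true = np.zeros(100)
x_true[[7, 42]] = [1.0, -0.8]                  # sparse "damage" vector
b = A @ x_true + 0.01 * rng.normal(size=50)    # noisy measurements

alphas = np.logspace(-4, 0, 30)
res, sol = [], []
for a in alphas:
    x = Lasso(alpha=a, max_iter=20000).fit(A, b).coef_
    res.append(np.linalg.norm(A @ x - b))      # data-fidelity norm
    sol.append(np.linalg.norm(x, 1))           # solution-size norm
res, sol = np.array(res), np.array(sol)

# pick the sweep point where both normalized norms are small together
scores = np.hypot(res / res.max(), sol / sol.max())
best_alpha = float(alphas[int(np.argmin(scores))])
print(best_alpha)
```

Plotting `sol` against `res` on log axes gives the familiar L-shaped curve; the selected point sits near its corner, consistent with the paper's finding that a whole range of parameters around that corner identifies the damage correctly.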
Ensemble manifold regularization.
Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng
2012-06-01
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and the overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
Adaptive Regularization of Neural Classifiers
DEFF Research Database (Denmark)
Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai
1997-01-01
We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...
DEFF Research Database (Denmark)
Batz, M. B.; Doyle, M. P.; Morris, J. G.
2005-01-01
Identification and prioritization of effective food safety interventions require an understanding of the relationship between food and pathogen from farm to consumption. Critical to this cause is food attribution, the capacity to attribute cases of foodborne disease to the food vehicle or other source responsible for illness. A wide variety of food attribution approaches and data are used around the world including the analysis of outbreak data, case-control studies, microbial subtyping and source tracking methods, and expert judgment, among others. The Food Safety Research Consortium sponsored the Food Attribution Data Workshop in October 2003 to discuss the virtues and limitations of these approaches and to identify future options for collecting food attribution data in the United States. We summarize workshop discussions and identify challenges that affect progress in this critical component...
The attribute measurement technique
International Nuclear Information System (INIS)
MacArthur, Duncan W.; Langner, Diana; Smith, Morag; Thron, Jonathan; Razinkov, Sergey; Livke, Alexander
2010-01-01
Any verification measurement performed on potentially classified nuclear material must satisfy two seemingly contradictory constraints. First and foremost, no classified information can be released. At the same time, the monitoring party must have confidence in the veracity of the measurement. An information barrier (IB) is included in the measurement system to protect the potentially classified information while allowing sufficient information transfer to occur for the monitoring party to gain confidence that the material being measured is consistent with the host's declarations concerning that material. The attribute measurement technique incorporates an IB and addresses both concerns by measuring several attributes of the nuclear material and displaying unclassified results through green (indicating that the material does possess the specified attribute) and red (indicating that the material does not possess the specified attribute) lights. The attribute measurement technique has been implemented in the AVNG, an attribute measuring system described in other presentations at this conference. In this presentation, we will discuss four techniques used in the AVNG: (1) the IB, (2) the attribute measurement technique, (3) the use of open and secure modes to increase confidence in the displayed results, and (4) the joint design as a method for addressing both host and monitor needs.
2010-09-02
... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks
Quality Attribute Design Primitives
National Research Council Canada - National Science Library
Bass, Len
2000-01-01
This report focuses on the quality attribute aspects of mechanisms. An architectural mechanism is a 'structure whereby objects collaborate to provide some behavior that satisfies a requirement of the problem...
Continuum-regularized quantum gravity
International Nuclear Information System (INIS)
Chan Huesum; Halpern, M.B.
1987-01-01
The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)
New regular black hole solutions
International Nuclear Information System (INIS)
Lemos, Jose P. S.; Zanchin, Vilson T.
2011-01-01
In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, the exterior region is Reissner-Nordstroem and there is a charged thin-layer in-between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.
Regular variation on measure chains
Czech Academy of Sciences Publication Activity Database
Řehák, Pavel; Vitovec, J.
2010-01-01
Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475
Manifold Regularized Correlation Object Tracking
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2017-01-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions...
On geodesics in low regularity
Sämann, Clemens; Steinbauer, Roland
2018-02-01
We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
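The core idea, bounding the condition number of a covariance estimate, can be illustrated with a simple eigenvalue-clipping sketch. This is only an illustration of the constraint, not the paper's exact maximum likelihood estimator; the function name and the choice of eigenvalue floor are ours:

```python
import numpy as np

def condition_regularized_cov(X, kappa_max=50.0):
    """Eigenvalue-clipping sketch of condition-number regularization:
    force all eigenvalues of the sample covariance into [tau, kappa_max*tau]
    so the condition number is at most kappa_max. (Illustrative choice of
    floor tau; not the paper's exact ML estimator.)"""
    S = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(S)
    tau = w.max() / kappa_max          # simple floor: largest eigenvalue / kappa_max
    w_reg = np.clip(w, tau, kappa_max * tau)
    return V @ np.diag(w_reg) @ V.T

# "Large p small n": n = 20 samples of dimension p = 50, so the raw
# sample covariance is singular, but the regularized estimate is not.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))
Sigma_hat = condition_regularized_cov(X)
w = np.linalg.eigvalsh(Sigma_hat)
print(w.min() > 0, w.max() / w.min() <= 50.0 + 1e-6)
```

The clipping restores invertibility and caps the condition number in one step, which is exactly the well-conditionedness requirement the abstract describes.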
Condition Number Regularized Covariance Estimation*
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2012-01-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
Geometric continuum regularization of quantum field theory
International Nuclear Information System (INIS)
Halpern, M.B.
1989-01-01
An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs
Phenomenology and Meaning Attribution
African Journals Online (AJOL)
John Paley. (2017). Phenomenology as Qualitative Research: A Critical Analysis of Meaning Attribution. ... basic philosophical nature of phenomenological meaning and inquiry, and that he not ... In keeping with the title of my book, Researching. Lived Experience ...... a quantitative social science that can make generalizing.
Main designations and attributions
International Nuclear Information System (INIS)
2010-01-01
The chapter presents the main designations and attributions of the LNMRI - Brazilian National Laboratory of Metrology of Ionizing Radiation, the Cooperative Center in Radiation Protection and Medical Preparations for Accidents with Radiation; the Treaty for fully banning of nuclear tests and the Regional Center for Training of IAEA
Regularities of radiorace formation in yeasts
International Nuclear Information System (INIS)
Korogodin, V.I.; Bliznik, K.M.; Kapul'tsevich, Yu.G.; Petin, V.G. (Akademiya Meditsinskikh Nauk SSSR, Obninsk. Nauchno-Issledovatel'skij Inst. Meditsinskoj Radiologii)
1977-01-01
Two strains of diploid yeast, Saccharomyces ellipsoides Megri 139-B (isolated under natural conditions) and Saccharomyces cerevisiae 5a x 3Bα (heterozygous for genes ade 1 and ade 2), were exposed to γ-quanta of Co-60. The content of saltant cells forming colonies with changed morphology, of nonviable cells, of respiration-mutant cells, and of recombinant cells for genes ade 1 and ade 2 was determined. A regularity was revealed in the distribution of these four cell types among the colonies: the higher the content of cells of any one type, the higher the content of cells carrying other hereditary changes.
Metric regularity and subdifferential calculus
International Nuclear Information System (INIS)
Ioffe, A D
2000-01-01
The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces
Manifold Regularized Correlation Object Tracking.
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2018-05-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with state-of-the-art tracking approaches.
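The correlation-filter core exploited by such trackers has a compact closed form in the Fourier domain, which is what the block-circulant structure buys. The sketch below shows only that ridge-regression core for 1-D signals; the manifold regularization term on unlabeled samples described in the abstract is omitted, and the function names are ours:

```python
import numpy as np

def train_filter(x, y, lam=1e-3):
    """Fourier-domain ridge regression: returns conj(H) for the filter h
    whose circular correlation with x best reproduces the response y."""
    X, Y = np.fft.fft(x), np.fft.fft(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)

def respond(Hc, z):
    """Correlation response of the learned filter on a signal z."""
    return np.real(np.fft.ifft(Hc * np.fft.fft(z)))

rng = np.random.default_rng(1)
x = rng.standard_normal(64)             # training signal
y = np.zeros(64); y[0] = 1.0            # desired response: peak at zero shift
Hc = train_filter(x, y)
r = respond(Hc, np.roll(x, 5))          # test on a circularly shifted copy
print(int(np.argmax(r)))                # the response peak recovers the shift
```

Because every circular shift of the base sample is an implicit training example, the filter localizes a shifted copy of the target without ever materializing the circulant data matrix.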
Dimensional regularization in configuration space
International Nuclear Information System (INIS)
Bollini, C.G.; Giambiagi, J.J.
1995-09-01
Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum space Green functions. For this transformation, Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs
Regular algebra and finite machines
Conway, John Horton
2012-01-01
World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regular algebras, context-free languages, commutative regular alg
Matrix regularization of 4-manifolds
Trzetrzelewski, M.
2012-01-01
We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...
International Nuclear Information System (INIS)
1987-01-01
The 24 lectures presented to the colloquium cover the following subject fields: (1) Behaviour of structural components exposed to fire; (2) Behaviour of building materials exposed to fire; (3) Thermal processes; (4) Safety related, theoretical studies. (PW) [de
Regularization of Nonmonotone Variational Inequalities
International Nuclear Information System (INIS)
Konnov, Igor V.; Ali, M.S.S.; Mazurkevich, E.O.
2006-01-01
In this paper we extend the Tikhonov-Browder regularization scheme from monotone to rather a general class of nonmonotone multivalued variational inequalities. We show that their convergence conditions hold for some classes of perfectly and nonperfectly competitive economic equilibrium problems
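For the classical monotone case that this work generalizes, the Tikhonov-Browder scheme replaces the map F by F + εI and lets ε decrease. Below is a minimal projected-iteration sketch on a box constraint set; it is illustrative only, and does not cover the paper's actual contribution (nonmonotone multivalued maps). All names and parameter choices are ours:

```python
import numpy as np

def tikhonov_vi(F, x0, lo, hi, eps_seq=(1.0, 0.1, 0.01), step=0.05, iters=2000):
    """Tikhonov-Browder scheme (sketch): approximately solve the
    variational inequality VI(F + eps*I, K) on the box K = [lo, hi]^n
    for a decreasing sequence of regularization parameters eps,
    using projected fixed-point iterations."""
    x = np.asarray(x0, dtype=float)
    for eps in eps_seq:
        for _ in range(iters):
            # Projected step for the regularized map F + eps*I.
            x = np.clip(x - step * (F(x) + eps * x), lo, hi)
    return x

# Monotone example: F(x) = A x + b with A positive definite.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -1.0])
x = tikhonov_vi(lambda v: A @ v + b, np.zeros(2), 0.0, 1.0)
print(np.round(x, 3))   # interior solution, so F(x) is approximately zero
```

Each ε yields a strongly monotone, hence uniquely solvable, subproblem; the iterates then track the regularized solutions toward a solution of the original inequality as ε shrinks.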
Lattice regularized chiral perturbation theory
International Nuclear Information System (INIS)
Borasoy, Bugra; Lewis, Randy; Ouimet, Pierre-Philippe A.
2004-01-01
Chiral perturbation theory can be defined and regularized on a spacetime lattice. A few motivations are discussed here, and an explicit lattice Lagrangian is reviewed. A particular aspect of the connection between lattice chiral perturbation theory and lattice QCD is explored through a study of the Wess-Zumino-Witten term
2011-01-20
... Meeting SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held at the offices of the Farm... meeting of the Board will be open to the
Forcing absoluteness and regularity properties
Ikegami, D.
2010-01-01
For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.
Globals of Completely Regular Monoids
Institute of Scientific and Technical Information of China (English)
Wu Qian-qian; Gan Ai-ping; Du Xian-kun
2015-01-01
An element of a semigroup S is called irreducible if it cannot be expressed as a product of two elements in S both distinct from itself. In this paper we show that the class C of all completely regular monoids with irreducible identity elements satisfies the strong isomorphism property and so it is globally determined.
Fluid queues and regular variation
Boxma, O.J.
1996-01-01
This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ¿. We show that its fat tail gives rise to an even
Fluid queues and regular variation
O.J. Boxma (Onno)
1996-01-01
textabstractThis paper considers a fluid queueing system, fed by $N$ independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index $zeta$. We show that its fat tail
Empirical laws, regularity and necessity
Koningsveld, H.
1973-01-01
In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject.
I am referring especially to two well-known views, viz. the regularity and
Interval matrices: Regularity generates singularity
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří; Shary, S.P.
2018-01-01
Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords : interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016
Regularization in Matrix Relevance Learning
Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael
In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can
Regular and conformal regular cores for static and rotating solutions
Energy Technology Data Exchange (ETDEWEB)
Azreg-Aïnou, Mustapha
2014-03-07
Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.
Regular and conformal regular cores for static and rotating solutions
International Nuclear Information System (INIS)
Azreg-Aïnou, Mustapha
2014-01-01
Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.
Exploring Attribution Theory and Bias
Robinson, Jessica A.
2017-01-01
Courses: This activity can be used in a wide range of classes, including interpersonal communication, introduction to communication, and small group communication. Objectives: After completing this activity, students should be able to: (1) define attribution theory, personality attribution, situational attribution, and attribution bias; (2)…
Strictness Analysis for Attribute Grammars
DEFF Research Database (Denmark)
Rosendahl, Mads
1992-01-01
interpretation of attribute grammars. The framework is used to construct a strictness analysis for attribute grammars. Results of the analysis enable us to transform an attribute grammar such that attributes are evaluated during parsing, if possible. The analysis is proved correct by relating it to a fixpoint...... semantics for attribute grammars. An implementation of the analysis is discussed and some extensions to the analysis are mentioned....
Process attributes in bio-ontologies
Directory of Open Access Journals (Sweden)
Andrade André Q
2012-08-01
Background: Biomedical processes can provide essential information about the (mal-)functioning of an organism and are thus frequently represented in biomedical terminologies and ontologies, including the GO Biological Process branch. These processes often need to be described and categorised in terms of their attributes, such as rates or regularities. The adequate representation of such process attributes has recently been a contentious issue in bio-ontologies, and domain ontologies have correspondingly developed ad hoc workarounds that compromise interoperability and logical consistency. Results: We present a design pattern for the representation of process attributes that is compatible with upper ontology frameworks such as BFO and BioTop. Our solution rests on two key tenets: firstly, that many of the sorts of process attributes which are biomedically interesting can be characterised by the ways that repeated parts of such processes constitute, in combination, an overall process; secondly, that entities for which a full logical definition can be assigned do not need to be treated as primitive within a formal ontology framework. We apply this approach to the challenge of modelling and automatically classifying examples of normal and abnormal rates and patterns of heart beating processes, and discuss the expressivity required in the underlying ontology representation language. We provide full definitions for process attributes at increasing levels of domain complexity. Conclusions: We show that a logical definition of process attributes is feasible, though limited by the expressivity of DL languages, so that the creation of primitives is still necessary. This finding may endorse current formal upper-ontology frameworks as a way of ensuring consistency, interoperability and clarity.
Energy functions for regularization algorithms
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must verify certain properties such as invariance with Euclidean transformations or invariance with parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature for planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meet this condition as well as invariance with rotation and parameterization.
Physical model of dimensional regularization
Energy Technology Data Exchange (ETDEWEB)
Schonfeld, Jonathan F.
2016-12-15
We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
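The regularizer at the heart of this method is the mutual information between classifier responses and labels. A minimal plug-in estimate for discrete labels is sketched below; note the paper optimizes a differentiable entropy-based surrogate jointly with the classification loss, whereas this sketch only computes the quantity being maximized:

```python
import numpy as np

def mutual_information(a, b):
    """Plug-in estimate (in nats) of the mutual information between two
    discrete label arrays of equal length."""
    a, b = np.asarray(a), np.asarray(b)
    mi = 0.0
    for x in np.unique(a):
        for y in np.unique(b):
            pxy = np.mean((a == x) & (b == y))   # joint frequency
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(a == x) * np.mean(b == y)))
    return mi

labels = np.array([0, 0, 1, 1])
perfect = labels.copy()                    # responses identical to the labels
independent = np.array([0, 1, 0, 1])       # responses carrying no label information
print(round(mutual_information(labels, perfect), 3))      # log 2 = 0.693
print(round(mutual_information(labels, independent), 3))  # 0.0
```

Maximizing this quantity pushes the classifier's responses to be as informative about the true labels as possible, which is the stated goal of the regularization.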
Regularized strings with extrinsic curvature
International Nuclear Information System (INIS)
Ambjoern, J.; Durhuus, B.
1987-07-01
We analyze models of discretized string theories, where the path integral over world sheet variables is regularized by summing over triangulated surfaces. The inclusion of curvature in the action is a necessity for the scaling of the string tension. We discuss the physical properties of models with extrinsic curvature terms in the action and show that the string tension vanishes at the critical point where the bare extrinsic curvature coupling tends to infinity. Similar results are derived for models with intrinsic curvature. (orig.)
Circuit complexity of regular languages
Czech Academy of Sciences Publication Activity Database
Koucký, Michal
2009-01-01
Roč. 45, č. 4 (2009), s. 865-879 ISSN 1432-4350 R&D Projects: GA ČR GP201/07/P276; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : regular languages * circuit complexity * upper and lower bounds Subject RIV: BA - General Mathematics Impact factor: 0.726, year: 2009
General inverse problems for regular variation
DEFF Research Database (Denmark)
Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan
2014-01-01
Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...
Quality Attribute Techniques Framework
Chiam, Yin Kia; Zhu, Liming; Staples, Mark
The quality of software is achieved during its development. Development teams use various techniques to investigate, evaluate and control potential quality problems in their systems. These “Quality Attribute Techniques” target specific product qualities such as safety or security. This paper proposes a framework to capture important characteristics of these techniques. The framework is intended to support process tailoring, by facilitating the selection of techniques for inclusion into process models that target specific product qualities. We use risk management as a theory to accommodate techniques for many product qualities and lifecycle phases. Safety techniques have motivated the framework, and safety and performance techniques have been used to evaluate the framework. The evaluation demonstrates the ability of quality risk management to cover the development lifecycle and to accommodate two different product qualities. We identify advantages and limitations of the framework, and discuss future research on the framework.
DEFF Research Database (Denmark)
Sebald, Alexander Christopher
2010-01-01
, in turn, influence behavior. Dufwenberg and Kirchsteiger [Dufwenberg, M., Kirchsteiger, G., 2004. A theory of sequential reciprocity. Games Econ. Behav. 47 (2), 268-298] formalize this empirical finding in their ‘theory of sequential reciprocity'. This paper extends their analysis by moves of chance. More...... precisely, an extended framework is presented which allows for the analysis of strategic interactions of reciprocal agents in situations in which material outcomes also depend on chance. Moves of chance influence the attribution of responsibilities, people's perceptions about the (un)kindness of others and......, hence, their reciprocal behavior. Furthermore, with the help of two applications it is demonstrated how this framework can be used to explain experimental findings showing that people react very differently in outcomewise-identical situations depending on the moves of chance involved....
Network structure exploration in networks with node attributes
Chen, Yi; Wang, Xiaolong; Bu, Junzhao; Tang, Buzhou; Xiang, Xin
2016-05-01
Complex networks provide a powerful way to represent complex systems and have been widely studied during the past several years. One of the most important tasks of network analysis is to detect structures (also called structural regularities) embedded in networks by determining group number and group partition. Most network structure exploration models consider only network links. However, in real world networks, nodes may have attributes that are useful for network structure exploration. In this paper, we propose a novel Bayesian nonparametric (BNP) model to explore structural regularities in networks with node attributes, called the Bayesian nonparametric attribute (BNPA) model. This model not only takes full advantage of both links between nodes and node attributes for group partition via shared hidden variables, but also determines the group number automatically via Bayesian nonparametric theory. Experiments conducted on a number of real and synthetic networks show that our BNPA model is able to automatically explore structural regularities in networks with node attributes and is competitive with other state-of-the-art models.
Paranormal belief and attributional style.
Dudley, R T; Whisnand, E A
2000-06-01
52 college students completed Tobacyk's 1988 Revised Paranormal Belief Scale and Peterson, Semmel, von Baeyer, Abramson, Metalsky, and Seligman's 1982 Attributional Style Questionnaire. Analysis showed significantly higher depressive attributional styles among high scorers on paranormal phenomena than low scorers.
Regularized Statistical Analysis of Anatomy
DEFF Research Database (Denmark)
Sjöstrand, Karl
2007-01-01
This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus...... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge...... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications....
Regularization methods in Banach spaces
Schuster, Thomas; Hofmann, Bernd; Kazimierski, Kamil S
2012-01-01
Regularization methods aimed at finding stable approximate solutions are a necessary tool to tackle inverse and ill-posed problems. Usually the mathematical model of an inverse problem consists of an operator equation of the first kind, and often the associated forward operator acts between Hilbert spaces. However, for numerous problems the reasons for using a Hilbert space setting seem to be based rather on conventions than on an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, sparsity constraints using general Lp-norms or the B
Academic Training Lecture - Regular Programme
PH Department
2011-01-01
Regular Lecture Programme 9 May 2011 ACT Lectures on Detectors - Inner Tracking Detectors by Pippa Wells (CERN) 10 May 2011 ACT Lectures on Detectors - Calorimeters (2/5) by Philippe Bloch (CERN) 11 May 2011 ACT Lectures on Detectors - Muon systems (3/5) by Kerstin Hoepfner (RWTH Aachen) 12 May 2011 ACT Lectures on Detectors - Particle Identification and Forward Detectors by Peter Krizan (University of Ljubljana and J. Stefan Institute, Ljubljana, Slovenia) 13 May 2011 ACT Lectures on Detectors - Trigger and Data Acquisition (5/5) by Dr. Brian Petersen (CERN) from 11:00 to 12:00 at CERN ( Bldg. 222-R-001 - Filtration Plant )
Olsen, Svein Ottar; Tuu, Ho Huu; Grunert, Klaus G
2017-10-01
The main purpose of this study is to identify consumer segments based on the importance of product attributes when buying seafood for homemade meals on weekdays. There is a particular focus on the relative importance of the packaging attributes of fresh seafood. The results are based on a representative survey of 840 Norwegian consumers between 18 and 80 years of age. This study found that taste, freshness, nutritional value and naturalness are the most important attributes for the home consumption of seafood. Except for the high importance of information about expiration date, most other packaging attributes have only medium importance. Three consumer segments are identified based on the importance of 33 attributes associated with seafood: Perfectionists, Quality Conscious and Careless Consumers. The Quality Conscious consumers feel more self-confident in their evaluation of quality, and are less concerned with packaging, branding, convenience and emotional benefits compared to the Perfectionists. Careless Consumers are important as regular consumers of convenient and pre-packed seafood products and value recipe information on the packaging. The seafood industry may use the results provided in this study to strengthen their positioning of seafood across three different consumer segments. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cluster Based Vector Attribute Filtering
Kiwanuka, Fred N.; Wilkinson, Michael H.F.
2016-01-01
Morphological attribute filters operate on images based on properties or attributes of connected components. Until recently, attribute filtering was based on a single global threshold on a scalar property to remove or retain objects. A single threshold struggles in case no single property or
RES: Regularized Stochastic BFGS Algorithm
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
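The idea described in this abstract can be sketched in a few lines: use stochastic gradients to form the BFGS secant pair, damp the curvature estimate, and bias the descent direction toward the identity so it stays well conditioned. The toy below is a simplified illustration of that idea, not the exact RES update; the quadratic objective, the constants `delta` and `Gamma`, the curvature threshold, and the step-size schedule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = np.diag(np.linspace(1.0, 10.0, n))     # toy strongly convex objective 0.5*x'Ax - b'x
b = rng.normal(size=n)
x_opt = np.linalg.solve(A, b)

def stoch_grad(x):
    """Exact gradient plus zero-mean noise (stands in for a minibatch gradient)."""
    return A @ x - b + 1e-3 * rng.normal(size=n)

delta, Gamma = 0.1, 0.1                    # damping / identity-bias constants (illustrative)
H = np.eye(n)                              # inverse-Hessian approximation
x = np.zeros(n)
for t in range(400):
    g = stoch_grad(x)
    # identity-biased direction with a diminishing step keeps descent well conditioned
    x_new = x - (H + Gamma * np.eye(n)) @ g / (t + 20)
    s = x_new - x
    r = stoch_grad(x_new) - g - delta * s  # damped secant pair (proxy for RES's correction)
    if s @ r > 1e-4:                       # skip noise-dominated curvature pairs
        rho = 1.0 / (s @ r)
        V = np.eye(n) - rho * np.outer(s, r)
        H = V @ H @ V.T + rho * np.outer(s, s)   # standard inverse-BFGS update
    x = x_new

err = np.linalg.norm(x - x_opt)
```

With noisy gradients the curvature check is essential: updating the Hessian approximation from a noise-dominated secant pair is exactly the failure mode the regularization in RES is designed to prevent.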
Regularized Label Relaxation Linear Regression.
Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu
2018-04-01
Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, based respectively on the ℓ2-norm and ℓ2,1-norm loss functions, are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.
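The relaxation idea can be made concrete: replace the 0/1 targets Y with Y + B∘M, where B holds "dragging" directions (+1 for the true class, -1 otherwise) and M ≥ 0 is the slack matrix, then alternate a closed-form ridge solve for the weights with an elementwise update of M. The sketch below substitutes a plain ℓ2 ridge penalty for the paper's class-compactness graph regularizer; the synthetic data, λ, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy 2-class data in 2-D, well separated
X0 = rng.normal(loc=-2.0, size=(50, 2))
X1 = rng.normal(loc=+2.0, size=(50, 2))
X = np.hstack([np.vstack([X0, X1]), np.ones((100, 1))])   # bias column appended
Y = np.zeros((100, 2)); Y[:50, 0] = 1.0; Y[50:, 1] = 1.0  # strict binary label matrix
B = np.where(Y == 1, 1.0, -1.0)       # dragging directions: push true class up, others down

lam = 1e-2
M = np.zeros_like(Y)                  # nonnegative label relaxation (slack) matrix
for _ in range(20):                   # alternate: closed-form W, then elementwise M
    T = Y + B * M                     # relaxed label matrix
    W = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ T)   # ridge solve
    M = np.maximum(0.0, B * (X @ W - Y))   # optimal nonnegative slack given W

pred = (X @ W).argmax(axis=1)
acc = (pred == Y.argmax(axis=1)).mean()
```

Each M update is exact because minimizing ||XW - Y - B∘M||² over M ≥ 0 decouples elementwise, which is what gives the method its cheap closed-form iterations.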
Zhang, Min; Luo, Meifen; Nie, Rui; Zhang, Yan
2017-12-01
This paper aims to explore factors influencing the healthcare wearable technology adoption intention from the perspectives of technical attributes (perceived convenience, perceived irreplaceability, perceived credibility and perceived usefulness), health attribute (health belief) and consumer attributes (consumer innovativeness, conspicuous consumption, informational reference group influence and gender difference). By integrating the technology acceptance model, the health belief model, the snob effect and conformity, and reference group theory, hypotheses and a research model are proposed. The empirical investigation (N=436) collected research data through a questionnaire. Results show that the adoption intention of healthcare wearable technology is influenced by technical attributes, health attribute and consumer attributes simultaneously. For technical attributes, perceived convenience and perceived credibility both positively affect perceived usefulness, and perceived usefulness influences adoption intention. The relation between perceived irreplaceability and perceived usefulness is only supported for males. For the health attribute, health belief affects perceived usefulness for females. For consumer attributes, conspicuous consumption and informational reference group influence significantly moderate the relation between perceived usefulness and adoption intention and the relation between consumer innovativeness and adoption intention, respectively. What's more, consumer innovativeness significantly affects adoption intention for males. This paper discusses technical attributes, health attribute and consumer attributes and their roles in the adoption intention of healthcare wearable technology. Findings may provide enlightenment to differentiate product development and marketing strategies and provide some implications for clinical medicine. Copyright © 2017 Elsevier B.V. All rights reserved.
From inactive to regular jogger
DEFF Research Database (Denmark)
Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup
Title: From inactive to regular jogger - a qualitative study of achieved behavioral change among recreational joggers. Authors: Pernille Lund-Cramer & Vibeke Brinkmann Løite. Purpose: Despite extensive knowledge of barriers to physical activity, most interventions promoting physical activity have proven...... study was conducted using individual semi-structured interviews on how a successful long-term behavior change had been achieved. Ten informants were purposely selected from participants in the DANO-RUN research project (7 men, 3 women, average age 41.5). Interviews were performed on the basis of the Theory...... of Planned Behavior (TPB) and the Transtheoretical Model (TTM). Coding and analysis of interviews were performed using NVivo 10 software. Results (TPB): During the behavior change process, the intention to jog shifted from a focus on weight loss and improved fitness to both physical health, psychological......
Tessellating the Sphere with Regular Polygons
Soto-Johnson, Hortensia; Bechthold, Dawn
2004-01-01
Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.
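The counting argument behind this classification is short enough to state. The following is the standard Schläfli-symbol computation, written out as a sketch:

```latex
% q regular spherical p-gons meet at each vertex, so each interior angle is 2*pi/q.
% A spherical polygon's angle must exceed its Euclidean value pi*(1 - 2/p), giving
\frac{2\pi}{q} > \pi\left(1 - \frac{2}{p}\right)
\quad\Longleftrightarrow\quad
\frac{1}{p} + \frac{1}{q} > \frac{1}{2}.
% For integers p, q >= 3 the only solutions are
\{p,q\} \in \bigl\{\{3,3\},\,\{3,4\},\,\{3,5\},\,\{4,3\},\,\{5,3\}\bigr\},
% so the faces can only be triangles, squares, or pentagons,
% matching the five Platonic tessellations of the sphere.
```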
On the equivalence of different regularization methods
International Nuclear Information System (INIS)
Brzezowski, S.
1985-01-01
The R̂-operation preceded by the regularization procedure is discussed. Some arguments are given, according to which the results may depend on the method of regularization, introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)
The uniqueness of the regularization procedure
International Nuclear Information System (INIS)
Brzezowski, S.
1981-01-01
On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)
Application of Turchin's method of statistical regularization
Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey
2018-04-01
During analysis of experimental data, one usually needs to restore a signal after it has been convolved with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.
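As a concrete illustration of why the deconvolution problem needs regularization: the sketch below solves a blurred-signal problem with a Tikhonov-style smoothness penalty, a minimal stand-in for Turchin's full Bayesian machinery. The kernel width, noise level, second-difference operator, and α are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
t = np.linspace(0.0, 1.0, n)
# apparatus function: Gaussian blur, discretized as a convolution matrix K
K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.05) ** 2)
K /= K.sum(axis=1, keepdims=True)

signal = np.exp(-0.5 * ((t - 0.4) / 0.08) ** 2)   # true signal (a smooth bump)
data = K @ signal + 0.01 * rng.normal(size=n)     # blurred + noisy measurement

# naive inversion amplifies noise (the problem is ill-posed);
# Tikhonov damps it: minimize ||K f - g||^2 + alpha * ||L f||^2
L = np.diff(np.eye(n), 2, axis=0)                 # second-difference (smoothness prior)
alpha = 1e-3
f_naive = np.linalg.solve(K, data)
f_reg = np.linalg.solve(K.T @ K + alpha * L.T @ L, K.T @ data)

err_naive = np.linalg.norm(f_naive - signal)
err_reg = np.linalg.norm(f_reg - signal)
```

The unregularized solve is typically wrong by orders of magnitude while the regularized one tracks the true signal; Turchin's method replaces the fixed α with a statistically motivated prior strength.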
Regular extensions of some classes of grammars
Nijholt, Antinus
Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular
Preference mapping of lemon lime carbonated beverages with regular and diet beverage consumers.
Leksrisompong, P P; Lopetcharat, K; Guthrie, B; Drake, M A
2013-02-01
The drivers of liking of lemon-lime carbonated beverages were investigated with regular and diet beverage consumers. Ten beverages were selected from a category survey of commercial beverages using a D-optimal procedure. Beverages were subjected to consumer testing (n = 101 regular beverage consumers, n = 100 diet beverage consumers). Segmentation of consumers was performed on overall liking scores followed by external preference mapping of selected samples. Diet beverage consumers liked 2 diet beverages more than regular beverage consumers. There were no differences in the overall liking scores between diet and regular beverage consumers for other products except for a sparkling beverage sweetened with juice, which was more liked by regular beverage consumers. Three subtle but distinct consumer preference clusters were identified. Two segments had evenly distributed diet and regular beverage consumers but one segment had a greater percentage of regular beverage consumers (P < 0.05). Consumer status (diet vs. regular beverage consumers) did not have a large impact on carbonated beverage liking. Instead, mouthfeel attributes were major drivers of liking when these beverages were tested in a blind tasting. Preference mapping of lemon-lime carbonated beverage with diet and regular beverage consumers allowed the determination of drivers of liking of both populations. The understanding of how mouthfeel attributes, aromatics, and basic tastes impact liking or disliking of products was achieved. Preference drivers established in this study provide product developers of carbonated lemon-lime beverages with additional information to develop beverages that may be suitable for different groups of consumers. © 2013 Institute of Food Technologists®
Spanning Tree Based Attribute Clustering
DEFF Research Database (Denmark)
Zeng, Yifeng; Jorge, Cordero Hernandez
2009-01-01
Attribute clustering has been previously employed to detect statistical dependence between subsets of variables. We propose a novel attribute clustering algorithm motivated by research of complex networks, called the Star Discovery algorithm. The algorithm partitions and indirectly discards...... inconsistent edges from a maximum spanning tree by starting appropriate initial modes, therefore generating stable clusters. It discovers sound clusters through simple graph operations and achieves significant computational savings. We compare the Star Discovery algorithm against earlier attribute clustering...
Class of regular bouncing cosmologies
Vasilić, Milovan
2017-06-01
In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field has failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that graviton mass can be made arbitrarily small.
Analysis of Logic Programs Using Regular Tree Languages
DEFF Research Database (Denmark)
Gallagher, John Patrick
2012-01-01
The field of finite tree automata provides fundamental notations and tools for reasoning about sets of terms called regular or recognizable tree languages. We consider two kinds of analysis using regular tree languages, applied to logic programs. The first approach is to try to discover automatically...... a tree automaton from a logic program, approximating its minimal Herbrand model. In this case the input for the analysis is a program, and the output is a tree automaton. The second approach is to expose or check properties of the program that can be expressed by a given tree automaton. The input...... to the analysis is a program and a tree automaton, and the output is an abstract model of the program. These two contrasting abstract interpretations can be used in a wide range of analysis and verification problems....
SOA: A Quality Attribute Perspective
2011-06-23
in software engineering from CMU. June 2011, Twitter #seiwebinar, © 2011 Carnegie Mellon University. Agenda: Service-Oriented Architecture and Software Architecture: Review; Service-Orientation and Quality Attributes; Summary and Future Challenges.
Semantic attributes based texture generation
Chi, Huifang; Gan, Yanhai; Qi, Lin; Dong, Junyu; Madessa, Amanuel Hirpa
2018-04-01
Semantic attributes are commonly used for texture description. They can be used to describe the information of a texture, such as patterns, textons, distributions, brightness, and so on. Generally speaking, semantic attributes are more concrete descriptors than perceptual features. Therefore, it is practical to generate texture images from semantic attributes. In this paper, we propose to generate high-quality texture images from semantic attributes. Over the last two decades, several works have been done on texture synthesis and generation, most of them focusing on example-based texture synthesis and procedural texture generation. Semantic-attribute-based texture generation still deserves more attention. Gan et al. proposed a useful joint model for perception-driven texture generation. However, perceptual features are non-objective spatial statistics used by humans to distinguish different textures in pre-attentive situations. To give more describing information about texture appearance, semantic attributes, which are more in line with human description habits, are desired. In this paper, we use a sigmoid cross-entropy loss in an auxiliary model to provide enough information for a generator. Consequently, the discriminator is released from the relatively intractable mission of figuring out the joint distribution of condition vectors and samples. To demonstrate the validity of our method, we compare it to Gan et al.'s method on generating textures through experiments on PTD and DTD. All experimental results show that our model can generate textures from semantic attributes.
Fire exposed aluminium structures
Maljaars, J.; Fellinger, J.E.J.; Soetens, F.
2005-01-01
Material properties and mechanical response models for fire design of steel structures are based on extensive research and experience. Contrarily, the behaviour of aluminium load bearing structures exposed to fire is relatively unexplored. This article gives an overview of physical and mechanical
Adaptive regularization of noisy linear inverse problems
DEFF Research Database (Denmark)
Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue
2006-01-01
In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and in the prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.
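For a Gaussian prior and a linear-Gaussian likelihood, matching the expected regularizer under prior and posterior gives a concrete fixed-point iteration for the regularization strength. The sketch below covers only that ridge-regression instance (the abstract's result applies to exponential-family priors generally); the problem sizes and the noise precision β are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, beta = 200, 5, 25.0                 # samples, weights, known noise precision
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + rng.normal(scale=beta ** -0.5, size=n)

alpha = 1.0                               # prior precision = regularization strength
for _ in range(50):
    # posterior of w for prior N(0, I/alpha) and likelihood N(Xw, I/beta)
    S = np.linalg.inv(alpha * np.eye(d) + beta * X.T @ X)   # posterior covariance
    m = beta * S @ X.T @ y                                  # posterior mean
    # match E[||w||^2]: prior gives d/alpha, posterior gives ||m||^2 + tr(S)
    alpha = d / (m @ m + np.trace(S))
```

The update is the familiar evidence-style re-estimation of the prior precision; at the fixed point the prior and posterior expectations of the quadratic regularizer agree, which is exactly the relation the abstract states.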
Higher derivative regularization and chiral anomaly
International Nuclear Information System (INIS)
Nagahama, Yoshinori.
1985-02-01
A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)
Regularity effect in prospective memory during aging
Directory of Open Access Journals (Sweden)
Geoffrey Blondelle
2016-10-01
Background: Regularity effect can affect performance in prospective memory (PM), but little is known on the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examine the role of several cognitive functions including certain dimensions of executive functions (planning, inhibition, shifting), and binding, short-term memory, and retrospective episodic memory to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performances was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones with no age effect. It appeared that recalling of regular activities only involved planning for both intermediate and older adults, while recalling of irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical
Regularity effect in prospective memory during aging
Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique
2016-01-01
Background: Regularity effect can affect performance in prospective memory (PM), but little is known on the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults.Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...
Abstract Interpretation and Attribute Grammars
DEFF Research Database (Denmark)
Rosendahl, Mads
The objective of this thesis is to explore the connections between abstract interpretation and attribute grammars as frameworks in program analysis. Abstract interpretation is a semantics-based program analysis method. A large class of data flow analysis problems can be expressed as non-standard ...... is presented in the thesis. Methods from abstract interpretation can also be used in correctness proofs of attribute grammars. This proof technique introduces a new class of attribute grammars based on domain theory. This method is illustrated with examples....
Regularization and error assignment to unfolded distributions
Zech, Gunter
2011-01-01
The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data, and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization, and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.
Iterative Regularization with Minimum-Residual Methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2007-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov...... subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success...... as regularization methods is highly problem dependent....
Iterative regularization with minimum-residual methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2006-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov...... subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success...... as regularization methods is highly problem dependent....
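The "iteration count as regularization parameter" behavior behind these minimum-residual records can be demonstrated with a simpler stationary iteration. The sketch below uses Landweber iteration on the normal equations as a stand-in for MINRES (which additionally requires a symmetric system); the smoothing kernel, noise level, and iteration budget are illustrative assumptions. The reconstruction error typically decreases and then degrades as noise is amplified, so stopping early acts as the regularizer.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40
t = np.linspace(0.0, 1.0, n)
# smoothing kernel as a matrix: a discrete ill-posed problem
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.06) ** 2) / n
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-4 * rng.normal(size=n)   # blurred data with noise

omega = 1.0 / np.linalg.norm(A.T @ A, 2)     # step size guaranteeing convergence
x = np.zeros(n)
errors = []
for k in range(3000):
    x = x + omega * A.T @ (b - A @ x)        # Landweber step (normal-equation residual)
    errors.append(np.linalg.norm(x - x_true))
best_k = int(np.argmin(errors))              # the "optimal" stopping index
```

Plotting `errors` against `k` exhibits the semi-convergence typical of iterative regularization: smooth solution components are captured in the first iterations, after which the ill-conditioned components slowly pull in amplified noise.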
Quality attributes for mobile applications
Fernandes, João M.; Ferreira, André Leite
2016-01-01
A mobile application is a type of software application developed to run on a mobile device. The chapter discusses the main characteristics of mobile devices, since they have a great impact on mobile applications. It also presents the classification of mobile applications according to two main types: native and web-based applications. Finally, this chapter identifies the most relevant types of quality attributes for mobile applications. It shows that the relevant quality attributes for mobile ...
Belief attribution despite verbal interference.
Forgeot d'Arc, Baudouin; Ramus, Franck
2011-05-01
False-belief (FB) tasks have been widely used to study the ability of individuals to represent the content of their conspecifics' mental states (theory of mind). However, the cognitive processes involved are still poorly understood, and it remains particularly debated whether language and inner speech are necessary for the attribution of beliefs to other agents. We present a completely nonverbal paradigm consisting of silent animated cartoons in five closely related conditions, systematically teasing apart different aspects of scene analysis and allowing the assessment of the attribution of beliefs, goals, and physical causation. In order to test the role of language in belief attribution, we used verbal shadowing as a dual task to inhibit inner speech. Data on 58 healthy adults indicate that verbal interference decreases overall performance, but has no specific effect on belief attribution. Participants remained able to attribute beliefs despite heavy concurrent demands on their verbal abilities. Our results are most consistent with the hypothesis that belief attribution is independent from inner speech.
Hessian-regularized co-training for social activity recognition.
Liu, Weifeng; Li, Yang; Lin, Xu; Tao, Dacheng; Wang, Yanjiang
2014-01-01
Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-trainings have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the potential manifold. Hessian can properly exploit the local structure of the underlying data manifold. Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms.
Hessian-regularized co-training for social activity recognition.
Directory of Open Access Journals (Sweden)
Weifeng Liu
Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-trainings have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the potential manifold. Hessian can properly exploit the local structure of the underlying data manifold. Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms.
A regularized stationary mean-field game
Yang, Xianjin
2016-01-01
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
A regularized stationary mean-field game
Yang, Xianjin
2016-04-19
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
On infinite regular and chiral maps
Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán
2015-01-01
We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface of infinite genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.
From recreational to regular drug use
DEFF Research Database (Denmark)
Järvinen, Margaretha; Ravn, Signe
2011-01-01
This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...
Automating InDesign with Regular Expressions
Kahrel, Peter
2006-01-01
If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.
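InDesign's own find/change dialect is GREP, driven from ExtendScript or UXP scripts, but the core idea carries over to any regex engine. A minimal sketch in Python of the kind of cleanup a find/change script automates; the tag name, the price pattern, and the sample text are invented for illustration, not taken from the book:

```python
import re

# Two typical cleanup passes: wrap every price like "$12.50" in a
# markup tag, then collapse runs of spaces to a single space.
text = "Widget  A costs $12.50 and widget B  costs $7.00."
text = re.sub(r"\$\d+\.\d{2}", lambda m: f"<price>{m.group(0)}</price>", text)
text = re.sub(r" {2,}", " ", text)
print(text)
# → Widget A costs <price>$12.50</price> and widget B costs <price>$7.00</price>.
```

In InDesign the same patterns would go into a GREP style or a scripted `changeGrep()` call; the regex itself is the portable part.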
Regularization modeling for large-eddy simulation
Geurts, Bernardus J.; Holm, D.D.
2003-01-01
A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of
2010-07-01
... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...
An iterative method for Tikhonov regularization with a general linear regularization operator
Hochstenbach, M.E.; Reichel, L.
2010-01-01
Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
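The paper's iterative Golub-Kahan-based method is its own contribution, but the underlying problem is standard. A minimal sketch of Tikhonov regularization with a general regularization operator L, solved directly via an augmented least-squares system; the blur matrix, noise level, and parameter values are illustrative assumptions, not from the paper:

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """Solve min_x ||A x - b||^2 + lam^2 ||L x||^2 by stacking:
    [A; lam*L] x ~ [b; 0]."""
    A_aug = np.vstack([A, lam * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Discrete ill-posed example: a smooth Gaussian blur with noisy data.
n = 50
t = np.linspace(0, 1, n)
A = np.exp(-80.0 * (t[:, None] - t[None, :]) ** 2)
x_true = np.sin(2 * np.pi * t)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

# General regularization operator: first-difference matrix (penalizes roughness).
Lop = np.diff(np.eye(n), axis=0)
x_reg = tikhonov(A, b, Lop, lam=1e-2)
x_naive = np.linalg.solve(A + 1e-12 * np.eye(n), b)  # near-unregularized, unstable
print(np.linalg.norm(x_reg - x_true) < np.linalg.norm(x_naive - x_true))
```

The naive solve amplifies the data noise through the tiny singular values of A, while the regularized solution stays close to the true signal; choosing `lam` well is exactly the parameter-selection problem the paper addresses.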
A Chance for Attributable Agency.
Briegel, Hans J; Müller, Thomas
Can we sensibly attribute some of the happenings in our world to the agency of some of the things around us? We do this all the time, but there are conceptual challenges purporting to show that attributable agency, and specifically one of its most important subspecies, human free agency, is incoherent. We address these challenges in a novel way: rather than merely rebutting specific arguments, we discuss a concrete model that we claim positively illustrates attributable agency in an indeterministic setting. The model, recently introduced by one of the authors in the context of artificial intelligence, shows that an agent with a sufficiently complex memory organization can employ indeterministic happenings in a meaningful way. We claim that these considerations successfully counter arguments against the coherence of libertarian (indeterminism-based) free will.
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan
2012-11-19
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
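As a rough illustration of the multiple-graph idea (not the authors' MultiG-Rank algorithm itself, which learns the graph weights jointly with the ranking scores), one can fix a convex combination of graph Laplacians and solve the resulting graph-regularized ranking problem in closed form. The toy graphs, weights, and query below are invented:

```python
import numpy as np

def graph_reg_rank(W_list, mu, y, alpha=1.0):
    """Ranking scores f = argmin ||f - y||^2 + alpha * f' L f,
    where L is a fixed convex combination of graph Laplacians
    (MultiG-Rank would instead learn the weights mu jointly)."""
    n = len(y)
    L = np.zeros((n, n))
    for m, W in zip(mu, W_list):
        L += m * (np.diag(W.sum(axis=1)) - W)
    return np.linalg.solve(np.eye(n) + alpha * L, y)

# Toy database of 5 items; items 0,1,2 form one cluster, 3,4 another.
W1 = np.array([[0, 1, 1, 0, 0],
               [1, 0, 1, 0, 0],
               [1, 1, 0, 0, 0],
               [0, 0, 0, 0, 1],
               [0, 0, 0, 1, 0]], float)
W2 = np.zeros((5, 5))
W2[0, 1] = W2[1, 0] = 1.0          # hypothetical second similarity graph

y = np.array([1.0, 0, 0, 0, 0])    # query indicator on item 0
f = graph_reg_rank([W1, W2], [0.5, 0.5], y)
print(f.round(3))                   # scores diffuse along the combined graph
```

Items in the query's cluster receive positive scores that decay with graph distance, while the disconnected cluster scores exactly zero; sensitivity to the graph choice is what motivates combining several graphs.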
Hierarchical regular small-world networks
International Nuclear Information System (INIS)
Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan
2008-01-01
Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log₂N), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. It suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)
Coupling regularizes individual units in noisy populations
International Nuclear Information System (INIS)
Ly Cheng; Ermentrout, G. Bard
2010-01-01
The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
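The paper's surprising result concerns period (spike-time) variability even when the partner unit is noisier. The simpler ingredient, that diffusive coupling between two O-U processes changes a unit's stationary variance, can be checked with a short Euler-Maruyama simulation; here the noisy unit is coupled to a cleaner one, and all parameter values are illustrative assumptions:

```python
import numpy as np

def coupled_ou_var(sigma1, sigma2, kappa, T=200.0, dt=0.01, seed=1):
    """Euler-Maruyama for two diffusively coupled O-U processes:
        dx = (-x + kappa*(y - x)) dt + sigma1 dW1
        dy = (-y + kappa*(x - y)) dt + sigma2 dW2
    Returns the stationary variance estimate of x."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = y = 0.0
    xs = np.empty(n)
    sq = np.sqrt(dt)
    for i in range(n):
        dW1, dW2 = rng.standard_normal(2)
        x += (-x + kappa * (y - x)) * dt + sigma1 * sq * dW1
        y += (-y + kappa * (x - y)) * dt + sigma2 * sq * dW2
        xs[i] = x
    return xs[n // 5:].var()  # discard the initial transient

v_uncoupled = coupled_ou_var(2.0, 0.5, kappa=0.0)  # noisy unit alone, Var ~ 2
v_coupled = coupled_ou_var(2.0, 0.5, kappa=1.0)    # coupled to a cleaner unit
print(v_coupled < v_uncoupled)
```

For the uncoupled O-U process the stationary variance is σ²/2; with diffusive coupling to the cleaner partner the simulated variance drops well below that, matching the closed-form stationary covariance of the linear system.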
Diagrammatic methods in phase-space regularization
International Nuclear Information System (INIS)
Bern, Z.; Halpern, M.B.; California Univ., Berkeley
1987-11-01
Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the nogrowth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)
J-regular rings with injectivities
Shen, Liang
2010-01-01
A ring $R$ is called a J-regular ring if $R/J(R)$ is von Neumann regular, where $J(R)$ is the Jacobson radical of $R$. It is proved that if $R$ is J-regular, then (i) $R$ is right $n$-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) $R$ is right FP-injective if and only if $R$ is right $(J, R)$-FP-injective. Some known results are improved.
Attribute Obfuscation with Gradient Reversal
Emmery, Chris; Manjavacas, Enrique; Chrupala, Grzegorz
2018-01-01
Recent advances in computational stylometry have demonstrated that automatically inferring quite an extensive set of personal attributes from text alone (e.g. gender, age, education, socio-economic status, mental health issues) is not only feasible, but can often rely on little supervision. This
DEFF Research Database (Denmark)
Nielson, Hanne Riis; Skyum, S.
1981-01-01
It is shown that any well-defined attribute grammar is k-visit for some k. Furthermore, it is shown that given a well-defined grammar G and an integer k, it is decidable whether G is k-visit. Finally it is shown that the k-visit grammars specify a proper hierarchy with respect to translations...
Morphosemantic Attributes of Meetei Proverbs
Singh, Lourembam Surjit
2015-01-01
This study proposes to investigate the functions of morphosemantic in Meetei proverbs, particularly the attribution of different meanings of the lexical items in Meetei Proverbial verbs. Meetei society has been using proverbs in the all ages, stages of development, social changes, and cultural diversifications to mark their wisdom of social…
Abstract Interpretation Using Attribute Grammar
DEFF Research Database (Denmark)
Rosendahl, Mads
1990-01-01
This paper deals with the correctness proofs of attribute grammars using methods from abstract interpretation. The technique will be described by defining a live-variable analysis for a small flow-chart language and proving it correct with respect to a continuation style semantics. The proof...
Generalized regular genus for manifolds with boundary
Directory of Open Access Journals (Sweden)
Paola Cristofori
2003-05-01
Full Text Available We introduce a generalization of the regular genus, a combinatorial invariant of PL manifolds ([10], which is proved to be strictly related, in dimension three, to generalized Heegaard splittings defined in [12].
Geometric regularizations and dual conifold transitions
International Nuclear Information System (INIS)
Landsteiner, Karl; Lazaroiu, Calin I.
2003-01-01
We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)
Fast and compact regular expression matching
DEFF Research Database (Denmark)
Bille, Philip; Farach-Colton, Martin
2008-01-01
We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem, using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way.
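Of the four problems, string edit distance is the easiest to state concretely. The paper's contribution is improved tabulation on the word RAM; the baseline such techniques accelerate is the classic quadratic dynamic program, sketched here:

```python
def edit_distance(s, t):
    """Classic O(|s|*|t|) dynamic program for Levenshtein distance,
    keeping only two rows of the DP table."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))          # distance from "" to each prefix of t
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                       # deletion
                         cur[j - 1] + 1,                    # insertion
                         prev[j - 1] + (s[i - 1] != t[j - 1]))  # substitution
        prev = cur
    return prev[n]

print(edit_distance("regular", "regulate"))  # → 2
```

Word-RAM speedups (e.g. Four-Russians-style tabulation) pack many of these small DP cells into machine words; the recurrence itself stays the same.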
Regular-fat dairy and human health
DEFF Research Database (Denmark)
Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas
2016-01-01
In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular-fat dairy products and human health. In an effort to ... , cheese and yogurt can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted.
Deterministic automata for extended regular expressions
Directory of Open Access Journals (Sweden)
Syzdykov Mirzakhmet
2017-12-01
In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. The method "overrides" the source NFA (an NFA not defined with subset-construction rules). Past work described only the algorithm for the AND-operator (intersection of regular languages); in this paper the construction for the MINUS-operator (and complement) is shown.
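The NFA-"overriding" construction is specific to the paper; the textbook route to intersection and subtraction is the product construction on DFAs, which handles any Boolean combination of languages by choosing how acceptance is combined. A small sketch, with two invented example DFAs:

```python
from itertools import product

def dfa_product(d1, d2, op):
    """Product DFA of two DFAs over the same alphabet.
    Each DFA is (states, alphabet, transitions dict, start, accepting set);
    `op` combines acceptance, e.g. intersection: lambda a, b: a and b."""
    Q1, S, t1, q1, F1 = d1
    Q2, _, t2, q2, F2 = d2
    states = set(product(Q1, Q2))
    delta = {((p, q), a): (t1[p, a], t2[q, a]) for (p, q) in states for a in S}
    accept = {(p, q) for (p, q) in states if op(p in F1, q in F2)}
    return states, S, delta, (q1, q2), accept

def accepts(dfa, word):
    _, _, delta, start, accept = dfa
    st = start
    for a in word:
        st = delta[st, a]
    return st in accept

# DFA for "even number of a's" and DFA for "contains at least one b".
even_a = ({0, 1}, {"a", "b"},
          {(0, "a"): 1, (1, "a"): 0, (0, "b"): 0, (1, "b"): 1}, 0, {0})
has_b = ({0, 1}, {"a", "b"},
         {(0, "a"): 0, (1, "a"): 1, (0, "b"): 1, (1, "b"): 1}, 0, {1})

inter = dfa_product(even_a, has_b, lambda x, y: x and y)       # AND
minus = dfa_product(even_a, has_b, lambda x, y: x and not y)   # MINUS
print(accepts(inter, "aab"), accepts(minus, "aa"))  # True True
```

Complement falls out of the same machinery by flipping the accepting set of a single DFA; the paper's approach instead works on the NFA before determinization.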
Regularities of intermediate adsorption complex relaxation
International Nuclear Information System (INIS)
Manukova, L.A.
1982-01-01
Experimental data are given characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N₂ system at 77 K. The molecular beam method has been used in the investigation. Analytical expressions are obtained for the change, during relaxation, of the full and specific rates of transition from the intermediate state into the "non-reversible" state, of desorption into the gas phase, and of accumulation of particles in the intermediate state.
Online Manifold Regularization by Dual Ascending Procedure
Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui
2013-01-01
We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transferring manifold regularization from the offline to the online setting in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches...
Access to serviced land for the urban poor: the regularization paradox in Mexico
Directory of Open Access Journals (Sweden)
Alfonso Iracheta Cenecorta
2000-01-01
The insufficient supply of serviced land at affordable prices for the urban poor and the need for regularization of the consequent illegal occupations in urban areas are two of the most important issues on the Latin American land policy agenda. Taking a structural/integrated view of the functioning of the urban land market in Latin America, this paper discusses the nexus between the formal and the informal land markets. It thus exposes the perverse feedback effects that curative regularization policies may have on the process by which irregularity is produced in the first place. The paper suggests that a more effective approach to the provision of serviced land for the poor cannot be resolved within the prevailing (curative) regularization programs. These programs should have the capacity to mobilize the resources that do exist into a comprehensive program that links regularization with fiscal policy, including the exploration of value capture mechanisms.
Attribution In Influence: Relative Power And The Use Of Attribution
2017-12-01
word of mouth to mass media and motion pictures. The JUSPAO messaging effort operated under five specified PSYOP Objectives: 1.) To impress upon the... in railroad cars and stations, in taverns, and by word of mouth.” Because carrying a large number of printed products was cumbersome for an... campaign through social media to support their actions. Deeper investigation of their efforts exposed “troll armies” made up of paid or bot personas
Organizational Attributes, Market Growth, and Product Innovation
Song, Michael; Chen, Yan
2014-01-01
Extensive research has shown that organizational attributes affect product innovation. Extending this literature, this article delimits two general categories of organizational attributes and relates them to product innovation. Organizational attributes can be either control oriented or flexibility
Graded effects of regularity in language revealed by N400 indices of morphological priming.
Kielar, Aneta; Joanisse, Marc F
2010-07-01
Differential electrophysiological effects for regular and irregular linguistic forms have been used to support the theory that grammatical rules are encoded using a dedicated cognitive mechanism. The alternative hypothesis is that language systematicities are encoded probabilistically in a way that does not categorically distinguish rule-like and irregular forms. In the present study, this matter was investigated more closely by focusing specifically on whether the regular-irregular distinction in English past tenses is categorical or graded. We compared the ERP priming effects of regulars (baked-bake), vowel-change irregulars (sang-sing), and "suffixed" irregulars that display a partial regularity (e.g., slept-sleep), as well as forms that are related strictly along formal or semantic dimensions. Participants performed a visual lexical decision task with either visual (Experiment 1) or auditory primes (Experiment 2). Stronger N400 priming effects were observed for regular than vowel-change irregular verbs, whereas suffixed irregulars tended to group with regular verbs. Subsequent analyses decomposed early versus late-going N400 priming and suggested that differences among forms can be attributed to the orthographic similarity of prime and target. Effects of morphological relatedness were observed in the later-going time period; however, we failed to observe true regular-irregular dissociations in either experiment. The results indicate that morphological effects emerge from the interaction of orthographic, phonological, and semantic overlap between words.
Attribution style, theory and empirical findings
Krohn, Charlotte
2017-01-01
Master's thesis in Learning in Complex Systems. Attribution theory is a long-standing and widely discussed theory that addresses individuals' explanations of the causes of events. People attribute events of success and failure individually. Previous studies indicate that performance in sporting events may be improved by changing individuals' attribution style. Article one describes attribution and attribution theory as the state of the art. The article addresses the most important findings within attribution ...
Disease proportions attributable to environment
Directory of Open Access Journals (Sweden)
Vineis Paolo
2007-11-01
Population disease proportions attributable to various causal agents are popular as they present a simplified view of the contribution of each agent to the disease load. However, they are only summary figures that may easily be misinterpreted or over-interpreted, even when the causal link between an exposure and an effect is well established. This commentary discusses several issues surrounding the estimation of attributable proportions, particularly with reference to environmental causes of cancers, and critically examines two recently published papers. These issues encompass potential biases as well as the very definition of environment and of environmental agent. The latter aspect is not just a semantic question but carries implications for the focus of preventive actions, whether centred on the material and social environment or on single individuals.
Mathematicians, Attributional Complexity, and Gender
Stalder, Daniel R.
Given indirect indications in sex role and social psychology research that mathematical-deductive reasoning may negatively relate to social acuity, Study 1 investigated whether mathematicians were less attributionally complex than nonmathematicians. Study 1 administered the Attributional Complexity Scale, a measure of social acuity, to female and male faculty members and graduate students in four Midwestern schools. Attributional complexity (AC) is the ability and motivation to give complex explanations for behavior. Study 1 found a significant interaction between field and gender. Only among women did mathematicians score lower on AC. In addition, an established gender difference in AC (that women score higher than men) was present only among nonmathematicians. Studies 2 and 3 offered some preliminary support for the possibility that it is generally female students who score low on AC who aspire to be mathematicians, and for the underlying view that female students' perceived similarity to mathematicians can influence their vocational choices.
Improvements in GRACE Gravity Fields Using Regularization
Save, H.; Bettadpur, S.; Tapley, B. D.
2008-12-01
The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or
Directory of Open Access Journals (Sweden)
Tomomi Saeki
2006-04-01
This research used a questionnaire survey to explore the relationship between pupils' attribution of their perceived mathematics performance and their affective attitudes towards mathematics learning as promoted by the different teaching methods they were exposed to in their mathematics classes. Both 5th and 8th graders attributed their success in learning mathematics to effort, although support from the teacher and support at home were also perceived as important factors in their success. The 5th and 8th graders overall gave effort-based attributions in the case of failure, while for 5th graders, ability was regarded as being as important as effort in attributing failure in mathematics learning. Pupils who attributed their success in mathematics learning to effort or to support at school and home preferred teacher explanation and reading a textbook as learning strategies, while those attributing it to their ability preferred individual work. Where pupils attributed success to luck, this seemed to have a negative effect on their affective attitudes towards mathematics learning as promoted by different teaching methods, while attributing failure to luck seemed to have a positive effect. Attributing failure to poor teaching seemed to have a negative effect on their perception of teacher explanation. The relationships between pupils' effort- or ability-based attributions of failure and their preference for different teaching methods were not clear. Adopting various teaching methods in mathematics classes would seem to support pupils who have different attribution styles.
Temporal context for authorship attribution
DEFF Research Database (Denmark)
Hansen, Niels Dalum; Lioma, Christina; Larsen, Birger
2014-01-01
A study of temporal aspects of authorship attribution - a task which aims to distinguish automatically between texts written by different authors by measuring textual features. This task is important in a number of areas, including plagiarism detection in secondary education, which we study. Real-world data from Danish secondary school students show 84% prediction accuracy when using all available material and 71.9% prediction accuracy when using only the five most recent writing samples from each student.
Attribution methodologies for mobility impacts
KOTELNIKOVA WEILER, Natalia; LEURENT, Fabien; POULHES, Alexis
2016-01-01
Motorized transportation modes all consume energy and emit local pollutants, chemical and noise. Congestion can also be considered as a local pollution caused by some emitters onto some receivers. Various methods have been designed to evaluate impacts and relate them to emitters and/or receivers. Called "attribution" in environmental evaluation or "imputation" in economic analysis, these schemes' purpose is to identify the causes of impacts and to design management or compensation schemes to...
Zirconium-barrier cladding attributes
International Nuclear Information System (INIS)
Rosenbaum, H.S.; Rand, R.A.; Tucker, R.P.; Cheng, B.; Adamson, R.B.; Davies, J.H.; Armijo, J.S.; Wisner, S.B.
1987-01-01
This metallurgical study of Zr-barrier fuel cladding evaluates the importance of three salient attributes: (1) metallurgical bond between the zirconium liner and the Zircaloy substrate, (2) liner thickness (roughly 10% of the total cladding wall), and (3) softness (purity). The effect that each of these attributes has on the pellet-cladding interaction (PCI) resistance of the Zr-barrier fuel was studied by a combination of analytical model calculations and laboratory experiments using an expanding mandrel technique. Each of the attributes is shown to contribute to PCI resistance. The effect of the zirconium liner on fuel behavior during off-normal events in which steam comes in contact with the zirconium surface was studied experimentally. Simulations of loss-of-coolant accident (LOCA) showed that the behavior of Zr-barrier cladding is virtually indistinguishable from that of conventional Zircaloy cladding. If steam contacts the zirconium liner surface through a cladding perforation and the fuel rod is operated under normal power conditions, the zirconium liner is oxidized more rapidly than is Zircaloy, but the oxidation rate returns to the rate of Zircaloy oxidation when the oxide phase reaches the zirconium-Zircaloy metallurgical bond
Social attribution in anorexia nervosa.
Oldershaw, Anna; DeJong, Hannah; Hambrook, David; Schmidt, Ulrike
2018-05-01
People with anorexia nervosa (AN) report socioemotional difficulties; however, measurement has been criticised for lacking ecological validity, and the state or trait nature of difficulties remains unclear. Participants (n = 122) were recruited across 3 groups: people currently ill with AN (n = 40); people who had recovered (RecAN, n = 18); healthy-control participants (n = 64). Participants completed clinical questionnaires and the Social Attribution Task. The Social Attribution Task involves describing an animation of moving shapes, scored for number of propositions offered, accuracy, and social relevance. Groups were compared cross-sectionally. Those with current AN were assessed pre- and post-psychological treatment. People with AN provided fewer propositions than other groups and fewer salient social attributions than healthy-control participants. Those who had recovered scored intermediately, not significantly different from either group. Following treatment, people with AN demonstrated (nonsignificant) improvements, and no significant between-group differences were observed. Findings suggest difficulties for people with AN in providing a spontaneous social narrative and in identifying social salience. Copyright © 2018 John Wiley & Sons, Ltd and Eating Disorders Association.
Attribution of climate extreme events
Trenberth, Kevin E.; Fasullo, John T.; Shepherd, Theodore G.
2015-08-01
There is a tremendous desire to attribute causes to weather and climate events that is often challenging from a physical standpoint. Headlines attributing an event solely to either human-induced climate change or natural variability can be misleading when both are invariably in play. The conventional attribution framework struggles with dynamically driven extremes because of the small signal-to-noise ratios and often uncertain nature of the forced changes. Here, we suggest that a different framing is desirable, which asks why such extremes unfold the way they do. Specifically, we suggest that it is more useful to regard the extreme circulation regime or weather event as being largely unaffected by climate change, and question whether known changes in the climate system's thermodynamic state affected the impact of the particular event. Some examples briefly illustrated include 'snowmaggedon' in February 2010, superstorm Sandy in October 2012 and supertyphoon Haiyan in November 2013, and, in more detail, the Boulder floods of September 2013, all of which were influenced by high sea surface temperatures that had a discernible human component.
Morphosemantic Attributes of Meetei Proverbs
Directory of Open Access Journals (Sweden)
Lourembam Surjit Singh
2015-06-01
Full Text Available This study proposes to investigate the functions of morphosemantics in Meetei proverbs, particularly the attribution of different meanings to the lexical items in Meetei proverbial verbs. Meetei society has used proverbs in all ages, stages of development, social changes, and cultural diversifications to mark its wisdom of social expertise. Meetei used proverbs as an important aspect of verbal discourse within the socio-cultural and ethno-civilizational contexts in which skills, knowledge, ideas, emotions, and experiences are communicated. The language used in proverbs reflects the Meetei's status of life, food habits, belief systems, philosophy, and cultural and social orientations. At the same time, various meanings are attributed in Meetei proverbs in figurative, witty, pithy, didactic and other forms. The construction of these forms is grammatically insightful, thereby creating space for a whole range of possibilities for investigating the features, functions and structure of the verbal inflectional markers occurring in Meetei proverbial sentences. Keywords: Proverbs, morphosemantics, features of lexical items, attributes of meanings and language
Regular Expression Matching and Operational Semantics
Directory of Open Access Journals (Sweden)
Asiri Rathnayake
2011-08-01
Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
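As a concrete illustration of matching a regular expression "on the fly" without first building a DFA, the sketch below interprets a small regex AST directly using Brzozowski derivatives, a related on-the-fly technique; the paper itself derives continuation-based abstract machines, which this sketch does not reproduce.

```python
# Regex AST as nested tuples: ('nul',) matches nothing, ('eps',) the empty
# string, ('chr', c) one character, plus 'alt', 'cat' and Kleene 'star'.

def nullable(r):
    """Does r accept the empty string?"""
    t = r[0]
    if t in ('eps', 'star'):
        return True
    if t in ('chr', 'nul'):
        return False
    if t == 'cat':
        return nullable(r[1]) and nullable(r[2])
    return nullable(r[1]) or nullable(r[2])  # 'alt'

def deriv(r, c):
    """Brzozowski derivative: the residual expression after consuming c."""
    t = r[0]
    if t in ('eps', 'nul'):
        return ('nul',)
    if t == 'chr':
        return ('eps',) if r[1] == c else ('nul',)
    if t == 'alt':
        return ('alt', deriv(r[1], c), deriv(r[2], c))
    if t == 'star':
        return ('cat', deriv(r[1], c), r)
    d = ('cat', deriv(r[1], c), r[2])  # 'cat' case
    return ('alt', d, deriv(r[2], c)) if nullable(r[1]) else d

def matches(r, s):
    """Match on the fly: differentiate by each character, then test nullability."""
    for ch in s:
        r = deriv(r, ch)
    return nullable(r)
```

For (a|b)* the AST is `('star', ('alt', ('chr', 'a'), ('chr', 'b')))`; repeated differentiation plays the role that state transitions play in the abstract-machine formulation.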
Regularities, Natural Patterns and Laws of Nature
Directory of Open Access Journals (Sweden)
Stathis Psillos
2014-02-01
Full Text Available The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology. Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.
International Nuclear Information System (INIS)
Ingman, Wendy
2014-01-01
The skin and lungs are two tissues that are frequently bombarded with cancer-initiating factors, such as ultraviolet rays from the sun and smoke and pollutants in the air we breathe. Yet breast cancer is the most common type of cancer in Australian women, affecting one in eight before the age of 85. It is more common than skin melanoma and lung cancer. Why, then, does the breast so commonly get cancer when it is not a tissue that is particularly exposed to the environmental agents that increase cancer risk in other major organs? Is there something unique about this tissue that makes it particularly susceptible? The breast undergoes cellular changes over the course of the monthly menstrual cycle, and these changes affect cancer susceptibility. Rising levels of the hormones oestrogen and progesterone occur immediately after the egg is released from the ovary, and these hormones cause the breast cells to divide and change to accommodate further development if pregnancy occurs. If the woman becomes pregnant, the cells in the breast continue to develop and become the milk-producing structures required to feed a newborn baby. But if pregnancy does not occur there is a drop in progesterone, which triggers the death of the newly developed breast cells. This occurs at the same time women have their period. Then the cycle starts again, and continues every month until menopause, unless the woman becomes pregnant.
International Nuclear Information System (INIS)
Richardson, P.J.
1989-01-01
UK NIREX, the body with responsibility for finding an acceptable strategy for deposition of radioactive waste has given the impression throughout its recent public consultation that the problem of nuclear waste is one of public and political acceptability, rather than one of a technical nature. However the results of the consultation process show that it has no mandate from the British public to develop a single, national, deep repository for the burial of radioactive waste. There is considerable opposition to this method of managing radioactive waste and suspicion of the claims by NIREX concerning the supposed integrity and safety of this deep burial option. This report gives substance to those suspicions and details the significant areas of uncertainty in the concept of effective geological containment of hazardous radioactive elements, which remain dangerous for tens of thousands of years. Because the science of geology is essentially retrospective rather than predictive, NIREX's plans for a single, national, deep 'repository' depend heavily upon a wide range of assumptions about the geological and hydrogeological regimes in certain areas of the UK. This report demonstrates that these assumptions are based on a limited understanding of UK geology and on unvalidated and simplistic theoretical models of geological processes, the performance of which can never be directly tested over the long time-scales involved. NIREX's proposals offer no guarantees for the safe and effective containment of radioactivity. They are deeply flawed. This report exposes the faults. (author)
"Polite People" and Military Meekness: the Attributes of Military Ethics
Directory of Open Access Journals (Sweden)
Pavel V. Didov
2016-12-01
Full Text Available The article analyzes the phenomenon of "polite people" from the point of view of the history and theory of ethical thought. It identifies and specifies the ethical principles that form the basis of military courtesy. On the basis of the revealed regularities, the study argues that ethics is impossible without certain power attributes, which constitute its core. In relation to the traditions of Russian warriors, it reveals the key role of Orthodox ethics and military meekness in their formation. The obtained results can serve as material for educational activities aimed at the formation of fighting spirit.
Fractional Regularization Term for Variational Image Registration
Directory of Open Access Journals (Sweden)
Rafael Verdú-Monedero
2009-01-01
Full Text Available Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional-order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, and is applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a gradual transition from a diffusion registration to a curvature registration, which is best suited to some applications and is not possible in the spatial domain. Results with actual 3D images show the validity of this approach.
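The frequency-domain view can be made concrete in one dimension: an order-α derivative multiplies each Fourier coefficient by (iω)^α, so a fractional regularization energy weights the power spectrum by |ω|^(2α). The numpy sketch below is my own illustration of this weighting, not the paper's full variational registration functional.

```python
import numpy as np

def fractional_reg_energy(u, alpha):
    """Evaluate an order-alpha derivative penalty ||D^alpha u||^2 in the
    frequency domain: weight each bin of the power spectrum of u by
    |omega|^(2*alpha). alpha=1 is a diffusion-like (first-derivative)
    penalty, alpha=2 a curvature-like (second-derivative) one; fractional
    alpha interpolates continuously between them."""
    U = np.fft.fft(u)
    omega = 2.0 * np.pi * np.fft.fftfreq(len(u))
    return float(np.sum(np.abs(omega) ** (2.0 * alpha) * np.abs(U) ** 2) / len(u))
```

A constant signal costs nothing for any positive order, while high-frequency oscillations are penalized increasingly hard as the order grows, which is the gradual diffusion-to-curvature transition the abstract describes.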
International Nuclear Information System (INIS)
Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.
2004-01-01
We construct a family of time and angular dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general Lorentzian symmetry. Several generalizations of this regular solution are derived which include a charged S-brane and an additional dilatonic field. (author)
Online Manifold Regularization by Dual Ascending Procedure
Directory of Open Access Journals (Sweden)
Boliang Sun
2013-01-01
Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is the key to transferring manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle the settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves the way for the design and analysis of online manifold regularization algorithms.
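A minimal sketch of the objective being optimized: hinge loss plus an L2 term plus a manifold penalty over a buffer of recent points. The paper ascends the dual via Fenchel conjugates of the hinge functions; the primal stochastic-gradient step below is only an analogue of the same objective, with parameter names of my own choosing.

```python
import numpy as np

def online_mr_step(w, x, y, buffer, lr=0.1, lam=0.01, gam=0.01, sigma=1.0):
    """One online step for a linear model f(x) = w.x under hinge loss plus
    a manifold penalty that pulls predictions on nearby buffered points
    together. y is +1/-1 for labeled examples and 0 for unlabeled ones.
    This is a primal stochastic-gradient analogue; the paper itself works
    by dual ascent."""
    g = lam * w                       # gradient of the L2 term
    if y != 0 and y * w.dot(x) < 1:   # hinge subgradient on labeled points
        g = g - y * x
    for xb in buffer:                 # manifold term over the buffer:
        wgt = np.exp(-np.sum((x - xb) ** 2) / (2 * sigma ** 2))
        # gradient of gam * wgt * (w.x - w.xb)^2 with respect to w
        g = g + 2 * gam * wgt * (w.dot(x) - w.dot(xb)) * (x - xb)
    return w - lr * g
```

Feeding a stream of labeled and unlabeled points through this step, with the buffer holding recent examples, mirrors the buffered online setting the abstract describes.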
An evolution of image source camera attribution approaches.
Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul
2016-05-01
researchers, are also critically analysed and further categorised into four different classes, namely, optical aberrations based, sensor camera fingerprints based, processing statistics based and processing regularities based, to present a classification. Furthermore, this paper aims to investigate the challenging problems, and the proposed strategies of such schemes based on the suggested taxonomy to plot an evolution of the source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Regular transport dynamics produce chaotic travel times.
Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro
2014-06-01
In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.
Regularity of difference equations on Banach spaces
Agarwal, Ravi P; Lizama, Carlos
2014-01-01
This work introduces readers to the topic of maximal regularity for difference equations. The authors systematically present the method of maximal regularity, outlining basic linear difference equations along with relevant results. They address recent advances in the field, as well as basic semigroup and cosine operator theories in the discrete setting. The authors also identify some open problems that readers may wish to take up for further research. This book is intended for graduate students and researchers in the area of difference equations, particularly those with advance knowledge of and interest in functional analysis.
PET regularization by envelope guided conjugate gradients
International Nuclear Information System (INIS)
Kaufman, L.; Neumaier, A.
1996-01-01
The authors propose a new way to iteratively solve large-scale ill-posed problems, and in particular the image reconstruction problem in positron emission tomography, by exploiting the relation between Tikhonov regularization and multiobjective optimization to obtain iterative approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows us to adjust the regularization parameter adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations.
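The L-curve at the heart of the method can be illustrated directly on a small ill-posed problem: sweep the Tikhonov parameter and record residual norm against solution norm. The paper approximates this curve adaptively inside a preconditioned conjugate gradient iteration rather than solving each system exactly; the direct sweep below is just a sketch of the underlying trade-off.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized least squares: minimize ||Ax - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def l_curve(A, b, lams):
    """Residual norm vs. solution norm for a sweep of regularization
    parameters; the 'corner' of this curve is the classical choice of lam."""
    pts = []
    for lam in lams:
        x = tikhonov(A, b, lam)
        pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return pts
```

As lam grows the residual norm rises monotonically while the solution norm falls; plotted on log axes the two competing norms trace the characteristic L shape whose corner balances data fit against regularity.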
Matrix regularization of embedded 4-manifolds
International Nuclear Information System (INIS)
Trzetrzelewski, Maciej
2012-01-01
We consider products of two 2-manifolds such as S²×S², embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N), i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N²×N² matrix representations of the 4-algebra (and, as a byproduct, of the 3-algebra, which makes the regularization of S³ also possible).
Environmental Monitoring Of Microbiological Laboratory: Expose Plate Method
International Nuclear Information System (INIS)
Yahaya Talib; Othman Mahmud; Noraisyah Mohd Yusof; Asmah Mohibat; Muhamad Syazwan Zulkifli
2013-01-01
Monitoring of microorganisms is important and is conducted regularly on the environment of the microbiological laboratory at the Medical Technology Division. Its objective is to ensure that the quality of the working environment is maintained with respect to microbial contamination, and consequently to assure the quality of microbiological tests. This paper presents a report of environmental monitoring since 2007. The test involved bacterial colony counts after the growth media were exposed to air at identified locations. (author)
On a correspondence between regular and non-regular operator monotone functions
DEFF Research Database (Denmark)
Gibilisco, P.; Hansen, Frank; Isola, T.
2009-01-01
We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....
Postdetonation nuclear debris for attribution.
Fahey, A J; Zeissler, C J; Newbury, D E; Davis, J; Lindstrom, R M
2010-11-23
On the morning of July 16, 1945, the first atomic bomb was exploded in New Mexico on the White Sands Proving Ground. The device was a plutonium implosion device similar to the device that destroyed Nagasaki, Japan, on August 9 of that same year. Recently, with the enactment of US public law 111-140, the "Nuclear Forensics and Attribution Act," scientists in the government and academia have been able, in earnest, to consider what type of forensic-style information may be obtained after a nuclear detonation. To conduct a robust attribution process for an exploded device placed by a nonstate actor, forensic analysis must yield information about not only the nuclear material in the device but about other materials that went into its construction. We have performed an investigation of glassed ground debris from the first nuclear test showing correlations among multiple analytical techniques. Surprisingly, there is strong evidence, obtainable only through microanalysis, that secondary materials used in the device can be identified and positively associated with the nuclear material.
International Nuclear Information System (INIS)
Bliss, Mary; Jordan, David V.; Barnett, Debra S.; Redding, Rebecca L.; Pearce, Stephen K.
2007-01-01
Euratom performs safeguards monitoring of fresh MOX fuel for domestic power production in the European Union. Video cameras monitor the reactor storage ponds. If video surveillance is lost for a certain amount of time, a measurement is required to verify that no fuel was diverted. The attribute measurement to verify the continued presence of MOX fuel is neutron emission. Ideally this measurement would be made without moving or handling the fuel rod assembly. A prototype attribute measurement system was made using scintillating neutron-sensitive glass waveguides developed by Pacific Northwest National Laboratory. Short lengths (5-20 cm) of the neutron-sensitive fiber were mechanically spliced to 15 m lengths of commercial high numerical aperture fiber optic cable (Ceramoptec Optran Ultra 0.44). The light detector is a Hamamatsu R7400P photomultiplier tube. An electronics package was built to use the sensors with a GBS Elektronik MCA-166 multichannel analyzer and user interface. The MCA-166 is the system most commonly used by Euratom inspectors. It can also be run from a laptop computer using Maestro (Ortec) or other software. An MCNP model was made to compare to measurements made with several neutron sources, including NIST-traceable ²⁵²Cf.
Regularity and irreversibility of weekly travel behavior
Kitamura, R.; van der Hoorn, A.I.J.M.
1987-01-01
Dynamic characteristics of travel behavior are analyzed in this paper using weekly travel diaries from two waves of panel surveys conducted six months apart. An analysis of activity engagement indicates the presence of significant regularity in weekly activity participation between the two waves.
Regular and context-free nominal traces
DEFF Research Database (Denmark)
Degano, Pierpaolo; Ferrari, Gian-Luigi; Mezzetti, Gianluca
2017-01-01
Two kinds of automata are presented, for recognising new classes of regular and context-free nominal languages. We compare their expressive power with analogous proposals in the literature, showing that they express novel classes of languages. Although many properties of classical languages hold ...
Faster 2-regular information-set decoding
Bernstein, D.J.; Lange, T.; Peters, C.P.; Schwabe, P.; Chee, Y.M.
2011-01-01
Fix positive integers B and w. Let C be a linear code over F₂ of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and
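The stated problem can be made concrete with a brute-force sketch over a parity-check matrix; this is only to illustrate the definition, since actual attacks (and the paper's contribution) use information-set decoding rather than enumeration.

```python
import itertools
import numpy as np

def two_regular_codeword(H, B, w):
    """Brute-force the 2-regular-decoding problem: given a parity-check
    matrix H over F2 for a code of length B*w, find a nonzero codeword
    whose w length-B blocks each have Hamming weight 0 or 2."""
    # per-block options: the empty block, or any of the C(B, 2) weight-2 patterns
    opts = [()] + list(itertools.combinations(range(B), 2))
    for choice in itertools.product(opts, repeat=w):
        c = np.zeros(B * w, dtype=int)
        for blk, positions in enumerate(choice):
            for p in positions:
                c[blk * B + p] = 1
        if c.any() and not ((H @ c) % 2).any():  # nonzero and in the kernel of H
            return c
    return None
```

The search space is (1 + C(B,2))^w, which explodes quickly; that exponential gap is exactly what information-set decoding narrows.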
Complexity in union-free regular languages
Czech Academy of Sciences Publication Activity Database
Jirásková, G.; Masopust, Tomáš
2011-01-01
Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords : Union-free regular language * one-cycle-free-path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html
Regular Gleason Measures and Generalized Effect Algebras
Dvurečenskij, Anatolij; Janda, Jiří
2015-12-01
We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the frame of generalized effect algebras.
Regularization of finite temperature string theories
International Nuclear Information System (INIS)
Leblanc, Y.; Knecht, M.; Wallet, J.C.
1990-01-01
The tachyonic divergences occurring in the free energy of various string theories at finite temperature are eliminated through the use of regularization schemes and analytic continuations. For closed strings, we obtain finite expressions which, however, develop an imaginary part above the Hagedorn temperature, whereas open string theories are still plagued with dilatonic divergences. (orig.)
A Sim(2) invariant dimensional regularization
Directory of Open Access Journals (Sweden)
J. Alfaro
2017-09-01
Full Text Available We introduce a Sim(2) invariant dimensional regularization of loop integrals. Then we can compute the one loop quantum corrections to the photon self energy, electron self energy and vertex in the Electrodynamics sector of the Very Special Relativity Standard Model (VSRSM).
Continuum regularized Yang-Mills theory
International Nuclear Information System (INIS)
Sadun, L.A.
1987-01-01
Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions.
Gravitational lensing by a regular black hole
International Nuclear Information System (INIS)
Eiroa, Ernesto F; Sendra, Carlos M
2011-01-01
In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.
Analytic stochastic regularization and gauge invariance
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1986-05-01
A proof that analytic stochastic regularization breaks gauge invariance is presented. This is done by an explicit one-loop calculation of the vacuum polarization tensor in scalar electrodynamics, which turns out not to be transversal. The counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization are also analysed. (Author)
Annotation of regular polysemy and underspecification
DEFF Research Database (Denmark)
Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria
2013-01-01
We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...
Stabilization, pole placement, and regular implementability
Belur, MN; Trentelman, HL
In this paper, we study control by interconnection of linear differential systems. We give necessary and sufficient conditions for regular implementability of a given linear differential system. We formulate the problems of stabilization and pole placement as problems of finding a suitable,
12 CFR 725.3 - Regular membership.
2010-01-01
... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit....5(b) of this part, and forwarding with its completed application funds equal to one-half of this... 1, 1979, is not required to forward these funds to the Facility until October 1, 1979. (3...
Supervised scale-regularized linear convolutionary filters
DEFF Research Database (Denmark)
Loog, Marco; Lauze, Francois Bernard
2017-01-01
also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...
On regular riesz operators | Raubenheimer | Quaestiones ...
African Journals Online (AJOL)
The r-asymptotically quasi finite rank operators on Banach lattices are examples of regular Riesz operators. We characterise Riesz elements in a subalgebra of a Banach algebra in terms of Riesz elements in the Banach algebra. This enables us to characterise r-asymptotically quasi finite rank operators in terms of adjoint ...
Regularized Discriminant Analysis: A Large Dimensional Study
Yang, Xiaoke
2018-04-28
In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis is assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for the application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that only depends on the data statistical parameters and dimensions. This result not only reveals some mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the true statistics are unknown. We benchmark the performance of our proposed consistent estimator against the classical estimator on synthetic data. The observations demonstrate that the general estimator outperforms others in terms of mean squared error (MSE).
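For reference, a toy RDA classifier of the kind being analyzed: per-class Gaussian discriminants whose covariances are shrunk toward the identity. The shrinkage form and the parameter name gamma are my own minimal choices; the thesis covers a richer family of regularization options.

```python
import numpy as np

def rda_fit(X, y, gamma=0.5):
    """Fit per-class Gaussians with regularized covariances
    Sigma_k = (1 - gamma) * S_k + gamma * I, shrinking each sample
    covariance toward the identity (gamma=0 is plain QDA; gamma=1
    behaves like a nearest-mean rule)."""
    params = {}
    for k in np.unique(y):
        Xk = X[y == k]
        mu = Xk.mean(axis=0)
        Sigma = (1 - gamma) * np.cov(Xk, rowvar=False) + gamma * np.eye(X.shape[1])
        # log prior minus half the log-determinant, shared by every score of class k
        const = np.log(len(Xk) / len(X)) - 0.5 * np.linalg.slogdet(Sigma)[1]
        params[k] = (mu, np.linalg.inv(Sigma), const)
    return params

def rda_predict(params, x):
    """Assign x to the class with the largest Gaussian discriminant score."""
    def score(k):
        mu, P, const = params[k]
        d = x - mu
        return const - 0.5 * d @ P @ d
    return max(params, key=score)
```

The double asymptotic analysis in the thesis concerns exactly how the error of such a rule behaves as the dimension of X and the number of training rows grow proportionally.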
Bit-coded regular expression parsing
DEFF Research Database (Denmark)
Nielsen, Lasse; Henglein, Fritz
2011-01-01
the DFA-based parsing algorithm due to Dub ´e and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli’s greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...
Tetravalent one-regular graphs of order 4p²
DEFF Research Database (Denmark)
Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan
2014-01-01
A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p², where p is a prime, are classified.
Source attribution of tropospheric ozone
Butler, T. M.
2015-12-01
Tropospheric ozone is a harmful pollutant with adverse effects on human health and ecosystems. As well as these effects, tropospheric ozone is also a powerful greenhouse gas, with an anthropogenic radiative forcing one quarter of that of CO2. Along with methane and atmospheric aerosol, tropospheric ozone belongs to the so-called Short-Lived Climate-forcing Pollutants, or SLCP. Recent work has shown that efforts to reduce concentrations of SLCP in the atmosphere have the potential to slow the rate of near-term climate change, while simultaneously improving public health and reducing crop losses. Unlike many other SLCP, tropospheric ozone is not directly emitted, but is instead influenced by two distinct sources: transport of air from the ozone-rich stratosphere; and photochemical production in the troposphere from the emitted precursors NOx (oxides of nitrogen), CO (carbon monoxide), and VOC (volatile organic compounds, including methane). Better understanding of the relationship between ozone production and the emissions of its precursors is essential for the development of targeted emission reduction strategies. Several modelling methods have been employed to relate the production of tropospheric ozone to emissions of its precursors; emissions perturbation, tagging, and adjoint sensitivity methods all deliver complementary information about modelled ozone production. Most studies using tagging methods have focused on attribution of tropospheric ozone production to emissions of NOx, even though perturbation methods have suggested that tropospheric ozone is also sensitive to VOC, particularly methane. In this study we describe the implementation into a global chemistry-climate model of a scheme for tagging emissions of NOx and VOC with an arbitrary number of labels, which are followed through the chemical reactions of tropospheric ozone production in order to perform attribution of tropospheric ozone to its emitted precursors. Attribution is performed to both
The critical attributes of leadership.
Campbell, C A
1992-11-01
The final decade of this century is a period of unprecedented change that by all indicators will continue unabated well into the next millennium. This article explored some elemental and immutable truths about leadership, management, communication, and negotiation essential to organizational success, particularly during periods of accelerated change. The case is made for a level of integrity, ethical conduct, and self-control to match the technical competence essential for managerial success in a technologically intensive work environment. These attributes and skills coupled with a widening scope of institutional vision are critical to sustained leadership and growth in an unstable world. Those without these abilities will be diminished in their capacity to communicate or negotiate. Hence, they will be thwarted or powerless to create task attraction, to effect change, or to promote excellence. These lessons are applicable to the dynamic changes occurring within the health care industrial complex, including health information management.
Frequency of marriage and live birth among survivors prenatally exposed to the atomic bomb
International Nuclear Information System (INIS)
Blot, W.J.; Shimizu, Y.; Kato, H.; Miller, R.W.
1975-01-01
Frequency of marriage and birth as of January 1973 was determined for persons exposed in utero to the atomic bombs in 1945 and for controls. The marriage rate was lower in persons heavily exposed in utero than in the non-exposed or lightly exposed. This difference is attributed partly to the lesser marriageability of persons with mental retardation who are significantly more numerous among the heavily exposed, and partly to unmeasured variables, possibly including social discrimination against survivors of the atomic bomb. No consistent relation was observed between radiation exposure and three reproductive indices: childless marriages, number of births, and interval between marriage and first birth
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
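The regularized solve at the heart of such schemes is the standard Tikhonov normal-equations step; a generic numerical sketch follows (the matrix and damping value are illustrative stand-ins, not the GRACE operator):

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    # Minimize ||A x - b||^2 + alpha^2 ||x||^2 via the regularized
    # normal equations (A^T A + alpha^2 I) x = A^T b.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

# Ill-conditioned demo: an 8x8 Hilbert matrix amplifies tiny noise
# catastrophically unless the inversion is damped.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(1)
b = A @ x_true + 1e-6 * rng.normal(size=n)
x_naive = np.linalg.solve(A, b)      # dominated by amplified noise
x_reg = tikhonov_solve(A, b, 1e-4)   # stays close to x_true
```

The damping parameter trades bias for noise suppression; in practice it is chosen by criteria such as the L-curve or by matching the known noise level, as in the two-step procedure described above.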
van der Aa, Jeroen; Honing, Henkjan; ten Cate, Carel
2015-06-01
Perceiving temporal regularity in an auditory stimulus is considered one of the basic features of musicality. Here we examine whether zebra finches can detect regularity in an isochronous stimulus. Using a go/no go paradigm we show that zebra finches are able to distinguish between an isochronous and an irregular stimulus. However, when the tempo of the isochronous stimulus is changed, it is no longer treated as similar to the training stimulus. Training with three isochronous and three irregular stimuli did not result in improvement of the generalization. In contrast, humans, exposed to the same stimuli, readily generalized across tempo changes. Our results suggest that zebra finches distinguish the different stimuli by learning specific local temporal features of each individual stimulus rather than attending to the global structure of the stimuli, i.e., to the temporal regularity. Copyright © 2015 Elsevier B.V. All rights reserved.
Extreme values, regular variation and point processes
Resnick, Sidney I
1987-01-01
Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...
Stream Processing Using Grammars and Regular Expressions
DEFF Research Database (Denmark)
Rasmussen, Ulrik Terp
disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs...... as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present...... Kleenex, a language for expressing high-performance streaming string processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...
Describing chaotic attractors: Regular and perpetual points
Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz
2018-03-01
We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have recently been introduced into theoretical investigations, is thoroughly discussed and extended to new types of models. We analyze the correlation between regular and perpetual points, as well as their relation with phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points to locate attractors is indicated, along with its potential cause. The location of chaotic trajectories and of the sets of considered points is investigated, and the stability of the systems is studied. A statistical analysis of observing the desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.
Chaos regularization of quantum tunneling rates
International Nuclear Information System (INIS)
Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward
2011-01-01
Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics, the tunneling rates fluctuate greatly with the eigenenergies of the states, sometimes by over two orders of magnitude. Contrarily, shapes that lead to completely chaotic trajectories yield tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
Contour Propagation With Riemannian Elasticity Regularization
DEFF Research Database (Denmark)
Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.
2011-01-01
Purpose/Objective(s): Adaptive techniques allow for correction of spatial changes during the time course of the fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image...... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations, and volumetric changes, was used. Regularization parameters were defined...... on the original delineation and tissue deformation in the time course between scans form a better starting point than rigid propagation. There was no significant difference of locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...
Thin accretion disk around regular black hole
Directory of Open Access Journals (Sweden)
QIU Tianqi
2014-08-01
Full Text Available Penrose's cosmic censorship conjecture says that naked singularities do not exist in nature. So it seems reasonable to further conjecture that not even a singularity exists in nature. In this paper, a regular black hole without a singularity is studied in detail, especially its thin accretion disk, energy flux, radiation temperature and accretion efficiency. It is found that the interaction of the regular black hole is stronger than that of the Schwarzschild black hole. Furthermore, the thin accretion disk loses energy more efficiently as the mass of the black hole decreases. These particular properties may be used to distinguish between the two types of black holes.
Convex nonnegative matrix factorization with manifold regularization.
Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong
2015-03-01
Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
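To make the graph-regularized term concrete, here is a minimal sketch of multiplicative updates for plain graph-regularized NMF. Note the hedge: the paper's GCNMF is the convex variant that also handles mixed-sign data; this sketch assumes a nonnegative matrix and uses standard GNMF-style updates.

```python
import numpy as np

def gnmf(X, W, k, lam=0.05, n_iter=300, seed=0):
    # Graph-regularized NMF: minimize ||X - U V^T||_F^2 + lam tr(V^T L V)
    # with U, V >= 0 and L = D - W the Laplacian of the sample graph W.
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.random((n, k))
    V = rng.random((m, k))
    D = np.diag(W.sum(axis=1))
    eps = 1e-10  # avoid division by zero in the multiplicative updates
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V

# Two-cluster toy data: samples in the same cluster are graph neighbors.
rng = np.random.default_rng(1)
B = rng.random((20, 2))
C = np.zeros((2, 30)); C[0, :15] = 1.0; C[1, 15:] = 1.0
X = B @ C + 0.01 * rng.random((20, 30))
W = np.zeros((30, 30))
W[:15, :15] = 1.0; W[15:, 15:] = 1.0
np.fill_diagonal(W, 0.0)
U, V = gnmf(X, W, k=2)
rel_err = float(np.linalg.norm(X - U @ V.T) / np.linalg.norm(X))
```

The graph term pulls the coefficient rows of neighboring samples together, which is what lets the factorization respect the manifold structure rather than only the reconstruction error.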
A short proof of increased parabolic regularity
Directory of Open Access Journals (Sweden)
Stephen Pankavich
2015-08-01
Full Text Available We present a short proof of the increased regularity obtained by solutions to uniformly parabolic partial differential equations. Though this setting is fairly introductory, our new method of proof, which uses a priori estimates and an inductive method, can be extended to prove analogous results for problems with time-dependent coefficients, advection-diffusion or reaction diffusion equations, and nonlinear PDEs even when other tools, such as semigroup methods or the use of explicit fundamental solutions, are unavailable.
Regular black hole in three dimensions
Myung, Yun Soo; Yoon, Myungseok
2008-01-01
We find a new black hole in three-dimensional anti-de Sitter space by introducing an anisotropic perfect fluid inspired by the noncommutative black hole. This is a regular black hole with two horizons. We compare the thermodynamics of this black hole with that of the non-rotating BTZ black hole. The first law of thermodynamics is not compatible with the Bekenstein-Hawking entropy.
Sparse regularization for force identification using dictionaries
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
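The l1-regularized step can be sketched with ISTA, the simplest member of the algorithm family that SpaRSA accelerates. The transfer function and force profile below are toy stand-ins, not the paper's cantilever-plate data.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    # Iterative soft-thresholding for min 0.5 ||A x - b||^2 + lam ||x||_1.
    # (The paper uses SpaRSA; ISTA is the simplest method of this kind.)
    L = np.linalg.norm(A, 2)**2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

# Toy impact-force deconvolution: A convolves with a decaying-oscillation
# impulse response (hypothetical), the true force is two isolated impacts.
n = 100
t = np.arange(n)
h = np.exp(-t / 10.0) * np.cos(0.5 * t)
A = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])
f_true = np.zeros(n); f_true[20] = 1.0; f_true[60] = -0.7
rng = np.random.default_rng(0)
b = A @ f_true + 0.01 * rng.normal(size=n)
f_hat = ista(A, b, lam=0.02)
```

The recovered vector concentrates on the two impact instants, whereas an l2 (Tikhonov) solution of the same system smears the impacts over many samples, which is the failure mode the abstract describes.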
Analytic stochastic regularization and gauge theories
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1987-04-01
We prove that analytic stochastic regularization breaks gauge invariance. This is done by an explicit one-loop calculation of the two-, three- and four-point vertex functions of the gluon field in scalar chromodynamics, which turn out not to be gauge invariant. We analyse the counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization. (author) [pt
Preconditioners for regularized saddle point matrices
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2011-01-01
Roč. 19, č. 2 (2011), s. 91-112 ISSN 1570-2820 Institutional research plan: CEZ:AV0Z30860518 Keywords : saddle point matrices * preconditioning * regularization * eigenvalue clustering Subject RIV: BA - General Mathematics Impact factor: 0.533, year: 2011 http://www.degruyter.com/view/j/jnma.2011.19.issue-2/jnum.2011.005/jnum.2011.005.xml
Analytic stochastic regularization: gauge and supersymmetry theories
International Nuclear Information System (INIS)
Abdalla, M.C.B.
1988-01-01
Analytic stochastic regularization for gauge and supersymmetric theories is considered. Gauge invariance in spinor and scalar QCD is verified to break down by an explicit one-loop computation of the two-, three- and four-point vertex functions of the gluon field. As a result, non-gauge-invariant counterterms must be added. However, in the supersymmetric multiplets there is a cancellation rendering the counterterms gauge invariant. The calculation is considered at one-loop order. (author) [pt
Regularized forecasting of chaotic dynamical systems
International Nuclear Information System (INIS)
Bollt, Erik M.
2017-01-01
While local models of dynamical systems have been highly successful at using extensive observations, even of a chaotic dynamical system, to produce useful forecasts, a typical problem arises. With the k-nearest neighbors (kNN) method, local observations occur due to recurrences in a chaotic system, and this allows local models to be built by regression to low-dimensional polynomial approximations of the underlying system, estimating a Taylor series. This has been a popular approach, particularly in the context of scalar data observations represented by time-delay embedding methods. However, such local models can allow for spatial discontinuities of forecasts when considered globally, meaning jumps in predictions, because the collected near neighbors vary from point to point. The source of these discontinuities is that the set of near neighbors varies discontinuously with respect to the position of the sample point, and so therefore does the model built from those neighbors. It is possible to utilize local information inferred from near neighbors as usual while at the same time imposing a degree of regularity on a global scale. We present here a new global perspective extending the general local modeling concept. We then show how this perspective allows us to impose presumed prior regularity on the model through Tikhonov regularity theory, since this classic perspective on optimization in ill-posed problems naturally balances fitting an objective with some prior assumed form of the result, such as continuity or derivative regularity. This all reduces to matrix manipulations, which we demonstrate on a simple data set, with the implication that it may find much broader application.
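The combination of neighbor-based local models with Tikhonov regularity can be sketched as a ridge-regularized local linear forecaster. This is a simplification: the abstract's construction imposes regularity globally across local models, while the sketch below only damps each local fit.

```python
import numpy as np

def local_ridge_forecast(history, query, k=20, alpha=1e-3, dim=3):
    # Ridge-regularized local linear prediction on delay vectors:
    # fit a damped affine map on the k nearest delay-embedded states.
    X = np.array([history[i:i + dim] for i in range(len(history) - dim)])
    Y = history[dim:]                          # one-step successors
    nb = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    Xk = np.hstack([X[nb], np.ones((k, 1))])   # affine local model
    w = np.linalg.solve(Xk.T @ Xk + alpha * np.eye(dim + 1), Xk.T @ Y[nb])
    return float(np.append(query, 1.0) @ w)

# Demo on the chaotic logistic map x_{n+1} = 4 x_n (1 - x_n).
x = [0.2]
for _ in range(2000):
    x.append(4.0 * x[-1] * (1.0 - x[-1]))
x = np.array(x)
pred = local_ridge_forecast(x[:1500], x[1500:1503])  # forecast x[1503]
```

The ridge term keeps the local solve well posed even though the neighbors are tightly clustered (a nearly rank-deficient design), which is the same role the Tikhonov term plays in the global construction.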
Minimal length uncertainty relation and ultraviolet regularization
Kempf, Achim; Mangano, Gianpiero
1997-06-01
Studies in string theory and quantum gravity suggest the existence of a finite lower limit Δx0 to the possible resolution of distances, at the latest on the scale of the Planck length of 10-35 m. Within the framework of the Euclidean path integral we explicitly show ultraviolet regularization in field theory through this short distance structure. Both rotation and translation invariance can be preserved. An example is studied in detail.
Regularity and chaos in cavity QED
International Nuclear Information System (INIS)
Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G
2017-01-01
The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (PR) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)
Solution path for manifold regularized semisupervised classification.
Wang, Gang; Wang, Fei; Chen, Tao; Yeung, Dit-Yan; Lochovsky, Frederick H
2012-04-01
Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time consuming to obtain since they require substantial human labeling efforts. On the other hand, unlabeled data are often relatively easy to collect. Semisupervised learning addresses this problem by using large quantities of unlabeled data with labeled data to build better learning algorithms. In this paper, we use the manifold regularization approach to formulate the semisupervised learning problem where a regularization framework which balances a tradeoff between loss and penalty is established. We investigate different implementations of the loss function and identify the methods which have the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that can fit the entire path of solutions for every value of the hyperparameter. Its computational complexity after preprocessing is quadratic only in the number of labeled examples rather than the total number of labeled and unlabeled examples.
Regularizations: different recipes for identical situations
International Nuclear Information System (INIS)
Gambin, E.; Lobo, C.O.; Battistel, O.A.
2004-03-01
We present a discussion where the choice of the regularization procedure and the routing for the internal lines momenta are put at the same level of arbitrariness in the analysis of Ward identities involving simple and well-known problems in QFT. They are the complex self-interacting scalar field and two simple models where the SVV and AVV processes are pertinent. We show that, in all these problems, the conditions for the preservation of symmetry relations are put in terms of the same combination of divergent Feynman integrals, which are evaluated in the context of a very general calculational strategy concerning the manipulations and calculations involving divergences. Within the adopted strategy, all the arbitrariness intrinsic to the problem is still maintained in the final results and, consequently, a perfect map can be obtained with the corresponding results of the traditional regularization techniques. We show that, when we require a universal interpretation for the arbitrariness involved, in order to get consistency with all stated physical constraints, a strong condition is imposed for regularizations which automatically eliminates the ambiguities associated to the routing of the internal lines momenta of loops. The conclusion is clean and sound: the association between ambiguities and unavoidable symmetry violations in Ward identities cannot be maintained if a unique recipe is required for identical situations in the evaluation of divergent physical amplitudes. (author)
Influence of whitening and regular dentifrices on orthodontic clear ligature color stability.
Oliveira, Adauê S; Kaizer, Marina R; Salgado, Vinícius E; Soldati, Dener C; Silva, Roberta C; Moraes, Rafael R
2015-01-01
This study evaluated the effect of brushing orthodontic clear ligatures with a whitening dentifrice containing a blue pigment (Close Up White Now, Unilever, London, UK) on their color stability when exposed to a staining agent. Ligatures from 3M Unitek (Monrovia, CA, USA) and Morelli (Sorocaba, SP, Brazil) were tested. Baseline color measurements were performed and nonstained groups (control) were stored in distilled water whereas test groups were exposed for 1 hour daily to red wine. Specimens were brushed daily using regular or whitening dentifrice. Color measurements were repeated after 7, 14, 21, and 28 days using a spectrophotometer based on the CIE L*a*b* system. Decreased luminosity (CIE L*), increased red discoloration (CIE a* axis), and increased yellow discoloration (CIE b* axis) were generally observed for ligatures exposed to the staining agent. Color variation was generally lower in specimens brushed with regular dentifrice, but ligatures brushed with whitening dentifrice were generally less red and less yellow than those brushed with regular dentifrice. The whitening dentifrice led to a blue discoloration trend, with visually detectable differences particularly apparent according to storage condition and ligature brand. The whitening dentifrice containing blue pigment did not improve the ligature color stability, but it decreased yellow discoloration and increased blue coloration. The use of a whitening dentifrice containing blue pigment during orthodontic treatment might decrease the yellow discoloration of elastic ligatures. © 2015 Wiley Periodicals, Inc.
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
Attributes and descriptors for building performance evaluation
Directory of Open Access Journals (Sweden)
S. Gopikrishnan
2017-12-01
In order to obtain the right feedback on levels of satisfaction with respect to these attributes, there is a need for appropriate descriptors for incorporation in a survey instrument. This paper identifies attributes that indicate building performance and provides a simple description of these attributes, based on which items can be generated for a questionnaire. Such items can enable any user/occupant to easily understand the characteristics of these attributes and offer objective feedback during a questionnaire survey.
Sparsity regularization for parameter identification problems
International Nuclear Information System (INIS)
Jin, Bangti; Maass, Peter
2012-01-01
The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems over the last years, and Tikhonov functionals with ℓp-penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓp sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some
Attribution Theory and Crisis Intervention Therapy.
Skilbeck, William M.
It was proposed that existing therapeutic procedures may influence attributions about emotional states. Therefore an attributional analysis of crisis intervention, a model of community-based, short-term consultation, was presented. This analysis suggested that crisis intervention provides attributionally-relevant information about both the source…
Learning Sparse Visual Representations with Leaky Capped Norm Regularizers
Wangni, Jianqiao; Lin, Dahua
2017-01-01
Sparsity-inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly than those above, and therefore imposes strong sparsity and...
Temporal regularity of the environment drives time perception
van Rijn, H; Rhodes, D; Di Luca, M
2016-01-01
It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...
Directory of Open Access Journals (Sweden)
Dustin Kai Yan Lau
2014-03-01
Full Text Available Background Unlike alphabetic languages, Chinese uses a logographic script. However, the phonetic radical of many characters has the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions, resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of their phonetic radical. Conversely, if the pronunciation of a character does not follow that of its phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular, 60 irregular, and 60 pseudo-characters (with at least 75% name agreement in Chinese) were matched by initial phoneme, number of strokes and family size. Additionally, regular and irregular characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1 or between subject
An investigation of the general regularity of size dependence of reaction kinetics of nanoparticles
International Nuclear Information System (INIS)
Cui, Zixiang; Duan, Huijuan; Xue, Yongqiang; Li, Ping
2015-01-01
In the processes of preparation and application of nanomaterials, chemical reactions of nanoparticles are often involved, and the size of the nanoparticles has a dramatic influence on the reaction kinetics. Nevertheless, there are many conflicts in the reported regularities of the size dependence of reaction kinetic parameters, and these conflicts have not been explained so far. In this paper, taking the reaction of nano-ZnO (average diameter from 20.96 to 53.31 nm) with acrylic acid solution as a model system, the influence of particle size on the kinetic parameters was investigated. The observed regularities are consistent with those in most of the literature but inconsistent with those in a few studies, and the reasons for these conflicts are interpreted. They can be attributed to two factors: one is improper data processing with too few data points, and the other is the difference between solid particles and porous particles. A general regularity of the size dependence of reaction kinetics for solid particles was obtained. The regularity shows that as the size of the nanoparticles decreases, the rate constant and the reaction order increase, while the apparent activation energy and the pre-exponential factor decrease; and the logarithm of the rate constant, the logarithm of the pre-exponential factor, and the apparent activation energy each depend linearly on the reciprocal of the particle size
Evolutionary Influences on Attribution and Affect
Directory of Open Access Journals (Sweden)
Jennie Brown
2017-12-01
Full Text Available Evolutionary theory was applied to Reeder and Brewer's schematic theory and Trafimow's affect theory to extend this area of research with five new predictions involving affect and ability attributions, comparing morality and ability attributions, gender differences, and reaction times for affect and attribution ratings. The design included a 2 (Trait Dimension Type: HR, PR × 2 (Behavior Type: morality, ability × 2 (Valence: positive, negative × 2 (Replication: original, replication × 2 (Sex: female or male actor × 2 (Gender: female or male participant × 2 (Order: attribution portion first, affect portion first mixed design. All factors were within participants except the order and participant gender. Participants were presented with 32 different scenarios in which an actor engaged in a concrete behavior after which they made attributions and rated their affect in response to the behavior. Reaction times were measured during attribution and affect ratings. In general, the findings from the experiment supported the new predictions. Affect was related to attributions for both morality and ability related behaviors. Morality related behaviors received more extreme attribution and affect ratings than ability related behaviors. Female actors received stronger attribution and affect ratings for diagnostic morality behaviors compared to male actors. Male and female actors received similar attribution and affect ratings for diagnostic ability behaviors. Diagnostic behaviors were associated with lower reaction times than non-diagnostic behaviors. These findings demonstrate the utility of evolutionary theory in creating new hypotheses and empirical findings in the domain of attribution.
Age-related patterns of drug use initiation among polydrug using regular psychostimulant users.
Darke, Shane; Kaye, Sharlene; Torok, Michelle
2012-09-01
To determine age-related patterns of drug use initiation, drug sequencing and treatment entry among regular psychostimulant users. Cross-sectional study of 269 regular psychostimulant users, administered a structured interview examining onset of use for major licit and illicit drugs. The mean age at first intoxication was not associated with age or gender. In contrast, younger age was associated with earlier ages of onset for all of the illicit drug classes. Each additional year of age was associated with a 4 month increase in onset age for methamphetamine, and 3 months for heroin. By the age of 17, those born prior to 1961 had, on average, used only tobacco and alcohol, whereas those born between 1986 and 1990 had used nine different drug classes. The period between initial use and the transition to regular use, however, was stable. Age was also negatively correlated with both age at initial injection and regular injecting. Onset sequences, however, remained stable. Consistent with the age-related patterns of drug use, each additional year of age was associated with a 0.47 year increase in the age at first treatment. While the age at first intoxication appeared stable, the trajectory through illicit drug use was substantially truncated. The data indicate that, at least among those who progress to regular illicit drug use, younger users are likely to be exposed to far broader polydrug use in their teens than has previously been the case. © 2012 Australasian Professional Society on Alcohol and other Drugs.
Convergence and fluctuations of Regularized Tyler estimators
Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim
2015-01-01
This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and, second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem posed by the use of RTEs in practice is the question of setting the regularization parameter p. While a high value of p is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results exist concerning the regime of n going to infinity with N fixed, even though the investigation of this assumption has usually predated the analysis of the more difficult case of N and n large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the parameter.
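The RTE itself is defined as the fixed point of a simple iteration: the sample outer products are reweighted by the Mahalanobis-like quadratic form and shrunk toward the identity by the regularization parameter (written `rho` below; the abstract calls it p). The following pure-Python sketch runs that fixed-point iteration for N = 2; the sample vectors and the value of rho are illustrative assumptions, and none of the paper's asymptotic analysis is reproduced.

```python
def rte(samples, rho, iters=100):
    # Regularized Tyler estimator, N = 2: iterate
    #   S <- (1 - rho) * (N/n) * sum_i x_i x_i^T / (x_i^T S^{-1} x_i) + rho * I
    # starting from the identity; the rho * I term keeps S well conditioned.
    N, n = 2, len(samples)
    S = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(iters):
        det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
        Sinv = [[S[1][1] / det, -S[0][1] / det],
                [-S[1][0] / det, S[0][0] / det]]
        T = [[0.0, 0.0], [0.0, 0.0]]
        for x in samples:
            q = (x[0] * (Sinv[0][0] * x[0] + Sinv[0][1] * x[1])
                 + x[1] * (Sinv[1][0] * x[0] + Sinv[1][1] * x[1]))  # x^T S^{-1} x
            for a in range(2):
                for b in range(2):
                    T[a][b] += x[a] * x[b] / q
        S = [[(1.0 - rho) * (N / n) * T[a][b] + rho * (1.0 if a == b else 0.0)
              for b in range(2)] for a in range(2)]
    return S
```

The iterate stays symmetric positive definite throughout (a sum of PSD terms plus rho times the identity), and for rho in (0, 1] the iteration converges to the unique RTE.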
The use of regularization in inferential measurements
International Nuclear Information System (INIS)
Hines, J. Wesley; Gribok, Andrei V.; Attieh, Ibrahim; Uhrig, Robert E.
1999-01-01
Inferential sensing is the prediction of a plant variable through the use of correlated plant variables. A correct prediction of the variable can be used to monitor sensors for drift or other failures, making periodic instrument calibrations unnecessary. This move from periodic to condition-based maintenance can reduce costs and increase the reliability of the instrument. Having accurate, reliable measurements is important for signals that may impact safety or profitability. This paper investigates how collinearity adversely affects inferential sensing by making the results inconsistent and unrepeatable, and presents regularization as a potential solution (author)
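Ridge (Tikhonov) regularization is the standard remedy for the collinearity problem described above: it replaces the normal equations XᵀXw = Xᵀy with (XᵀX + λI)w = Xᵀy, which is well conditioned even when predictor columns are nearly dependent. A minimal pure-Python sketch; the toy system in the usage example is an illustrative assumption, not the plant data of the paper.

```python
def ridge(X, y, lam):
    # Solve (X^T X + lam * I) w = X^T y by Gaussian elimination with pivoting.
    m, n = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(m)) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(m)) for i in range(n)]
    for i in range(n):                       # forward elimination
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):           # back substitution
        w[i] = (b[i] - sum(A[i][c] * w[c] for c in range(i + 1, n))) / A[i][i]
    return w
```

With λ = 0 this is ordinary least squares; increasing λ trades a little bias for much lower variance, which is exactly the stabilizing effect the abstract advocates for collinear inferential models.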
Regularization ambiguities in loop quantum gravity
International Nuclear Information System (INIS)
Perez, Alejandro
2006-01-01
One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation, which is free of ultraviolet divergences. However, ambiguities associated with the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of a UV problem--the existence of well-behaved regularizations of the constraints--is intimately linked with the ambiguities arising in the quantum theory. Among these ambiguities is the one associated with the SU(2) unitary representation used in the diffeomorphism-covariant 'point-splitting' regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and is referred to here as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory and conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity, exhibiting the existence of spurious solutions for higher-representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions - due to the difficulties associated with the definition of the physical inner product - it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we find
Effort variation regularization in sound field reproduction
DEFF Research Database (Denmark)
Stefanakis, Nick; Jacobsen, Finn; Sarris, Ioannis
2010-01-01
In this paper, active control is used in order to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions with the reproduction sources holding a certain relation within their complex strengths......), and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, improving thus the reproduction accuracy...
New regularities in mass spectra of hadrons
International Nuclear Information System (INIS)
Kajdalov, A.B.
1989-01-01
The properties of bosonic and baryonic Regge trajectories for hadrons composed of light quarks are considered. Experimental data agree with the existence of daughter trajectories consistent with string models. It is pointed out that the parity doubling for baryonic trajectories, observed experimentally, is not understood in the existing quark models. The mass spectrum of bosons and baryons indicates an approximate supersymmetry in the mass region M > 1 GeV. These regularities indicate a high degree of symmetry of the dynamics in the confinement region. 8 refs.; 5 figs
Total-variation regularization with bound constraints
International Nuclear Information System (INIS)
Chartrand, Rick; Wohlberg, Brendt
2009-01-01
We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
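The paper's bound-constrained splitting algorithm is not reproduced here, but the TV subproblem it decouples can be illustrated in one dimension. The sketch below solves the unconstrained 1-D analogue, min_x ½‖x − b‖² + λ Σᵢ|xᵢ₊₁ − xᵢ|, by projected gradient ascent on the dual variable (a Chambolle-style scheme chosen for brevity; it is an assumption for illustration, not the authors' method).

```python
def tv_denoise_1d(b, lam, tau=0.25, iters=3000):
    # Minimize 0.5*||x - b||^2 + lam * sum_i |x[i+1] - x[i]|.
    # Dual: maximize over |p_i| <= lam, with primal recovery x = b - D^T p,
    # where (D x)_i = x[i+1] - x[i] and (D^T p)_j = p[j-1] - p[j].
    n = len(b)
    p = [0.0] * (n - 1)

    def primal(p):
        return [b[j] - ((p[j - 1] if j > 0 else 0.0)
                        - (p[j] if j < n - 1 else 0.0)) for j in range(n)]

    for _ in range(iters):
        x = primal(p)
        # Gradient ascent step on the dual, then project onto [-lam, lam].
        p = [min(lam, max(-lam, p[i] + tau * (x[i + 1] - x[i])))
             for i in range(n - 1)]
    return primal(p)
```

On a two-plateau step signal the known TV solution moves each plateau toward the other by lam divided by the plateau length: `tv_denoise_1d([0, 0, 0, 4, 4, 4], 1.0)` returns plateaus near 1/3 and 11/3, with the edge preserved rather than blurred, which is the behavior that makes TV attractive for the deblurring problems in the abstract.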
Bayesian regularization of diffusion tensor images
DEFF Research Database (Denmark)
Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif
2007-01-01
Diffusion tensor imaging (DTI) is a powerful tool in the study of the course of nerve fibre bundles in the human brain. Using DTI, the local fibre orientation in each image voxel can be described by a diffusion tensor which is constructed from local measurements of diffusion coefficients along...... several directions. The measured diffusion coefficients and thereby the diffusion tensors are subject to noise, leading to possibly flawed representations of the three dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing...
Indefinite metric and regularization of electrodynamics
International Nuclear Information System (INIS)
Gaudin, M.
1984-06-01
The invariant regularization of Pauli and Villars in quantum electrodynamics can be considered as deriving from a local and causal Lagrangian theory for spin-1/2 bosons, by introducing an indefinite metric and a condition on the allowed states similar to the Lorentz condition. The consequence is the asymptotic freedom of the photon's propagator. We present a calculation of the effective charge to fourth order in the coupling as a function of the auxiliary masses, the theory avoiding all mass divergences to this order
Strategies for regular segmented reductions on GPU
DEFF Research Database (Denmark)
Larsen, Rasmus Wriedt; Henriksen, Troels
2017-01-01
We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain input...... is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...
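The GPU kernels themselves are not shown in the abstract. As a point of reference, a *regular* segmented reduction (all segments of equal length, the case the paper targets) computes the following, sketched here in plain Python rather than CUDA; the operator and sample data are illustrative assumptions.

```python
def segmented_reduce(values, seg_len, op=lambda a, b: a + b, unit=0):
    # Reduce each consecutive segment of length seg_len with the operator op.
    # "Regular" means every segment has the same length, so the segment of
    # element i is simply i // seg_len -- the property GPU implementations
    # exploit to map segments onto threads, warps, or blocks.
    assert len(values) % seg_len == 0
    out = [unit] * (len(values) // seg_len)
    for i, v in enumerate(values):
        out[i // seg_len] = op(out[i // seg_len], v)
    return out
```

For example, `segmented_reduce([1, 2, 3, 4, 5, 6], 3)` yields `[6, 15]`. The performance question the paper studies is how to parallelize this loop when seg_len ranges from very small (one thread per segment wastes parallelism within a segment) to very large (one block per segment wastes parallelism across segments).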
5-D interpolation with wave-front attributes
Xie, Yujiang; Gajewski, Dirk
2017-11-01
Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning such as the angle of emergence and the wave-front curvatures. These attributes carry structural information about subsurface features such as the dip and strike of a reflector. The wave-front attributes work in the 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved alongside the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work, and since the two problems mentioned above are addressed we call it wave-front-attribute-based 5-D interpolation (5-D WABI). Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals that
Emotion regulation deficits in regular marijuana users.
Zimmermann, Kaeli; Walz, Christina; Derckx, Raissa T; Kendrick, Keith M; Weber, Bernd; Dore, Bruce; Ochsner, Kevin N; Hurlemann, René; Becker, Benjamin
2017-08-01
Effective regulation of negative affective states has been associated with mental health. Impaired regulation of negative affect represents a risk factor for dysfunctional coping mechanisms such as drug use and thus could contribute to the initiation and development of problematic substance use. This study investigated behavioral and neural indices of emotion regulation in regular marijuana users (n = 23) and demographically matched nonusing controls (n = 20) by means of an fMRI cognitive emotion regulation (reappraisal) paradigm. Relative to nonusing controls, marijuana users demonstrated increased neural activity in a bilateral frontal network comprising precentral, middle cingulate, and supplementary motor regions during reappraisal of negative affect (P marijuana users relative to controls. Together, the present findings could reflect an unsuccessful attempt of compensatory recruitment of additional neural resources in the context of disrupted amygdala-prefrontal interaction during volitional emotion regulation in marijuana users. As such, impaired volitional regulation of negative affect might represent a consequence of, or risk factor for, regular marijuana use. Hum Brain Mapp 38:4270-4279, 2017. © 2017 Wiley Periodicals, Inc.
Efficient multidimensional regularization for Volterra series estimation
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need of long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid the excessive memory needs in case of long measurements or large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
Supporting Regularized Logistic Regression Privately and Efficiently
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
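The model being protected is ordinary L2-regularized logistic regression. Its (non-private) training loop is a short gradient descent, sketched below in pure Python; the toy dataset, learning rate, and regularization strength are illustrative assumptions, and none of the paper's cryptographic machinery appears here.

```python
import math

def train_logreg_l2(X, y, lam=0.1, lr=0.5, iters=2000):
    # Gradient descent on the L2-regularized logistic loss
    #   (1/m) * sum_i log(1 + exp(-y_i * w.x_i)) + (lam/2) * ||w||^2
    # with labels y_i in {-1, +1}.
    m, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        g = [lam * w[j] for j in range(d)]          # gradient of the L2 term
        for xi, yi in zip(X, y):
            s = sum(w[j] * xi[j] for j in range(d))
            c = -yi / (1.0 + math.exp(yi * s))      # d/ds log(1 + exp(-y*s))
            for j in range(d):
                g[j] += c * xi[j] / m
        w = [w[j] - lr * g[j] for j in range(d)]
    return w
```

The regularizer shrinks the weights toward zero, which is what keeps the model stable across the small per-institution cohorts the abstract describes; the privacy contribution of the paper is in *how* this optimization is carried out across institutions, not in the loss itself.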
Multiple graph regularized nonnegative matrix factorization
Wang, Jim Jing-Yan
2013-10-01
Non-negative matrix factorization (NMF) has been widely used as a component-based data representation method. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) was proposed by Cai et al., who construct an affinity graph and search for a matrix factorization that respects the graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a GrNMF variant, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters, inspired by ensemble manifold regularization. The factorization matrices and the linear combination coefficients of the graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.
Accelerating Large Data Analysis By Exploiting Regularities
Moran, Patrick J.; Ellsworth, David
2003-01-01
We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical of Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
Multiview Hessian regularization for image annotation.
Liu, Weifeng; Tao, Dacheng
2013-07-01
The rapid development of computer hardware and Internet technology makes large scale data dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) has therefore received intensive attention in recent years and has been successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smooths the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, it is observed that LR biases the classification function toward a constant function, which possibly results in poor generalization. In addition, LR was developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address these two problems in LR-based image annotation. In particular, mHR optimally combines multiple HRs, each obtained from a particular view of the instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.
EIT image reconstruction with four dimensional regularization.
Dai, Tao; Soleimani, Manuchehr; Adler, Andy
2008-09-01
Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although data are actually highly correlated especially in high speed EIT systems. This paper proposes a 4-D EIT image reconstruction for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance, in comparison to simpler image models.
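The augmented-vector idea above can be sketched on a toy linear inverse problem: two measurement frames are stacked and reconstructed jointly, with one spatial penalty per frame and one penalty on the frame-to-frame change. The forward matrix, noise level and weights below are invented for illustration and are far simpler than a real EIT Jacobian.

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_elem = 20, 10
J = rng.normal(size=(n_meas, n_elem))          # toy forward model (one frame)
x_true = np.ones(n_elem)
frames = [x_true, x_true + 0.1]                # slowly varying images
v = np.concatenate([J @ f + 0.01 * rng.normal(size=n_meas) for f in frames])

# Augmented system: block-diagonal Jacobian, spatial + temporal penalties.
J_aug = np.block([[J, np.zeros_like(J)], [np.zeros_like(J), J]])
I = np.eye(n_elem)
R_space = np.eye(2 * n_elem)                   # identity prior per frame
R_time = np.hstack([I, -I])                    # penalizes x2 - x1
lam_s, lam_t = 1e-3, 1.0
A = J_aug.T @ J_aug + lam_s * R_space + lam_t * R_time.T @ R_time
x_hat = np.linalg.solve(A, J_aug.T @ v)
x_cat = np.concatenate(frames)
err = np.linalg.norm(x_hat - x_cat) / np.linalg.norm(x_cat)
```

The temporal term couples the frames so that noise is averaged across them, at the cost of a slight bias toward identical frames; the paper estimates the relative temporal weight from the data rather than fixing it by hand.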
Accretion onto some well-known regular black holes
International Nuclear Information System (INIS)
Jawad, Abdul; Shahzad, M.U.
2016-01-01
In this work, we discuss accretion onto static, spherically symmetric regular black holes for specific choices of the equation of state parameter. The regular black holes considered are charged regular black holes based on the Fermi-Dirac distribution, the logistic distribution and nonlinear electrodynamics, as well as the Kehagias-Sfetsos asymptotically flat regular black hole. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of the radial velocity, energy density, and rate of change of mass for each of the regular black holes. (orig.)
On the Detection of Fake Certificates via Attribute Correlation
Directory of Open Access Journals (Sweden)
Xiaojing Gu
2015-06-01
Transport Layer Security (TLS) and its predecessor, SSL, are important cryptographic protocol suites on the Internet. They both implement public key certificates and rely on a group of trusted certificate authorities (CAs) for peer authentication. Unfortunately, recent research reveals that, if any one of the pre-trusted CAs is compromised, fake certificates can be issued to intercept the corresponding SSL/TLS connections. This security vulnerability leads to catastrophic impacts on SSL/TLS-based HTTPS, which is the underlying protocol providing secure web services for e-commerce, e-mail, etc. To address this problem, we design an attribute dependency-based detection mechanism, called SSLight. SSLight can expose fake certificates by checking whether the certificates contain attribute dependencies that rarely occur in legitimate samples. We conduct extensive experiments to evaluate SSLight and confirm that it can detect the vast majority of fake certificates issued from any trusted CAs if they are compromised. As a real-world example, we also implement SSLight as a Firefox add-on and examine its capability of exposing existent fake certificates from DigiNotar and Comodo, both of which had a worldwide impact.
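A toy sketch in the spirit of SSLight's attribute-dependency check: count how often attribute pairs co-occur in a corpus of legitimate certificates, then flag a certificate containing a pair that is rare in that corpus. The certificate fields, corpus and threshold below are invented for illustration; real certificates have far richer attribute sets.

```python
from itertools import combinations
from collections import Counter

# Invented corpus of "legitimate" certificate attribute sets.
legit_certs = [
    {"sig_alg": "sha256RSA", "key_bits": 2048, "validity_days": 365},
    {"sig_alg": "sha256RSA", "key_bits": 2048, "validity_days": 730},
    {"sig_alg": "sha256RSA", "key_bits": 4096, "validity_days": 365},
] * 10 + [
    {"sig_alg": "md5RSA", "key_bits": 1024, "validity_days": 3650},
]

def pairs(cert):
    """All unordered (attribute, value) pairs of a certificate."""
    items = sorted((k, str(v)) for k, v in cert.items())
    return set(combinations(items, 2))

counts = Counter(p for c in legit_certs for p in pairs(c))
n = len(legit_certs)

def is_suspicious(cert, threshold=0.05):
    """Flag if any attribute pair occurs in < threshold of the corpus."""
    return any(counts[p] / n < threshold for p in pairs(cert))

ok = is_suspicious({"sig_alg": "sha256RSA", "key_bits": 2048, "validity_days": 365})
bad = is_suspicious({"sig_alg": "md5RSA", "key_bits": 2048, "validity_days": 365})
```

The second certificate mixes an attribute combination never seen together in the corpus, so it is flagged even though each attribute value occurs individually.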
Attributional and relational processing in pigeons
Directory of Open Access Journals (Sweden)
Dennis eGarlick
2011-02-01
Six pigeons were trained using a matching-to-sample procedure where sample and rewarded comparisons matched on both attributional (color) and relational (horizontal or vertical orientation) dimensions. Probes then evaluated the pigeons' preference for comparisons that varied in these dimensions. A strong preference was found for the attribute of color. The discrimination did not transfer to novel colors, however, suggesting that a general color rule had not been learned. Further, when color could not be used to guide responding, some influence of other attributional cues such as shape, but not of relational cues, was found. We conclude that pigeons based their performance on attributional properties of, but not on relational properties between, elements in our matching-to-sample procedure. Future studies should examine other attributes to compare attributional versus relational processing.
Privacy Protection on Multiple Sensitive Attributes
Li, Zhen; Ye, Xiaojun
In recent years, a privacy model called k-anonymity has gained popularity for microdata release. As microdata may contain multiple sensitive attributes about an individual, the protection of multiple sensitive attributes has become an important problem. Unlike existing models for a single sensitive attribute, the extra associations among multiple sensitive attributes must be investigated. Two kinds of disclosure scenarios may arise from these logical associations. Q&S Diversity is checked to prevent the foregoing disclosure risks, with an α-Requirement definition used to ensure the diversity requirement. Finally, a two-step greedy generalization algorithm carries out the processing of multiple sensitive attributes, dealing with quasi-identifiers and sensitive attributes respectively. We reduce the overall distortion by the measure of Masking SA.
Attribute Learning for SAR Image Classification
Directory of Open Access Journals (Sweden)
Chu He
2017-04-01
This paper presents a classification approach based on attribute learning for high spatial resolution Synthetic Aperture Radar (SAR) images. To explore the representative and discriminative attributes of SAR images, first, an iterative unsupervised algorithm is designed to cluster in the low-level feature space, where the maximum edge response and the ratio of mean-to-variance are included; a cross-validation step is applied to prevent overfitting. Second, the most discriminative clustering centers are sorted out to construct an attribute dictionary. By resorting to the attribute dictionary, a representation vector describing certain categories in the SAR image can be generated, which in turn is used to perform the classifying task. The experiments conducted on TerraSAR-X images indicate that those learned attributes have strong visual semantics, which are characterized by bright and dark spots, stripes, or their combinations. The classification method based on these learned attributes achieves better results.
Laplacian embedded regression for scalable manifold regularization.
Chen, Lin; Tsang, Ivor W; Xu, Dong
2012-06-01
Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real data sets demonstrate the effectiveness and efficiency of the proposed methods.
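In the linear case, manifold (Laplacian) regularization reduces to an extra quadratic penalty on the regression weights. The sketch below is a generic linear LapRLS-style estimator, not the LapESVR/LapERLS formulation of the paper; the Gaussian affinities and penalty weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 3))                   # labeled + unlabeled points
y_full = X @ np.array([1.0, -2.0, 0.5])        # noiseless linear target
labeled = np.arange(10)                        # only 10 points are labeled
Xl, y = X[labeled], y_full[labeled]

# Graph Laplacian over all points from Gaussian affinities.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W                      # unnormalized Laplacian

# w = argmin ||Xl w - y||^2 + lam*||w||^2 + gam * w^T X^T L X w
lam, gam = 1e-3, 1e-3
A = Xl.T @ Xl + lam * np.eye(3) + gam * X.T @ L @ X
w = np.linalg.solve(A, Xl.T @ y)
err = np.linalg.norm(w - np.array([1.0, -2.0, 0.5]))
```

The manifold term uses all 30 points (labeled and unlabeled), which is the essential SSL mechanism; the paper's contribution is making the analogous kernelized solve scale to large graphs.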
Attributional Style and Depression in Multiple Sclerosis
Arnett, Peter A.
2013-01-01
Several etiologic theories have been proposed to explain depression in the general population. Studying these models and modifying them for use in the multiple sclerosis (MS) population may allow us to better understand depression in MS. According to the reformulated learned helplessness (LH) theory, individuals who attribute negative events to internal, stable, and global causes are more vulnerable to depression. This study differentiated attributional style that was or was not related to MS in 52 patients with MS to test the LH theory in this population and to determine possible differences between illness-related and non-illness-related attributions. Patients were administered measures of attributional style, daily stressors, disability, and depressive symptoms. Participants were more likely to list non-MS-related than MS-related causes of negative events on the Attributional Style Questionnaire (ASQ), and more-disabled participants listed significantly more MS-related causes than did less-disabled individuals. Non-MS-related attributional style correlated with stress and depressive symptoms, but MS-related attributional style did not correlate with disability or depressive symptoms. Stress mediated the effect of non-MS-related attributional style on depressive symptoms. These results suggest that, although attributional style appears to be an important construct in MS, it does not seem to be related directly to depressive symptoms; rather, it is related to more perceived stress, which in turn is related to increased depressive symptoms. PMID:24453767
Key attributes of expert NRL referees.
Morris, Gavin; O'Connor, Donna
2017-05-01
Experiential knowledge of elite National Rugby League (NRL) referees was investigated to determine the key attributes contributing to expert officiating performance. Fourteen current first-grade NRL referees were asked to identify the key attributes they believed contributed to their expert refereeing performance. The modified Delphi method involved a 3-round process of an initial semi-structured interview followed by 2 questionnaires to reach consensus of opinion. The data revealed 25 attributes that were rated as most important that underpin expert NRL refereeing performance. Results illustrate the significance of the cognitive category, with the top 6 ranked attributes all cognitive skills. Of these, the referees ranked decision-making accuracy as the most important attribute, followed by reading the game, communication, game understanding, game management and knowledge of the rules. Player rapport, positioning and teamwork were the top ranked game skill attributes underpinning performance excellence. Expert referees also highlighted a number of psychological attributes (e.g., concentration, composure and mental toughness) that were significant to performance. There were only 2 physiological attributes (fitness, aerobic endurance) that were identified as significant to elite officiating performance. In summary, expert consensus was attained which successfully provided a hierarchy of the most significant attributes of expert NRL refereeing performance.
Quality Attributes and Service-Oriented Architectures
National Research Council Canada - National Science Library
O'Brien, Liam; Bass, Len; Merson, Paulo
2005-01-01
.... Because software architecture is the bridge between mission/business goals and a software-intensive system, and quality attribute requirements drive software architecture design, it is important...
Constrained least squares regularization in PET
International Nuclear Information System (INIS)
Choudhury, K.R.; O'Sullivan, F.O.
1996-01-01
Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood methods, at a fraction of the computational effort.
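The flavor of a direct (non-iterative) regularized least squares estimate can be shown on a small 1-D deconvolution: a single linear solve replaces ART/EM-style iterations. The blur kernel, noise level and regularization weight are illustrative choices, and this is not the authors' specific fast algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60
x_true = np.zeros(n)
x_true[20:30] = 1.0                            # a simple box of "activity"

# Banded convolution matrix for a symmetric 3-tap blur.
kernel = np.array([0.25, 0.5, 0.25])
H = np.zeros((n, n))
for i in range(n):
    for j, k in enumerate(kernel):
        col = i + j - 1
        if 0 <= col < n:
            H[i, col] = k

y = H @ x_true + 0.01 * rng.normal(size=n)

# Direct regularized estimate: x_hat = (H^T H + lam*I)^{-1} H^T y.
lam = 1e-2
x_hat = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

For small systems the normal-equation solve is effectively instantaneous; the paper's point is that a comparably direct computation can be organized for tomographic problems as well.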
Regularization destriping of remote sensing imagery
Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle
2017-07-01
We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method by giving weights spatially to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set which represents the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
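A stripped-down version of the variational idea: for an image with vertical stripes, penalizing horizontal (across-stripe) differences against a data-fidelity term removes most of the stripe energy, i.e. u = argmin ||u - f||^2 + lam*||Dx u||^2. Here the Euler-Lagrange system is solved directly as a small linear solve rather than with the paper's explicit finite-difference scheme, and the scene and weight are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
rows, cols = 32, 64
base = np.ones((rows, cols))                   # true (featureless) scene
stripes = rng.normal(scale=0.5, size=cols)     # per-column stripe offsets
f = base + stripes[None, :]

# Horizontal first-difference operator and the normal equations
# (I + lam * Dx^T Dx) u_row = f_row, solved for every row at once.
Dx = np.eye(cols - 1, cols, k=1) - np.eye(cols - 1, cols)
lam = 100.0
A = np.eye(cols) + lam * Dx.T @ Dx
u = np.linalg.solve(A, f.T).T                  # A is symmetric

stripe_power_before = np.std(f - base)
stripe_power_after = np.std(u - base)
```

On a real scene the penalty must be weighted spatially, as in the paper, so that genuine image gradients are not smoothed away along with the stripes.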
The Regularity of Optimal Irrigation Patterns
Morel, Jean-Michel; Santambrogio, Filippo
2010-02-01
A branched structure is observable in draining and irrigation systems, in electric power supply systems, and in natural objects like blood vessels, river basins or trees. Recent approaches to these networks derive their branched structure from an energy functional whose essential feature is to favor wide routes. Given a flow s in a river, a road, a tube or a wire, the transportation cost per unit length is supposed in these models to be proportional to s^α with 0 < α < 1. We consider the case where the irrigated measure is the Lebesgue density on a smooth open set and the irrigating measure is a single source. In that case we prove that all branches of optimal irrigation trees satisfy an elliptic equation and that their curvature is a bounded measure. In consequence all branching points in the network have a tangent cone made of a finite number of segments, and all other points have a tangent. An explicit counterexample disproves these regularity properties for non-Lebesgue irrigated measures.
Singular tachyon kinks from regular profiles
International Nuclear Information System (INIS)
Copeland, E.J.; Saffin, P.M.; Steer, D.A.
2003-01-01
We demonstrate how Sen's singular kink solution of the Born-Infeld tachyon action can be constructed by taking the appropriate limit of initially regular profiles. It is shown that the order in which different limits are taken plays an important role in determining whether or not such a solution is obtained for a wide class of potentials. Indeed, by introducing a small parameter into the action, we are able to circumvent the results of a recent paper which derived two conditions on the asymptotic tachyon potential such that the singular kink could be recovered in the large amplitude limit of periodic solutions. We show that this is explained by the non-commuting nature of two limits, and that Sen's solution is recovered if the order of the limits is chosen appropriately
Two-pass greedy regular expression parsing
DEFF Research Database (Denmark)
Grathwohl, Niels Bjørn Bugge; Henglein, Fritz; Nielsen, Lasse
2013-01-01
We present new algorithms for producing greedy parses for regular expressions (REs) in a semi-streaming fashion. Our lean-log algorithm executes in time O(mn) for REs of size m and input strings of size n and outputs a compact bit-coded parse tree representation. It improves on previous algorithms...... by: operating in only 2 passes; using only O(m) words of random-access memory (independent of n); requiring only kn bits of sequentially written and read log storage, where k ... and not requiring it to be stored at all. Previous RE parsing algorithms do not scale linearly with input size, or require substantially more log storage and employ 3 passes where the first consists of reversing the input, or do not or are not known to produce a greedy parse. The performance of our unoptimized C...
Park, Sohyun; Blanck, Heidi M; Sherry, Bettylou; Jones, Sherry Everett; Pan, Liping
2013-01-01
Limited research shows an inconclusive association between soda intake and asthma, potentially attributable to certain preservatives in sodas. This cross-sectional study examined the association between regular (nondiet)-soda intake and current asthma among a nationally representative sample of high school students. Analysis was based on the 2009 national Youth Risk Behavior Survey and included 15,960 students (grades 9 through 12) with data for both regular-soda intake and current asthma status. The outcome measure was current asthma (ie, told by doctor/nurse that they had asthma and still have asthma). The main exposure variable was regular-soda intake (ie, drank a can/bottle/glass of soda during the 7 days before the survey). Multivariable logistic regression was used to estimate the adjusted odds ratios for regular-soda intake with current asthma after controlling for age, sex, race/ethnicity, weight status, and current cigarette use. Overall, 10.8% of students had current asthma. In addition, 9.7% of students who did not drink regular soda had current asthma, and 14.7% of students who drank regular soda three or more times per day had current asthma. Compared with those who did not drink regular soda, odds of having current asthma were higher among students who drank regular soda two times per day (adjusted odds ratio=1.28; 95% CI 1.02 to 1.62) and three or more times per day (adjusted odds ratio=1.64; 95% CI 1.25 to 2.16). The association between high regular-soda intake and current asthma suggests efforts to reduce regular-soda intake among youth might have benefits beyond improving diet quality. However, this association needs additional research, such as a longitudinal examination. Published by Elsevier Inc.
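The crude (unadjusted) contrast implied by the quoted prevalences can be checked directly; the study's reported value of 1.64 is the adjusted odds ratio from logistic regression, which additionally controls for age, sex, race/ethnicity, weight status and smoking, so the two figures are expected to differ somewhat.

```python
# Prevalences quoted in the abstract: 14.7% current asthma among students
# drinking regular soda three or more times/day, 9.7% among non-drinkers.
p_exposed, p_unexposed = 0.147, 0.097

odds = lambda p: p / (1 - p)
crude_or = odds(p_exposed) / odds(p_unexposed)   # ≈ 1.60
```

The crude odds ratio of about 1.60 is close to the adjusted 1.64, suggesting that the measured covariates confound this particular association only mildly.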
Unsupervised seismic facies analysis with spatial constraints using regularized fuzzy c-means
Song, Chengyun; Liu, Zhining; Cai, Hanpeng; Wang, Yaojun; Li, Xingming; Hu, Guangmin
2017-12-01
Seismic facies analysis techniques combine classification algorithms and seismic attributes to generate a map that describes the main reservoir heterogeneities. However, most current classification algorithms treat the seismic attributes as isolated data regardless of their spatial locations, and the resulting map is generally sensitive to noise. In this paper, a regularized fuzzy c-means (RegFCM) algorithm is used for unsupervised seismic facies analysis. Due to the regularized term of the RegFCM algorithm, data whose adjacent locations belong to the same class play a more important role in the iterative process than other data. Therefore, this method can reduce the effect of seismic data noise present in discontinuous regions. Synthetic data with different signal-to-noise ratios are used to demonstrate the noise tolerance of the RegFCM algorithm. Meanwhile, the fuzzy factor, the neighbour window size and the regularized weight are tested over various values, to provide a reference for how to set these parameters. The new approach is also applied to a real seismic data set from the F3 block of the Netherlands. The results show improved spatial continuity, with clear facies boundaries and channel morphology, which reveals that the method is an effective seismic facies analysis tool.
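A minimal sketch in the spirit of spatially regularized fuzzy c-means: standard FCM membership and center updates, with each sample's memberships blended with those of its spatial neighbours before the center update. The 1-D "profile" layout, the blending weight and all other parameters are illustrative and do not reproduce the RegFCM formulation itself.

```python
import numpy as np

rng = np.random.default_rng(5)
# Two spatially contiguous facies along a 1-D profile of 200 samples.
attr = np.concatenate([rng.normal(0.0, 0.3, 100), rng.normal(2.0, 0.3, 100)])

def reg_fcm(x, m=2.0, alpha=0.5, iters=50):
    centers = np.array([x.min(), x.max()], dtype=float)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))         # standard FCM memberships
        u /= u.sum(axis=1, keepdims=True)
        # Spatial regularization: blend with the two adjacent samples.
        u_nb = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)) / 2
        u = (1 - alpha) * u + alpha * u_nb
        um = u ** m
        centers = (um * x[:, None]).sum(0) / um.sum(0)
    return centers, u

centers, u = reg_fcm(attr)
labels = u.argmax(axis=1)
```

The neighbour blending is what suppresses isolated noisy assignments inside an otherwise continuous facies, at the cost of slightly softer boundaries.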
Discriminative Elastic-Net Regularized Linear Regression.
Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen
2017-03-01
In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to its weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations for final image classification. Compared with the traditional linear regression model and some of its variants, our methods are much more accurate in image classification. Extensive experiments conducted on publicly available data sets demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB code of our methods is available at http://www.yongxu.org/lunwen.html.
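The elastic-net penalty at the core of the framework can be sketched generically with proximal gradient (ISTA) on a least squares term: min ||Xw - y||^2/(2n) + l1*||w||_1 + (l2/2)*||w||^2. This shows only the penalty, not the paper's relaxed discriminative targets or singular-value regularization, and all parameter values are illustrative.

```python
import numpy as np

def elastic_net(X, y, l1=0.05, l2=0.05, iters=3000):
    n, d = X.shape
    w = np.zeros(d)
    # Step size from the Lipschitz constant of the smooth part.
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + l2)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n + l2 * w  # smooth gradient
        w = w - step * grad
        # Proximal step for the l1 term: soft-thresholding.
        w = np.sign(w) * np.maximum(np.abs(w) - step * l1, 0.0)
    return w

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]                  # sparse ground truth
y = X @ w_true + 0.05 * rng.normal(size=100)
w = elastic_net(X, y)
```

The l1 part drives irrelevant coefficients exactly to zero while the l2 part keeps correlated coefficients stable, which is the compactness-plus-robustness trade-off the abstract appeals to.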
Regularized Regression and Density Estimation based on Optimal Transport
Burger, M.; Franek, M.; Schonlieb, C.-B.
2012-01-01
for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations
Incremental projection approach of regularization for inverse problems
Energy Technology Data Exchange (ETDEWEB)
Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)
2016-10-15
This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
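The idea of replacing the penalty term by a projection can be sketched with projected gradient descent: each data-misfit gradient step is followed by projection onto a convex set of "regular" candidates. Here that set is the span of a few low-frequency cosine modes, an illustrative stand-in for the paper's smoothness prior; the problem sizes and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 40, 80
A = rng.normal(size=(m, n))                    # toy forward operator
t = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * t)                 # smooth ground truth
y = A @ x_true + 0.05 * rng.normal(size=m)

# Orthonormal basis of the first k discrete cosine modes.
k = 8
B = np.cos(np.pi * np.outer(np.arange(n) + 0.5, np.arange(k)) / n)
B, _ = np.linalg.qr(B)
project = lambda v: B @ (B.T @ v)              # orthogonal projection

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):
    x = project(x - step * A.T @ (A @ x - y))  # gradient step, then project

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Because the projection is onto a subspace, no regularization weight has to be tuned; the constraint set itself encodes the prior, which is the trade the paper exploits for speed.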
Dimensional regularization and analytical continuation at finite temperature
International Nuclear Information System (INIS)
Chen Xiangjun; Liu Lianshou
1998-01-01
The relationship between dimensional regularization and analytical continuation of infrared divergent integrals at finite temperature is discussed and a method of regularization of infrared divergent integrals and infrared divergent sums is given
Bounded Perturbation Regularization for Linear Least Squares Estimation
Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.
2017-01-01
This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded
Regular Generalized Star Star closed sets in Bitopological Spaces
K. Kannan; D. Narasimhan; K. Chandrasekhara Rao; R. Ravikumar
2011-01-01
The aim of this paper is to introduce the concepts of τ1τ2-regular generalized star star closed sets and τ1τ2-regular generalized star star open sets, and to study their basic properties in bitopological spaces.
Exclusion of children with intellectual disabilities from regular ...
African Journals Online (AJOL)
This study investigated why teachers exclude children with intellectual disability (ID) from regular classrooms in Nigeria. Participants were 169 regular teachers randomly selected from Oyo and Ogun states. A questionnaire was used to collect data. Results revealed that 57.4% of regular teachers could not cope with children with ID ...
39 CFR 6.1 - Regular meetings, annual meeting.
2010-07-01
... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...
Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis
Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.
2007-01-01
Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…
5 CFR 551.421 - Regular working hours.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Regular working hours. 551.421 Section... Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...
20 CFR 226.35 - Deductions from regular annuity rate.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Deductions from regular annuity rate. 226.35... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...
20 CFR 226.34 - Divorced spouse regular annuity rate.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Divorced spouse regular annuity rate. 226.34... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...
20 CFR 226.14 - Employee regular annuity rate.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...
Accounting Students' Perceptions of Effective Faculty Attributes
Alfraih, Mishari M.; Alanezi, Faisal S.
2016-01-01
Purpose: This study aims to explore the attributes of an effective accounting faculty from the student perspective. It also examines similarities and differences in the perceived importance of these attributes between bachelor's and associate's accounting degree students in two public higher education institutions in Kuwait, namely, Kuwait…
Anonymous Credential Schemes with Encrypted Attributes
Guajardo Merchan, J.; Mennink, B.; Schoenmakers, B.
2011-01-01
In anonymous credential schemes, users obtain credentials on certain attributes from an issuer, and later show these credentials to a relying party anonymously and without fully disclosing the attributes. In this paper, we introduce the notion of (anonymous) credential schemes with encrypted
Attributes Heeded When Representing an Osmosis Problem.
Zuckerman, June Trop
Eighteen high school science students were involved in a study to determine what attributes in the problem statement they need when representing a typical osmosis problem. In order to realize this goal students were asked to solve problems aloud and to explain their answers. Included as a part of the results are the attributes that the students…
Attributional Models of Depression and Marital Distress.
Horneffer, Karen J.; Fincham, Frank D.
1996-01-01
Compares attributional models presented in depression and marital literatures by examining simultaneously their prediction of depressive symptoms and marital distress with 150 married couples. Findings show that a model including paths from depressogenic and distress-maintaining marital attributions to both depressive symptoms and marital distress…
Crisis Workers' Attributions for Domestic Violence.
Madden, Margaret E.
Attributions affect coping with victimization. Battered women who blame their husbands' moods are less likely to leave than are women who blame their husbands' permanent characteristics for the violence. Abused women often have repeated contacts with crisis intervention workers and the attitudes of those workers may affect the attributions made by…
An Exploration of EFL Teachers' Attributions
Ghonsooly, Behzad; Ghanizadeh, Afsaneh; Ghazanfari, Mohammad; Ghabanchi, Zargham
2015-01-01
The present study investigated English as a foreign language (EFL) teachers' attributions of success and failure. It also set out to investigate whether these attributions vary by teachers' age, teaching experience, gender and educational level. To do so, 200 EFL teachers were selected according to convenience sampling among EFL teachers teaching…
Implicational Schemata and the Attribution of Morality.
Reeder, Glenn D.; Spores, John M.
Attribution of a disposition or trait to a person asserts information about the pattern of that person's behavior. Past research has suggested that a moral disposition implies only moral behavior, while an immoral disposition implies both moral and immoral behavior. The effect of these implicational schemata on attributions of morality was…
Attribute-Based Digital Signature System
Ibraimi, L.; Asim, Muhammad; Petkovic, M.
2011-01-01
An attribute-based digital signature system comprises a signature generation unit (1) for signing a message (m) by generating a signature (s) based on a user secret key (SK) associated with a set of user attributes, wherein the signature generation unit (1) is arranged for combining the user secret
Detection and attribution of observed impacts
Cramer, W.; Yohe, G.W.; Auffhammer, M.; Huggel, C.; Molau, U.; Dias, M.A.F.S.; Leemans, R.
2014-01-01
This chapter synthesizes the scientific literature on the detection and attribution of observed changes in natural and human systems in response to observed recent climate change. For policy makers and the public, detection and attribution of observed impacts will be a key element to determine the
Directory of Open Access Journals (Sweden)
Thitiworn Choosong
2014-06-01
Conclusion: The urinary 1-OHP levels of workers exposed to PAHs were high. The accumulation of 1-OHP in the body was not clear although the workers had long working hours with few days off during their working experience. Therefore, a regular day off schedule and rotation shift work during high productive RSS should be set for RSS workers.
Regular Nanoscale Protein Patterns via Directed Adsorption through Self-Assembled DNA Origami Masks.
Ramakrishnan, Saminathan; Subramaniam, Sivaraman; Stewart, A Francis; Grundmeier, Guido; Keller, Adrian
2016-11-16
DNA origami has become a widely used method for synthesizing well-defined nanostructures with promising applications in various areas of nanotechnology, biophysics, and medicine. Recently, the possibility to transfer the shape of single DNA origami nanostructures into different materials via molecular lithography approaches has received growing interest due to the great structural control provided by the DNA origami technique. Here, we use ordered monolayers of DNA origami nanostructures with internal cavities on mica surfaces as molecular lithography masks for the fabrication of regular protein patterns over large surface areas. Exposure of the masked sample surface to negatively charged proteins results in the directed adsorption of the proteins onto the exposed surface areas in the holes of the mask. By controlling the buffer and adsorption conditions, the protein coverage of the exposed areas can be varied from single proteins to densely packed monolayers. To demonstrate the versatility of this approach, regular nanopatterns of four different proteins are fabricated: the single-strand annealing proteins Redβ and Sak, the iron-storage protein ferritin, and the blood protein bovine serum albumin (BSA). We furthermore demonstrate the desorption of the DNA origami mask after directed protein adsorption, which may enable the fabrication of hierarchical patterns composed of different protein species. Because selectivity in adsorption is achieved by electrostatic interactions between the proteins and the exposed surface areas, this approach may enable also the large-scale patterning of other charged molecular species or even nanoparticles.
Rule-based learning of regular past tense in children with specific language impairment.
Smith-Lock, Karen M
2015-01-01
The treatment of children with specific language impairment was used as a means to investigate whether a single- or dual-mechanism theory best conceptualizes the acquisition of English past tense. The dual-mechanism theory proposes that regular English past-tense forms are produced via a rule-based process whereas past-tense forms of irregular verbs are stored in the lexicon. Single-mechanism theories propose that both regular and irregular past-tense verbs are stored in the lexicon. Five 5-year-olds with specific language impairment received treatment for regular past tense. The children were tested on regular past-tense production and third-person singular "s" twice before treatment and once after treatment, at eight-week intervals. Treatment consisted of one-hour play-based sessions, once weekly, for eight weeks. Crucially, treatment focused on different lexical items from those in the test. Each child demonstrated significant improvement on the untreated past-tense test items after treatment, but no improvement on the untreated third-person singular "s". Generalization to untreated past-tense verbs could not be attributed to a frequency effect or to phonological similarity of trained and tested items. It is argued that the results are consistent with a dual-mechanism theory of past-tense inflection.
Accreting fluids onto regular black holes via Hamiltonian approach
Energy Technology Data Exchange (ETDEWEB)
Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)
2017-08-15
We investigate the accretion of test fluids onto regular black holes such as Kehagias-Sfetsos black holes and regular black holes with a Dagum distribution function. We analyze the accretion process when different test fluids fall onto these regular black holes. The accreting fluid is classified through its equation of state according to the features of the regular black holes. The behavior of the fluid flow and the existence of sonic points are examined for these regular black holes. It is noted that the three-velocity depends on the critical points and on the equation-of-state parameter in phase space. (orig.)
On the regularized fermionic projector of the vacuum
Finster, Felix
2008-03-01
We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.
On the regularized fermionic projector of the vacuum
International Nuclear Information System (INIS)
Finster, Felix
2008-01-01
We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.
MRI reconstruction with joint global regularization and transform learning.
Tanc, A Korhan; Eksioglu, Ender M
2016-10-01
Sparsity-based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance when compared to the algorithms which use either the patchwise transform learning or the global regularization terms alone.
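A generic form of such a mixed cost function, written in my own notation rather than the authors' exact formulation, is:

```latex
\min_{x,\,\{\alpha_i\},\,W}\;
\underbrace{\|F_u x - y\|_2^2}_{\text{data fidelity}}
\;+\; \lambda_G\, R_G(x)
\;+\; \lambda_P \sum_i \Big( \|W P_i x - \alpha_i\|_2^2 + \nu\,\|\alpha_i\|_0 \Big)
```

where $F_u$ is the undersampled Fourier operator, $y$ the acquired k-space data, $R_G$ a global regularizer (e.g. total variation or wavelet sparsity), $P_i$ the operator extracting the $i$-th patch, $W$ the learned sparsifying transform, and $\alpha_i$ the sparse codes. The weights $\lambda_G$, $\lambda_P$, $\nu$ are illustrative placeholders.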
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to alleviate the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
Regularization of the Coulomb scattering problem
International Nuclear Information System (INIS)
Baryshevskii, V.G.; Feranchuk, I.D.; Kats, P.B.
2004-01-01
The exact solution of the Schroedinger equation for the Coulomb potential is used within the scope of both stationary and time-dependent scattering theories in order to find the parameters which determine the regularization of the Rutherford cross section when the scattering angle tends to zero but the distance r from the center remains finite. The angular distribution of the particles scattered in the Coulomb field is studied at a rather large but finite distance r from the center. It is shown that the standard asymptotic representation of the wave functions is inapplicable when small scattering angles are considered. The unitary property of the scattering matrix is analyzed and the 'optical' theorem for this case is discussed. The total and transport cross sections for scattering of a particle by the Coulomb center prove to be finite and are calculated in analytical form. It is shown that the effects under consideration can be important for the observed characteristics of transport processes in semiconductors which are determined by electron and hole scattering by the field of charged impurity centers.
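For context, the small-angle divergence being regularized comes from the classical Rutherford formula (a standard result, quoted here for orientation):

```latex
\frac{d\sigma}{d\Omega}
= \left(\frac{Z_1 Z_2 e^2}{4E}\right)^{2}
  \frac{1}{\sin^{4}(\theta/2)}
\;\xrightarrow[\;\theta \to 0\;]{}\; \infty ,
\qquad
\sigma_{\text{tot}} = \int \frac{d\sigma}{d\Omega}\, d\Omega = \infty .
```

The total cross section diverges because of the small-angle region; keeping the distance r from the center finite supplies exactly the cutoff that makes the total and transport cross sections quoted in the abstract come out finite.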
Color correction optimization with hue regularization
Zhang, Heng; Liu, Huaping; Quan, Shuxue
2011-01-01
Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
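The basic linear-regression step for the color correction matrix described above can be sketched with an ordinary least-squares fit minimizing Euclidean color error. The data and variable names below are invented for illustration; a real pipeline would use measured chart patches in the two color spaces:

```python
import numpy as np

# A "true" 3x3 device-to-target transform, used only to synthesize toy data.
M_true = np.array([[1.2, -0.1, 0.0],
                   [0.05, 0.9, 0.05],
                   [0.0, -0.2, 1.1]])

rng = np.random.default_rng(0)
device = rng.uniform(0.0, 1.0, size=(24, 3))   # 24 patches, device RGB
target = device @ M_true.T                      # corresponding target values

# Least-squares fit of the color correction matrix: minimize ||device @ X - target||.
X, *_ = np.linalg.lstsq(device, target, rcond=None)
M = X.T                                         # so that corrected_rgb = M @ rgb

corrected = device @ M.T
err = np.abs(corrected - target).max()          # maximum Euclidean-sense error
```

Since the toy target is exactly linear in the device values, the fit recovers the transform; with real measured data the residual error is what the hue-regularization term of the paper would then reshape toward preferred memory colors.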
Wave dynamics of regular and chaotic rays
International Nuclear Information System (INIS)
McDonald, S.W.
1983-09-01
In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space
Regularities and irregularities in order flow data
Theissen, Martin; Krause, Sebastian M.; Guhr, Thomas
2017-11-01
We identify and analyze statistical regularities and irregularities in the recent order flow of different NASDAQ stocks, focusing on the positions where orders are placed in the order book. This includes limit orders being placed outside of the spread, inside the spread and (effective) market orders. Based on the pairwise comparison of the order flow of different stocks, we perform a clustering of stocks into groups with similar behavior. This is useful to assess systemic aspects of stock price dynamics. We find that limit order placement inside the spread is strongly determined by the dynamics of the spread size. Most orders, however, arrive outside of the spread. While for some stocks order placement on or next to the quotes is dominating, deeper price levels are more important for other stocks. As market orders are usually adjusted to the quote volume, the impact of market orders depends on the order book structure, which we find to be quite diverse among the analyzed stocks as a result of the way limit order placement takes place.
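The pairwise-comparison clustering step can be sketched with standard hierarchical clustering on a dissimilarity matrix. The matrix below is made up for illustration (two groups of three "stocks" each); it is not the authors' actual order-flow metric:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical pairwise dissimilarities between the order flow of 6 stocks.
# Stocks 0-2 behave similarly, as do stocks 3-5 (illustrative values only).
D = np.array([
    [0.0, 0.1, 0.2, 0.9, 0.8, 0.9],
    [0.1, 0.0, 0.1, 0.8, 0.9, 0.8],
    [0.2, 0.1, 0.0, 0.9, 0.8, 0.9],
    [0.9, 0.8, 0.9, 0.0, 0.1, 0.2],
    [0.8, 0.9, 0.8, 0.1, 0.0, 0.1],
    [0.9, 0.8, 0.9, 0.2, 0.1, 0.0],
])

# Average-linkage clustering on the condensed form of the distance matrix,
# then cut the dendrogram into two groups of similarly behaving stocks.
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
```

The resulting group labels separate the two blocks of the dissimilarity matrix, which is the kind of grouping used to assess systemic aspects of price dynamics.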
Library search with regular reflectance IR spectra
International Nuclear Information System (INIS)
Staat, H.; Korte, E.H.; Lampen, P.
1989-01-01
Characterisation in situ for coatings and other surface layers is generally favourable, but a prerequisite for precious items such as art objects. In infrared spectroscopy only reflection techniques are applicable here. However, for attenuated total reflection (ATR) it is difficult to obtain the necessary optical contact of the crystal with the sample, when the latter is not perfectly plane or flexible. The measurement of diffuse reflectance demands a scattering sample and usually the reflectance is very poor. Therefore in most cases one is left with regular reflectance. Such spectra consist of dispersion-like features instead of bands, impeding their interpretation in the way the analyst is used to. Furthermore, for computer search in common spectral libraries compiled from transmittance or absorbance spectra, a transformation of the reflectance spectra is needed. The correct conversion is based on the Kramers-Kronig transformation. This somewhat time-consuming procedure can be speeded up by using appropriate approximations. A coarser conversion may be obtained from the first derivative of the reflectance spectrum, which resembles the second derivative of a transmittance spectrum. The resulting distorted spectra can still be used successfully for the search in peak table libraries. Experiences with both transformations are presented. (author)
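The coarser first-derivative conversion mentioned above can be sketched numerically. The band shape and parameters below are synthetic, chosen only to mimic a dispersion-like regular-reflectance feature:

```python
import numpy as np

# Synthetic regular-reflectance spectrum with one dispersion-shaped feature
# centered at nu0 (wavenumber axis in cm^-1; values are illustrative).
nu = np.linspace(900.0, 1100.0, 401)
nu0, gamma = 1000.0, 10.0
reflectance = 0.1 + 0.05 * (nu - nu0) / ((nu - nu0) ** 2 + gamma ** 2)

# Coarse conversion: the first derivative of the reflectance spectrum
# resembles the second derivative of a transmittance spectrum, turning the
# dispersion feature into a band-like extremum usable for peak-table search.
converted = np.gradient(reflectance, nu)
peak = nu[np.argmax(converted)]   # recovered band position
```

The derivative localizes the feature at its center wavenumber, which is the information a peak-table library search needs.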
Regularities of praseodymium oxide dissolution in acids
International Nuclear Information System (INIS)
Savin, V.D.; Elyutin, A.V.; Mikhajlova, N.P.; Eremenko, Z.V.; Opolchenova, N.L.
1989-01-01
The regularities of the interaction of Pr2O3, Pr2O5 and Pr(OH)3 with inorganic acids are studied. The pH of the solution and the oxidation-reduction potential, registered at 20±1 deg C, are the working parameters of the study. It is found that the amount of each oxide dissolved increases in the series of acids nitric, hydrochloric and sulfuric; for hydrochloric and sulfuric acid it also increases in the series of oxides Pr2O3, Pr2O5 and Pr(OH)3. It is noted that Pr2O5 has a high, positive oxidation-reduction potential over the whole dissolution range. Pr(OH)3 shows a low positive redox potential during dissolution, while in the case of Pr2O3 the redox potential is negative. Schemes of the dissolution processes, which do not agree with classical assumptions, are presented.
Regular expressions compiler and some applications
International Nuclear Information System (INIS)
Saldana A, H.
1978-01-01
We deal with the high-level programming language of a Regular Expressions Compiler (REC). The first chapter is an introduction in which the history of the REC development and the problems related to its numerous applications are described. The syntactic and semantic rules as well as the language features are discussed just after the introduction. Concerning the applications, as examples, an adaptation is given in order to solve numerical problems and another for data manipulation. The last chapter is an exposition of ideas and techniques about the compiler construction. Examples of the adaptation to numerical problems show the applications to education, vector analysis, quantum mechanics, physics, mathematics and other sciences. The rudiments of an operating system for a minicomputer are the examples of the adaptation to symbolic data manipulation. REC is a programming language that could be applied to solve problems in almost any human activity. Handling of computer graphics, control equipment, research on languages, microprocessors and general research are some of the fields in which this programming language can be applied and developed. (author)
Sparsity-regularized HMAX for visual recognition.
Directory of Open Access Journals (Sweden)
Xiaolin Hu
About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address global structure of images in every layer, sparse HMAX addresses local to global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.
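The sparse coding inference at the heart of such patch-based learning can be illustrated with a standard iterative soft-thresholding (ISTA) solver for the lasso objective. This is a generic sketch on a toy dictionary, not the authors' HMAX implementation:

```python
import numpy as np

def ista(D, x, lam=0.05, n_iter=200):
    """Minimize 0.5*||x - D a||^2 + lam*||a||_1 by iterative soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)                  # gradient of the smooth term
        z = a - g / L                          # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Toy overcomplete dictionary with unit-norm atoms, and a signal
# synthesized from two atoms (indices 3 and 17, coefficients 1.5 and -2.0).
rng = np.random.default_rng(1)
D = rng.normal(size=(20, 40))
D /= np.linalg.norm(D, axis=0)
x = 1.5 * D[:, 3] - 2.0 * D[:, 17]

a = ista(D, x)
residual = np.linalg.norm(D @ a - x)
```

The recovered code is sparse, with its dominant coefficients on the two generating atoms; in sparse HMAX the analogous codes would feed the next layer of the hierarchy.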
Quantum implications of a scale invariant regularization
Ghilencea, D. M.
2018-04-01
We study scale invariance at the quantum level in a perturbative approach. For a scale-invariant classical theory, the scalar potential is computed at a three-loop level while keeping this symmetry manifest. Spontaneous scale symmetry breaking is transmitted at a quantum level to the visible sector (of ϕ) by the associated Goldstone mode (dilaton σ), which enables a scale-invariant regularization and whose vacuum expectation value ⟨σ⟩ generates the subtraction scale (μ). While the hidden (σ) and visible sector (ϕ) are classically decoupled in d = 4 due to an enhanced Poincaré symmetry, they interact through (a series of) evanescent couplings ∝ ε, dictated by the scale invariance of the action in d = 4 − 2ε. At the quantum level, these couplings generate new corrections to the potential, as scale-invariant nonpolynomial effective operators ϕ^(2n+4)/σ^(2n). These are comparable in size to "standard" loop corrections and are important for values of ϕ close to ⟨σ⟩. For n = 1, 2, the beta functions of their coefficients are computed at three loops. In the IR limit, dilaton fluctuations decouple, the effective operators are suppressed by large ⟨σ⟩, and the effective potential becomes that of a renormalizable theory with explicit scale symmetry breaking by the DR scheme (of μ = constant).
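Schematically, and in my notation rather than the paper's, the scale-invariant regularization trades the fixed subtraction scale μ for a dilaton power chosen so that the quartic term keeps the dimension of the action in d = 4 − 2ε:

```latex
\lambda\,\mu^{2\epsilon}\,\phi^{4}
\;\longrightarrow\;
\lambda\,\sigma^{2\epsilon/(1-\epsilon)}\,\phi^{4},
\qquad
[\phi] = [\sigma] = \frac{d-2}{2} = 1-\epsilon .
```

One checks the dimensions term by term: σ^{2ε/(1−ε)} carries dimension 2ε, so the full term has dimension 4(1−ε) + 2ε = 4 − 2ε = d. Expanding the dilaton power about ⟨σ⟩ generates the evanescent ϕ–σ couplings ∝ ε mentioned in the abstract, and quantum corrections then produce the nonpolynomial operators ϕ^(2n+4)/σ^(2n).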
Regularities development of entrepreneurial structures in regions
Directory of Open Access Journals (Sweden)
Julia Semenovna Pinkovetskaya
2012-12-01
We consider regularities and tendencies for three types of entrepreneurial structures: small enterprises, medium enterprises and individual entrepreneurs. The aim of the research was to confirm the possibility of describing indicators of aggregate entrepreneurial structures with normal-law distribution functions. We present the author's proposed methodological approach and the results of constructing density distribution functions for the main indicators of various objects: the Russian Federation, regions, and aggregates of entrepreneurial structures specialized in certain forms of economic activity. All the developed functions, as shown by logical and statistical analysis, are of high quality and approximate the original data well. In general, the proposed methodological approach is versatile and can be used in further studies of aggregates of entrepreneurial structures. The results can be applied to a wide range of problems, such as justifying the need for personnel and financial resources at the federal, regional and municipal levels, as well as forming plans and forecasts for the development of entrepreneurship and the improvement of this sector of the economy.
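The density-fitting step can be sketched with a normal-law fit followed by a goodness-of-fit check. The indicator values below are synthetic, generated only to illustrate the procedure, not the study's data:

```python
import numpy as np
from scipy import stats

# Synthetic indicator for an aggregate of entrepreneurial structures
# (e.g. an economic indicator per enterprise across regions; illustrative).
rng = np.random.default_rng(42)
indicator = rng.normal(loc=100.0, scale=15.0, size=500)

# Fit a normal density distribution function to the indicator...
mu, sigma = stats.norm.fit(indicator)

# ...and assess the quality of the approximation, mirroring the
# "logical and statistical analysis" step of the abstract.
ks_stat, p_value = stats.kstest(indicator, "norm", args=(mu, sigma))
```

A large p-value indicates the fitted normal law approximates the data well; with real indicators the same check flags aggregates where the normal-law description breaks down.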
U.S. Department of Health & Human Services — 2005-2009. SAMMEC - Smoking-Attributable Mortality, Morbidity, and Economic Costs. Smoking-attributable mortality (SAM) is the number of deaths caused by cigarette...
U.S. Department of Health & Human Services — 2005-2009. SAMMEC - Smoking-Attributable Mortality, Morbidity, and Economic Costs. Smoking-attributable expenditures (SAEs) are excess health care expenditures...
Discriminative power of visual attributes in dermatology.
Giotis, Ioannis; Visser, Margaretha; Jonkman, Marcel; Petkov, Nicolai
2013-02-01
Visual characteristics such as color and shape of skin lesions play an important role in the diagnostic process. In this contribution, we quantify the discriminative power of such attributes using an information theoretical approach. We estimate the probability of occurrence of each attribute as a function of the skin diseases. We use the distribution of this probability across the studied diseases and its entropy to define the discriminative power of the attribute. The discriminative power has a maximum value for attributes that occur (or do not occur) for only one disease and a minimum value for those which are equally likely to be observed among all diseases. Verrucous surface, red and brown colors, and the presence of more than 10 lesions are among the most informative attributes. A ranking of attributes is also carried out and used together with a naive Bayesian classifier, yielding results that confirm the soundness of the proposed method. The proposed measure is proven to be a reliable way of assessing the discriminative power of dermatological attributes, and it also helps generate a condensed dermatological lexicon. Therefore, it can be of added value to the manual or computer-aided diagnostic process.
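A minimal sketch of an entropy-based discriminative power along these lines (the occurrence probabilities below are invented for illustration, and the normalization to [0, 1] is my own choice):

```python
import numpy as np

def discriminative_power(p):
    """Discriminative power of an attribute from the distribution of its
    occurrence probability across diseases: maximal (1) when the attribute
    is specific to one disease, minimal (0) when uniform across diseases."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                        # normalize across diseases
    nz = p[p > 0]                          # drop zero entries (0*log 0 := 0)
    H = -(nz * np.log2(nz)).sum()          # Shannon entropy in bits
    return 1.0 - H / np.log2(len(p))       # low entropy -> high power

# Hypothetical occurrence probabilities of two attributes across 4 diseases.
specific = [0.97, 0.01, 0.01, 0.01]        # nearly exclusive to one disease
generic = [0.25, 0.25, 0.25, 0.25]         # equally likely everywhere
```

An attribute concentrated on a single disease scores near 1, while a uniformly distributed one scores 0, matching the extremes described in the abstract.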
Quantifying how smokers value attributes of electronic cigarettes.
Nonnemaker, James; Kim, Annice E; Lee, Youn Ok; MacMonegle, Anna
2016-04-01
Rates of electronic cigarette (e-cigarette) use have increased quickly among US adults (3.3% in 2010 to 8.5% in 2013) and youth (4.5% in 2013 to 13.4% in 2014). As state and local governments consider regulatory policies, understanding what smokers believe about e-cigarettes and how they value e-cigarettes is important. Using data from a convenience sample of Florida adult smokers (N=765), we investigated the value smokers place on specific attributes of e-cigarettes (availability of flavours, effectiveness of e-cigarettes as a cessation aid, healthier alternative to regular cigarettes, ability to use e-cigarettes in public places) by asking smokers how much they would be willing to pay for e-cigarettes with and without each of these attributes. For cigarette-only and dual users, losing the ability to use an e-cigarette as a quit aid and losing the harm reduction of an e-cigarette significantly reduced the price respondents were willing to pay for an e-cigarette. For cigarette-only users, not being able to use an e-cigarette indoors and losing flavours also significantly reduced the price respondents were willing to pay for an e-cigarette. Our results suggest that smokers value multiple attributes of e-cigarettes. Our valuation measures also appear to align with smokers' beliefs about e-cigarettes.
Reprocessing of nonoptimally exposed holograms
International Nuclear Information System (INIS)
Phipps, G.S.; Robertson, C.E.; Tamashiro, F.M.
1980-01-01
Two reprocessing techniques have been investigated that are capable of correcting the effects of nonoptimum optical density of photographic amplitude holograms recorded on Agfa-Gevaert type 10E75 plates. In some cases a reprocessed hologram will exhibit a diffraction efficiency even higher than that obtainable from a hologram exposed and processed to the optimum density. The SNR of the reprocessed holograms is much higher than that of the same holograms bleached with cupric bromide. In some cases the SNR approaches the optimum value for a properly exposed amplitude hologram. Subjective image quality and resolution of reprocessed hologram reconstructions appear to be no different than for normal single-development holograms. Repeated reprocessing is feasible and in some cases desirable as a means of increasing diffraction efficiency.
Extending Attribution Theory: Considering Students' Perceived Control of the Attribution Process
Fishman, Evan J.; Husman, Jenefer
2017-01-01
Research in attribution theory has shown that students' causal thinking profoundly affects their learning and motivational outcomes. Very few studies, however, have explored how students' attribution-related beliefs influence the causal thought process. The present study used the perceived control of the attribution process (PCAP) model to examine…
Causal Attributions for Poverty in Developing Countries
Directory of Open Access Journals (Sweden)
José Juan Vázquez
2009-07-01
This paper analyzes attributional differences regarding the causes of poverty in less developed countries between Nicaraguan ("actors") and Spanish ("observers") undergraduates. A self-administered questionnaire was used, including socio-demographic questions and an adaptation of the "Causes of Third World Poverty Questionnaire" (CTWPQ). Results show agreement between Spanish and Nicaraguan students on attributions about the main causes of poverty in less developed countries, although there are differences in the perceived incidence of the different causes. Nicaraguan students attribute the causes of poverty more to dispositional attributes of the population in those countries.
The Role of Empathy in Mental Attribution
Directory of Open Access Journals (Sweden)
Brunsteins, Patricia
2011-05-01
This work examines to what extent a notion of empathy may clarify the mindreading debate. Taking an interdisciplinary, integrative notion of empathy, its compatibility with both mental-attribution strategies (mental simulation and theory-theory, in their non-pure versions) is evaluated. First, new empirical research is expected to strengthen an integrative notion of empathy over the theory-theory or mental-simulation points of view. Second, new empirical research will bring better tools to distinguish empathy from simulation. Consequently, the relationship between empathy and mental-attribution theories may be better delimited, and a full theory of mental attribution may eventually be proposed.
Fradkin, Chris; Wallander, Jan L; Elliott, Marc N; Cuccaro, Paula; Schuster, Mark A
2016-08-01
This study examined whether daily or almost daily lower-intensity physical activity was associated with reduced obesity, among 4824 African American, Hispanic, and White youth assessed in fifth and seventh grades. Regular lower-intensity physical activity was associated with reduced obesity only among Hispanic and White males and only in seventh grade, and not among youth in fifth grade, females, or African American males or females. Findings from this study suggest that the reduced obesity risk generally attributed to physical activity may not be consistent across racial/ethnic and gender groups of early adolescents. © The Author(s) 2014.
TRANSIENT LUNAR PHENOMENA: REGULARITY AND REALITY
International Nuclear Information System (INIS)
Crotts, Arlin P. S.
2009-01-01
Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if the determining factors involve humans rather than phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ∼50% of reports originate from near Aristarchus, ∼16% from Plato, ∼6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a 'feature' as defined). TLP count consistency for these features indicates that ∼80% of these may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for TLPs. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites and those of lunar outgassing correlate strongly, further arguing for the reality of TLPs.
Elementary Particle Spectroscopy in Regular Solid Rewrite
International Nuclear Information System (INIS)
Trell, Erik
2008-01-01
The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it ''is the likely keystone of a fundamental computational foundation'' also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)xO(5) of Lie algebra SU(3). The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed to each
Regularity in an environment produces an internal torque pattern for biped balance control.
Ito, Satoshi; Kawasaki, Haruhisa
2005-04-01
In this paper, we present a control method for achieving biped static balance under unknown periodic external forces of which only the periods are known. In order to maintain static balance adaptively in an uncertain environment, it is essential to have information on the ground reaction forces. However, when the biped is exposed to a steady environment that provides an external force periodically, uncertain factors in the regularity of that environment are gradually clarified through a learning process, and finally a torque pattern for balancing motion is acquired. Consequently, static balance is maintained without feedback from ground reaction forces, i.e. it is achieved in a feedforward manner.
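The learning process described above can be sketched as a toy iterative-learning update: over repeated trials, a feedforward torque pattern absorbs an unknown periodic disturbance whose period is known. The disturbance, gain, and one-sample "dynamics" below are invented simplifications, not the authors' controller.

```python
import math

N = 8                                   # samples per (known) period
dist = [math.sin(2 * math.pi * i / N) for i in range(N)]  # unknown periodic force

u = [0.0] * N                           # feedforward torque pattern to learn
for trial in range(50):                 # repeated exposure to the steady environment
    # residual imbalance measured during this trial
    # (stands in for the ground-reaction-force information)
    err = [dist[i] - u[i] for i in range(N)]
    # learning update: fold a fraction of the residual into the pattern
    u = [u[i] + 0.5 * err[i] for i in range(N)]

residual = max(abs(dist[i] - u[i]) for i in range(N))  # shrinks geometrically
```

With a gain of 0.5 the residual halves each trial, so after 50 trials the pattern cancels the disturbance essentially exactly, without any further force feedback.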
Returning Special Education Students to Regular Classrooms: Externalities on Peers’ Outcomes
DEFF Research Database (Denmark)
Rangvid, Beatrice Schindler
Policy reforms to boost full inclusion and conventional return flows send students with special educational needs (SEN) from segregated settings to regular classrooms. Using full population micro data from Denmark, I investigate whether becoming exposed to returning SEN students affects … on test score gains of moderate size (-0.036 SD), while no significant effect is found in non-reform years. The results are robust to sensitivity checks. The negative exposure effect is significant only for boys, but does not differ by parental education or grade-level.
Assessing students' beliefs, emotions and causal attribution ...
African Journals Online (AJOL)
Keywords: academic emotion; belief; causal attribution; statistical validation; students' conceptions of learning
Calibration of Seismic Attributes for Reservoir Characterization
Energy Technology Data Exchange (ETDEWEB)
Pennington, Wayne D.
2002-05-29
This project is intended to enhance the ability to use seismic data for the determination of rock and fluid properties through an improved understanding of the physics underlying the relationships between seismic attributes and formation.
Object attributes combine additively in visual search.
Pramod, R T; Arun, S P
2016-01-01
We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in internal details, such as texture; (c) differences in emergent attributes, such as symmetry; and (d) differences in global properties, such as orientation or overall configuration of parts. Our results elucidate an enduring question in object vision by showing that the whole object is not a sum of its parts but a sum of its many attributes.
Design-Build Partnership Attributes Survey Analysis
National Research Council Canada - National Science Library
Pyle, Raymond
1998-01-01
Two basic hypotheses were investigated: 1. Finding these attributes for success for a design-build partnership may be accomplished by transferring concepts and ideas from business research on partnership formation. 2...
Attributable causes of colorectal cancer in China
Gu, Meng-Jia; Huang, Qiu-Chi; Bao, Cheng-Zhen; Li, Ying-Jun; Li, Xiao-Qin; Ye, Ding; Ye, Zhen-Hua; Chen, Kun; Wang, Jian-Bing
2018-01-01
Background: Colorectal cancer is the 4th most common cancer in China. Most colorectal cancers are due to modifiable lifestyle factors, but few studies have provided a systematic evidence-based assessment of the burden of colorectal cancer incidence and mortality attributable to the known risk factors in China. Methods: We estimated the population attributable fraction (PAF) for each selected risk factor in China, based on the prevalence of exposure around 2000 and relative risks from cohort studies a...
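The PAF estimation the abstract describes is commonly done with Levin's formula; a minimal sketch follows (the prevalence and relative-risk numbers are illustrative, not the study's estimates):

```python
def paf(prevalence, relative_risk):
    """Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1))."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# e.g. a risk factor with 30% exposure prevalence and relative risk 1.5
fraction = paf(0.30, 1.5)   # ≈ 0.13, i.e. ~13% of cases attributable
```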
Corporate apologia and the attribution of guilt
Bülow-Møller, Anne Marie
2010-01-01
This paper argues that in the difficult disciplines of crisis communication and image restoration, attribution theory has explanatory value. Corporate apologia – the explanation that an organisation offers after an attack on it – differs with the type of crisis it is designed to defuse, and if the crisis concerns legitimacy, the art is to shift the public attribution of guilt or responsibility. The case of Arla vs Hirtshals is used to demonstrate how a concerted effort in impression management...
Language Learner Beliefs from an Attributional Perspective
Gabillon, Zehra
2013-01-01
International audience; This qualitative study aimed to analyze eight French-speaking learners' beliefs about English and English language learning. The data were obtained via semi-structured interviews. The study drew on Weiner's attribution theory of achievement motivation and Bandura's self-efficacy theory. The novelty of this research is the use of an attributional analysis framework to study and explain the learners' stated beliefs about English and English language learning.
Causal Attributions for Poverty in Developing Countries
José Juan Vázquez; Sonia Panadero
2009-01-01
This paper analyzes attributional differences about causes of poverty in the less developed countries, among Nicaraguan ("actors") and Spanish ("observers") undergraduates. A self–applied questionnaire was used. It included socio–demographic questions and an adaptation of the "Causes of Third World Poverty Questionnaire" (CTWPQ). Results show agreement between Spanish and Nicaraguan in attributions about the main causes of poverty in the less developed countries, although there are difference...
Improving the Accuracy of Attribute Extraction using the Relatedness between Attribute Values
Bollegala, Danushka; Tani, Naoki; Ishizuka, Mitsuru
Extracting attribute-values related to entities from web texts is an important step in numerous web related tasks such as information retrieval, information extraction, and entity disambiguation (namesake disambiguation). For example, for a search query that contains a personal name, we can not only return documents that contain that personal name, but if we have attribute-values such as the organization for which that person works, we can also suggest documents that contain information related to that organization, thereby improving the user's search experience. Despite numerous potential applications of attribute extraction, it remains a challenging task due to the inherent noise in web data -- often a single web page contains multiple entities and attributes. We propose a graph-based approach to select the correct attribute-values from a set of candidate attribute-values extracted for a particular entity. First, we build an undirected weighted graph in which, attribute-values are represented by nodes, and the edge that connects two nodes in the graph represents the degree of relatedness between the corresponding attribute-values. Next, we find the maximum spanning tree of this graph that connects exactly one attribute-value for each attribute-type. The proposed method outperforms previously proposed attribute extraction methods on a dataset that contains 5000 web pages.
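The spanning-tree step can be sketched with a plain Kruskal-style maximum spanning tree. The entity, candidate attribute-values, and relatedness weights below are invented for illustration; the paper's additional constraint of exactly one value per attribute-type is only noted, not enforced, in this toy version.

```python
def max_spanning_tree(nodes, edges):
    """Kruskal with heaviest-first ordering; union-find avoids cycles."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges, reverse=True):   # heaviest edges first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

# Hypothetical candidate attribute-values ("value:attribute-type") for one
# entity; the weights are made-up relatedness scores.
nodes = ["Google:employer", "software engineer:title",
         "lawyer:title", "Mountain View:location"]
edges = [
    (0.9, "Google:employer", "software engineer:title"),
    (0.2, "Google:employer", "lawyer:title"),
    (0.7, "Google:employer", "Mountain View:location"),
    (0.5, "software engineer:title", "Mountain View:location"),
]
tree = max_spanning_tree(nodes, edges)
# The weakly related "lawyer:title" joins only through its light 0.2 edge;
# the paper's one-value-per-attribute-type constraint would discard it.
```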
Mikulincer, M
1986-12-01
Following the learned helplessness paradigm, I assessed in this study the effects of global and specific attributions for failure on the generalization of performance deficits in a dissimilar situation. Helplessness training consisted of experience with noncontingent failures on four cognitive discrimination problems attributed to either global or specific causes. Experiment 1 found that performance in a dissimilar situation was impaired following exposure to globally attributed failure. Experiment 2 examined the behavioral effects of the interaction between stable and global attributions of failure. Exposure to unsolvable problems resulted in reduced performance in a dissimilar situation only when failure was attributed to global and stable causes. Finally, Experiment 3 found that learned helplessness deficits were a product of the interaction of global and internal attribution. Performance deficits following unsolvable problems were recorded when failure was attributed to global and internal causes. Results were discussed in terms of the reformulated learned helplessness model.
Attribution bias and social anxiety in schizophrenia
Directory of Open Access Journals (Sweden)
Amelie M. Achim
2016-06-01
Studies on attribution biases in schizophrenia have produced mixed results, whereas such biases have been more consistently reported in people with anxiety disorders. Anxiety comorbidities are frequent in schizophrenia, in particular social anxiety disorder, which could influence patterns of attribution biases. The objective of the present study was thus to determine whether individuals with schizophrenia and a comorbid social anxiety disorder (SZ+) show distinct attribution biases as compared with individuals with schizophrenia without social anxiety (SZ−) and healthy controls. Attribution biases were assessed with the Internal, Personal, and Situational Attributions Questionnaire in 41 individuals with schizophrenia and 41 healthy controls. Results revealed the lack of the normal externalizing bias in SZ+, whereas SZ− did not significantly differ from healthy controls on this dimension. The personalizing bias was not influenced by social anxiety but was linked with delusions, with a greater personalizing bias in individuals with current delusions. Future studies on attribution biases in schizophrenia should carefully document symptom presentation, including social anxiety.
Noncognitive Attributes in Physician Assistant Education.
Brenneman, Anthony E; Goldgar, Constance; Hills, Karen J; Snyder, Jennifer H; VanderMeulen, Stephane P; Lane, Steven
2018-03-01
Physician assistant (PA) admissions processes have typically given more weight to cognitive attributes than to noncognitive ones, both because a high level of cognitive ability is needed for a career in medicine and because cognitive factors are easier to measure. However, there is a growing consensus across the health professions that noncognitive attributes such as emotional intelligence, empathy, and professionalism are important for success in clinical practice and optimal care of patients. There is also some evidence that a move toward more holistic admissions practices, including evaluation of noncognitive attributes, can have a positive effect on diversity. The need for these noncognitive attributes in clinicians is being reinforced by changes in the US health care system, including shifting patient demographics and a growing emphasis on team-based care and patient satisfaction, and the need for clinicians to help patients interpret complex medical information. The 2016 Physician Assistant Education Association Stakeholder Summit revealed certain behavioral and affective qualities that employers of PAs value and sometimes find lacking in new graduates. Although there are still gaps in the evidence base, some tools and technologies currently exist to more accurately measure noncognitive variables. We propose some possible strategies and tools that PA programs can use to formalize the way they select for noncognitive attributes. Since PA programs have, on average, only 27 months to educate students, programs may need to focus more resources on selecting for these attributes than teaching them.
Use of seismic attributes for sediment classification
Directory of Open Access Journals (Sweden)
Fabio Radomille Santana
2015-04-01
This study investigates the relationships between seismic attributes extracted from 2D high-resolution seismic data and the seafloor sediments of the surveyed area. As seismic attributes are features highly influenced by the medium through which the seismic waves propagate, one can assume that it is possible to characterise the geological nature of the seafloor using these attributes. A survey was performed on the continental margin of the South Shetland Islands in Antarctica, where 2D high-resolution seismic data and sediment gravity core samples were acquired simultaneously. A computational script was written to extract the seismic attributes from the data, which were statistically analysed with clustering methods such as principal components analysis, dendrograms and k-means classification. The extracted seismic attributes are the amplitude, the instantaneous phase, the instantaneous frequency, the envelope, the time derivative of the envelope, the second derivative of the envelope and the acceleration of phase. Statistical evaluation showed that geological classification of the seafloor sediments is possible by associating these attributes according to their coherence. The methodology developed here appears appropriate for the glacio-marine environment and coarse-to-medium silt sediment found in the study area and may be applied to other regions with the same geological conditions.
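As a rough illustration of the clustering step, here is a minimal k-means (k = 2) on invented (amplitude, instantaneous frequency) attribute vectors, not the authors' script:

```python
def kmeans(points, k=2, iters=20):
    cents = points[:k]                      # naive init: first k points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                    # assign each point to nearest centroid
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, cents[c])))
            groups[j].append(p)
        # recompute centroids (assumes no cluster empties; true for this toy data)
        cents = [tuple(sum(col) / len(g) for col in zip(*g)) for g in groups]
    return cents, groups

# invented (amplitude, instantaneous frequency in Hz) pairs, two sediment types
pts = [(0.9, 30.0), (1.0, 32.0), (0.2, 55.0), (0.25, 60.0)]
cents, groups = kmeans(pts)
```

The two recovered centroids separate the high-amplitude/low-frequency pairs from the low-amplitude/high-frequency ones, mirroring the kind of sediment grouping described above.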
Regularization of plurisubharmonic functions with a net of good points
Li, Long
2017-01-01
The purpose of this article is to present a new regularization technique for quasi-plurisubharmonic functions on a compact Kaehler manifold. The idea is to regularize the function on local coordinate balls first, and then glue each piece together. Therefore, all the higher order terms in the complex Hessian of this regularization vanish at the center of each coordinate ball, and all the centers build a delta-net of the manifold eventually.
International Nuclear Information System (INIS)
Keller, Kai Johannes
2010-04-01
The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmermann. A relation to the alternative approach to renormalization theory using Hopf algebras is established. (orig.)
Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on electrical boundary conditions. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher-order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular-grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, GT stands for ground truth, TV for the total variation method, and TGV for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also shown.
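To illustrate the TV term whose staircase behaviour the abstract contrasts with TGV, here is a toy 1-D TV denoising by subgradient descent. The signal, weights, and step sizes are invented; the paper's FEM-based EIT reconstruction is far more involved.

```python
def tv_denoise(y, lam=0.1, step=0.05, iters=2000):
    """Minimize sum((x-y)^2) + lam * sum(|x[i+1]-x[i]|) by subgradient descent."""
    x = list(y)
    n = len(x)
    for _ in range(iters):
        g = [2.0 * (x[i] - y[i]) for i in range(n)]     # data-fit gradient
        for i in range(n - 1):                          # TV subgradient
            s = (x[i + 1] > x[i]) - (x[i + 1] < x[i])   # sign of the jump
            g[i] -= lam * s
            g[i + 1] += lam * s
        x = [x[i] - step * g[i] for i in range(n)]
    return x

noisy = [0.10, -0.05, 0.02, 1.10, 0.95, 1.02]   # noisy step signal
smooth = tv_denoise(noisy)   # flattens small wiggles but keeps the big jump
```

The TV penalty flattens the small fluctuations within each plateau while preserving the large step, which is exactly the piecewise-constant ("staircase") preference that TGV is designed to relax.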
DEFF Research Database (Denmark)
Nguyen, Thong Tien
2011-01-01
Experimental designs are required in widely used techniques in marketing research, especially for preference-based conjoint analysis and discrete-choice studies. Ideally, marketing researchers prefer orthogonal designs because this technique gives uncorrelated parameter estimates. However, an orthogonal design is not available for every situation. Instead, an efficient design based on a computerized design algorithm is always available. This paper presents a method of efficient design for estimating brand models having attribute and availability cross effects. The paper gives a framework for implementing designs that is efficient enough to estimate a model with N brands, where each brand has K attributes and each brand attribute has specific levels. The paper also illustrates an example from a food consumption study.
Clinical findings on in utero exposed microcephalic children
Energy Technology Data Exchange (ETDEWEB)
Tabuchi, Akira; Hirai, Tsuyoshi; Nakagawa, Shigeru; Shimada, Katsunobu; Fujito, Junro
1966-12-24
Since animal experiments have shown that microcephaly is induced by fetal exposure to radiation and microcephaly has been found in children of mothers exposed to x-ray therapy during pregnancy (Murphy et al), the main cause of microcephaly in children exposed in utero to the A-bomb is considered to be ionizing radiation. Wood et al reported the increased incidence of microcephaly and mental retardation in children exposed in utero at proximal distances which they felt could not be attributed to any other known variable. ABCC has recently concluded that the effect of in utero exposure is primarily due to the immediate effect of radiation upon the fetuses although in A-bomb exposure the physical injury to the mother due to the A-bomb cannot be completely ignored. Our survey likewise revealed an increase of microcephaly in children exposed early in pregnancy at less than 15 weeks at closer distances than 1500 m. Thus, we presume that A-bomb radiation increases the incidence of microcephaly. 16 references, 8 tables.
Main attributes influencing spent nuclear fuel management
International Nuclear Information System (INIS)
Andreescu, N.; Ohai, D.
1997-01-01
All activities regarding nuclear fuel following its discharge from the NPP constitute spent fuel management and are grouped into two possible back-end variants, namely reprocessing (including HLW vitrification and geological disposal) and direct disposal of spent fuel. In order to select the appropriate variant it is necessary to analyse the aggregate fulfillment of the imposed requirements, particularly of the derived attributes, defined as distinguishing characteristics of the factors used in the decision-making process. The main identified attributes are the following: - environmental impact, - availability of suitable sites, - non-proliferation degree, - energy strategy, - technological complexity and technical maturity, - possible further technical improvements, - size of nuclear programme, - total costs, - public acceptance, - peculiarity of CANDU fuel. The significance of the attributes in the Romanian case is presented, taking into consideration the present situation as a low scenario and a high scenario corresponding to an important development of nuclear power after the year 2010. According to their importance, a ranking of the attributes is proposed. Subsequently, the ranking could be used for adequate weighting of attributes in order to perform a multi-criteria analysis and a relevant comparison of back-end variants. (authors)
Extreme Weather Events and Climate Change Attribution
Energy Technology Data Exchange (ETDEWEB)
Thomas, Katherine [National Academy of Sciences, Washington, DC (United States)
2016-03-31
A report from the National Academies of Sciences, Engineering, and Medicine concludes it is now possible to estimate the influence of climate change on some types of extreme events. The science of extreme event attribution has advanced rapidly in recent years, giving new insight into the ways that human-caused climate change can influence the magnitude or frequency of some extreme weather events. This report examines the current state of the science of extreme weather attribution, and identifies ways to move the science forward to improve attribution capabilities. Confidence is strongest in attributing types of extreme events that are influenced by climate change through a well-understood physical mechanism, such as the more frequent heat waves that are closely connected to human-caused global temperature increases, the report finds. Confidence is lower for other types of events, such as hurricanes, whose relationship to climate change is more complex and less understood at present. For any extreme event, the results of attribution studies hinge on how questions about the event's causes are posed, and on the data, modeling approaches, and statistical tools chosen for the analysis.
Attributability of health effects at low radiation doses
International Nuclear Information System (INIS)
Gonzalez, Abel
2008-01-01
A controversy still persists on whether health effects can be alleged from radiation exposure situations involving low radiation doses (e.g. below the international dose limits for the public). Arguments have evolved around the validity of the dose-response representation that is internationally used for radiation protection purposes, namely the so-called linear-non-threshold (LNT) model. The debate has been masked by the intrinsic randomness of radiation interaction at the cellular level and also by gaps in the relevant scientific knowledge on the development and expression of health effects. There has also been vague use, abuse, and misuse of radiation-related risk concepts and quantities and their associated uncertainties. As a result, there is some ambiguity in the interpretation of the phenomena and a general lack of awareness of the implications for a number of risk-causation qualities, namely their attributes and characteristics. In particular, the LNT model has been used not only for protection purposes but also for blindly attributing actual effects to specific exposure situations. The latter has been discouraged as a misuse of the model, but the supposed incorrectness has not been clearly proven. The paper will endeavour to demonstrate unambiguously the following thesis in relation to health effects due to low radiation doses: 1) Their existence is highly plausible. A number of epidemiological statistical assessments of sufficiently large exposed populations show that, under certain conditions, the prevalence of the effects increases with dose. From these assessments, it can be hypothesized that the occurrence of the effects at any dose, however small, appears decidedly worthy of belief, although strictly the evidence does not allow one to conclude that a threshold dose level does not exist either. In fact, a formal quantitative uncertainty analysis, combining the different uncertain components of estimated radiation-related risk, with and
Regular Breakfast and Blood Lead Levels among Preschool Children
Directory of Open Access Journals (Sweden)
Needleman Herbert
2011-04-01
Full Text Available Abstract Background Previous studies have shown that fasting increases lead absorption in the gastrointestinal tract of adults. Regular meals/snacks are recommended as a nutritional intervention for lead poisoning in children, but epidemiological evidence of links between fasting and blood lead levels (B-Pb) is rare. The purpose of this study was to examine the association between eating a regular breakfast and B-Pb among children using data from the China Jintan Child Cohort Study. Methods Parents completed a questionnaire regarding children's breakfast-eating habit (regular or not), demographics, and food frequency. Whole blood samples were collected from 1,344 children for the measurement of B-Pb and micronutrients (iron, copper, zinc, calcium, and magnesium). B-Pb and other measures were compared between children with and without regular breakfast. Linear regression modeling was used to evaluate the association between regular breakfast and log-transformed B-Pb. The association between regular breakfast and risk of lead poisoning (B-Pb ≥ 10 μg/dL) was examined using logistic regression modeling. Results Median B-Pb among children who ate breakfast regularly and those who did not were 6.1 μg/dL and 7.2 μg/dL, respectively. Eating breakfast was also associated with greater zinc blood levels. Adjusting for other relevant factors, the linear regression model revealed that eating breakfast regularly was significantly associated with lower B-Pb (beta = -0.10 units of log-transformed B-Pb compared with children who did not eat breakfast regularly, p = 0.02). Conclusion The present study provides some initial human data supporting the notion that eating a regular breakfast might reduce B-Pb in young children. To our knowledge, this is the first human study exploring the association between breakfast frequency and B-Pb in young children.
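The analysis pipeline described in the abstract (log-transform B-Pb, then regress on a regular-breakfast indicator) can be sketched on synthetic data. This is a minimal illustration: the simulated effect size merely mimics the reported beta = -0.10 and is not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
breakfast = rng.integers(0, 2, n)  # 1 = eats breakfast regularly
# Synthetic blood lead (ug/dL): lognormal, slightly lower for breakfast eaters.
b_pb = np.exp(rng.normal(1.9 - 0.10 * breakfast, 0.4))

# Linear regression of log-transformed B-Pb on the breakfast indicator.
X = np.column_stack([np.ones(n), breakfast])
beta = np.linalg.lstsq(X, np.log(b_pb), rcond=None)[0]
print(round(beta[1], 2))  # estimated effect, near the simulated -0.10
```

The coefficient on the indicator recovers the simulated negative breakfast effect on log B-Pb, which is how a beta of -0.10 units of log-transformed B-Pb would be read.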
Searchable attribute-based encryption scheme with attribute revocation in cloud storage.
Wang, Shangping; Zhao, Duqiao; Zhang, Yaling
2017-01-01
Attribute-based encryption (ABE) is a good way to achieve flexible and secure access control to data; attribute revocation is an extension of attribute-based encryption, and keyword search is an indispensable part of cloud storage. The combination of the two has an important application in cloud storage. In this paper, we construct a searchable attribute-based encryption scheme with attribute revocation in cloud storage. The keyword search in our scheme is attribute-based with access control: when the search succeeds, the cloud server returns the corresponding ciphertext to the user, who can then decrypt it. Besides, our scheme supports multiple-keyword search, which makes it more practical. Under the decisional bilinear Diffie-Hellman exponent (q-BDHE) and decisional Diffie-Hellman (DDH) assumptions in the selective security model, we prove that our scheme is secure.
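The paper's pairing-based construction needs a pairing library, but the core searchable-index idea (the server matches opaque keyword trapdoors without seeing keywords) can be sketched with a symmetric-key toy. This is a hedged simplification, not the ABE scheme itself; `EncryptedIndex` and `trapdoor` are illustrative names, and there is no access control here.

```python
import hashlib
import hmac
import os

def trapdoor(key: bytes, keyword: str) -> bytes:
    # Deterministic keyword token: the server learns only token equality,
    # never the keyword itself.
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

class EncryptedIndex:
    """Server-side index mapping keyword tokens to document ids."""
    def __init__(self):
        self.index = {}
    def add(self, token: bytes, doc_id: str) -> None:
        self.index.setdefault(token, []).append(doc_id)
    def search(self, token: bytes) -> list:
        return self.index.get(token, [])

key = os.urandom(32)          # data owner's secret key
idx = EncryptedIndex()
idx.add(trapdoor(key, "radiation"), "doc1")
idx.add(trapdoor(key, "regularization"), "doc2")
print(idx.search(trapdoor(key, "regularization")))  # ['doc2']
```

Multiple-keyword search, as in the abstract, would intersect the result lists of several trapdoors; revocation in the real scheme additionally invalidates keys tied to revoked attributes.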
Homogeneity of Prototypical Attributes in Soccer Teams
Directory of Open Access Journals (Sweden)
Christian Zepp
2015-09-01
Full Text Available Research indicates that the homogeneous perception of prototypical attributes influences several intragroup processes. The aim of the present study was to describe the homogeneous perception of the prototype and to identify specific prototypical subcategories which are perceived as homogeneous within sport teams. The sample consists of N = 20 soccer teams with a total of 278 athletes (age M = 23.5 years, SD = 5.0 years). The results reveal that subcategories describing the cohesiveness of the team and motivational attributes are mentioned homogeneously within sport teams. In addition, gender, identification, team size, and championship ranking significantly correlate with the homogeneous perception of prototypical attributes. The results are discussed on the basis of theoretical and practical implications.
Controlling attribute effect in linear regression
Calders, Toon; Karim, Asim A.; Kamiran, Faisal; Ali, Wasif Mohammad; Zhang, Xiangliang
2013-01-01
In data mining we often have to learn from biased data, because, for instance, data comes from different batches or there was a gender or racial bias in the collection of social data. In some applications it may be necessary to explicitly control this bias in the models we learn from the data. This paper is the first to study learning linear regression models under constraints that control the biasing effect of a given attribute such as gender or batch number. We show how propensity modeling can be used for factoring out the part of the bias that can be justified by externally provided explanatory attributes. Then we analytically derive linear models that minimize squared error while controlling the bias by imposing constraints on the mean outcome or residuals of the models. Experiments with discrimination-aware crime prediction and batch effect normalization tasks show that the proposed techniques are successful in controlling attribute effects in linear regression models. © 2013 IEEE.
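The constraint on mean outcomes described in the abstract can be sketched as equality-constrained least squares solved through its KKT system. This is a minimal sketch under assumed synthetic data; it omits the authors' propensity-modeling step, and all variable names are illustrative.

```python
import numpy as np

def constrained_ls(X, y, c):
    # Minimize ||Xw - y||^2 subject to c @ w = 0, via the KKT system
    # [2X'X  c'; c  0] [w; mu] = [2X'y; 0].
    d = X.shape[1]
    K = np.zeros((d + 1, d + 1))
    K[:d, :d] = 2 * X.T @ X
    K[:d, d] = c
    K[d, :d] = c
    rhs = np.r_[2 * X.T @ y, 0.0]
    return np.linalg.solve(K, rhs)[:d]

rng = np.random.default_rng(3)
n = 200
a = rng.integers(0, 2, n).astype(float)  # controlled attribute (e.g. gender)
x = rng.standard_normal(n) + a           # feature correlated with it
X = np.column_stack([np.ones(n), x, a])
y = 2 * x + a + 0.1 * rng.standard_normal(n)

# Constraint vector: equal mean prediction for the two attribute groups.
c = X[a == 1].mean(axis=0) - X[a == 0].mean(axis=0)
w = constrained_ls(X, y, c)
pred = X @ w
print(pred[a == 1].mean() - pred[a == 0].mean())  # ~0 by construction
```

The fitted model still minimizes squared error, but the linear constraint removes the mean difference in predictions between the two values of the controlled attribute, which is the flavor of bias control the paper analyzes.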
Corporate apologia and the attribution of guilt
DEFF Research Database (Denmark)
Bülow-Møller, Anne Marie
This paper argues that in the difficult disciplines of crisis communication and image restoration, attribution theory has explanatory value. Corporate apologia - the explanations that an organisation offers after an attack - differs with the type of crisis it is designed to defuse, and if the crisis concerns legitimacy, the art is to shift the public attribution of guilt or responsibility. The case of Arla vs Hirtshals is used to demonstrate how a concerted effort in impression management succeeded in just such a shift.
Implicit learning of non-linguistic and linguistic regularities in children with dyslexia.
Nigro, Luciana; Jiménez-Fernández, Gracia; Simpson, Ian C; Defior, Sylvia
2016-07-01
One of the hallmarks of dyslexia is the failure to automatise written patterns despite repeated exposure to print. Although many explanations have been proposed to explain this problem, researchers have recently begun to explore the possibility that an underlying implicit learning deficit may play a role in dyslexia. This hypothesis has been investigated through non-linguistic tasks exploring implicit learning in a general domain. In this study, we examined the abilities of children with dyslexia to implicitly acquire positional regularities embedded in both non-linguistic and linguistic stimuli. In experiment 1, 42 children (21 with dyslexia and 21 typically developing) were exposed to rule-governed shape sequences; whereas in experiment 2, a new group of 42 children were exposed to rule-governed letter strings. Implicit learning was assessed in both experiments via a forced-choice task. Experiments 1 and 2 showed a similar pattern of results. ANOVA analyses revealed no significant differences between the dyslexic and the typically developing group, indicating that children with dyslexia are not impaired in the acquisition of simple positional regularities, regardless of the nature of the stimuli. However, within group t-tests suggested that children from the dyslexic group could not transfer the underlying positional rules to novel instances as efficiently as typically developing children.
Attribute And-Or Grammar for Joint Parsing of Human Pose, Parts and Attributes.
Park, Seyoung; Nie, Xiaohan; Zhu, Song-Chun
2017-07-25
This paper presents an attribute and-or grammar (A-AOG) model for jointly inferring human body pose and human attributes in a parse graph, with attributes augmented to nodes in the hierarchical representation. In contrast to other popular methods in the current literature that train separate classifiers for poses and individual attributes, our method explicitly represents the decomposition and articulation of body parts and accounts for the correlations between poses and attributes. The A-AOG model is an amalgamation of three traditional grammar formulations: (i) phrase structure grammar, representing the hierarchical decomposition of the human body from whole to parts; (ii) dependency grammar, modeling the geometric articulation by a kinematic graph of the body pose; and (iii) attribute grammar, accounting for the compatibility relations between different parts in the hierarchy so that their appearances follow a consistent style. The parse graph outputs human detection, pose estimation, and attribute prediction simultaneously, which is intuitive and interpretable. We conduct experiments on two tasks on two datasets, and the results demonstrate the advantage of joint modeling in comparison with computing poses and attributes independently. Furthermore, our model obtains better performance than existing methods on both pose estimation and attribute prediction tasks.
Application of L1/2 regularization logistic method in heart disease diagnosis.
Zhang, Bowen; Chai, Hua; Yang, Ziyi; Liang, Yong; Chu, Gejin; Liu, Xiaoying
2014-01-01
Heart disease has become a leading killer, and its diagnosis depends on many features, such as age, blood pressure, heart rate and dozens of other physiological indicators. Although there are many risk factors, doctors usually diagnose the disease from intuition and experience, so correct determination demands substantial knowledge and expertise. Mining the hidden medical information in existing clinical data is therefore a promising approach to heart disease diagnosis. In this paper, a sparse logistic regression method with L(1/2) regularization is introduced to detect the key risk factors in real heart disease data. Experimental results show that the sparse logistic L(1/2) regularization method selects fewer but more informative key features than the Lasso, SCAD, MCP and Elastic net regularization approaches. At the same time, the proposed method reduces computational complexity, saves the cost and time of medical tests and checkups, and reduces the number of attributes that need to be collected from patients.
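The abstract compares the L(1/2) penalty against Lasso and related penalties. As a hedged sketch, here is L1-penalized (Lasso-type) sparse logistic regression via proximal gradient descent on synthetic data; the L(1/2) method would substitute a half-thresholding operator for the soft-threshold below, and all parameter values here are illustrative.

```python
import numpy as np

def soft(z, t):
    # Soft-thresholding: the proximal operator of the L1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_logistic(X, y, lam=0.1, step=0.1, iters=2000):
    # Proximal-gradient (ISTA) iterations for L1-penalized logistic loss.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        grad = X.T @ (p - y) / n               # gradient of logistic loss
        w = soft(w - step * grad, step * lam)  # gradient step + L1 prox
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))  # 10 candidate risk factors
# Only the first two features actually drive the outcome.
y = (X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(200) > 0).astype(float)
w = sparse_logistic(X, y, lam=0.1)
print(np.nonzero(np.abs(w) > 1e-8)[0])  # indices of surviving features
```

The penalty zeroes out most uninformative coefficients, which is the "fewer but informative key features" behavior the paper reports for its sparser L(1/2) variant.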
Chimeric mitochondrial peptides from contiguous regular and swinger RNA.
Seligmann, Hervé
2016-01-01
Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying by 24 DNA's protein coding potential. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcriptions, detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1000). Among 186 peptides with > 8 residues for each regular and swinger parts, regular parts of eleven chimeric peptides correspond to six among the thirteen recognized, mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. Present results strengthen hypotheses that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.
Turán type inequalities for regular Coulomb wave functions
Baricz, Árpád
2015-01-01
Turán, Mitrinović-Adamović and Wilker type inequalities are deduced for regular Coulomb wave functions. The proofs are based on a Mittag-Leffler expansion for the regular Coulomb wave function, which may be of independent interest. Moreover, some complete monotonicity results concerning the Coulomb zeta functions and some interlacing properties of the zeros of Coulomb wave functions are given.
Regularization and Complexity Control in Feed-forward Networks
Bishop, C. M.
1995-01-01
In this paper we consider four alternative approaches to complexity control in feed-forward networks based respectively on architecture selection, regularization, early stopping, and training with noise. We show that there are close similarities between these approaches and we argue that, for most practical applications, the technique of regularization should be the method of choice.
Optimal Embeddings of Distance Regular Graphs into Euclidean Spaces
F. Vallentin (Frank)
2008-01-01
In this paper we give a lower bound for the least distortion embedding of a distance regular graph into Euclidean space. We use the lower bound for finding the least distortion for Hamming graphs, Johnson graphs, and all strongly regular graphs. Our technique involves semidefinite
Degree-regular triangulations of torus and Klein bottle
Indian Academy of Sciences (India)
Proceedings – Mathematical Sciences, Volume 115, Issue 3. A triangulation of a connected closed surface is called degree-regular if each of its vertices has the same degree. ... In [5], Datta and Nilakantan have classified all the degree-regular triangulations of closed surfaces on at most 11 vertices.
Adaptive Regularization of Neural Networks Using Conjugate Gradient
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
1998-01-01
Andersen et al. (1997) and Larsen et al. (1996, 1997) suggested a regularization scheme which iteratively adapts regularization parameters by minimizing validation error using simple gradient descent. In this contribution we present an improved algorithm based on the conjugate gradient technique. Numerical experiments with feedforward neural networks successfully demonstrate improved generalization ability and lower computational cost.
Strictly-regular number system and data structures
DEFF Research Database (Denmark)
Elmasry, Amr Ahmed Abd Elmoneim; Jensen, Claus; Katajainen, Jyrki
2010-01-01
We introduce a new number system that we call the strictly-regular system, which efficiently supports the operations: digit-increment, digit-decrement, cut, concatenate, and add. Compared to other number systems, the strictly-regular system has distinguishable properties. It is superior to the re...
Inclusion Professional Development Model and Regular Middle School Educators
Royster, Otelia; Reglin, Gary L.; Losike-Sedimo, Nonofo
2014-01-01
The purpose of this study was to determine the impact of a professional development model on regular education middle school teachers' knowledge of best practices for teaching inclusive classes and attitudes toward teaching these classes. There were 19 regular education teachers who taught the core subjects. Findings for Research Question 1…
The equivalence problem for LL- and LR-regular grammars
Nijholt, Antinus; Gecsec, F.
It will be shown that the equivalence problem for LL-regular grammars is decidable. Apart from extending the known result for LL(k) grammar equivalence to LL-regular grammar equivalence, we obtain an alternative proof of the decidability of LL(k) equivalence. The equivalence problem for LL-regular
The Effects of Regular Exercise on the Physical Fitness Levels
Kirandi, Ozlem
2016-01-01
The purpose of the present research is to investigate the effects of regular exercise on the physical fitness levels of sedentary individuals. A total of 65 sedentary males between the ages of 19-45, who had never exercised regularly in their lives, participated in the present research. Of these participants, 35 wanted to be…
Regular perturbations in a vector space with indefinite metric
International Nuclear Information System (INIS)
Chiang, C.C.
1975-08-01
The Klein space is discussed in connection with practical applications. Some lemmas are presented which are to be used for the discussion of regular self-adjoint operators. The criteria for the regularity of perturbed operators are given. (U.S.)
Pairing renormalization and regularization within the local density approximation
International Nuclear Information System (INIS)
Borycki, P.J.; Dobaczewski, J.; Nazarewicz, W.; Stoitsov, M.V.
2006-01-01
We discuss methods used in mean-field theories to treat pairing correlations within the local density approximation. Pairing renormalization and regularization procedures are compared in spherical and deformed nuclei. Both prescriptions give fairly similar results, although the theoretical motivation, simplicity, and stability of the regularization procedure make it a method of choice for future applications
Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears
Chen, Sau-Chin; Hu, Jon-Fan
2015-01-01
Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…
Regularity conditions of the field on a toroidal magnetic surface
International Nuclear Information System (INIS)
Bouligand, M.
1985-06-01
We show that a field B vector which is derived from an analytic canonical potential on an ordinary toroidal surface is regular on this surface when the potential satisfies an elliptic equation (owing to the conservative field) subject to certain regularity conditions on its coefficients.
47 CFR 76.614 - Cable television system regular monitoring.
2010-10-01
...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... in these bands of 20 uV/m or greater at a distance of 3 meters. During regular monitoring, any leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in the...
Analysis of regularized Navier-Stokes equations, 2
Ou, Yuh-Roung; Sritharan, S. S.
1989-01-01
A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions was studied. Local as well as global invariant manifolds were found, and the regularity properties of these manifolds are analyzed.
20 CFR 226.33 - Spouse regular annuity rate.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Spouse regular annuity rate. 226.33 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.33 Spouse regular annuity rate. The final tier I and tier II rates, from §§ 226.30 and 226.32, are...
Attribute correlates of hospital outpatient satisfaction.
Krueckeberg, H F; Hubbert, A
1995-01-01
Customer satisfaction (patient satisfaction) with hospital outpatient or ambulatory services is an important factor in influencing patient patronage and loyalty. Based on an empirical study, this article examines the attributes of the ambulatory care experience which were significantly associated with the level of satisfaction resulting from the most recent hospital ambulatory visit. This study focuses on identifying attributes of ambulatory services. This article brings to the health care marketing literature information on ambulatory satisfaction comparable to that which has been contributed to the literature regarding satisfaction with physician and hospital experiences.
Advances in treating exposed fractures.
Nogueira Giglio, Pedro; Fogaça Cristante, Alexandre; Ricardo Pécora, José; Partezani Helito, Camilo; Lei Munhoz Lima, Ana Lucia; Dos Santos Silva, Jorge
2015-01-01
The management of exposed fractures has been discussed since ancient times and remains of great interest to present-day orthopedics and traumatology. These injuries are still a challenge. Infection and nonunion are feared complications. Aspects of the diagnosis, classification and initial management are discussed here. Early administration of antibiotics, surgical cleaning and meticulous debridement are essential. The systemic conditions of patients with multiple trauma and the local conditions of the limb affected need to be taken into consideration. Early skeletal stabilization is necessary. Definitive fixation should be considered when possible and provisional fixation methods should be used when necessary. Early closure should be the aim, and flaps can be used for this purpose.
Consistent Partial Least Squares Path Modeling via Regularization.
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research for its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
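The ridge-type regularization the abstract incorporates into PLSc can be illustrated on ordinary regression with two nearly collinear predictors. This is a sketch of the ridge idea only, not the PLSc algorithm; the data are synthetic.

```python
import numpy as np

def ridge(X, y, lam):
    # Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
n = 100
z = rng.standard_normal(n)
# Two nearly collinear predictors: multicollinearity inflates OLS variance.
X = np.column_stack([z, z + 0.01 * rng.standard_normal(n)])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.standard_normal(n)

w_ols = np.linalg.lstsq(X, y, rcond=None)[0]
w_ridge = ridge(X, y, lam=1.0)
# Ridge shrinks along the ill-determined direction, stabilizing the estimate.
print(np.linalg.norm(w_ols), np.linalg.norm(w_ridge))
```

Because ridge scales every singular direction of `X` by a factor below one, the regularized coefficient vector always has a smaller norm than the OLS solution, which is the variance-for-bias trade that rescues estimation under serious multicollinearity.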
Optimal behaviour can violate the principle of regularity.
Trimmer, Pete C
2013-07-22
Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled as irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory--based on axioms, including transitivity, regularity and the independence of irrelevant alternatives--is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision.
Regularized Regression and Density Estimation based on Optimal Transport
Burger, M.
2012-03-11
The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).
Laplacian manifold regularization method for fluorescence molecular tomography
He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei
2017-04-01
Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods exploit the sparsity of the target distribution. However, in addition to sparsity, spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient-projection-resolved Laplacian manifold regularization method for the joint model performed better than a comparative ℓ1-minimization method in both spatial aggregation and location accuracy.
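A joint ℓ1-plus-Laplacian model of the kind the abstract describes can be sketched as proximal gradient descent on 0.5||Ax-b||² + λ₂xᵀLx + λ₁||x||₁: the quadratic Laplacian term is smooth and joins the gradient step, while the ℓ1 term is handled by soft-thresholding. This is a toy 1-D sketch with an assumed chain-graph Laplacian, not the authors' Barzilai-Borwein FMT solver.

```python
import numpy as np

def soft(z, t):
    # Soft-thresholding: proximal operator of the L1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_laplacian_solve(A, b, Lap, lam1=0.05, lam2=0.1, iters=3000):
    # Proximal gradient for 0.5||Ax-b||^2 + lam2*x'Lap x + lam1*||x||_1.
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 2 * lam2 * np.linalg.norm(Lap, 2))
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + 2 * lam2 * (Lap @ x)  # smooth part
        x = soft(x - step * grad, step * lam1)           # L1 prox step
    return x

rng = np.random.default_rng(2)
n = 40
A = rng.standard_normal((25, n))  # underdetermined forward model
x_true = np.zeros(n)
x_true[10:13] = 1.0               # small contiguous "inclusion"
b = A @ x_true
# Chain-graph Laplacian: penalizes differences between neighboring entries.
Lap = np.diag(np.r_[1.0, 2.0 * np.ones(n - 2), 1.0])
Lap -= np.eye(n, k=1) + np.eye(n, k=-1)
x = l1_laplacian_solve(A, b, Lap)
print(np.count_nonzero(np.abs(x) > 1e-3))
```

The ℓ1 term keeps the reconstruction sparse while the Laplacian term favors spatially aggregated (smooth) support, mirroring the "spatial aggregation" benefit reported in the abstract.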
International Nuclear Information System (INIS)
Stevenson, A.P.; Tobey, R.A.
1985-01-01
Potassium ion influx was measured by monitoring 42KCl uptake by Chinese hamster ovary (CHO) cells grown in suspension culture and exposed in the culture medium to 60-Hz electromagnetic fields up to 2.85 V/m. In the presence of the field CHO cells exhibited two components of uptake, the same as previously observed for those grown under normal conditions; both these components of influx were decreased when compared to sham-exposed cells. Although decreases were consistently observed in exposed cells when plotted as log_e of uptake, the differences between the means of the calculated fluxes of exposed and sham-exposed cells were quite small (on the order of 4-7%). When standard deviations were calculated, there was no significant difference between these means; however, when time-paired uptake data were analyzed, the differences were found to be statistically significant. Cells exposed only to the magnetic field exhibited similar small decreases in influx rates when compared to sham-exposed cells, suggesting that the reduction in K+ uptake could be attributed to the magnetic field. Additionally, intracellular K+ levels were measured over a prolonged exposure period (96 h), and no apparent differences in intracellular K+ levels were observed between field-exposed and sham-exposed cultures. These results indicate that high-strength electric fields have a small effect on the rate of transport of potassium ions but no effect on long-term maintenance of intracellular K+
Growth and reproductive attributes of radionuclide phytoremediators ...
African Journals Online (AJOL)
The study reveals that growth attributes including relative growth rate, net assimilation rate, leaf area index and specific leaf area, dry matter allocated to stem and leaves, and number of reproductive organs decreased with the increase of radionuclide content of the plant, while the dry matter allocated to root and reproductive ...
Development of the Attributed Dignity Scale.
Jacelon, Cynthia S; Dixon, Jane; Knafl, Kathleen A
2009-07-01
A sequential, multi-method approach to instrument development beginning with concept analysis, followed by (a) item generation from qualitative data, (b) review of items by expert and lay person panels, (c) cognitive appraisal interviews, (d) pilot testing, and (e) evaluating construct validity was used to develop a measure of attributed dignity in older adults. The resulting positively scored, 23-item scale has three dimensions: Self-Value, Behavioral Respect-Self, and Behavioral Respect-Others. Item-total correlations in the pilot study ranged from 0.39 to 0.85. Correlations between the Attributed Dignity Scale (ADS) and both Rosenberg's Self-Esteem Scale (0.17) and Crowne and Marlowe's Social Desirability Scale (0.36) were modest and in the expected direction, indicating attributed dignity is a related but independent concept. Next steps include testing the ADS with a larger sample to complete factor analysis, test-retest stability, and further study of the relationships between attributed dignity and other concepts.
On defining semantics of extended attribute grammars
DEFF Research Database (Denmark)
Madsen, Ole Lehrmann
1980-01-01
Knuth has introduced attribute grammars (AGs) as a tool to define the semantics of context-free languages. The use of AGs in connection with programming language definitions has mostly been to define the context-sensitive syntax of the language and to define a translation into code for a hypothetic...
Parent-Child Agreement on Attributional Beliefs.
Cashmore, Judith A.; Goodnow, Jacqueline J.
1986-01-01
Explores extent to which parents and their adolescent children agree with respect to their attributional beliefs. First-born Australian children of Anglo and Italian backgrounds and their parents ranked talent, effort, and teaching according to relative importance in the development of six skill areas. Variations in patterns of attributions…
Consumer preferences for pork supply chain attributes
Meuwissen, M.P.M.; Lans, van der I.A.; Huirne, R.B.M.
2007-01-01
Based on an extensive customized conjoint analysis with 24 attributes of pork production, covering issues from feed to fork, we identified six consumer segments: ecologists (17%), tradition-minded consumers (17%), animal friends (16%), health-concerned consumers (18%), economists (12%) and
Source attribution of human campylobacteriosis in Denmark.
Boysen, L; Rosenquist, H; Larsson, J T; Nielsen, E M; Sørensen, G; Nordentoft, S; Hald, T
2014-08-01
This study assesses the contribution of different sources of human campylobacteriosis in Denmark using two different source-attribution approaches. In total, 794 non-human isolates and 406 isolates from human cases (domestic, travel related, and cases with unknown travel history) were collected. Isolates were characterized by multilocus sequence typing, flaA typing and susceptibility to antibiotics. Both models used indicate that the major burden of human campylobacteriosis in Denmark originates from the domestic broiler chicken reservoir. The second most important reservoir was found to be cattle. The Asymmetric Island model attributed 52% [95% credibility interval (CrI) 37-67] to Danish chicken, 17% (95% CrI 3-33) to imported chicken, and 17% (95% CrI 7-28) to cattle. Similarly, the Campylobacter source-attribution model apportioned 38% (95% CrI 28-47) to Danish chicken, 14% (95% CrI 10-18) to imported chicken, and 16% (95% CrI 7-25) to cattle. The addition of flaA type as an extra discriminatory typing parameter did not change the attribution of cases markedly.
Evaluating Biophysical Attributes of Environmentally Degraded ...
African Journals Online (AJOL)
Ethiopian Journal of Environmental Studies and Management Vol. 4, No. 1 (2011). … land cover types and other physical attributes (soils and landform) … Natural water bodies (rivers) … permanent or ephemeral rivers … evaluating land use/land cover change using participatory …
Predictive user modeling with actionable attributes
Zliobaite, I.; Pechenizkiy, M.
2013-01-01
Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In traditional predictive modeling, instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target
Attributional and consequential LCA of milk production
Thomassen, M.A.; Dalgaard, P.; Heijungs, R.; Boer, de I.J.M.
2008-01-01
Background, aim and scope Different ways of performing a life cycle assessment (LCA) are used to assess the environmental burden of milk production. A strong connection exists between the choice between attributional LCA (ALCA) and consequential LCA (CLCA) and the choice of how to handle
Attributional and consequential LCA of milk production
Thomassen, Marlies A.; Dalgaard, Randi; Heijungs, Reinout; De Boer, Imke
Background, aim and scope: Different ways of performing a life cycle assessment (LCA) are used to assess the environmental burden of milk production. A strong connection exists between the choice between attributional LCA (ALCA) and consequential LCA (CLCA) and the choice of how to handle
Beyond OCR: Handwritten manuscript attribute understanding
He, Sheng
2017-01-01
Knowing the author, date and location of handwritten historical documents is very important for historians to completely understand and reveal the valuable information they contain. In this thesis, three attributes, namely writer, date and geographical location, are studied by analyzing the
The Personal Attributes Questionnaire: A Conceptual Analysis.
Ozer, Daniel
The rich complexity of the concepts of masculinity and femininity has been reflected in personality measures in at least two different ways: by employing a variety of subscales with comparatively homogeneous items or by using a single scale with comparatively heterogeneous items. The Personal Attributes Questionnaire (PAQ) was the subject of an…
FAT-miner: mining frequent attribute trees
Knijf, de J.; Cho, Y.; Wainwright, R.L.; Haddad, H.; Shin, S.Y.; Koo, Y.W.
2007-01-01
Data that can conceptually be viewed as tree structures abounds in domains such as bio-informatics, web logs, XML databases and multi-relational databases. Besides structural information such as nodes and edges, tree structured data also often contains attributes, that represent properties of nodes.
Memory for Recently Accessed Visual Attributes
Jiang, Yuhong V.; Shupe, Joshua M.; Swallow, Khena M.; Tan, Deborah H.
2016-01-01
Recent reports have suggested that the attended features of an item may be rapidly forgotten once they are no longer relevant for an ongoing task (attribute amnesia). This finding relies on a surprise memory procedure that places high demands on declarative memory. We used intertrial priming to examine whether the representation of an item's…
Distress attributed to negative symptoms in schizophrenia
Selten, JP; Wiersma, D; van den Bosch, RJ
2000-01-01
The purpose of the study was to examine (1) to which negative symptoms schizophrenia patients attribute distress and (2) whether clinical variables can predict the levels of reported distress. With the help of a research assistant, 86 hospitalized patients completed a self-rating scale for negative
Attribution of Negative Intention in Williams Syndrome
Godbee, Kali; Porter, Melanie A.
2013-01-01
People with Williams syndrome (WS) are said to have sociable and extremely trusting personalities, approaching strangers without hesitation. This study investigated whether people with WS are less likely than controls to attribute negative intent to others when interpreting a series of ambiguous pictures. This may, at least partially, explain…
Discriminative power of visual attributes in dermatology
Giotis, Ioannis; Visser, Margaretha; Jonkman, Marcel; Petkov, Nicolai
Background/purpose: Visual characteristics such as color and shape of skin lesions play an important role in the diagnostic process. In this contribution, we quantify the discriminative power of such attributes using an information theoretical approach. Methods: We estimate the probability of
Identifying Key Attributes for Protein Beverages.
Oltman, A E; Lopetcharat, K; Bastian, E; Drake, M A
2015-06-01
This study identified key attributes of protein beverages and evaluated effects of priming on liking of protein beverages. An adaptive choice-based conjoint study was conducted along with Kano analysis to gain insight on protein beverage consumers (n = 432). Attributes evaluated included label claim, protein type, amount of protein, carbohydrates, sweeteners, and metabolic benefits. Utility scores for levels and importance scores for attributes were determined. Subsequently, two pairs of clear acidic whey protein beverages were manufactured that differed by age of protein source or the amount of whey protein per serving. Beverages were evaluated by 151 consumers on two occasions with or without priming statements. One priming statement declared "great flavor," the other priming statement declared 20 g protein per serving. A two-way analysis of variance was applied to discern the role of each priming statement. The most important attribute for protein beverages was sweetener type, followed by amount of protein, type of protein, and label claim. Beverages with whey protein, naturally sweetened, reduced sugar and ≥15 g protein per serving were most desired. Three consumer clusters were identified, differentiated by their preferences for protein type, sweetener and amount of protein. Priming statements positively impacted concept liking (P < 0.05). Consistent with trained panel profiles of increased cardboard flavor with higher protein content, consumers liked beverages with 10 g protein more than beverages with 20 g protein (6.8 compared with 5.7, P < 0.05) … appeal. © 2015 Institute of Food Technologists®
Flexible goal attribution in early mindreading.
Michael, John; Christensen, Wayne
2016-03-01
The 2-systems theory developed by Apperly and Butterfill (2009; Butterfill & Apperly, 2013) is an influential approach to explaining the success of infants and young children on implicit false-belief tasks. There is extensive empirical and theoretical work examining many aspects of this theory, but little attention has been paid to the way in which it characterizes goal attribution. We argue here that this aspect of the theory is inadequate. Butterfill and Apperly's characterization of goal attribution is designed to show how goals could be ascribed by infants without representing them as related to other psychological states, and the minimal mindreading system is supposed to operate without employing flexible semantic-executive cognitive processes. But research on infant goal attribution reveals that infants exhibit a high degree of situational awareness that is strongly suggestive of flexible semantic-executive cognitive processing, and infants appear moreover to be sensitive to interrelations between goals, preferences, and beliefs. Further, close attention to the structure of implicit mindreading tasks--for which the theory was specifically designed--indicates that flexible goal attribution is required to succeed. We conclude by suggesting 2 approaches to resolving these problems. (c) 2016 APA, all rights reserved).
Credit in Acceptance Sampling on Attributes
Klaassen, Chris A.J.
2000-01-01
Credit is introduced in acceptance sampling on attributes and a Credit Based Acceptance sampling system is developed that is very easy to apply in practice. The credit of a producer is defined as the total number of items accepted since the last rejection. In our sampling system the sample size for a
Defining Hardwood Veneer Log Quality Attributes
Jan Wiedenbeck; Michael Wiemann; Delton Alderman; John Baumgras; William Luppold
2004-01-01
This publication provides a broad spectrum of information on the hardwood veneer industry in North America. Veneer manufacturers and their customers impose guidelines in specifying wood quality attributes that are very discriminating but poorly defined (e.g., exceptional color, texture, and/or figure characteristics). To better understand and begin to define the most...
Personal attributes influencing school burnout among graduating ...
African Journals Online (AJOL)
Questionnaires administered on participants contained scales that measured school burnout, academic self-efficacy, perception of teacher support, sex and age. The study predicted that personal attributes and demographics will significantly influence school burnout. The hypothesis was confirmed as predicted as result ...
A keyword searchable attribute-based encryption scheme with attribute update for cloud storage.
Wang, Shangping; Ye, Jian; Zhang, Yaling
2018-01-01
Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data-encryption primitive that is very well suited to cloud data storage because of its fine-grained access control. Keyword-based searchable encryption enables users to quickly find interesting data stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword-searchable attribute-based encryption scheme with attribute update for cloud storage, which combines an attribute-based encryption scheme with a keyword-searchable encryption scheme. The new scheme supports user attribute updates; in particular, when a user's attribute needs to be updated, only that user's secret-key components related to the attribute must be updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource the operations with high computational cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven semantically secure against chosen-ciphertext-policy and chosen-plaintext attacks in the general bilinear group model, and semantically secure against chosen-keyword attacks under the bilinear Diffie-Hellman (BDH) assumption.
Objective Self Awareness, Self-Esteem and Causal Attributions for Success and Failure.
The investigation has applied the theory of objective self-awareness (Duval and Wicklund, 1972) to the study of causal attributions that actors make for their past performance. A 2 (male vs. female subject) by 2 (success vs. failure) by 3 (objective self-awareness vs. control vs. time control) by 2 (high vs. low self-esteem subjects) design was employed. Objective self-awareness was manipulated by exposing subjects to their image on a wall…
Increased gluconeogenesis in rats exposed to hyper-G stress
International Nuclear Information System (INIS)
Daligcon, B.C.; Oyama, J.; Hannak, K.
1985-01-01
The role of gluconeogenesis on the increase in plasma glucose and liver glycogen of rats exposed to hyper-G (radial acceleration) stress was determined. Overnight-fasted, male Sprague-Dawley rats (250-300 g) were injected i.p. with uniformly labeled ¹⁴C lactate, alanine, or glycerol (5 μCi/rat) and immediately exposed to 3.1 G for 0.25, 0.50, and 1.0 hr. ¹⁴C incorporation of the labeled substrates into plasma glucose and liver glycogen was measured and compared to noncentrifuged control rats injected in a similar manner. Significant increases in ¹⁴C incorporation of all three labeled substrates into plasma glucose were observed in centrifuged rats at all exposure periods; ¹⁴C incorporation into liver glycogen was significantly increased only at 0.50 and 1.0 hr. The i.p. administration (5 mg/100-g body wt) of 5-methoxyindole-2-carboxylic acid, a potent gluconeogenesis inhibitor, prior to centrifugation blocked the increase in plasma glucose and liver glycogen during the first hour of centrifugation. The increase in plasma glucose and liver glycogen was also abolished in adrenodemedullated rats exposed to centrifugation for 1.0 hr. Propranolol, a beta-adrenergic blocker, suppressed the increase in plasma glucose of rats exposed to centrifugation for 0.25 hr. From the results of this study, it is concluded that the initial, rapid rise in plasma glucose as well as the increase in liver glycogen of rats exposed to hyper-G stress can be attributed to an increased rate of gluconeogenesis, and that epinephrine plays a dominant role during the early stages of exposure to centrifugation. 11 references, 3 tables
Hughes, W. Jay
Questionnaire data (n = 297) examined the relationship between gender attributions of science and academic attributes for undergraduate science, mathematics, and technology majors from the perspective of gender schema theory. Female and male respondents perceived that (a) the role of scientist was sex typed as masculine, (b) their majors were more valuable for members of their gender than for those of the opposite gender, (c) their majors were more valuable for themselves than for members of their gender in general. Androgynous attributions of scientists and the value of one's major for women predicted value for oneself, major confidence, and career confidence, and masculine attributions of scientists predicted class participation for female respondents. Feminine attributions of scientists predicted graduate school intent; value for women predicted major confidence and subjective achievement, and value for men predicted value for oneself, course confidence, and career confidence for male respondents.
Three regularities of recognition memory: the role of bias.
Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok
2015-12-01
A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
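The likelihood-ratio decision rule underlying these regularities can be sketched in a few lines. Assuming equal-variance Gaussian strength distributions with illustrative parameter values (not fitted to any of the surveyed data), the criterion placement and the resulting Mirror Effect follow directly:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rates(mu, beta=1.0, sigma=1.0):
    """Hit and false-alarm rates when 'old' is chosen iff the likelihood
    ratio of equal-variance Gaussians N(mu, sigma^2) vs N(0, sigma^2)
    exceeds the bias criterion beta."""
    # lambda(x) = beta  <=>  x = mu/2 + sigma^2 * ln(beta) / mu
    c = mu / 2.0 + sigma ** 2 * math.log(beta) / mu
    hit = 1.0 - norm_cdf((c - mu) / sigma)   # P(respond old | old item)
    fa = 1.0 - norm_cdf(c / sigma)           # P(respond old | new item)
    return hit, fa

hit_w, fa_w = rates(mu=1.0)   # weak (low-strength) condition
hit_s, fa_s = rates(mu=2.0)   # strong (high-strength) condition
print(hit_s > hit_w and fa_s < fa_w)  # prints True: the Mirror Effect
```

With beta = 1 the criterion sits at mu/2, so strengthening the old-item distribution simultaneously raises hits and lowers false alarms, which is the Mirror Effect; moving beta away from 1 shifts the criterion and, as the paper argues, can obscure that pattern.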
Learning regularization parameters for general-form Tikhonov
International Nuclear Information System (INIS)
Chung, Julianne; Español, Malena I
2017-01-01
Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
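For reference, the general-form Tikhonov problem the paper starts from can be solved directly by stacking the penalty into an augmented least-squares system. The snippet below is a minimal sketch of that baseline, not of the paper's learning approach for choosing the parameters; the operator, data, and lambda value are illustrative:

```python
import numpy as np

def tikhonov_general(A, b, L, lam):
    """Solve min_x ||A x - b||^2 + lam^2 ||L x||^2 by stacking the
    penalty into one least-squares system (a standard direct method)."""
    A_aug = np.vstack([A, lam * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Small smoothing example with a first-difference operator as the
# general-form regularizer L (all values here are illustrative).
n = 10
A = np.eye(n)                                  # identity "forward model"
rng = np.random.default_rng(1)
b = np.sin(np.linspace(0, np.pi, n)) + 0.1 * rng.standard_normal(n)
Lmat = np.diff(np.eye(n), axis=0)              # roughness penalty
x_smooth = tikhonov_general(A, b, Lmat, lam=2.0)
x_plain = tikhonov_general(A, b, Lmat, lam=0.0)  # reduces to plain lstsq
# The penalty guarantees ||L x_smooth|| <= ||L x_plain||:
print(np.linalg.norm(Lmat @ x_smooth) <= np.linalg.norm(Lmat @ x_plain))
```

The paper's contribution is learning `lam` (and multiple such parameters) from training data rather than picking it by hand as done here.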
Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.
Sun, Shiliang; Xie, Xijiong
2016-09-01
Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while large numbers of unlabeled examples are available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
Poernomo, Alvin; Kang, Dae-Ki
2018-08-01
Training a deep neural network with a large number of parameters often leads to overfitting. Recently, Dropout has been introduced as a simple yet effective regularization approach to combat overfitting in such models. Although Dropout has shown remarkable results in many deep neural network cases, its actual effect on CNNs has not been thoroughly explored. Moreover, training a Dropout model significantly increases training time, as it takes longer to converge than a non-Dropout model with the same architecture. To deal with these issues, we introduce Biased Dropout and Crossmap Dropout, two novel Dropout extensions based on the behavior of hidden units in CNN models. Biased Dropout divides the hidden units in a given layer into two groups based on their magnitude and applies a different Dropout rate to each group. Hidden units with higher activation values, which contribute more to the network's final performance, are retained with a lower Dropout rate, while units with lower activation values are exposed to a higher Dropout rate to compensate. The second approach, Crossmap Dropout, extends regular Dropout in the convolution layer. The feature maps in a convolution layer are strongly correlated with each other, particularly at identical pixel locations. Crossmap Dropout maintains this important correlation yet at the same time breaks the correlation between adjacent pixels with respect to all feature maps, by applying the same Dropout mask to every feature map, so that units at equivalent positions in each feature map are either all dropped or all active during training. Our experiments with various benchmark datasets show that our approaches provide better generalization than regular Dropout. Moreover, our Biased Dropout converges faster during the training phase, suggesting that assigning noise appropriately in…
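The Biased Dropout idea described above can be sketched in a few lines. This minimal version assumes a median split on activation magnitude and illustrative drop rates; the paper's exact grouping rule and rates may differ:

```python
import numpy as np

def biased_dropout(h, p_low=0.7, p_high=0.3, rng=None):
    """Sketch of Biased Dropout: units with above-median activation
    magnitude are dropped at a lower rate (p_high) than low-magnitude
    units (p_low). The median split and the rates are illustrative
    assumptions, not the paper's exact settings."""
    rng = np.random.default_rng() if rng is None else rng
    high = np.abs(h) >= np.median(np.abs(h))
    drop_rate = np.where(high, p_high, p_low)   # per-unit drop probability
    keep = rng.random(h.shape) >= drop_rate     # keep with prob 1 - rate
    return h * keep / (1.0 - drop_rate)         # inverted-dropout rescaling

h = np.array([0.1, 2.0, 0.05, 1.5, 0.2, 3.0])
out = biased_dropout(h, rng=np.random.default_rng(0))
```

The rescaling by `1 / (1 - drop_rate)` keeps each unit's expected activation unchanged, so no separate scaling is needed at test time.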
Shoulder injuries attributed to resistance training: a brief review.
Kolber, Morey J; Beekhuizen, Kristina S; Cheng, Ming-Shun S; Hellman, Madeleine A
2010-06-01
The popularity of resistance training (RT) is evident by the more than 45 million Americans who engage in strength training regularly. Although the health and fitness benefits ascribed to RT are generally agreed upon, participation is not without risk. Acute and chronic injuries attributed to RT have been cited in the epidemiological literature among both competitive and recreational participants. The shoulder complex in particular has been alluded to as one of the most prevalent regions of injury. The purpose of this manuscript is to present an overview of documented shoulder injuries among the RT population and where possible discern mechanisms of injury and risk factors. A literature search was conducted in the PUBMED, CINAHL, SPORTDiscus, and OVID databases to identify relevant articles for inclusion using combinations of key words: resistance training, shoulder, bodybuilding, weightlifting, shoulder injury, and shoulder disorder. The results of the review indicated that up to 36% of documented RT-related injuries and disorders occur at the shoulder complex. Trends that increased the likelihood of injury were identified and inclusive of intrinsic risk factors such as joint and muscle imbalances and extrinsic risk factors, namely, that of improper attention to exercise technique. A majority of the available research was retrospective in nature, consisting of surveys and descriptive epidemiological reports. A paucity of research was available to identify predictive variables leading to injury, suggesting the need for future prospective-based investigations.
Closedness type regularity conditions in convex optimization and beyond
Directory of Open Access Journals (Sweden)
Sorin-Mihai Grad
2016-09-01
The closedness type regularity conditions have proven during the last decade to be viable alternatives to their more restrictive interiority type counterparts, in both convex optimization and different areas where it was successfully applied. In this review article we de- and reconstruct some closedness type regularity conditions formulated by means of epigraphs and subdifferentials, respectively, for general optimization problems in order to stress that they arise naturally when dealing with such problems. The results are then specialized for constrained and unconstrained convex optimization problems. We also hint towards other classes of optimization problems where closedness type regularity conditions were successfully employed and discuss other possible applications of them.
Capped Lp approximations for the composite L0 regularization problem
Li, Qia; Zhang, Na
2017-01-01
The composite L0 function serves as a sparse regularizer in many applications. The algorithmic difficulty caused by the composite L0 regularization (the L0 norm composed with a linear mapping) is usually bypassed through approximating the L0 norm. We consider in this paper capped Lp approximations with $p>0$ for the composite L0 regularization problem. For each $p>0$, the capped Lp function converges to the L0 norm pointwisely as the approximation parameter tends to infinity. We point out tha...
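The construction is easy to state concretely. The following sketch uses one common parameterization of a capped Lp function, sum_i min(alpha |x_i|^p, 1), as an illustrative form (not necessarily the paper's exact one):

```python
import numpy as np

def capped_lp(x, p=0.5, alpha=10.0):
    """Capped Lp surrogate for ||x||_0: sum_i min(alpha * |x_i|^p, 1).
    As the approximation parameter alpha grows, each term tends to 1 for
    x_i != 0 and stays 0 for x_i == 0, so the sum converges pointwise to
    the number of nonzeros."""
    return float(np.minimum(alpha * np.abs(x) ** p, 1.0).sum())

x = np.array([0.0, 0.5, -2.0, 0.0, 1e-3])
print(capped_lp(x, alpha=1.0))    # loose surrogate, below the L0 value
print(capped_lp(x, alpha=1e12))   # prints 3.0, the L0 norm of x
```

The cap at 1 is what distinguishes this from a plain Lp penalty: large entries all contribute the same unit cost, mimicking the counting behavior of the L0 norm.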
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
Likelihood ratio decisions in memory: three implied regularities.
Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T
2009-06-01
We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
Fluctuations of quantum fields via zeta function regularization
International Nuclear Information System (INIS)
Cognola, Guido; Zerbini, Sergio; Elizalde, Emilio
2002-01-01
Explicit expressions for the expectation values and the variances of some observables, which are bilinear quantities in the quantum fields on a D-dimensional manifold, are derived making use of zeta function regularization. It is found that the variance, related to the second functional variation of the effective action, requires a further regularization and that the relative regularized variance turns out to be 2/N, where N is the number of the fields, thus being independent of the dimension D. Some illustrating examples are worked through. The issue of the stress tensor is also briefly addressed
Low-Rank Matrix Factorization With Adaptive Graph Regularizer.
Lu, Gui-Fu; Wang, Yong; Zou, Jian
2016-05-01
In this paper, we present a novel low-rank matrix factorization algorithm with adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms the state-of-the-art low-rank matrix factorization methods.
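The fixed-graph baseline that LMFAGR generalizes (MMF-style manifold regularization with a predefined affinity graph) can be sketched with plain gradient descent; the rank, step size, iteration count, and chain-graph affinity below are all illustrative choices:

```python
import numpy as np

def graph_regularized_mf(X, W, k=2, lam=0.1, lr=0.02, iters=2000, seed=0):
    """Minimal sketch of graph-regularized low-rank factorization:
    minimize ||X - U V^T||_F^2 / 2 + (lam/2) * tr(V^T L V) by gradient
    descent, where L is the Laplacian of a *fixed* affinity graph W
    (the predefined-graph setup the abstract contrasts with LMFAGR's
    adaptive graph)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((m, k))
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    for _ in range(iters):
        R = U @ V.T - X                     # reconstruction residual
        U -= lr * (R @ V)                   # gradient step in U
        V -= lr * (R.T @ U + lam * L @ V)   # gradient step in V
    return U, V

# Rank-1 toy data; a chain graph links adjacent columns.
X = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 1.0, 2.0])
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
U, V = graph_regularized_mf(X, W, k=1)
err = np.linalg.norm(X - U @ V.T)
```

LMFAGR's point is that `W` here is fixed in advance; its algorithm instead updates the graph weights jointly with `U` and `V`.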
Regularization theory for ill-posed problems: selected topics
Lu, Shuai
2013-01-01
This monograph is a valuable contribution to the highly topical and extremely productive field of regularisation methods for inverse and ill-posed problems. The author is an internationally outstanding and accepted mathematician in this field. In his book he offers a well-balanced mixture of basic and innovative aspects. He demonstrates new, differentiated viewpoints and important examples for applications. The book demonstrates the current developments in the field of regularization theory, such as multi-parameter regularization and regularization in learning theory. The book is written for graduate and PhD students
Concurrent computation of attribute filters on shared memory parallel machines
Wilkinson, Michael H.F.; Gao, Hui; Hesselink, Wim H.; Jonker, Jan-Eppo; Meijster, Arnold
2008-01-01
Morphological attribute filters have not previously been parallelized mainly because they are both global and nonseparable. We propose a parallel algorithm that achieves efficient parallelism for a large class of attribute filters, including attribute openings, closings, thinnings, and thickenings,
Chiou, Wen-Bin
2007-06-01
Besides flight safety, complaint handling plays a crucial role in airline service. Based upon Kelley's attribution theory, in the present study customers' attributions were examined under different conditions of complaint handling by the airlines. There were 531 passengers (216 women; ages 21 to 63 years, M = 41.5, SD = 11.1) with experiences of customer complaints who were recruited while awaiting boarding. Participants received one hypothetical scenario of three attributional conditions about complaint handling and then reported their attributional judgments. The findings indicated that the passengers were most likely to attribute the company's complaint handling to unconditional compliance when the airline company reacted to customer complaints under low distinctiveness, high consistency, and when consensus among the airlines was low. On the other hand, most passengers attributed the company's complaint handling to conditional compliance under the conditions in which distinctiveness, consistency, and consensus were all high. The results provide further insights into how different policies of complaint management affect customers' attributions. Future directions and managerial implications are also discussed.
Directory of Open Access Journals (Sweden)
Aman Faturachman
2013-04-01
This research aims to determine how auditors perceive the determinants of the audit fee in terms of Client Attributes, Auditor Attributes, and Engagement Attributes at public accountant firms in Bandung. The indicators used to characterize Client Attributes are size, complexity, inherent risk, profitability, leverage and liquidity, and industry; the indicators for Auditor Attributes are the auditor's specialization, audit tenure, and location; and the indicators for Engagement Attributes are audit problems, audit report lag, busy season, and number of reports. A descriptive method is used. The population in this research is public accountants in Bandung; saturated sampling yielded 11 qualifying public accountant offices. SmartPLS ver. 2.0 M3 was used for the statistical analysis. Based on the loading factors and the bootstrapping method, the results show: first, that by Client Attributes the audit-fee determinants ranked from most to least important are size, complexity, profitability, inherent risk, industry, and leverage and liquidity; second, that by Auditor Attributes the ranking is audit tenure, location, and specialization; and third, that by Engagement Attributes the ranking is audit report lag, busy season, audit problems, and number of reports.
Constraint-based Attribute and Interval Planning
Jonsson, Ari; Frank, Jeremy
2013-01-01
In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we define compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.
Transfer and use of generation attributes
International Nuclear Information System (INIS)
Jansen, J.C.
2005-06-01
Key issues regarding generation attribute accounting are being considered in the U.S., following similar discussions related to their use in Europe. Strict substantiation, environmental additionality and consistency criteria should be enshrined in nascent legislation being developed regarding claims over (electricity) generation attributes, and suitable standardization of disclosure labels should be mandated for electricity offerings to end users. In this way, the issue of multiple counting can be addressed effectively, consumer protection in the electricity market reliably ensured, and confidence in the integrity of green power products enhanced. For the time being, non-hydro renewable electricity cannot gain substantial market share without specific policy stimulation. Yet, both in Europe and in the U.S., credible facilitation of the consumer's choice in the electricity market is set to unleash considerable addition
Acquisition Footprint Attenuation Driven by Seismic Attributes
Directory of Open Access Journals (Sweden)
Cuellar-Urbano Mayra
2014-04-01
Full Text Available Acquisition footprint, one of the major problems that PEMEX faces in seismic imaging, is noise highly correlated to the geometric array of sources and receivers used for onshore and offshore seismic acquisitions. It prevails in spite of measures taken during acquisition and data processing. This pattern, throughout the image, is easily confused with geological features and misguides seismic attribute computation. In this work, we use seismic data from PEMEX Exploración y Producción to show the conditioning process for removing random and coherent noise using linear filters. Geometric attributes used in a workflow were computed for obtaining an acquisition footprint noise model and adaptively subtract it from the seismic data.
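The adaptive subtraction step at the end of this workflow can be illustrated with a single-scalar least-squares matching filter: scale the footprint noise model to best fit the data, then remove it. This is a minimal sketch, not PEMEX's production workflow (which would use windowed, multi-coefficient matching filters); the arrays are hypothetical.

```python
import numpy as np

def adaptive_subtract(data, noise_model):
    """Least-squares adaptive subtraction: find the scalar a minimizing
    ||data - a * noise_model||^2 and return the residual data - a * noise_model."""
    a = np.dot(data, noise_model) / np.dot(noise_model, noise_model)
    return data - a * noise_model
```

With a footprint model estimated from geometric attributes, the same residual idea would be applied trace by trace over local windows.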
Attributional and consequential LCA of milk production
DEFF Research Database (Denmark)
Thomassen, Marlies A; Dalgaard, Randi; Heijungs, Reinout
2008-01-01
Background, aim and scope Different ways of performing a life cycle assessment (LCA) are used to assess the environmental burden of milk production. A strong connection exists between the choice between attributional LCA (ALCA) and consequential LCA (CLCA) and the choice of how to handle co-products. Insight is needed into the effect of this choice on the results of environmental analyses of agricultural products, such as milk. The main goal of this study was to demonstrate and compare ALCA and CLCA of an average conventional milk production system in The Netherlands. Materials and methods ALCA describes the pollution and resource flows within a chosen system attributed to the delivery of a specified amount of the functional unit. CLCA estimates how pollution and resource flows within a system change in response to a change in output of the functional unit. For an average Dutch conventional milk production...
Quality-Attribute-Based Economic Valuation of Architectural Patterns
National Research Council Canada - National Science Library
Ozkaya, Ipek; Kazman, Rick; Klein, Mark
2007-01-01
... Architectural patterns can be used to achieve quality attribute requirements. Consequently, architectural patterns generate value based on the present and future utility of the quality attributes they achieve...
Group and Topic Discovery from Relations and Their Attributes
National Research Council Canada - National Science Library
Wang, Xuerui; Mohanty, Natasha; McCallum, Andrew
2006-01-01
The authors present a probabilistic generative model of entity relationships and their attributes that simultaneously discovers groups among the entities and topics among the corresponding textual attributes...
On the theory of drainage area for regular and non-regular points
Bonetti, S.; Bragg, A. D.; Porporato, A.
2018-03-01
The drainage area is an important, non-local property of a landscape, which controls surface and subsurface hydrological fluxes. Its role in numerous ecohydrological and geomorphological applications has given rise to several numerical methods for its computation. However, its theoretical analysis has lagged behind. Only recently, an analytical definition for the specific catchment area was proposed (Gallant & Hutchinson. 2011 Water Resour. Res. 47, W05535. (doi:10.1029/2009WR008540)), with the derivation of a differential equation whose validity is limited to regular points of the watershed. Here, we show that such a differential equation can be derived from a continuity equation (Chen et al. 2014 Geomorphology 219, 68-86. (doi:10.1016/j.geomorph.2014.04.037)) and extend the theory to critical and singular points both by applying Gauss's theorem and by means of a dynamical systems approach to define basins of attraction of local surface minima. Simple analytical examples as well as applications to more complex topographic surfaces are examined. The theoretical description of topographic features and properties, such as the drainage area, channel lines and watershed divides, can be broadly adopted to develop and test the numerical algorithms currently used in digital terrain analysis for the computation of the drainage area, as well as for the theoretical analysis of landscape evolution and stability.
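The numerical algorithms mentioned for computing the drainage area commonly route flow along steepest descent (the D8 scheme). The sketch below is a minimal illustration of that idea on a small elevation grid, assuming no flats or pits; it is not the paper's analytical theory.

```python
import numpy as np

def d8_drainage_area(z):
    """Drainage (contributing) area in cell units via D8 routing:
    each cell passes its accumulated area to its steepest-descent
    neighbour; cells are processed from highest to lowest elevation
    so upstream contributions arrive before being passed on."""
    ny, nx = z.shape
    area = np.ones((ny, nx))
    for idx in np.argsort(z, axis=None)[::-1]:  # high to low
        i, j = divmod(int(idx), nx)
        best_drop, target = 0.0, None
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < ny and 0 <= nj < nx:
                    drop = (z[i, j] - z[ni, nj]) / np.hypot(di, dj)
                    if drop > best_drop:
                        best_drop, target = drop, (ni, nj)
        if target is not None:  # interior minima keep their area
            area[target] += area[i, j]
    return area
```

On a uniformly tilted one-row grid, the area grows by one cell per downslope step, as expected.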
Attribution Theory and Judgment under Uncertainty
1975-06-13
...that everyday learning experiences are typically not structured to develop cognitive control. Much of the problem appears to be related to... how people do and should explain past events may be found in the ruminations of historians over the state and nature of their craft (e.g., Beard, 1935). ... Psychology of Reasoning: Structure and Content. London: Batsford, 1972. ... Wyer, R. S. Cognitive...
A calculus for attribute-based communication
DEFF Research Database (Denmark)
Alrahman, Yehia Abd; De Nicola, Rocco; Loreti, Michele
2015-01-01
The notion of attribute-based communication seems promising to model and analyse systems with huge numbers of interacting components that dynamically adjust and combine their behaviour to achieve specific goals. A basic process calculus, named AbC, is introduced that has as primitive construct... An example of how well-established process calculi could be encoded into AbC is given by considering the translation into AbC of a prototypical π-calculus process.
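The core delivery rule of attribute-based communication is that a message reaches exactly those components whose attribute environments satisfy the sender's predicate. A toy sketch of that rule follows; the dictionaries and names are illustrative and are not the AbC calculus's syntax.

```python
def attribute_send(components, message, predicate):
    """Attribute-based broadcast: deliver `message` to every component
    whose attribute environment satisfies the sender's predicate,
    and return the names of the receivers."""
    receivers = [c for c in components if predicate(c["attrs"])]
    for c in receivers:
        c["inbox"].append(message)
    return [c["name"] for c in receivers]
```

The sender never names its partners; membership in the communication is decided entirely by the predicate over attributes.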
Valuing Attributes of Fluid Milk in Laos
Jae Won Lee; Taeyoon Kim; Viengsakoun Napasirth
2017-01-01
This study estimates the random utility function of fluid milk using 1,165 survey responses in Laos. It finds that both product attributes and individual characteristics affect consumers' preference for milk, and that the hypothetical Laos-Korea brand has potential compared with four real dairy products. Results also show that calories have a positive relationship with consumers' preference, while price and fat content have a negative one. The decision for choosing each brand is signifi...
Consumer Preferences for Hearing Aid Attributes
Lataille, Angela T.; Buttorff, Christine; White, Sharon; Niparko, John K.
2012-01-01
Low utilization of hearing aids has drawn increased attention to the study of consumer preferences using both simple ratings (e.g., Likert scale) and conjoint analyses, but these two approaches often produce inconsistent results. The study aims to directly compare Likert scales and conjoint analysis in identifying important attributes associated with hearing aids among those with hearing loss. Seven attributes of hearing aids were identified through qualitative research: performance in quiet settings, comfort, feedback, frequency of battery replacement, purchase price, water and sweat resistance, and performance in noisy settings. The preferences of 75 outpatients with hearing loss were measured with both a 5-point Likert scale and with 8 paired-comparison conjoint tasks (the latter being analyzed using OLS [ordinary least squares] and logistic regression). Results were compared by examining implied willingness-to-pay and Pearson’s Rho. A total of 56 respondents (75%) provided complete responses. Two thirds of respondents were male, most had sensorineural hearing loss, and most were older than 50; 44% of respondents had never used a hearing aid. Both methods identified improved performance in noisy settings as the most valued attribute. Respondents were twice as likely to buy a hearing aid with better functionality in noisy environments (p < .001), and willingness to pay for this attribute ranged from US$2674 on the Likert to US$9000 in the conjoint analysis. The authors find a high level of concordance between the methods—a result that is in stark contrast with previous research. The authors conclude that their result stems from constraining the levels on the Likert scale. PMID:22514094
Attribution of polar warming to human influence
Gillett, NP; Stone, DA; Stott, PA; Nozawa, T; Karpechko, AY; Hegerl, GC; Wehner, MF; Jones, PD
2008-01-01
The polar regions have long been expected to warm strongly as a result of anthropogenic climate change, because of the positive feedbacks associated with melting ice and snow. Several studies have noted a rise in Arctic temperatures over recent decades, but have not formally attributed the changes to human influence, owing to sparse observations and large natural variability. Both warming and cooling trends have been observed in Antarctica, which the Intergovernmental Panel on Climate Change ...
Toward Deriving Software Architectures from Quality Attributes
1994-08-01
...environments rely on the notion of a "tool bus" or an explicit shared repository [Wasserman 89] to allow easy integration of tools. ... attributed parse tree and symbol table that the compiler creates and annotates during its various phases. This results in a very different software...
Quality Model Based on Cots Quality Attributes
Jawad Alkhateeb; Khaled Musa
2013-01-01
The quality of software is essential to corporations in making their commercial software. Good or poor quality software plays an important role in some systems, such as embedded systems, real-time systems, and control systems, that play an important role in human life. Software products or commercial off-the-shelf software are usually programmed based on a software quality model. In the software engineering field, each quality model contains a set of attributes or characteristics that drives i...
Attribution models and the Cooperative Game Theory
Cano Berlanga, Sebastian; Vilella, Cori
2017-01-01
The current paper studies the attribution model used by Google Analytics. Specifically, we use Cooperative Game Theory to propose a fair distribution of the revenues among the considered channels, in order to facilitate cooperation and to guarantee stability. We define a transferable utility convex cooperative game from the observed frequencies and we use the Shapley value to allocate the revenues among the different channels. Furthermore, we evaluate the impact of an advertising...
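The Shapley value used here to allocate revenue among channels averages each channel's marginal contribution over all orderings of the other channels. A direct (exponential-time) sketch follows, with a hypothetical two-channel revenue game; the channel names and payoffs are made up for illustration.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley value of a characteristic-function game v,
    where v maps a frozenset of players to a payoff."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Probability that exactly this coalition precedes p.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(s | {p}) - v(s))
        phi[p] = total
    return phi

# Hypothetical revenue observed for each subset of channels.
revenue = {
    frozenset(): 0.0,
    frozenset({"search"}): 60.0,
    frozenset({"display"}): 30.0,
    frozenset({"search", "display"}): 100.0,
}
phi = shapley_values(["search", "display"], lambda s: revenue[s])
```

By efficiency, the allocations sum to the grand-coalition revenue, which is what makes the split suitable for attribution.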
Experimental functional realization of attribute grammar system
Directory of Open Access Journals (Sweden)
I. Attali
2002-07-01
Full Text Available In this paper we present an experimental functional realization of an attribute grammar (AG) system for personal computers. For the AG system to function, only the Turbo Prolog compiler is required. The system's functioning is based on a specially elaborated metalanguage for AG description and universal syntactic and semantic constructors. The AG system provides automatic generation of a target compiler (syntax-oriented software) using Turbo Prolog as the object language.
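The semantic side of attribute grammar evaluation, which such a system automates, amounts to computing synthesized attributes bottom-up over the parse tree. A minimal hand-rolled sketch for arithmetic expressions follows (it is an illustration of the idea, not the Turbo Prolog metalanguage described in the paper).

```python
def evaluate(node):
    """Compute the synthesized attribute 'val' bottom-up over a parse
    tree given as nested tuples, e.g. ('add', ('num', 2), ('num', 3)).
    Each production carries one semantic rule."""
    tag = node[0]
    if tag == "num":
        return node[1]
    if tag == "add":
        return evaluate(node[1]) + evaluate(node[2])
    if tag == "mul":
        return evaluate(node[1]) * evaluate(node[2])
    raise ValueError(f"unknown production: {tag}")
```

A full AG system also supports inherited attributes, which flow top-down and require a second traversal direction.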
Software attribute visualization for high integrity software
Energy Technology Data Exchange (ETDEWEB)
Pollock, G.M.
1998-03-01
This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.
Automatic Constraint Detection for 2D Layout Regularization.
Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter
2016-08-01
In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
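The alignment constraints that the method detects and then enforces can be approximated, in one dimension, by clustering nearly equal coordinates and snapping each cluster to its mean. The greedy sketch below stands in for the paper's detected-constraint plus quadratic-programming formulation; the tolerance and data are hypothetical.

```python
def snap_coordinates(xs, tol=2.0):
    """Cluster coordinates whose sorted neighbours differ by at most
    `tol`, then snap each cluster to its mean (a proxy for enforcing
    an alignment constraint among those elements)."""
    if not xs:
        return []
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    snapped = list(xs)
    cluster = [order[0]]
    for i in order[1:]:
        if xs[i] - xs[cluster[-1]] <= tol:
            cluster.append(i)
        else:
            mean = sum(xs[j] for j in cluster) / len(cluster)
            for j in cluster:
                snapped[j] = mean
            cluster = [i]
    mean = sum(xs[j] for j in cluster) / len(cluster)
    for j in cluster:
        snapped[j] = mean
    return snapped
```

In the paper's setting the same idea is posed jointly over alignment, size, and distance constraints and solved as a quadratic program rather than greedily.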
Regularized multivariate regression models with skew-t error distributions
Chen, Lianfu; Pourahmadi, Mohsen; Maadooliat, Mehdi
2014-01-01
We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both
A Regularized Algorithm for the Proximal Split Feasibility Problem
Directory of Open Access Journals (Sweden)
Zhangsong Yao
2014-01-01
Full Text Available The proximal split feasibility problem has been studied. A regularized method is presented for solving the proximal split feasibility problem, and a strong convergence theorem is given.
Anaemia in Patients with Diabetes Mellitus attending regular ...
African Journals Online (AJOL)
Anaemia in Patients with Diabetes Mellitus attending regular Diabetic ... Nigerian Journal of Health and Biomedical Sciences ... some patients may omit important food items in their daily diet for fear of increasing their blood sugar level.
Automatic Constraint Detection for 2D Layout Regularization
Jiang, Haiyong
2015-09-18
In this paper, we address the problem of constraint detection for layout regularization. As layout we consider a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate the layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. In our results, we evaluate the proposed framework on a variety of input layouts from different applications, which demonstrates our method has superior performance to the state of the art.
Body composition, disordered eating and menstrual regularity in a ...
African Journals Online (AJOL)
Body composition, disordered eating and menstrual regularity in a group of South African ... the relationship between body composition and disordered eating in irregular vs normal menstruating athletes. ... measured by air displacement plethysmography.
A new approach to nonlinear constrained Tikhonov regularization
Ito, Kazufumi; Jin, Bangti
2011-01-01
operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a
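For the linear case, Tikhonov regularization with a discrepancy-principle choice rule, as mentioned above, can be sketched directly: solve the penalized normal equations, then pick the largest regularization parameter whose residual stays within the noise level. This is a toy linear illustration, not the paper's nonlinear constrained setting.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Minimize ||Ax - b||^2 + alpha * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def discrepancy_choice(A, b, delta, alphas):
    """Discrepancy principle: return the largest alpha whose residual
    norm does not exceed the noise level delta (fall back to the
    smallest alpha if none qualifies)."""
    for alpha in sorted(alphas, reverse=True):
        x = tikhonov_solve(A, b, alpha)
        if np.linalg.norm(A @ x - b) <= delta:
            return alpha, x
    return min(alphas), tikhonov_solve(A, b, min(alphas))
```

Choosing the largest admissible alpha keeps as much stabilization as the data allow, which is the usual reading of the discrepancy principle.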
Supporting primary school teachers in differentiating in the regular classroom
Eysink, Tessa H.S.; Hulsbeek, Manon; Gijlers, Hannie
Many primary school teachers experience difficulties in effectively differentiating in the regular classroom. This study investigated the effect of the STIP-approach on teachers' differentiation activities and self-efficacy, and children's learning outcomes and instructional value. Teachers using
Lavrentiev regularization method for nonlinear ill-posed problems
International Nuclear Information System (INIS)
Kinh, Nguyen Van
2002-10-01
In this paper we shall be concerned with the Lavrentiev regularization method to reconstruct solutions x_0 of nonlinear ill-posed problems F(x) = y_0, where instead of y_0 noisy data y^δ ∈ X with ‖y^δ - y_0‖ ≤ δ are given, and F: X → X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method, solutions x_α^δ are obtained by solving the singularly perturbed nonlinear operator equation F(x) + α(x - x*) = y^δ with some initial guess x*. Assuming certain conditions concerning the operator F and the smoothness of the element x* - x_0, we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter α has been chosen properly. (author)
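For a monotone scalar F, the regularized equation F(x) + α(x - x*) = y^δ can be solved by simple bisection, which illustrates the structure of the Lavrentiev equation on a toy problem. This is only a one-dimensional sketch under a bracketing assumption, not the paper's Banach-space setting.

```python
def lavrentiev_solve(F, y, alpha, x_star, lo=-10.0, hi=10.0, tol=1e-10):
    """Solve F(x) + alpha * (x - x_star) = y by bisection, assuming the
    left-hand side is monotone increasing on [lo, hi] and the root is
    bracketed by lo and hi."""
    def g(x):
        return F(x) + alpha * (x - x_star) - y
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) <= 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The α(x - x*) term makes the equation uniquely solvable even where F alone is flat, at the price of a bias toward the initial guess x*.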
Regularized plane-wave least-squares Kirchhoff migration
Wang, Xin; Dai, Wei; Schuster, Gerard T.
2013-01-01
A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity
Automatic Constraint Detection for 2D Layout Regularization
Jiang, Haiyong; Nan, Liangliang; Yan, Dongming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter
2015-01-01
plans or images, such as floor plans and facade images, and for the improvement of user created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing
Calibration of Seismic Attributes for Reservoir Characterization
Energy Technology Data Exchange (ETDEWEB)
Pennington, Wayne D.; Acevedo, Horacio; Green, Aaron; Len, Shawn; Minavea, Anastasia; Wood, James; Xie, Deyi
2002-01-29
This project has completed the initially scheduled third year of the contract, and is beginning a fourth year, designed to expand upon the tech transfer aspects of the project. From the Stratton data set, we demonstrated that apparent correlations between attributes derived along 'phantom' horizons are artifacts of isopach changes; only if the interpreter understands that the interpretation is based on this correlation with bed thickening or thinning can reliable interpretations of channel horizons and facies be made. From the Boonsville data set, we developed techniques to use conventional seismic attributes, including seismic facies generated under various neural network procedures, to subdivide regional facies determined from logs into productive and non-productive subfacies, and developed a method involving cross-correlation of seismic waveforms to provide a reliable map of the various facies present in the area. The Teal South data set provided a surprising set of data, leading us to develop a pressure-dependent velocity relationship and to conclude that nearby reservoirs are undergoing a pressure drop in response to the production of the main reservoir, implying that oil is being lost through their spill points, never to be produced. The Wamsutter data set led to the use of unconventional attributes, including lateral incoherence and horizon-dependent impedance variations, to indicate regions of former sand bars and current high pressure, respectively, and to the evaluation of various upscaling routines.
Nuclear and Radiological Forensics and Attribution Overview
International Nuclear Information System (INIS)
Smith, D K; Niemeyer, S
2005-01-01
The goal of the U.S. Department of Homeland Security (DHS) Nuclear and Radiological Forensics and Attribution Program is to develop the technical capability for the nation to rapidly, accurately, and credibly attribute the origins and pathways of interdicted or collected materials, intact nuclear devices, and radiological dispersal devices. A robust attribution capability contributes to threat assessment, prevention, and deterrence of nuclear terrorism; it also supports the Federal Bureau of Investigation (FBI) in its investigative mission to prevent and respond to nuclear terrorism. Development of the capability involves two major elements: (1) the ability to collect evidence and make forensic measurements, and (2) the ability to interpret the forensic data. The Program leverages the existing capability throughout the U.S. Department of Energy (DOE) national laboratory complex in a way that meets the requirements of the FBI and other government users. At the same time the capability is being developed, the Program also conducts investigations for a variety of sponsors using the current capability. The combination of operations and R and D in one program helps to ensure a strong linkage between the needs of the user community and the scientific development
STUDENTS’ ATTRIBUTIONS ON THEIR ENGLISH SPEAKING ENHANCEMENT
Directory of Open Access Journals (Sweden)
Yustinus Mali
2015-01-01
Full Text Available Abstract: Attribution refers to the explanations and reasons that people provide for the progress, achievement, and even failure they have experienced, particularly in their language learning. This study aimed to investigate the attributions that students had for their English-speaking enhancement. The participants of the study were eighteen students at Sekolah Tinggi Pariwisata Ambarukmo Yogyakarta (STIPRAM). An open-ended questionnaire and interviews were used as the instruments to collect the data. On the questionnaire, the participants were asked to provide written responses to three statements, while in the interview process the researcher involved three participants to provide further clarification of their written responses. The data analysis revealed that a clear purpose for doing particular English-speaking activities, strategy, and positive motivation and encouragement from friends as well as from the teacher were the students' major attributions for their English-speaking enhancement. This study also indicates that the teacher played an essential role in the enhancement of the students' English-speaking skill. Finally, the study proposes some pedagogical implications for the development of teaching and learning in English-speaking classes, specifically in the Indonesian context.
[Disability attributable to excess weight in Spain].
Martín-Ramiro, José Javier; Alvarez-Martín, Elena; Gil-Prieto, Ruth
2014-08-19
To estimate the disability attributable to higher than optimal body mass index in the Spanish population in 2006. Excess body weight prevalence data were obtained from the 2006 National Health Survey (NHS), while the prevalence of associated morbidities was extracted from the 2006 NHS and from a national hospital database. Population attributable fractions were applied, and attributable disability was expressed as years lived with disability (YLD). In 2006, in the Spanish population aged 35-79 years, 791,650 YLD were lost due to higher than optimal body mass index (46.7% in males and 53.3% in females). Overweight (body mass index 25-29.9) accounted for 45.7% of total YLD. Male YLD were higher than female YLD under age 60. In the 35-39 five-year age group the difference in favour of males was 16.6%, while in the 74-79 group the difference was 23.8% in favour of women. Osteoarthritis and chronic back pain accounted for 60% of YLD, while hypertensive disease and type 2 diabetes mellitus were responsible for 37%. Excess body weight is a health risk related to the development of various diseases, with an important associated disability burden and social and economic cost. YLD analysis is a useful monitoring tool for disease control interventions. Copyright © 2013 Elsevier España, S.L. All rights reserved.
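Attributable-burden figures of this kind typically combine an exposure prevalence and a relative risk through Levin's population attributable fraction, PAF = p(RR - 1) / (p(RR - 1) + 1), then scale the total burden by that fraction. The sketch below uses made-up numbers, not the study's data.

```python
def attributable_fraction(prevalence, relative_risk):
    """Levin's population attributable fraction:
    PAF = p * (RR - 1) / (p * (RR - 1) + 1)."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (excess + 1.0)

def attributable_yld(prevalence, relative_risk, total_yld):
    """Disability burden (in YLD) attributable to the exposure."""
    return attributable_fraction(prevalence, relative_risk) * total_yld
```

For example, a 50% exposure prevalence with a relative risk of 3 attributes half of the observed burden to the exposure.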
Online Display Advertising Causal Attribution and Evaluation
Barajas Zamora, Joel
2015-01-01
The allocation of a given budget to online display advertising as a marketing channel has motivated the development of statistical methods to measure its effectiveness. Recent studies show that display advertising often triggers online users to search for more information on products. Eventually, many of these users convert at the advertiser's website. A key challenge is to measure the effectiveness of display advertising when users are exposed to multiple unknown advertising channels. We deve...
Regularization method for solving the inverse scattering problem
International Nuclear Information System (INIS)
Denisov, A.M.; Krylov, A.S.
1985-01-01
The inverse scattering problem for the radial Schroedinger equation, which consists of determining the potential from the scattering phase shifts, is considered. The problem of restoring the potential from phase shifts specified with a fixed error over a finite range is solved by the regularization method based on minimization of the Tikhonov smoothing functional. The regularization method is used to restore the neutron-proton potential from scattering phase shifts. The potentials determined are given in the table
Viscous Regularization of the Euler Equations and Entropy Principles
Guermond, Jean-Luc
2014-03-11
This paper investigates a general class of viscous regularizations of the compressible Euler equations. A unique regularization is identified that is compatible with all the generalized entropies, à la [Harten et al., SIAM J. Numer. Anal., 35 (1998), pp. 2117-2127], and satisfies the minimum entropy principle. A connection with a recently proposed phenomenological model by [H. Brenner, Phys. A, 370 (2006), pp. 190-224] is made. © 2014 Society for Industrial and Applied Mathematics.
Dimensional versus lattice regularization within Luescher's Yang Mills theory
International Nuclear Information System (INIS)
Diekmann, B.; Langer, M.; Schuette, D.
1993-01-01
It is pointed out that the coefficients of Luescher's effective model space Hamiltonian, which is based upon dimensional regularization techniques, can be reproduced by applying folded diagram perturbation theory to the Kogut-Susskind Hamiltonian and by performing a lattice continuum limit (keeping the volume fixed). Alternative cutoff regularizations of the Hamiltonian are in general inconsistent, the critical point being the correct prediction for Luescher's tadpole coefficient, which is formally quadratically divergent and has to become a well defined (negative) number. (orig.)
Left regular bands of groups of left quotients
International Nuclear Information System (INIS)
El-Qallali, A.
1988-10-01
A semigroup S which has a left regular band of groups as a semigroup of left quotients is shown to be the semigroup which is a left regular band of right reversible cancellative semigroups. An alternative characterization is provided by using spinned products. These results are applied to the case where S is a superabundant whose set of idempotents forms a left normal band. (author). 13 refs
Human visual system automatically encodes sequential regularities of discrete events.
Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki
2010-06-01
For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential
Estimation of the global regularity of a multifractional Brownian motion
DEFF Research Database (Denmark)
Lebovits, Joachim; Podolskij, Mark
This paper presents a new estimator of the global regularity index of a multifractional Brownian motion. Our estimation method is based upon a ratio statistic, which compares the realized global quadratic variation of a multifractional Brownian motion at two different frequencies. We show that a logarithmic transformation of this statistic converges in probability to the minimum of the Hurst functional parameter, which is, under weak assumptions, identical to the global regularity index of the path.
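The ratio statistic described can be sketched for a discretely observed path: compare realized quadratic variations at the full and half sampling frequencies, whose expected ratio scales like 2^(2H-1) for a fractional Brownian motion with Hurst index H. The code below is a simplified constant-H illustration of that scaling argument, not the paper's estimator for the multifractional case.

```python
import numpy as np

def realized_qv(path):
    """Realized quadratic variation of a discretely observed path."""
    return float(np.sum(np.diff(path) ** 2))

def hurst_from_ratio(path):
    """Ratio-of-quadratic-variations estimator: since the expected QV
    at half frequency over the QV at full frequency scales like
    2^(2H - 1), we recover H = (log2(ratio) + 1) / 2."""
    ratio = realized_qv(path[::2]) / realized_qv(path)
    return 0.5 * (np.log2(ratio) + 1.0)
```

For standard Brownian motion (H = 1/2) the two quadratic variations agree in expectation, so the estimator returns a value near 0.5.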
Regularization of the quantum field theory of charges and monopoles
International Nuclear Information System (INIS)
Panagiotakopoulos, C.
1981-09-01
A gauge invariant regularization procedure for quantum field theories of electric and magnetic charges based on Zwanziger's local formulation is proposed. The bare regularized full Green's functions of gauge invariant operators are shown to be Lorentz invariant. This would have as a consequence the Lorentz invariance of the finite Green's functions that might result after any reasonable subtraction if such a subtraction can be found. (author)
Borderline personality disorder and regularly drinking alcohol before sex.
Thompson, Ronald G; Eaton, Nicholas R; Hu, Mei-Chen; Hasin, Deborah S
2017-07-01
Drinking alcohol before sex increases the likelihood of engaging in unprotected intercourse, having multiple sexual partners and becoming infected with sexually transmitted infections. Borderline personality disorder (BPD), a complex psychiatric disorder characterised by pervasive instability in emotional regulation, self-image, interpersonal relationships and impulse control, is associated with substance use disorders and sexual risk behaviours. However, no study has examined the relationship between BPD and drinking alcohol before sex in the USA. This study examined the association between BPD and regularly drinking before sex in a nationally representative adult sample. Participants were 17,491 sexually active drinkers from Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions. Logistic regression models estimated effects of BPD diagnosis, specific borderline diagnostic criteria and BPD criterion count on the likelihood of regularly (mostly or always) drinking alcohol before sex, adjusted for controls. Borderline personality disorder diagnosis doubled the odds of regularly drinking before sex [adjusted odds ratio (AOR) = 2.26; confidence interval (CI) = 1.63, 3.14]. Of nine diagnostic criteria, impulsivity in areas that are self-damaging remained a significant predictor of regularly drinking before sex (AOR = 1.82; CI = 1.42, 2.35). The odds of regularly drinking before sex increased by 20% for each endorsed criterion (AOR = 1.20; CI = 1.14, 1.27). DISCUSSION AND CONCLUSIONS: This is the first study to examine the relationship between BPD and regularly drinking alcohol before sex in the USA. Substance misuse treatment should assess regularly drinking before sex, particularly among patients with BPD, and BPD treatment should assess risk at the intersection of impulsivity, sexual behaviour and substance use. [Thompson Jr RG, Eaton NR, Hu M-C, Hasin DS Borderline personality disorder and regularly drinking alcohol
The Impact of Computerization on Regular Employment (Japanese)
SUNADA Mitsuru; HIGUCHI Yoshio; ABE Masahiro
2004-01-01
This paper uses micro data from the Basic Survey of Japanese Business Structure and Activity to analyze the effects of companies' introduction of information and telecommunications technology on employment structures, especially regular versus non-regular employment. Firstly, examination of trends in the ratio of part-time workers recorded in the Basic Survey shows that part-time worker ratios in manufacturing firms are rising slightly, but that companies with a high proportion of part-timers...
Analytic regularization of the Yukawa model at finite temperature
International Nuclear Information System (INIS)
Malbouisson, A.P.C.; Svaiter, N.F.; Svaiter, B.F.
1996-07-01
The one-loop fermionic contribution to the scalar effective potential in the temperature-dependent Yukawa model is analysed. In order to regularize the model, a mix of dimensional and analytic regularization procedures is used. A general expression for the fermionic contribution in arbitrary spacetime dimension is found, and it is shown that in D = 3 this contribution is finite. (author). 19 refs
The relationship between lifestyle regularity and subjective sleep quality
Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.
2003-01-01
In previous work we have developed a diary instrument - the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity - and a questionnaire instrument - the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant correlation (rho = -0.4): subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, where the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.
Regularity criteria for incompressible magnetohydrodynamics equations in three dimensions
International Nuclear Information System (INIS)
Lin, Hongxia; Du, Lili
2013-01-01
In this paper, we give some new global regularity criteria for three-dimensional incompressible magnetohydrodynamics (MHD) equations. More precisely, we provide some sufficient conditions in terms of the derivatives of the velocity or pressure, for the global regularity of strong solutions to 3D incompressible MHD equations in the whole space, as well as for periodic boundary conditions. Moreover, the regularity criterion involving three of the nine components of the velocity gradient tensor is also obtained. The main results generalize the recent work by Cao and Wu (2010 Two regularity criteria for the 3D MHD equations J. Diff. Eqns 248 2263–74) and the analysis in part is based on the works by Cao C and Titi E (2008 Regularity criteria for the three-dimensional Navier–Stokes equations Indiana Univ. Math. J. 57 2643–61; 2011 Global regularity criterion for the 3D Navier–Stokes equations involving one entry of the velocity gradient tensor Arch. Rational Mech. Anal. 202 919–32) for 3D incompressible Navier–Stokes equations. (paper)
Geostatistical regularization operators for geophysical inverse problems on irregular meshes
Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA
2018-05-01
Irregular meshes allow the inclusion of complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are only defined using the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows the incorporation of information about geological structures. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D surface synthetic electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.
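The operator construction described above can be sketched in a few lines. The exponential correlation model, the mesh centroids and the nugget term below are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def geostat_operator(centroids, corr_length, nugget=1e-6):
    """Regularization operator from an exponential correlation model.

    Illustrative sketch: covariance between two cells decays with the
    distance between their centroids; eigendecomposition of the covariance
    matrix C yields the operator C^{-1/2}, which penalizes models that
    violate the assumed correlation structure.
    """
    # Pairwise centroid distances on the irregular mesh
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    # Covariance with a small nugget to keep C positive definite
    C = np.exp(-d / corr_length) + nugget * np.eye(len(centroids))
    w, V = np.linalg.eigh(C)              # C is symmetric positive definite
    return V @ np.diag(w ** -0.5) @ V.T   # C^{-1/2}

# Cell centroids of a tiny irregular 2-D mesh (assumed coordinates)
cells = np.array([[0.0, 0.0], [1.2, 0.1], [0.4, 1.1], [1.5, 1.4]])
W = geostat_operator(cells, corr_length=1.0)
```

The product `W.T @ W` then acts as the inverse covariance in a Tikhonov-style penalty term.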
Bounded Perturbation Regularization for Linear Least Squares Estimation
Ballal, Tarig
2017-10-18
This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2 -regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
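The ℓ2-regularized least-squares problem to which the BPR solution converges has the familiar closed form. The following is a minimal sketch; the matrix sizes and the fixed regularizer value are arbitrary illustrations, not the paper's MSE-based selection rule:

```python
import numpy as np

def ridge_ls(A, b, lam):
    """l2-regularized (Tikhonov) least-squares estimate.

    Minimizes ||A x - b||^2 + lam * ||x||^2. In the abstract's framing,
    the BPR solution converges to this form, with lam tied to the norm
    bound of the perturbation through a nonlinear constraint.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
A[:, 4] = A[:, 3] + 1e-6 * rng.standard_normal(20)   # near-singular column
x_true = np.ones(5)
b = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = ridge_ls(A, b, lam=1e-2)
```

Increasing `lam` monotonically shrinks the solution norm, which is the stabilizing effect the regularizer is chosen to balance against the residual.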
Near-Regular Structure Discovery Using Linear Programming
Huang, Qixing
2014-06-02
Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.
Hu, Zhouyi; Chan, Raymond C K; McAlonan, Grainne M
2010-02-03
The assessment of social attribution skills in children can potentially identify and quantify developmental difficulties related to autism spectrum disorders and related conditions. However, relatively little is known about how these skills develop in typically developing children. Therefore the present study aimed to map the trajectory of social attribution skill acquisition in typically developing children from a young age. In the conventional social attribution task (SAT) participants ascribe feelings to moving shapes and describe their interaction in social terms. However, this format requires that participants understand both, that an inanimate shape is symbolic, and that its action is social in nature. This may be challenging for young children, and may be a potential confounder in studies of children with developmental disorders. Therefore we developed a modified SAT (mSAT) using animate figures (e.g. animals) to simplify the task. We used the SAT and mSAT to examine social attribution skill development in 154 healthy children (76 boys, 78 girls), ranging in age from 6 to 13 years and investigated the relationship between social attribution ability and executive function. The mSAT revealed a steady improvement in social attribution skills from the age of 6 years, and a significant advantage for girls compared to boys. In contrast, children under the age of 9 years performed at baseline on the conventional format and there were no gender differences apparent. Performance on neither task correlated with executive function after controlling for age and verbal IQ, suggesting that social attribution ability is independent of cognitive functioning. The present findings indicate that the mSAT is a sensitive measure of social attribution skills from a young age. This should be carefully considered when choosing assessments for young children and those with developmental disorders.
Directory of Open Access Journals (Sweden)
Waka Fujisaki
2011-10-01
An informative performance measure of the brain's integration across different sensory attributes/modalities is the critical temporal rate of feature alternation (between, e.g., red and green) beyond which observers could not identify the feature value specified by a timing signal from another attribute (e.g., a pitch change). Interestingly, this limit, which we called the critical crowding frequency (CCF), is fairly low and nearly constant (∼2.5 Hz) regardless of the combination of attributes and modalities (Fujisaki & Nishida, 2010, IMRF). One may consider that the CCF reflects the processing time required for the brain to identify the specified feature value on the fly. According to this idea, the similarity in CCF could be ascribed to the similarity in identification time for the attributes we used (luminance, color, orientation, pitch, vibration). To test this idea, we estimated the identification time of each attribute from [Go/No-Go choice reaction time – simple reaction time]. In disagreement with the prediction, we found significant differences among attributes (e.g., ∼160 ms for orientation, ∼70 ms for pitch). The results are more consistent with our proposal (Fujisaki & Nishida, Proc Roy Soc B) that the CCF reflects the common rate limit of specifying what happens when (timing-content binding) by a central, presumably postdictive, mechanism.
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
Energy Technology Data Exchange (ETDEWEB)
Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); School of Life Sciences and Technology, Xidian University, Xi'an 710071 (China)]
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise, so regularization methods are commonly used to find a regularized solution. The quality of the reconstructed bioluminescent source obtained by regularization methods depends crucially on the choice of the regularization parameters, and to date their selection remains challenging. To address this problem, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport; the diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. The relationship between the unknown source distribution and the multiview, multispectral boundary measurements is then established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data-fidelity term plus a general regularization term. To choose the regularization parameters, an efficient model function approach is proposed that does not require knowledge of the noise level; it requires only the computation of the residual and regularized solution norms. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used
Burden attributable to child maltreatment in Australia.
Moore, Sophie E; Scott, James G; Ferrari, Alize J; Mills, Ryan; Dunne, Michael P; Erskine, Holly E; Devries, Karen M; Degenhardt, Louisa; Vos, Theo; Whiteford, Harvey A; McCarthy, Molly; Norman, Rosana E
2015-10-01
Child maltreatment is a complex phenomenon, with four main types (childhood sexual abuse, physical abuse, emotional abuse, and neglect) highly interrelated. All types of maltreatment have been linked to adverse health consequences and exposure to multiple forms of maltreatment increases risk. In Australia to date, only burden attributable to childhood sexual abuse has been estimated. This study synthesized the national evidence and quantified the burden attributable to the four main types of child maltreatment. Meta-analyses, based on quality-effects models, generated pooled prevalence estimates for each maltreatment type. Exposure to child maltreatment was examined as a risk factor for depressive disorders, anxiety disorders and intentional self-harm using counterfactual estimation and comparative risk assessment methods. Adjustments were made for co-occurrence of multiple forms of child maltreatment. Overall, an estimated 23.5% of self-harm, 20.9% of anxiety disorders and 15.7% of depressive disorders burden in males; and 33.0% of self-harm, 30.6% of anxiety disorders and 22.8% of depressive disorders burden in females was attributable to child maltreatment. Child maltreatment was estimated to cause 1.4% (95% uncertainty interval 0.4-2.3%) of all disability-adjusted life years (DALYs) in males, and 2.4% (0.7-4.1%) of all DALYs in females in Australia in 2010. Child maltreatment contributes to a substantial proportion of burden from depressive and anxiety disorders and intentional self-harm in Australia. This study demonstrates the importance of including all forms of child maltreatment as risk factors in future burden of disease studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
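The comparative risk assessment step behind figures like these can be illustrated with the classic population attributable fraction formula. This is a generic illustration with hypothetical prevalence and relative-risk values, not the study's pooled, co-occurrence-adjusted estimates:

```python
def paf(prevalence, relative_risk):
    """Population attributable fraction via Levin's formula.

    paf = p*(RR - 1) / (1 + p*(RR - 1)), where p is exposure prevalence
    and RR the relative risk in the exposed. Inputs below are hypothetical.
    """
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# e.g. 20% exposed to maltreatment, relative risk 2.5 for a disorder
share = paf(0.20, 2.5)   # fraction of that disorder's burden attributable
```

Multiplying such a fraction by the disorder's total DALYs gives the attributable burden reported in studies of this kind.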
The neural substrates of impaired finger tapping regularity after stroke.
Calautti, Cinzia; Jones, P Simon; Guincestre, Jean-Yves; Naccarato, Marcello; Sharma, Nikhil; Day, Diana J; Carpenter, T Adrian; Warburton, Elizabeth A; Baron, Jean-Claude
2010-03-01
Not only finger tapping speed, but also tapping regularity can be impaired after stroke, contributing to reduced dexterity. The neural substrates of impaired tapping regularity after stroke are unknown. Previous work suggests damage to the dorsal premotor cortex (PMd) and prefrontal cortex (PFCx) affects externally-cued hand movement. We tested the hypothesis that these two areas are involved in impaired post-stroke tapping regularity. In 19 right-handed patients (15 men/4 women; age 45-80 years; purely subcortical in 16) partially to fully recovered from hemiparetic stroke, tri-axial accelerometric quantitative assessment of tapping regularity and BOLD fMRI were obtained during fixed-rate auditory-cued index-thumb tapping, in a single session 10-230 days after stroke. A strong random-effect correlation between tapping regularity index and fMRI signal was found in contralesional PMd such that the worse the regularity the stronger the activation. A significant correlation in the opposite direction was also present within contralesional PFCx. Both correlations were maintained if maximal index tapping speed, degree of paresis and time since stroke were added as potential confounds. Thus, the contralesional PMd and PFCx appear to be involved in the impaired ability of stroke patients to fingertap in pace with external cues. The findings for PMd are consistent with repetitive TMS investigations in stroke suggesting a role for this area in affected-hand movement timing. The inverse relationship with tapping regularity observed for the PFCx and the PMd suggests these two anatomically-connected areas negatively co-operate. These findings have implications for understanding the disruption and reorganization of the motor systems after stroke. Copyright (c) 2009 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Kowalenko, V.; Rawlinson, A.A.
1998-01-01
We introduce the numerical technique of Mellin-Barnes regularization, which can be used to evaluate both convergent and divergent series. The technique is shown to be numerically equivalent to the corresponding results obtained by Borel summation. Both techniques are then applied to the Bender-Wu formula, which represents an asymptotic expansion for the energy levels of the anharmonic oscillator. We find that this formula is unable to give accurate values for the ground state energy, particularly when the coupling is greater than 0.1. As a consequence, the inability of the Bender-Wu formula to yield exact values for the energy level of the anharmonic oscillator cannot be attributed to its asymptotic nature. (authors)
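The contrast between a divergent asymptotic series and its resummed value can be reproduced for the textbook case of Euler's series sum (-1)^n n! x^n, whose Borel sum is the integral of exp(-t)/(1+xt) over t from 0 to infinity. This is a generic illustration of Borel summation, not the Mellin-Barnes technique or the Bender-Wu formula itself:

```python
import math

def euler_partial_sum(x, n_terms):
    """Partial sums of the divergent series sum (-1)^n n! x^n."""
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(n_terms))

def euler_borel_sum(x, t_max=50.0, steps=100000):
    """Borel sum of the same series: integral_0^inf exp(-t)/(1 + x*t) dt,
    evaluated by the trapezoidal rule (x > 0 assumed; tail beyond t_max
    is negligible because of the exp(-t) factor)."""
    h = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * math.exp(-t) / (1.0 + x * t)
    return total * h

s_borel = euler_borel_sum(0.1)       # finite, ≈ 0.916
s_40 = euler_partial_sum(0.1, 40)    # partial sums eventually blow up
```

The partial sums first approach the Borel value (the optimal-truncation regime) and then diverge, which is exactly the behaviour that limits formulas like Bender-Wu at larger couplings.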
Regularities in development of surface cracks in low-alloy steel under asymmetric cyclic bending
International Nuclear Information System (INIS)
Letunov, V.I.; Shul'ginov, B.S.; Plundrova, I.; Vajnshtok, V.A.; Kramarenko, I.V.
1985-01-01
The growth of semielliptical surface cracks in low-alloy 09G2 and 12GN2MFAYu steels is studied. It is shown that, under cyclic bending at the preset ΔK and R values, the growth rate of the semielliptical crack is higher at the deepest point of the crack front than at the point on the specimen surface. A decrease of the 1/C parameter with growth of the semielliptical crack is established experimentally and attributed to the increasing difference in ΔK between the deepest point of the crack front (phi=0) and the point on the specimen surface (phi=π/2). Experiments have confirmed the correctness of the previously established formulas for stress-intensity factor calculation for semielliptical surface cracks under bending
Detection and attribution of extreme weather disasters
Huggel, Christian; Stone, Dáithí; Hansen, Gerrit
2014-05-01
Single disasters related to extreme weather events have caused loss and damage on the order of up to tens of billions of US dollars over the past years. Recent disasters have fueled the debate about whether and to what extent these events are related to climate change. In international climate negotiations, disaster loss and damage is now high on the agenda, and related policy mechanisms have been discussed or are being implemented. In view of funding allocation and effective risk reduction strategies, the detection of extreme weather events and disasters and their attribution to climate change is a key issue. Different avenues have so far been taken to address detection and attribution in this context. The physical climate sciences have developed approaches where, among others, variables that are reasonably sampled over climatically relevant time periods and related to the meteorological characteristics of the extreme event are examined. Trends in these variables (e.g. air or sea surface temperatures) are compared between observations and climate simulations with and without anthropogenic forcing. Generally, progress has been made in recent years in attributing changes in the chance of some single extreme weather events to anthropogenic climate change, but important challenges remain. A different line of research is primarily concerned with losses related to extreme weather events over time, using disaster databases. A growing consensus is that the increase in asset values and in exposure is the main driver of the strong increase of economic losses over the past several decades, and only a limited number of studies have found trends consistent with expectations from climate change. Here we propose a better integration of existing lines of research in detection and attribution of extreme weather events and disasters by applying a risk framework, risk being defined as a function of the probability of occurrence of an extreme weather event and the associated consequences
The mortality experience of a group of Newfoundland fluorspar miners exposed to Rn progeny
International Nuclear Information System (INIS)
Morrison, H.; Semenciw, R.; Mao, Y.; Wigle, D.
1988-02-01
A cohort study of the mortality experience (1950-1984) of 1,772 Newfoundland fluorspar miners occupationally exposed to high levels of radon daughters has been conducted using two control groups (surface workers and Newfoundland males). Observed numbers of cancers of the lung, salivary gland and buccal cavity/pharynx were significantly elevated among underground miners. A highly significant relationship was noted between radon daughter exposure and risk of dying of lung cancer; the small numbers of salivary gland (n = 2) and buccal cavity/pharynx cancers (n = 6) precluded meaningful analysis of dose-response. Also significantly elevated among underground miners were deaths from silicosis and pneumoconioses. No statistically significant excess was found for any cause of death among surface workers. Using external controls, attributable and relative risk coefficients for lung cancer were estimated as 6.3 per working level month per million person-years and 0.89 percent per working level month respectively. Attributable risk coefficients were similar to some, but not all related mining studies. Relative risk coefficients were highest for those first exposed attributable risks to non-smokers. Relative risks fell sharply with age at observation whereas attributable risks were lowest in the youngest and oldest age groups. Using the risk coefficients from the present study, a miner exposed for 30 years at 4 WLM per year from age 20 has a risk of 7,366 per 100,000 of dying of lung cancer by age 70 using the relative risk model and a risk of 6,371 per 100,000 using the attributable risk model. This compares to 3,740 per 100,000 for a non-exposed male. 85 refs
Reducing errors in the GRACE gravity solutions using regularization
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2012-09-01
The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) show markedly reduced error stripes compared with the unconstrained GRACE release 4
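On a small dense problem the L-curve quantities can be computed directly from the SVD using the standard Tikhonov filter factors. The sketch below uses an ill-conditioned Hilbert matrix as a stand-in for the GRACE normal equations; at GRACE scale the abstract replaces the SVD with Lanczos bidiagonalization on a reduced problem:

```python
import numpy as np

def l_curve(A, b, lambdas):
    """Residual and solution norms of Tikhonov solutions over a lambda grid.

    Solves min ||A x - b||^2 + lam^2 ||x||^2 via the SVD; plotting
    log(eta) against log(rho) and locating the corner gives the L-curve
    choice of the regularization parameter.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    rho, eta = [], []
    for lam in lambdas:
        f = s ** 2 / (s ** 2 + lam ** 2)   # Tikhonov filter factors
        x = Vt.T @ (f * beta / s)
        rho.append(np.linalg.norm(A @ x - b))   # residual norm
        eta.append(np.linalg.norm(x))           # solution norm
    return np.array(rho), np.array(eta)

n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix
b = A @ np.ones(n)
lambdas = np.logspace(-8, 0, 25)
rho, eta = l_curve(A, b, lambdas)
```

As the theory requires, the residual norm grows and the solution norm shrinks monotonically along the lambda grid, tracing the two arms of the L.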
Variable precision rough set for multiple decision attribute analysis
Institute of Scientific and Technical Information of China (English)
Lai, Kin Keung
2008-01-01
A variable precision rough set (VPRS) model is used to solve the multi-attribute decision analysis (MADA) problem with multiple conflicting decision attributes and multiple condition attributes. By introducing confidence measures and a β-reduct, the VPRS model can rationally solve the conflicting decision analysis problem with multiple decision attributes and multiple condition attributes. For illustration, a medical diagnosis example is utilized to show the feasibility of the VPRS model in solving the MADA...
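The confidence-measure idea at the heart of VPRS can be sketched as follows. The attribute names and the tiny diagnosis table are hypothetical, chosen only to mirror the medical example; the sketch covers the β-lower approximation, not the β-reduct computation:

```python
from collections import defaultdict

def vprs_lower(objects, cond_attrs, decision, target, beta):
    """Variable-precision lower approximation of a decision class.

    Objects are grouped into indiscernibility classes on the condition
    attributes; a class is included when the fraction of its members with
    the target decision (the confidence measure) is at least beta.
    beta = 1.0 recovers the classical rough-set lower approximation.
    """
    classes = defaultdict(list)
    for obj in objects:
        key = tuple(obj[a] for a in cond_attrs)
        classes[key].append(obj)
    lower = []
    for members in classes.values():
        conf = sum(m[decision] == target for m in members) / len(members)
        if conf >= beta:
            lower.extend(members)
    return lower

# Hypothetical diagnosis table: symptoms (condition attrs) -> disease (decision)
table = [
    {"fever": 1, "cough": 1, "flu": 1},
    {"fever": 1, "cough": 1, "flu": 1},
    {"fever": 1, "cough": 1, "flu": 0},   # conflicting case
    {"fever": 0, "cough": 0, "flu": 0},
]
approx = vprs_lower(table, ["fever", "cough"], "flu", 1, beta=0.6)
```

With beta = 0.6 the conflicting class (confidence 2/3) is still admitted, whereas the classical beta = 1.0 rejects it entirely, which is precisely how VPRS tolerates conflicting decision information.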
Rotating Hayward’s regular black hole as particle accelerator
International Nuclear Information System (INIS)
Amir, Muhammed; Ghosh, Sushant G.
2015-01-01
Recently, Bañados, Silk and West (BSW) demonstrated that the extremal Kerr black hole can act as a particle accelerator with arbitrarily high center-of-mass energy (E_CM) when the collision takes place near the horizon. The rotating Hayward's regular black hole, apart from mass (M) and angular momentum (a), has a new parameter g (g>0 is a constant) that provides a deviation from the Kerr black hole. We demonstrate that for each g, with M=1, there exist critical values a_E and r_H^E that correspond to a regular extremal black hole with degenerate horizons; a_E decreases whereas r_H^E increases with increasing g. For a below a_E, one obtains a regular non-extremal black hole with outer and inner horizons. We apply the BSW process to the rotating Hayward's regular black hole for different g, and demonstrate numerically that E_CM diverges in the vicinity of the horizon for the extremal cases, thereby suggesting that a rotating regular black hole can also act as a particle accelerator and thus in turn provide a suitable framework for Planck-scale physics. For a non-extremal case, there always exists a finite upper bound on E_CM, which increases with the deviation parameter g.
Consistent Partial Least Squares Path Modeling via Regularization
Directory of Open Access Journals (Sweden)
Sunho Jung
2018-02-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.
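A ridge-type fix for multicollinearity among latent-variable correlations can be sketched as below. This is an illustrative regression analogue of the idea, not the authors' exact PLSc estimator; the correlation values are hypothetical:

```python
import numpy as np

def ridge_path_coefficients(R_xx, r_xy, ridge):
    """Ridge-stabilized path coefficients from latent-variable correlations.

    The ordinary solve R_xx^{-1} r_xy blows up when the predictor
    correlation matrix R_xx is nearly singular (multicollinearity), so a
    ridge constant is added to its diagonal before solving.
    """
    p = R_xx.shape[0]
    return np.linalg.solve(R_xx + ridge * np.eye(p), r_xy)

# Two nearly collinear latent predictors (correlation 0.999, hypothetical)
R_xx = np.array([[1.0, 0.999], [0.999, 1.0]])
r_xy = np.array([0.60, 0.59])
unstable = ridge_path_coefficients(R_xx, r_xy, ridge=0.0)   # wild coefficients
stable = ridge_path_coefficients(R_xx, r_xy, ridge=0.05)    # moderate coefficients
```

Without the ridge term the two coefficients take large opposite-signed values; a small ridge constant pulls both back to a plausible magnitude, which is the power-and-accuracy gain the simulation study measures.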
Method of transferring regular shaped vessel into cell
International Nuclear Information System (INIS)
Murai, Tsunehiko.
1997-01-01
The present invention concerns a method of transferring regular shaped vessels from a non-contaminated area into a contaminated cell. A passage hole that allows the regular shaped vessels to pass in the longitudinal direction is formed in a partitioning wall at the bottom of the contaminated cell. A plurality of regular shaped vessels are stacked in multiple stages in the vertical direction from the non-contaminated area below the passage hole, then pushed through and transferred successively into the contaminated cell. As a result, since the passage hole is kept substantially closed by the vessels themselves during transfer, radiation and contaminated materials are prevented from escaping from the contaminated cell to the non-contaminated area. Since there is no need to open and close an isolation door frequently, transfer workability is improved remarkably. In addition, because a sealing member that seals the gap between the vessel passing through the passage hole and the partitioning wall is disposed in the passage hole, contaminated materials in the contaminated cell are prevented from escaping through the gap to the non-contaminated area. (N.H.)
X-ray computed tomography using curvelet sparse regularization.
Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias
2015-04-01
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
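Curvelet transforms require specialized libraries, but the underlying mechanism of sparse regularization can be illustrated with a minimal iterative soft-thresholding (ISTA) solver for the L1-penalized least-squares problem. The sparsifying transform is taken as the identity here, ISTA stands in for the ADMM solver the paper actually uses, and all data below are synthetic:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm: shrink toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# Recover a 3-sparse signal from 40 noisy compressed measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, b, lam=0.1)
```

The same shrinkage step applied to curvelet coefficients is what preferentially preserves the directional, high-contrast features the abstract describes, while suppressing incoherent noise.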
On the MSE Performance and Optimization of Regularized Problems
Alrashdi, Ayed
2016-11-01
The amount of data measured, transmitted, received, and stored in recent years has dramatically increased; today we are in the world of big data. Fortunately, in many applications we can take advantage of structures and patterns in the data to overcome the curse of dimensionality. The best-known structures include sparsity, low-rankness, and block sparsity, with applications ranging across machine learning, medical imaging, signal processing, social networks and computer vision. This has also led to a specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured; the structure can be captured by a regularizer function. This gives rise to a potential interest in regularized inverse problems, where the process of reconstructing the structured signal can be modeled as a regularized problem. This thesis focuses on finding the optimal regularization parameter for such problems, including ridge regression, LASSO, square-root LASSO and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT), which has recently been used to precisely predict performance errors.
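A minimal illustration of tuning a regularization parameter to minimize MSE, using ridge regression on synthetic data; the thesis treats a much broader family of regularized estimators and performs the tuning analytically via the CGMT rather than by grid search against a known ground truth:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma = 200, 50, 0.5
A = rng.standard_normal((n, p)) / np.sqrt(n)   # roughly unit-norm columns
x_true = rng.standard_normal(p)
y = A @ x_true + sigma * rng.standard_normal(n)

def ridge(A, y, lam):
    """Ridge estimate: argmin_x ||Ax - y||^2 + lam*||x||^2."""
    p = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)

# Sweep the penalty and record the mean-squared error of each estimate.
lams = np.logspace(-3, 1, 30)
mses = [np.mean((ridge(A, y, lam) - x_true) ** 2) for lam in lams]
best = lams[int(np.argmin(mses))]
```

The MSE curve is U-shaped: too small a penalty leaves the noise unsuppressed, too large a penalty over-shrinks the signal, and the optimum lies in between.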
Immediately modifiable risk factors attributable to colorectal cancer in Malaysia.
Naing, Cho; Lai, Pei Kuan; Mak, Joon Wah
2017-08-04
This study aimed to estimate potential reductions in the case incidence of colorectal cancer attributable to modifiable risk factors such as alcohol consumption, overweight and physical inactivity amongst the Malaysian population. Gender-specific population-attributable fractions (PAFs) for colorectal cancer in Malaysia were estimated for the three selected risk factors (physical inactivity, overweight, and alcohol consumption). Exposure prevalences were sourced from a large-scale nationally representative survey. Risk estimates of the relationship between each exposure of interest and colorectal cancer were obtained from published meta-analyses. The overall PAF was then estimated using the 2013 national cancer incidence data from the Malaysian Cancer Registry. Overall, the mean incidence rate for colorectal cancer in Malaysia from 2008 to 2013 was 21.3 per 100,000 population, with a mean age of 61.6 years (±12.7), and the majority of cases were men (56.6%). Amongst 369 colorectal cancer cases in 2013, 40 cases (20 men, 20 women), 10 cases (9 men, 1 woman) or 20 cases (16 men, 4 women) would have been prevented had the patients exercised regularly, reduced their body weight to a normal level, or avoided alcohol consumption, respectively, assuming that these factors are causally related to colorectal cancer. It was estimated that 66 (17.8%; 66/369) colorectal cancer cases (42 men, 24 women) who had all three risk factors for the last 10 years would have been prevented if these risk factors had been controlled through effective preventive measures. Findings suggest that approximately 18% of colorectal cancer cases in Malaysia could be prevented through appropriate preventive measures such as regular physical exercise, reduction of body weight to a normal level and avoidance of alcohol consumption, if these factors are causally related to colorectal cancer. Scaling-up nationwide public health campaigns tailored to increase physical activity, controlling body weight within normal
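The population-attributable fraction referred to above is commonly computed with Levin's formula, PAF = p(RR - 1) / (1 + p(RR - 1)), where p is the exposure prevalence and RR the relative risk. A sketch with illustrative numbers, not the study's actual prevalences or risk estimates:

```python
def paf(prevalence, relative_risk):
    """Levin's population-attributable fraction for a single risk factor."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical example: 30% of the population exposed, relative risk 1.5.
fraction = paf(0.30, 1.5)   # roughly 13% of cases attributable to the exposure
```

Multiplying such a fraction by the registry case count gives the number of preventable cases, which is how figures like the 66 of 369 cases above are derived.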
Nonintrusive verification attributes for excess fissile materials
International Nuclear Information System (INIS)
Nicholas, N.J.; Eccleston, G.W.; Fearey, B.L.
1997-10-01
Under US initiatives, over two hundred metric tons of fissile materials have been declared to be excess to national defense needs. These excess materials are in both classified and unclassified forms. The US has expressed the intent to place these materials under international inspections as soon as practicable. To support these commitments, members of the US technical community are examining a variety of nonintrusive approaches (i.e., those that would not reveal classified or sensitive information) for verification of a range of potential declarations for these classified and unclassified materials. The most troublesome and potentially difficult issues involve approaches for international inspection of classified materials. The primary focus of the work to date has been on the measurement of signatures of relevant materials attributes (e.g., element, identification number, isotopic ratios, etc.), especially those related to classified materials and items. The authors are examining potential attributes and related measurement technologies in the context of possible verification approaches. The paper will discuss the current status of these activities, including their development, assessment, and benchmarking status
Modeling Aquatic Macroinvertebrate Richness Using Landscape Attributes
Directory of Open Access Journals (Sweden)
Marcia S. Meixler
2015-01-01
We used a rapid, repeatable, and inexpensive geographic information system (GIS) approach to predict aquatic macroinvertebrate family richness using the landscape attributes stream gradient, riparian forest cover, and water quality. Stream segments in the Allegheny River basin were classified into eight habitat classes using these three landscape attributes. Biological databases linking macroinvertebrate families with habitat classes were developed using life habits, feeding guilds, and water quality preferences and tolerances for each family. The biological databases provided a link between fauna and habitat, enabling estimation of family composition in each habitat class and hence richness predictions for each stream segment. No difference was detected between field-collected and modeled predictions of macroinvertebrate families in a paired t-test. Further, predicted stream gradient, riparian forest cover, and total phosphorus, total nitrogen, and suspended sediment classifications matched observed classifications much more often than by chance alone. High-gradient streams with forested riparian zones and good water quality were predicted to have the greatest macroinvertebrate family richness, and changes in water quality were predicted to have the greatest impact on richness. Our findings indicate that our model can provide meaningful landscape-scale macroinvertebrate family richness predictions from widely available data for use in focusing conservation planning efforts.
Yu, Han; Hageman Blair, Rachael
2016-01-01
Understanding community structure in networks has received considerable attention in recent years. Detecting and leveraging community structure holds promise for understanding, and potentially intervening in, the spread of influence. Network features of this type have important implications in a number of research areas, including marketing, social networks, and biology. However, the overwhelming majority of traditional approaches to community detection cannot readily incorporate information on node attributes, and integrating structural and attribute information is a major challenge. We propose a flexible iterative method, inverse regularized Markov Clustering (irMCL), for network clustering via manipulation of the transition probability matrix (aka stochastic flow) corresponding to a graph. Similar to traditional Markov Clustering, irMCL iterates between "expand" and "inflate" operations, which aim to strengthen the intra-cluster flow while weakening the inter-cluster flow. Attribute information is directly incorporated into the iterative method through a sigmoid (logistic) function that naturally dampens attribute influence that contradicts the stochastic flow through the network. We demonstrate the advantages and flexibility of our approach using simulations and real data. We highlight an application integrating a breast cancer gene expression data set with a functional network defined via KEGG pathways, revealing significant modules for survival.
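irMCL builds on the classic Markov Clustering iteration. A minimal sketch of the standard expand/inflate cycle, without the sigmoid attribute-damping step that is this paper's contribution:

```python
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, n_iter=50):
    """Basic Markov Clustering: alternate expansion (matrix power) and
    inflation (elementwise power plus column normalization) of the flow."""
    M = adj + np.eye(adj.shape[0])           # self-loops stabilize the iteration
    M = M / M.sum(axis=0, keepdims=True)     # column-stochastic transition matrix
    for _ in range(n_iter):
        M = np.linalg.matrix_power(M, expansion)   # expand: spread flow
        M = M ** inflation                         # inflate: favor strong flow
        M = M / M.sum(axis=0, keepdims=True)
    return M

# Two triangles joined by a single bridge edge: MCL should separate them.
adj = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
flow = mcl(adj)
# Read clusters off the converged flow: each column's support names its cluster.
clusters = {tuple(np.nonzero(flow[:, j] > 1e-6)[0]) for j in range(6)}
```

irMCL modifies this loop by reweighting the flow matrix with a logistic function of node-attribute similarity, so attributes reinforce, rather than override, the structural flow.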
Dyadic Event Attribution in Social Networks with Mixtures of Hawkes Processes.
Li, Liangda; Zha, Hongyuan
2013-01-01
In many applications of social network analysis, it is important to model the interactions and infer the influence between pairs of actors, leading to the problem of dyadic event modeling, which has attracted increasing interest recently. In this paper we focus on the problem of dyadic event attribution, an important missing-data problem in dyadic event modeling where one needs to infer the missing actor-pairs of a subset of dyadic events based on their observed timestamps. Existing works either use fixed model parameters and heuristic rules for event attribution, or assume the dyadic events across actor-pairs are independent. To address those shortcomings we propose a probabilistic model based on mixtures of Hawkes processes that simultaneously tackles event attribution and network parameter inference, taking into consideration the dependency among dyadic events that share at least one actor. We also investigate using additive models to incorporate regularization and avoid overfitting. Our experiments on both synthetic and real-world data sets on international armed conflicts suggest that the proposed method significantly improves accuracy compared with the state-of-the-art for dyadic event attribution.
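The building block of the proposed model is the Hawkes process, whose conditional intensity jumps after each event and then decays, capturing the self-exciting nature of conflict events. A minimal univariate sketch with an exponential kernel; the parameter values are illustrative:

```python
import math

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity of a univariate Hawkes process:
    lambda(t) = mu + sum over past events of alpha * exp(-beta * (t - t_i))."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

events = [1.0, 1.5, 3.0]
# Shortly after the event at t = 3.0 the intensity is well above the
# baseline mu; long after the last event it decays back toward mu.
lam_near = hawkes_intensity(3.1, events)
lam_far = hawkes_intensity(10.0, events)
```

In the mixture model of the paper, each actor-pair carries such an intensity, and attributing an unlabeled event amounts to asking which pair's intensity best explains its timestamp.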
Calibration of Seismic Attributes for Reservoir Characterization
Energy Technology Data Exchange (ETDEWEB)
Wayne D. Pennington
2002-09-29
The project, "Calibration of Seismic Attributes for Reservoir Characterization," is now complete. Our original proposed scope of work included detailed analysis of seismic and other data from two to three hydrocarbon fields; we have analyzed data from four fields at this level of detail, two additional fields with less detail, and one other 2D seismic line used for experimentation. We also included time-lapse seismic data with ocean-bottom cable recordings in addition to the originally proposed static field data. A large number of publications and presentations have resulted from this work, inlcuding several that are in final stages of preparation or printing; one of these is a chapter on "Reservoir Geophysics" for the new Petroleum Engineering Handbook from the Society of Petroleum Engineers. Major results from this project include a new approach to evaluating seismic attributes in time-lapse monitoring studies, evaluation of pitfalls in the use of point-based measurements and facies classifications, novel applications of inversion results, improved methods of tying seismic data to the wellbore, and a comparison of methods used to detect pressure compartments. Some of the data sets used are in the public domain, allowing other investigators to test our techniques or to improve upon them using the same data. From the public-domain Stratton data set we have demonstrated that an apparent correlation between attributes derived along 'phantom' horizons are artifacts of isopach changes; only if the interpreter understands that the interpretation is based on this correlation with bed thickening or thinning, can reliable interpretations of channel horizons and facies be made. From the public-domain Boonsville data set we developed techniques to use conventional seismic attributes, including seismic facies generated under various neural network procedures, to subdivide regional facies determined from logs into productive and non-productive subfacies, and we
CALIBRATION OF SEISMIC ATTRIBUTES FOR RESERVOIR CHARACTERIZATION
Energy Technology Data Exchange (ETDEWEB)
Wayne D. Pennington; Horacio Acevedo; Aaron Green; Joshua Haataja; Shawn Len; Anastasia Minaeva; Deyi Xie
2002-10-01
The project, ''Calibration of Seismic Attributes for Reservoir Characterization,'' is now complete. Our original proposed scope of work included detailed analysis of seismic and other data from two to three hydrocarbon fields; we have analyzed data from four fields at this level of detail, two additional fields with less detail, and one other 2D seismic line used for experimentation. We also included time-lapse seismic data with ocean-bottom cable recordings in addition to the originally proposed static field data. A large number of publications and presentations have resulted from this work, including several that are in final stages of preparation or printing; one of these is a chapter on ''Reservoir Geophysics'' for the new Petroleum Engineering Handbook from the Society of Petroleum Engineers. Major results from this project include a new approach to evaluating seismic attributes in time-lapse monitoring studies, evaluation of pitfalls in the use of point-based measurements and facies classifications, novel applications of inversion results, improved methods of tying seismic data to the wellbore, and a comparison of methods used to detect pressure compartments. Some of the data sets used are in the public domain, allowing other investigators to test our techniques or to improve upon them using the same data. From the public-domain Stratton data set we have demonstrated that apparent correlations between attributes derived along ''phantom'' horizons are artifacts of isopach changes; only if the interpreter understands that the interpretation is based on this correlation with bed thickening or thinning can reliable interpretations of channel horizons and facies be made. From the public-domain Boonsville data set we developed techniques to use conventional seismic attributes, including seismic facies generated under various neural network procedures, to subdivide regional facies determined from logs into productive and non-productive subfacies.
International Nuclear Information System (INIS)
Randic, M.; Wilkins, C.L.
1979-01-01
Selected molecular data on alkanes have been reexamined in a search for general regularities in isomeric variations. In contrast to prevailing approaches concerned with fitting data by searching for an optimal parameterization, the present work is primarily aimed at establishing trends, i.e., searching for relative magnitudes and their regularities among the isomers. Such an approach is complementary to curve-fitting or correlation-seeking procedures. It is particularly useful when incomplete data allow trends to be recognized but no quantitative correlation to be established. One proceeds by first ordering the structures. One way is to consider molecular graphs and enumerate paths of different lengths as the basic graph invariant. It can be shown that, for several thermodynamic molecular properties, the numbers of paths of length two (p2) and length three (p3) are critical. Hence, an ordering based on p2 and p3 indicates possible trends and behavior for many molecular properties, some of which relate to others, some of which do not. By considering a grid graph derived by attributing to each isomer the coordinates (p2, p3) and connecting points along the coordinate axes, one obtains a simple presentation useful for isomer structural interrelations. This skeletal frame is one upon which possible trends for different molecular properties may be conveniently represented. The significance of the results and their conceptual value is discussed. 16 figures, 3 tables
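The path counts p2 and p3 used for ordering isomers can be computed directly from a molecular graph by enumerating simple paths. A small sketch comparing the carbon skeletons of n-butane (a chain) and isobutane (a star); vertex labels are arbitrary:

```python
def count_paths(edges, length):
    """Count simple paths with `length` edges in an undirected graph,
    treating a path and its reverse as the same path."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0

    def extend(path):
        nonlocal count
        if len(path) == length + 1:
            count += 1
            return
        for w in adj[path[-1]]:
            if w not in path:       # simple paths only: no repeated vertices
                extend(path + [w])

    for start in adj:
        extend([start])
    return count // 2               # each path was found from both ends

# Carbon skeletons: n-butane is a 4-vertex chain, isobutane a 3-pointed star.
chain = [(0, 1), (1, 2), (2, 3)]
star = [(0, 1), (0, 2), (0, 3)]
```

The two isomers have the same number of atoms and bonds but different (p2, p3) coordinates, which is exactly what places them at different points of the grid graph described above.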
Healthcare costs attributable to secondhand smoke exposure at home for U.S. adults.
Yao, Tingting; Sung, Hai-Yen; Wang, Yingning; Lightwood, James; Max, Wendy
2018-03-01
To estimate healthcare costs attributable to secondhand smoke (SHS) exposure at home among nonsmoking adults (18+) in the U.S. We analyzed data on nonsmoking adults (N=67,735) from the 2000, 2005, and 2010 (the latest available data on SHS exposure at home) U.S. National Health Interview Surveys. This study was conducted from 2015 to 2017. We examined hospital nights, home care visits, doctor visits, and emergency room (ER) visits. For each, we analyzed the association of SHS exposure at home with healthcare utilization with a Zero-Inflated Poisson regression model controlling for socio-demographic and other risk characteristics. Excess healthcare utilization attributable to SHS exposure at home was determined and multiplied by unit costs derived from the 2014 Medical Expenditures Panel Survey to determine annual SHS-attributable healthcare costs. SHS exposure at home was positively associated with hospital nights and ER visits, but was not statistically associated with home care visits and doctor visits. Exposed adults had 1.28 times more hospital nights and 1.16 times more ER visits than non-exposed adults. Annual SHS-attributable healthcare costs totaled $4.6 billion (including $3.8 billion for hospital nights and $0.8 billion for ER visits, 2014 dollars) in 2000, $2.1 billion (including $1.8 billion for hospital nights and $0.3 billion for ER visits) in 2005, and $1.9 billion (including $1.6 billion for hospital nights and $0.4 billion for ER visits) in 2010. SHS-attributable costs remain high, but have fallen over time. Tobacco control efforts are needed to further reduce SHS exposure at home and associated healthcare costs.
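The Zero-Inflated Poisson model used for the utilization counts mixes a point mass at zero (people who never use a service) with an ordinary Poisson distribution. A minimal sketch of its probability mass function; the parameter values are illustrative, not the study's estimates:

```python
import math

def zip_pmf(k, lam, pi):
    """Probability mass of a Zero-Inflated Poisson: with probability pi the
    count is a 'structural' zero, otherwise it is Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson

# With pi = 0.6, the model puts far more mass at zero than Poisson(2) alone
# (0.6 + 0.4 * e^{-2} versus e^{-2}), matching count data such as hospital
# nights where most respondents report none.
p_zero = zip_pmf(0, lam=2.0, pi=0.6)
```

Fitting this two-part model keeps the many never-users from biasing the Poisson rate, which is why it suits utilization outcomes like hospital nights and ER visits.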
Leukemias in the progeny of exposed parents
International Nuclear Information System (INIS)
Kosenko, M.M.; Gudkova, N.V.
1996-01-01
The purpose of this study was to assess the incidence of leukemias among the progeny of exposed parents. The parents were exposed as a result of discharges of radioactive waste from the Mayak atomic plant into the Techa river in the Southern Urals. The doses to the parents' gonads, ranging from 0.035 to 1.27 Sv, were due to external exposure in 1950-1956 and to incorporation of Cs-137. Nine cases of leukemia and four of lymphoma were recorded among 13,500 antenatally exposed subjects and descendants of exposed parents over the period 1950 to 1988. The leukemia morbidity index for the progeny of exposed parents was 2.51, which does not differ statistically from that in the control group. Refs. 7, figs. 3, tabs. 3
Regular pipeline maintenance of gas pipeline using technical operational diagnostics methods
Energy Technology Data Exchange (ETDEWEB)
Volentic, J [Gas Transportation Department, Slovensky plynarensky priemysel, Slovak Gas Industry, Bratislava (Slovakia)
1998-12-31
Slovensky plynarensky priemysel (SPP) operated 17 487 km of gas pipelines in 1995. The long-line pipelines totalled 5 191 km and the distribution network 12 296 km; the international transit system of long-line gas pipelines comprised 1 939 km of pipelines of various dimensions. This transport and distribution system represents a multibillion investment stored in the ground, exposed to environmental influences and to pipeline operational stresses. In spite of all technical and maintenance measures performed on operating gas pipelines, gradual ageing takes place anyway, expressed in degradation processes both in the steel tube and in the anti-corrosion coating. Within a certain time horizon, consistent and regular application of the methods and means of in-service technical diagnostics and rehabilitation of existing pipeline systems makes it possible to save substantial investment funds, postponing the need for funds for a complete or partial reconstruction or a new construction of a specific gas section. The purpose of this presentation is to report on the implementation of the programme of in-service technical diagnostics of gas pipelines within the framework of regular maintenance of SPP s.p. Bratislava high-pressure gas pipelines. (orig.) 6 refs.