WorldWideScience

Sample records for regularity exposing attributes

  1. Regularization of Instantaneous Frequency Attribute Computations

    Science.gov (United States)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

We compare two different methods of computing a temporally local frequency: 1) a stabilized instantaneous frequency based on the theory of the analytic signal; 2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979), as modified by Fomel (2007), and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or the Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications of this work is the discrimination between blast events and earthquakes. References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. Time-Frequency Analysis: Theory and Applications. USA: Prentice Hall, 1995. Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
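The analytic-signal route of method 1 can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: a simple envelope-stabilized division stands in for the roughness-penalized regularization the abstract describes, and the centroid here is computed from a global spectrum rather than a Gabor/Stockwell time-frequency decomposition.

```python
import numpy as np

def analytic_signal(x):
    # Analytic signal via FFT: zero out negative frequencies, double positives.
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, dt, eps=1e-3):
    # f(t) = (1/2*pi) * d(phase)/dt. The division by |z|^2 is stabilized by
    # adding a small fraction of the peak envelope -- a crude stand-in for the
    # regularized division discussed in the abstract.
    z = analytic_signal(x)
    dz = np.gradient(z, dt)
    denom = np.abs(z) ** 2 + (eps * np.abs(z).max()) ** 2
    return np.imag(np.conj(z) * dz) / (2 * np.pi * denom)

def centroid_frequency(x, dt):
    # Power centroid of the spectrum (Cohen, 1995); the paper's method uses a
    # time-frequency decomposition, giving one centroid per time sample.
    P = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), dt)
    return float((f * P).sum() / P.sum())

# A 20 Hz test tone sampled at 1 kHz: both estimates should sit near 20 Hz.
dt = 1e-3
t = np.arange(0, 1, dt)
tone = np.sin(2 * np.pi * 20 * t)
f_inst = instantaneous_frequency(tone, dt)
mid = f_inst[100:900]          # ignore edge effects
centroid = centroid_frequency(tone, dt)
```

For a pure tone the two estimates agree; they diverge (and the regularization starts to matter) when the envelope passes near zero.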

  2. Developing logistic regression models using purchase attributes and demographics to predict the probability of purchases of regular and specialty eggs.

    Science.gov (United States)

    Bejaei, M; Wiseman, K; Cheng, K M

    2015-01-01

Consumers' interest in specialty eggs appears to be growing in Europe and North America. The objective of this research was to develop logistic regression models that utilise purchaser attributes and demographics to predict the probability of a consumer purchasing a specific type of table egg, including regular (white and brown), non-caged (free-run, free-range and organic) or nutrient-enhanced eggs. These purchase prediction models, together with the purchasers' attributes, can be used to assess market opportunities of different egg types, specifically in British Columbia (BC). An online survey was used to gather data for the models. A total of 702 completed questionnaires were submitted by BC residents. Selected independent variables were included in the logistic regression models developed for the different egg types to predict the probability of a consumer purchasing a specific type of table egg. The variables used in the models accounted for 54% and 49% of the variance in the purchase of regular and non-caged eggs, respectively. Research results indicate that consumers of different egg types exhibit a set of unique and statistically significant characteristics and/or demographics. For example, consumers of regular eggs were less educated, older, price sensitive, major chain store buyers, and store flyer users, and had lower awareness about different types of eggs and less concern regarding animal welfare issues. However, most of the non-caged egg consumers were less concerned about price, had higher awareness about different types of table eggs, purchased their eggs from local/organic grocery stores, farm gates or farmers markets, and were more concerned about the care and feeding of hens compared to consumers of other egg types.
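The modelling step can be sketched with a toy logistic regression fitted by gradient descent on the log-loss. The predictors below (age, price sensitivity, awareness) are invented stand-ins for illustration, not the survey's actual variables or coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic respondents: three standardized predictor scores per person.
n = 400
X = rng.normal(size=(n, 3))
true_w = np.array([0.8, 1.2, -1.0])          # planted effects
p = 1 / (1 + np.exp(-(X @ true_w + 0.3)))
y = rng.binomial(1, p)                        # 1 = bought this egg type

# Fit logistic regression by batch gradient descent on the log-loss.
w = np.zeros(3)
b = 0.0
lr = 0.5
for _ in range(2000):
    z = 1 / (1 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (z - y) / n)
    b -= lr * float((z - y).mean())

# Predicted purchase probability for one hypothetical consumer profile.
profile = np.array([1.0, 1.0, -1.0])
prob = float(1 / (1 + np.exp(-(profile @ w + b))))
```

The fitted coefficient signs recover the planted effects, which is the kind of interpretation (e.g. "price-sensitive consumers are more likely to buy regular eggs") the study reports.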

  3. Network structure exploration in networks with node attributes

    Science.gov (United States)

    Chen, Yi; Wang, Xiaolong; Bu, Junzhao; Tang, Buzhou; Xiang, Xin

    2016-05-01

Complex networks provide a powerful way to represent complex systems and have been widely studied during the past several years. One of the most important tasks of network analysis is to detect structures (also called structural regularities) embedded in networks by determining group number and group partition. Most network structure exploration models only consider network links. However, in real-world networks, nodes may have attributes that are useful for network structure exploration. In this paper, we propose a novel Bayesian nonparametric (BNP) model to explore structural regularities in networks with node attributes, called the Bayesian nonparametric attribute (BNPA) model. This model not only takes full advantage of both the links between nodes and the node attributes for group partition via shared hidden variables, but also determines the group number automatically via Bayesian nonparametric theory. Experiments conducted on a number of real and synthetic networks show that our BNPA model is able to automatically explore structural regularities in networks with node attributes and is competitive with other state-of-the-art models.
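The "automatic group number" part rests on Bayesian nonparametric priors such as the Dirichlet process. A minimal sketch of its Chinese restaurant process view (a generic BNP building block, not the BNPA model itself) shows how the number of groups is an outcome of inference rather than a fixed input:

```python
import numpy as np

def crp_partition(n, alpha, rng):
    # Chinese restaurant process: customer i joins an existing table with
    # probability proportional to its current size, or opens a new table
    # with probability proportional to alpha.
    tables = []
    for i in range(n):
        probs = np.array([len(t) for t in tables] + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append([i])        # new group created on the fly
        else:
            tables[k].append(i)
    return tables

rng = np.random.default_rng(1)
groups = crp_partition(200, alpha=2.0, rng=rng)
n_groups = len(groups)                # not chosen in advance
```

In a full BNP model such as BNPA, this prior over partitions is combined with likelihoods for the links and node attributes, and the posterior concentrates on a data-driven group number.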

  4. Architectural patterns and quality attributes interaction

    NARCIS (Netherlands)

    Me, G.; Calero Munoz, C.; Lago, P.; Muccini, H.

    2016-01-01

    Architectural patterns and styles represent common solutions to recurrent problems. They encompass architectural knowledge about how to achieve holistic system quality. The relation between patterns (or styles) and quality attributes has been regularly addressed in the literature. However, there is

  5. Attribute importance segmentation of Norwegian seafood consumers: The inclusion of salient packaging attributes.

    Science.gov (United States)

    Olsen, Svein Ottar; Tuu, Ho Huu; Grunert, Klaus G

    2017-10-01

    The main purpose of this study is to identify consumer segments based on the importance of product attributes when buying seafood for homemade meals on weekdays. There is a particular focus on the relative importance of the packaging attributes of fresh seafood. The results are based on a representative survey of 840 Norwegian consumers between 18 and 80 years of age. This study found that taste, freshness, nutritional value and naturalness are the most important attributes for the home consumption of seafood. Except for the high importance of information about expiration date, most other packaging attributes have only medium importance. Three consumer segments are identified based on the importance of 33 attributes associated with seafood: Perfectionists, Quality Conscious and Careless Consumers. The Quality Conscious consumers feel more self-confident in their evaluation of quality, and are less concerned with packaging, branding, convenience and emotional benefits compared to the Perfectionists. Careless Consumers are important as regular consumers of convenient and pre-packed seafood products and value recipe information on the packaging. The seafood industry may use the results provided in this study to strengthen their positioning of seafood across three different consumer segments. Copyright © 2017 Elsevier Ltd. All rights reserved.
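Segments of this kind are typically found by cluster analysis on the attribute-importance ratings. A small k-means sketch on invented ratings (three planted segments, three illustrative attributes on a 1-7 scale) shows the idea; the study's actual method, attributes, and segment profiles may differ.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Plain Lloyd's algorithm: assign to nearest center, recompute means.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical importance ratings for: taste, packaging info, price.
rng = np.random.default_rng(2)
seg_means = np.array([[6.5, 3.0, 3.0],   # quality-conscious-like profile
                      [6.5, 6.0, 5.0],   # perfectionist-like profile
                      [4.0, 5.5, 6.0]])  # convenience/price-driven profile
X = np.vstack([m + rng.normal(0, 0.4, size=(100, 3)) for m in seg_means])
labels, centers = kmeans(X, k=3)
```

Each resulting cluster center is an "importance profile" that can be read off directly, which is how segment descriptions like those above are usually derived.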

  6. UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA

    Directory of Open Access Journals (Sweden)

    IONIŢĂ Elena

    2015-06-01

This paper presents the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra whose faces are regular polygons of several types, with equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. The modelling and unfolding of Platonic and Archimedean polyhedra is carried out using the 3ds Max program. This paper is intended as an example of descriptive geometry applications.
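The combinatorics behind such nets can be checked directly: every convex polyhedron satisfies Euler's formula V - E + F = 2, and flattening a polyhedron into a one-piece net means cutting E - F + 1 edges, since the uncut (fold) edges form a spanning tree of the dual graph with F - 1 edges. A quick check for the five Platonic solids:

```python
# (vertices, edges, faces) for the five Platonic (regular) solids.
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

# Euler's formula V - E + F = 2 holds for every convex polyhedron.
euler_ok = all(v - e + f == 2 for v, e, f in platonic.values())

# A one-piece net keeps F - 1 fold edges, so it cuts E - (F - 1) edges.
cut_edges = {name: e - f + 1 for name, (v, e, f) in platonic.items()}
```

For the cube this gives 7 cut edges, matching the familiar cross-shaped net; the same bookkeeping applies to the Archimedean solids.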

  7. Preference mapping of lemon lime carbonated beverages with regular and diet beverage consumers.

    Science.gov (United States)

    Leksrisompong, P P; Lopetcharat, K; Guthrie, B; Drake, M A

    2013-02-01

The drivers of liking of lemon-lime carbonated beverages were investigated with regular and diet beverage consumers. Ten beverages were selected from a category survey of commercial beverages using a D-optimal procedure. Beverages were subjected to consumer testing (n = 101 regular beverage consumers, n = 100 diet beverage consumers). Segmentation of consumers was performed on overall liking scores, followed by external preference mapping of selected samples. Diet beverage consumers liked 2 diet beverages more than regular beverage consumers did. There were no differences in the overall liking scores between diet and regular beverage consumers for the other products, except for a sparkling beverage sweetened with juice, which was more liked by regular beverage consumers. Three subtle but distinct consumer preference clusters were identified. Two segments had evenly distributed diet and regular beverage consumers, but one segment had a significantly greater percentage of regular beverage consumers. Consumer status (diet or regular beverage consumer) did not have a large impact on carbonated beverage liking. Instead, mouthfeel attributes were major drivers of liking when these beverages were tested in a blind tasting. Preference mapping of lemon-lime carbonated beverages with diet and regular beverage consumers allowed the determination of drivers of liking for both populations. An understanding of how mouthfeel attributes, aromatics, and basic tastes impact liking or disliking of products was achieved. Preference drivers established in this study provide product developers of carbonated lemon-lime beverages with additional information to develop beverages that may be suitable for different groups of consumers. © 2013 Institute of Food Technologists®

  8. Influence of whitening and regular dentifrices on orthodontic clear ligature color stability.

    Science.gov (United States)

    Oliveira, Adauê S; Kaizer, Marina R; Salgado, Vinícius E; Soldati, Dener C; Silva, Roberta C; Moraes, Rafael R

    2015-01-01

This study evaluated the effect of brushing orthodontic clear ligatures with a whitening dentifrice containing a blue pigment (Close Up White Now, Unilever, London, UK) on their color stability when exposed to a staining agent. Ligatures from 3M Unitek (Monrovia, CA, USA) and Morelli (Sorocaba, SP, Brazil) were tested. Baseline color measurements were performed; nonstained groups (control) were stored in distilled water, whereas test groups were exposed for 1 hour daily to red wine. Specimens were brushed daily using regular or whitening dentifrice. Color measurements were repeated after 7, 14, 21, and 28 days using a spectrophotometer based on the CIE L*a*b* system. Decreased luminosity (CIE L*), increased red discoloration (CIE a* axis), and increased yellow discoloration (CIE b* axis) were generally observed for ligatures exposed to the staining agent. Color variation was generally lower in specimens brushed with regular dentifrice, but ligatures brushed with whitening dentifrice were generally less red and less yellow than those brushed with regular dentifrice. The whitening dentifrice led to a blue discoloration trend, with visually detectable differences particularly apparent according to storage condition and ligature brand. The whitening dentifrice containing blue pigment did not improve the ligature color stability, but it decreased yellow discoloration and increased blue coloration. The use of a whitening dentifrice containing blue pigment during orthodontic treatment might decrease the yellow discoloration of elastic ligatures. © 2015 Wiley Periodicals, Inc.
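Color change in the CIE L*a*b* system is commonly summarized by the CIE76 difference ΔE*ab, the Euclidean distance between two measurements; lower L* means darker, higher a* redder, higher b* yellower. The values below are invented for illustration, not the study's data.

```python
import math

def delta_e_ab(lab1, lab2):
    # CIE76 color difference: Euclidean distance in CIE L*a*b* space.
    return math.dist(lab1, lab2)

# Hypothetical readings for one ligature: baseline vs. after 28 days of
# staining. L* drops (darker), a* rises (redder), b* rises (yellower).
baseline = (72.0, 1.5, 4.0)
stained = (65.0, 6.5, 12.0)
dE = delta_e_ab(baseline, stained)
```

A common rule of thumb is that ΔE*ab above roughly 3.3-3.7 is clinically or visually detectable, which is why per-axis shifts (less yellow, more blue) can matter even when total ΔE is similar across groups.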

  9. Diverse Regular Employees and Non-regular Employment (Japanese)

    OpenAIRE

    MORISHIMA Motohiro

    2011-01-01

    Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...

  10. Process attributes in bio-ontologies

    Directory of Open Access Journals (Sweden)

    Andrade André Q

    2012-08-01

    Full Text Available Abstract Background Biomedical processes can provide essential information about the (mal- functioning of an organism and are thus frequently represented in biomedical terminologies and ontologies, including the GO Biological Process branch. These processes often need to be described and categorised in terms of their attributes, such as rates or regularities. The adequate representation of such process attributes has been a contentious issue in bio-ontologies recently; and domain ontologies have correspondingly developed ad hoc workarounds that compromise interoperability and logical consistency. Results We present a design pattern for the representation of process attributes that is compatible with upper ontology frameworks such as BFO and BioTop. Our solution rests on two key tenets: firstly, that many of the sorts of process attributes which are biomedically interesting can be characterised by the ways that repeated parts of such processes constitute, in combination, an overall process; secondly, that entities for which a full logical definition can be assigned do not need to be treated as primitive within a formal ontology framework. We apply this approach to the challenge of modelling and automatically classifying examples of normal and abnormal rates and patterns of heart beating processes, and discuss the expressivity required in the underlying ontology representation language. We provide full definitions for process attributes at increasing levels of domain complexity. Conclusions We show that a logical definition of process attributes is feasible, though limited by the expressivity of DL languages so that the creation of primitives is still necessary. This finding may endorse current formal upper-ontology frameworks as a way of ensuring consistency, interoperability and clarity.

  11. Unsupervised seismic facies analysis with spatial constraints using regularized fuzzy c-means

    Science.gov (United States)

    Song, Chengyun; Liu, Zhining; Cai, Hanpeng; Wang, Yaojun; Li, Xingming; Hu, Guangmin

    2017-12-01

Seismic facies analysis techniques combine classification algorithms and seismic attributes to generate a map that describes the main reservoir heterogeneities. However, most current classification algorithms treat the seismic attributes as isolated data regardless of their spatial locations, and the resulting map is generally sensitive to noise. In this paper, a regularized fuzzy c-means (RegFCM) algorithm is used for unsupervised seismic facies analysis. Due to the regularized term of the RegFCM algorithm, data whose adjacent locations belong to the same class play a more important role in the iterative process than other data. Therefore, this method can reduce the effect of seismic data noise in discontinuous regions. Synthetic data with different signal-to-noise ratios are used to demonstrate the noise tolerance of the RegFCM algorithm. Meanwhile, the fuzzy factor, the neighbourhood window size and the regularization weight are tested over various values, to provide a reference for how to set these parameters. The new approach is also applied to a real seismic data set from the F3 block of the Netherlands. The results show improved spatial continuity, with clear facies boundaries and channel morphology, which reveals that the method is an effective seismic facies analysis tool.
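A minimal sketch of the idea: plain fuzzy c-means on a 1-D attribute profile, with a neighbourhood smoothing of the memberships standing in for the paper's regularized term (the exact RegFCM update differs, but the effect is the same — neighbours pull each sample toward the same class).

```python
import numpy as np

def fcm_spatial(x, c=2, m=2.0, beta=0.5, iters=40):
    # Fuzzy c-means with a spatial membership-smoothing step.
    # m is the fuzzy factor, beta the weight of the neighbourhood term.
    centers = np.linspace(x.min(), x.max(), c)   # deterministic init
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)  # standard FCM memberships
        # Spatial term: blend each sample's membership with its neighbours'.
        pad = np.vstack([u[:1], u, u[-1:]])
        u = (1 - beta) * u + beta * 0.5 * (pad[:-2] + pad[2:])
        u /= u.sum(axis=1, keepdims=True)
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)       # weighted center update
    return u, centers

# Noisy two-facies attribute profile: first half near 0, second half near 1.
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 0.15, 100), rng.normal(1, 0.15, 100)])
u, centers = fcm_spatial(x)
labels = u.argmax(axis=1)
```

The smoothing suppresses isolated misclassified samples inside each facies, which is the noise-tolerance behaviour the abstract describes; beta plays the role of the regularization weight.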

  12. Analysis of Logic Programs Using Regular Tree Languages

    DEFF Research Database (Denmark)

    Gallagher, John Patrick

    2012-01-01

The field of finite tree automata provides fundamental notations and tools for reasoning about sets of terms called regular or recognizable tree languages. We consider two kinds of analysis using regular tree languages, applied to logic programs. The first approach is to try to discover automatically a tree automaton from a logic program, approximating its minimal Herbrand model. In this case the input for the analysis is a program, and the output is a tree automaton. The second approach is to expose or check properties of the program that can be expressed by a given tree automaton. The input to the analysis is a program and a tree automaton, and the output is an abstract model of the program. These two contrasting abstract interpretations can be used in a wide range of analysis and verification problems.
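A regular tree language is exactly a set of terms accepted by a finite tree automaton. A toy bottom-up deterministic automaton (an illustration, not taken from the paper) makes the notion concrete: here terms are nested tuples over the signature {leaf/0, node/2}, and the automaton accepts the uniform-height full binary trees of even height.

```python
# States: 0 = full binary tree of even height, 1 = odd height,
# "dead" = not a uniform-height full binary tree.
# Transitions of a bottom-up deterministic finite tree automaton; any
# missing transition leads to the dead state.
delta = {
    "leaf": 0,
    ("node", 0, 0): 1,
    ("node", 1, 1): 0,
}
final_states = {0}

def run(t, delta):
    # Evaluate the automaton bottom-up over the term.
    if t == "leaf":
        return delta["leaf"]
    _, left, right = t
    return delta.get(("node", run(left, delta), run(right, delta)), "dead")

def accepts(t):
    return run(t, delta) in final_states

h1 = ("node", "leaf", "leaf")        # height 1: rejected
h2 = ("node", h1, h1)                # height 2: accepted
skew = ("node", "leaf", h1)          # not uniform height: dead state
```

In the analyses described above, automata like this either approximate the set of terms in a program's minimal Herbrand model or encode a property against which the program is abstractly checked.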

  13. Coordinate-invariant regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-01-01

    A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc

  14. Selection of regularization parameter for l1-regularized damage detection

    Science.gov (United States)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.

  15. Access to serviced land for the urban poor: the regularization paradox in Mexico

    Directory of Open Access Journals (Sweden)

    Alfonso Iracheta Cenecorta

    2000-01-01

The insufficient supply of serviced land at affordable prices for the urban poor and the need for regularization of the consequent illegal occupations in urban areas are two of the most important issues on the Latin American land policy agenda. Taking a structural/integrated view on the functioning of the urban land market in Latin America, this paper discusses the nexus between the formal and the informal land markets. It thus exposes the perverse feedback effects that curative regularization policies may have on the process by which irregularity is produced in the first place. The paper suggests that a more effective approach to the provision of serviced land for the poor cannot be resolved within the prevailing (curative regularization programs. These programs should have the capacity to mobilize the resources that do exist into a comprehensive program that links regularization with fiscal policy, including the exploration of value capture mechanisms.

  16. Regularities of radiorace formation in yeasts

    International Nuclear Information System (INIS)

    Korogodin, V.I.; Bliznik, K.M.; Kapul'tsevich, Yu.G.; Petin, V.G.; Akademiya Meditsinskikh Nauk SSSR, Obninsk. Nauchno-Issledovatel'skij Inst. Meditsinskoj Radiologii)

    1977-01-01

Two strains of diploid yeast, Saccharomyces ellipsoides Megri 139-B, isolated under natural conditions, and Saccharomyces cerevisiae 5a x 3Bα, heterozygous for the genes ade 1 and ade 2, were exposed to γ-quanta of Co-60. The content of saltant cells forming colonies with changed morphology, of nonviable cells, of respiration-mutant cells, and of cells recombinant for the genes ade 1 and ade 2 was determined. A certain regularity was revealed in the distribution of the four cell types among the colonies: the higher the content of cells of any one of the types, the higher the content of cells bearing other hereditary changes.

  17. The mortality experience of a group of Newfoundland fluorspar miners exposed to Rn progeny

    International Nuclear Information System (INIS)

    Morrison, H.; Semenciw, R.; Mao, Y.; Wigle, D.

    1988-02-01

    A cohort study of the mortality experience (1950-1984) of 1,772 Newfoundland fluorspar miners occupationally exposed to high levels of radon daughters has been conducted using two control groups (surface workers and Newfoundland males). Observed numbers of cancers of the lung, salivary gland and buccal cavity/pharynx were significantly elevated among underground miners. A highly significant relationship was noted between radon daughter exposure and risk of dying of lung cancer; the small numbers of salivary gland (n = 2) and buccal cavity/pharynx cancers (n = 6) precluded meaningful analysis of dose-response. Also significantly elevated among underground miners were deaths from silicosis and pneumoconioses. No statistically significant excess was found for any cause of death among surface workers. Using external controls, attributable and relative risk coefficients for lung cancer were estimated as 6.3 per working level month per million person-years and 0.89 percent per working level month respectively. Attributable risk coefficients were similar to some, but not all related mining studies. Relative risk coefficients were highest for those first exposed attributable risks to non-smokers. Relative risks fell sharply with age at observation whereas attributable risks were lowest in the youngest and oldest age groups. Using the risk coefficients from the present study, a miner exposed for 30 years at 4 WLM per year from age 20 has a risk of 7,366 per 100,000 of dying of lung cancer by age 70 using the relative risk model and a risk of 6,371 per 100,000 using the attributable risk model. This compares to 3,740 per 100,000 for a non-exposed male. 85 refs

  18. Environmental Monitoring Of Microbiological Laboratory: Expose Plate Method

    International Nuclear Information System (INIS)

    Yahaya Talib; Othman Mahmud; Noraisyah Mohd Yusof; Asmah Mohibat; Muhamad Syazwan Zulkifli

    2013-01-01

Monitoring of microorganisms is important and is conducted regularly in the environment of the microbiological laboratory at the Medical Technology Division. Its objective is to ensure that the quality of the working environment is maintained with respect to microbial contamination, and consequently to assure the quality of the microbiological tests. This paper presents a report of environmental monitoring since the year 2007. The test involved counting bacterial colonies after the growth media were exposed to air at identified locations. (author)

  19. Graded effects of regularity in language revealed by N400 indices of morphological priming.

    Science.gov (United States)

    Kielar, Aneta; Joanisse, Marc F

    2010-07-01

Differential electrophysiological effects for regular and irregular linguistic forms have been used to support the theory that grammatical rules are encoded using a dedicated cognitive mechanism. The alternative hypothesis is that language systematicities are encoded probabilistically in a way that does not categorically distinguish rule-like and irregular forms. In the present study, this matter was investigated more closely by focusing specifically on whether the regular-irregular distinction in English past tenses is categorical or graded. We compared the ERP priming effects of regulars (baked-bake), vowel-change irregulars (sang-sing), and "suffixed" irregulars that display a partial regularity (e.g., slept-sleep), as well as forms that are related strictly along formal or semantic dimensions. Participants performed a visual lexical decision task with either a visual (Experiment 1) or an auditory (Experiment 2) prime. Stronger N400 priming effects were observed for regular than for vowel-change irregular verbs, whereas suffixed irregulars tended to group with regular verbs. Subsequent analyses decomposed early versus late-going N400 priming and suggested that differences among forms can be attributed to the orthographic similarity of prime and target. Effects of morphological relatedness were observed in the later-going time period; however, we failed to observe true regular-irregular dissociations in either experiment. The results indicate that morphological effects emerge from the interaction of orthographic, phonological, and semantic overlap between words.

  20. Regular Nanoscale Protein Patterns via Directed Adsorption through Self-Assembled DNA Origami Masks.

    Science.gov (United States)

    Ramakrishnan, Saminathan; Subramaniam, Sivaraman; Stewart, A Francis; Grundmeier, Guido; Keller, Adrian

    2016-11-16

    DNA origami has become a widely used method for synthesizing well-defined nanostructures with promising applications in various areas of nanotechnology, biophysics, and medicine. Recently, the possibility to transfer the shape of single DNA origami nanostructures into different materials via molecular lithography approaches has received growing interest due to the great structural control provided by the DNA origami technique. Here, we use ordered monolayers of DNA origami nanostructures with internal cavities on mica surfaces as molecular lithography masks for the fabrication of regular protein patterns over large surface areas. Exposure of the masked sample surface to negatively charged proteins results in the directed adsorption of the proteins onto the exposed surface areas in the holes of the mask. By controlling the buffer and adsorption conditions, the protein coverage of the exposed areas can be varied from single proteins to densely packed monolayers. To demonstrate the versatility of this approach, regular nanopatterns of four different proteins are fabricated: the single-strand annealing proteins Redβ and Sak, the iron-storage protein ferritin, and the blood protein bovine serum albumin (BSA). We furthermore demonstrate the desorption of the DNA origami mask after directed protein adsorption, which may enable the fabrication of hierarchical patterns composed of different protein species. Because selectivity in adsorption is achieved by electrostatic interactions between the proteins and the exposed surface areas, this approach may enable also the large-scale patterning of other charged molecular species or even nanoparticles.

  1. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,
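To make the definition concrete: a connected graph is distance-regular when, for any pair of vertices at distance i, the numbers of neighbours of one vertex at distances i-1, i, and i+1 from the other depend only on i (the intersection numbers c_i, a_i, b_i). A quick check on the Petersen graph, a standard small example not specific to this survey:

```python
from itertools import combinations
from collections import deque

# Petersen graph (Kneser graph K(5,2)): vertices are 2-element subsets of
# {0..4}; two vertices are adjacent iff the subsets are disjoint.
verts = [frozenset(c) for c in combinations(range(5), 2)]
adj = {v: {w for w in verts if not v & w} for v in verts}

def dists(src):
    # Breadth-first search distances from src.
    d = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in d:
                d[w] = d[v] + 1
                q.append(w)
    return d

def intersection_numbers(v, w):
    # For w at distance i from v, count neighbours of w at distances
    # i-1 (c_i), i (a_i), i+1 (b_i) from v.
    dv = dists(v)
    i = dv[w]
    c = sum(1 for u in adj[w] if dv[u] == i - 1)
    a = sum(1 for u in adj[w] if dv[u] == i)
    b = sum(1 for u in adj[w] if dv[u] == i + 1)
    return i, (c, a, b)

# Distance-regularity: (c_i, a_i, b_i) must be the same for every pair
# at the same distance i.
profile = {}
regular = True
for v in verts:
    for w in verts:
        if v != w:
            i, cab = intersection_numbers(v, w)
            if profile.setdefault(i, cab) != cab:
                regular = False
```

The check confirms the Petersen graph's intersection array {3, 2; 1, 1}: every vertex has 3 neighbours, adjacent vertices share no common neighbour, and non-adjacent vertices share exactly one.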

  2. Regular expressions cookbook

    CERN Document Server

    Goyvaerts, Jan

    2009-01-01

    This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a
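A few recipe-style examples in the book's spirit, written here for Python's re module (the book gives variants for the other listed languages, whose regex dialects differ slightly):

```python
import re

# Recipe 1: validate an ISO-style date (format only, not calendar validity).
iso_date = re.compile(r"^\d{4}-\d{2}-\d{2}$")

# Recipe 2: extract all numbers, integer or decimal, from free text.
numbers = re.findall(r"-?\d+(?:\.\d+)?", "width 12.5 cm, margin -3 mm")

# Recipe 3: collapse runs of whitespace (spaces, tabs) to a single space.
collapsed = re.sub(r"\s+", " ", "too   much\t\tspace").strip()
```

Anchors (`^`, `$`), non-capturing groups (`(?:...)`), and greedy quantifiers used here are exactly the kinds of details whose misuse causes the false positives and performance problems the book warns about.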

  3. Hessian-regularized co-training for social activity recognition.

    Science.gov (United States)

    Liu, Weifeng; Li, Yang; Lin, Xu; Tao, Dacheng; Wang, Yanjiang

    2014-01-01

    Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-trainings have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the potential manifold. Hessian can properly exploit the local structure of the underlying data manifold. Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms.
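The co-training loop itself (without the Hessian regularization) can be sketched on toy two-view data. Nearest-centroid learners stand in for the paper's regression functions, and each view's most confident predictions on unlabeled data grow a shared labeled set — the mechanism whose early-round errors the Hessian term is designed to dampen.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-class toy data with two "views" of each example.
n = 200
y = np.tile([0, 1], n // 2)
view1 = y[:, None] * 2.0 + rng.normal(0, 0.7, size=(n, 2))
view2 = y[:, None] * np.array([-1.5, 1.0]) + rng.normal(0, 0.7, size=(n, 2))

labeled = np.arange(10)            # only 10 labeled examples to start
pool = np.arange(10, n)            # unlabeled pool

def centroid_predict(X, idx, lab, Xq):
    # Nearest-centroid classifier trained on (X[idx], lab), applied to Xq;
    # returns labels and a confidence (margin between class distances).
    c0 = X[idx][lab == 0].mean(axis=0)
    c1 = X[idx][lab == 1].mean(axis=0)
    d0 = np.linalg.norm(Xq - c0, axis=1)
    d1 = np.linalg.norm(Xq - c1, axis=1)
    return (d1 < d0).astype(int), np.abs(d0 - d1)

idx = labeled.copy()
labels = y[labeled].copy()
for _ in range(20):                          # co-training rounds
    for X in (view1, view2):                 # each view contributes in turn
        if len(pool) == 0:
            break
        pred, conf = centroid_predict(X, idx, labels, X[pool])
        take = conf.argsort()[-5:]           # most confident pseudo-labels
        idx = np.concatenate([idx, pool[take]])
        labels = np.concatenate([labels, pred[take]])
        pool = np.delete(pool, take)

acc = float((centroid_predict(view1, idx, labels, view1)[0] == y).mean())
```

If an early pseudo-label is wrong, later rounds inherit the error — the instability discussed above; the paper's contribution is to regularize each view's learner along the data manifold so that confident predictions are better grounded.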

  4. Hessian-regularized co-training for social activity recognition.

    Directory of Open Access Journals (Sweden)

    Weifeng Liu

Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-trainings have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the potential manifold. Hessian can properly exploit the local structure of the underlying data manifold. Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms.

  5. Japanese Pupils’ Attribution of their Perceived Mathematics Performance and the Relationships Between their Attribution of Mathematics Performance and their Affective Attitudes Promoted by Different Teaching Methods

    Directory of Open Access Journals (Sweden)

    Tomomi Saeki

    2006-04-01

    Full Text Available This research used a questionnaire survey to explore the relationship between pupils' attribution of their perceived mathematics performance and their affective attitudes towards mathematics learning as promoted by the different teaching methods they were exposed to in their mathematics classes. Both 5th and 8th graders attributed their success in learning mathematics to effort, although support from the teacher and support at home were also perceived as important factors in their success. Both grades overall gave effort-based attributions in the case of failure, although 5th graders regarded ability as being as important as effort in attributing failure in mathematics learning. Pupils who attributed their success in mathematics learning to effort or to support at school and home preferred teacher explanation and reading a textbook as learning strategies, while those attributing it to their ability preferred individual work. Where pupils attributed success to luck, this seemed to have a negative effect on their affective attitudes towards mathematics learning as promoted by different teaching methods, while attributing failure to luck seemed to have a positive effect. Attributing failure to poor teaching seemed to have a negative effect on their perception of teacher explanation. The relationships between pupils' effort- or ability-based attributions of failure and their preference for different teaching methods were not clear. Adopting various teaching methods in mathematics classes would seem to support pupils who have different attribution styles.

  6. LL-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    1980-01-01

    Culik II and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular

  7. On the Detection of Fake Certificates via Attribute Correlation

    Directory of Open Access Journals (Sweden)

    Xiaojing Gu

    2015-06-01

    Full Text Available Transport Layer Security (TLS) and its predecessor, SSL, are important cryptographic protocol suites on the Internet. They both implement public key certificates and rely on a group of trusted certificate authorities (i.e., CAs) for peer authentication. Unfortunately, recent research reveals that, if any one of the pre-trusted CAs is compromised, fake certificates can be issued to intercept the corresponding SSL/TLS connections. This security vulnerability has catastrophic impacts on SSL/TLS-based HTTPS, which is the underlying protocol that provides secure web services for e-commerce, e-mails, etc. To address this problem, we design an attribute dependency-based detection mechanism, called SSLight. SSLight can expose fake certificates by checking whether the certificates contain attribute dependencies that rarely occur in legitimate samples. We conduct extensive experiments to evaluate SSLight and successfully confirm that SSLight can detect the vast majority of fake certificates issued from any trusted CAs if they are compromised. As a real-world example, we also implement SSLight as a Firefox add-on and examine its capability of exposing existing fake certificates from DigiNotar and Comodo, both of which have had a giant impact around the world.

  8. Application of L1/2 regularization logistic method in heart disease diagnosis.

    Science.gov (United States)

    Zhang, Bowen; Chai, Hua; Yang, Ziyi; Liang, Yong; Chu, Gejin; Liu, Xiaoying

    2014-01-01

    Heart disease has become the number one killer threatening human health, and its diagnosis depends on many features, such as age, blood pressure, heart rate and dozens of other physiological indicators. Although there are many risk factors, doctors usually diagnose the disease depending on their intuition and experience, which requires a lot of knowledge and experience for correct determination. Finding the hidden medical information in existing clinical data is a noticeable and powerful approach in the study of heart disease diagnosis. In this paper, a sparse logistic regression method with L(1/2) regularization is introduced to detect the key risk factors in real heart disease data. Experimental results show that the sparse logistic L(1/2) regularization method selects fewer but more informative key features than the Lasso, SCAD, MCP and Elastic net regularization approaches. Simultaneously, the proposed method can cut down the computational complexity, save the cost and time of medical tests and checkups, and reduce the number of attributes that need to be collected from patients.
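The objective behind such a method is logistic loss plus a λ·Σ|w_j|^(1/2) penalty. Below is a naive smoothed-gradient toy illustrating that objective, not the authors' solver (the step size, smoothing constant and truncation threshold are all assumptions):

```python
import numpy as np

def l_half_logistic(X, y, lam=0.1, lr=0.05, iters=2000, eps=1e-2):
    """Minimize logistic loss + lam * sum_j |w_j|^(1/2) by gradient descent.
    The square-root penalty is non-convex and non-smooth at 0, so its
    gradient 0.5 * sign(w) / sqrt(|w|) is epsilon-smoothed; tiny weights
    are truncated to exactly 0 at the end to report a sparse model."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))            # predicted probabilities
        grad_loss = X.T @ (p - y) / n                 # logistic-loss gradient
        grad_pen = lam * 0.5 * w / (np.abs(w) ** 1.5 + eps)  # smoothed d|w|^0.5/dw
        w -= lr * (grad_loss + grad_pen)
    w[np.abs(w) < 1e-2] = 0.0
    return w
```

The L(1/2) penalty shrinks small (noise) coefficients toward zero more aggressively than L1 while penalizing large coefficients less, which is the sparsity behavior the abstract reports.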

  9. Implicit learning of non-linguistic and linguistic regularities in children with dyslexia.

    Science.gov (United States)

    Nigro, Luciana; Jiménez-Fernández, Gracia; Simpson, Ian C; Defior, Sylvia

    2016-07-01

    One of the hallmarks of dyslexia is the failure to automatise written patterns despite repeated exposure to print. Although many explanations have been proposed to explain this problem, researchers have recently begun to explore the possibility that an underlying implicit learning deficit may play a role in dyslexia. This hypothesis has been investigated through non-linguistic tasks exploring implicit learning in a general domain. In this study, we examined the abilities of children with dyslexia to implicitly acquire positional regularities embedded in both non-linguistic and linguistic stimuli. In experiment 1, 42 children (21 with dyslexia and 21 typically developing) were exposed to rule-governed shape sequences; whereas in experiment 2, a new group of 42 children were exposed to rule-governed letter strings. Implicit learning was assessed in both experiments via a forced-choice task. Experiments 1 and 2 showed a similar pattern of results. ANOVA analyses revealed no significant differences between the dyslexic and the typically developing group, indicating that children with dyslexia are not impaired in the acquisition of simple positional regularities, regardless of the nature of the stimuli. However, within group t-tests suggested that children from the dyslexic group could not transfer the underlying positional rules to novel instances as efficiently as typically developing children.

  10. Frequency of marriage and live birth among survivors prenatally exposed to the atomic bomb

    International Nuclear Information System (INIS)

    Blot, W.J.; Shimizu, Y.; Kato, H.; Miller, R.W.

    1975-01-01

    Frequency of marriage and birth as of January 1973 was determined for persons exposed in utero to the atomic bombs in 1945 and for controls. The marriage rate was lower in persons heavily exposed in utero than in the non-exposed or lightly exposed. This difference is attributed partly to the lesser marriageability of persons with mental retardation, who are significantly more numerous among the heavily exposed, and partly to unmeasured variables, possibly including social discrimination against survivors of the atomic bomb. No consistent relation was observed between radiation exposure and three reproductive indices: childless marriages, number of births, and interval between marriage and first birth.

  11. An iterative method for Tikhonov regularization with a general linear regularization operator

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.

    2010-01-01

    Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan

  12. Clinical findings on in utero exposed microcephalic children

    Energy Technology Data Exchange (ETDEWEB)

    Tabuchi, Akira; Hirai, Tsuyoshi; Nakagawa, Shigeru; Shimada, Katsunobu; Fujito, Junro

    1966-12-24

    Since animal experiments have shown that microcephaly is induced by fetal exposure to radiation, and microcephaly has been found in children of mothers exposed to x-ray therapy during pregnancy (Murphy et al.), the main cause of microcephaly in children exposed in utero to the A-bomb is considered to be ionizing radiation. Wood et al. reported an increased incidence of microcephaly and mental retardation in children exposed in utero at proximal distances, which they felt could not be attributed to any other known variable. ABCC has recently concluded that the effect of in utero exposure is primarily due to the immediate effect of radiation upon the fetus, although the physical injury to the mother from the A-bomb cannot be completely ignored. Our survey likewise revealed an increase of microcephaly in children exposed early in pregnancy, at less than 15 weeks, at distances closer than 1500 m. Thus, we presume that A-bomb radiation increases the incidence of microcephaly. 16 references, 8 tables.

  13. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
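The pattern syntax the book surveys is nearly identical across those engines; a quick illustration in Python's `re` (the log-style input line is invented):

```python
import re

# capture a date and a severity level from a log-style line
pattern = re.compile(r"^(?P<date>\d{4}-\d{2}-\d{2})\s+(?P<level>ERROR|WARN|INFO)\b")

m = pattern.match("2024-05-01 ERROR disk full")
print(m.group("date"), m.group("level"))   # 2024-05-01 ERROR
```

The same pattern works essentially unchanged in Perl, Ruby, Java, PHP and PCRE; only the named-group and API syntax around it varies.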

  14. Returning Special Education Students to Regular Classrooms: Externalities on Peers’ Outcomes

    DEFF Research Database (Denmark)

    Rangvid, Beatrice Schindler

    Policy reforms to boost full inclusion and conventional return flows send students with special educational needs (SEN) from segregated settings to regular classrooms. Using full population micro data from Denmark, I investigate whether becoming exposed to returning SEN students affects...... on test score gains of moderate size (-0.036 SD), while no significant effect is found in non-reform years. The results are robust to sensitivity checks. The negative exposure effect is significant only for boys, but does not differ by parental education or grade-level....

  15. Rule-based learning of regular past tense in children with specific language impairment.

    Science.gov (United States)

    Smith-Lock, Karen M

    2015-01-01

    The treatment of children with specific language impairment was used as a means to investigate whether a single- or dual-mechanism theory best conceptualizes the acquisition of English past tense. The dual-mechanism theory proposes that regular English past-tense forms are produced via a rule-based process whereas past-tense forms of irregular verbs are stored in the lexicon. Single-mechanism theories propose that both regular and irregular past-tense verbs are stored in the lexicon. Five 5-year-olds with specific language impairment received treatment for regular past tense. The children were tested on regular past-tense production and third-person singular "s" twice before treatment and once after treatment, at eight-week intervals. Treatment consisted of one-hour play-based sessions, once weekly, for eight weeks. Crucially, treatment focused on different lexical items from those in the test. Each child demonstrated significant improvement on the untreated past-tense test items after treatment, but no improvement on the untreated third-person singular "s". Generalization to untreated past-tense verbs could not be attributed to a frequency effect or to phonological similarity of trained and tested items. It is argued that the results are consistent with a dual-mechanism theory of past-tense inflection.

  16. Dyadic Event Attribution in Social Networks with Mixtures of Hawkes Processes.

    Science.gov (United States)

    Li, Liangda; Zha, Hongyuan

    2013-01-01

    In many applications in social network analysis, it is important to model the interactions and infer the influence between pairs of actors, leading to the problem of dyadic event modeling, which has attracted increasing interest recently. In this paper we focus on the problem of dyadic event attribution, an important missing data problem in dyadic event modeling where one needs to infer the missing actor-pairs of a subset of dyadic events based on their observed timestamps. Existing works either use fixed model parameters and heuristic rules for event attribution, or assume the dyadic events across actor-pairs are independent. To address these shortcomings we propose a probabilistic model based on mixtures of Hawkes processes that simultaneously tackles event attribution and network parameter inference, taking into consideration the dependency among dyadic events that share at least one actor. We also investigate using additive models to incorporate regularization to avoid overfitting. Our experiments on both synthetic and real-world data sets on international armed conflicts suggest that the proposed method is capable of significantly improving accuracy when compared with the state-of-the-art for dyadic event attribution.
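The building block of such a model is the Hawkes conditional intensity; a minimal sketch with the classic exponential kernel (the parameter values are illustrative, not from the paper):

```python
import math

def hawkes_intensity(t, events, mu=0.1, alpha=0.5, beta=1.0):
    """lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta*(t - t_i)).
    mu is the base rate; each past event excites the process, and the
    excitation decays exponentially at rate beta."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)

print(round(hawkes_intensity(3.0, [1.0, 2.0]), 4))  # 0.3516
```

In the mixture setting of the paper, each candidate actor-pair has its own such intensity, and the posterior responsibility of each pair for an unattributed event timestamp drives the attribution.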

  17. The perception of regularity in an isochronous stimulus in zebra finches (Taeniopygia guttata) and humans.

    Science.gov (United States)

    van der Aa, Jeroen; Honing, Henkjan; ten Cate, Carel

    2015-06-01

    Perceiving temporal regularity in an auditory stimulus is considered one of the basic features of musicality. Here we examine whether zebra finches can detect regularity in an isochronous stimulus. Using a go/no go paradigm we show that zebra finches are able to distinguish between an isochronous and an irregular stimulus. However, when the tempo of the isochronous stimulus is changed, it is no longer treated as similar to the training stimulus. Training with three isochronous and three irregular stimuli did not result in improvement of the generalization. In contrast, humans, exposed to the same stimuli, readily generalized across tempo changes. Our results suggest that zebra finches distinguish the different stimuli by learning specific local temporal features of each individual stimulus rather than attending to the global structure of the stimuli, i.e., to the temporal regularity. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. An evolution of image source camera attribution approaches.

    Science.gov (United States)

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    researchers, are also critically analysed and further categorised into four different classes, namely, optical aberrations based, sensor camera fingerprints based, processing statistics based and processing regularities based, to present a classification. Furthermore, this paper aims to investigate the challenging problems, and the proposed strategies of such schemes based on the suggested taxonomy to plot an evolution of the source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. An investigation of the general regularity of size dependence of reaction kinetics of nanoparticles

    International Nuclear Information System (INIS)

    Cui, Zixiang; Duan, Huijuan; Xue, Yongqiang; Li, Ping

    2015-01-01

    In the processes of preparation and application of nanomaterials, chemical reactions of nanoparticles are often involved, and the size of the nanoparticles has a dramatic influence on reaction kinetics. Nevertheless, there are many conflicting reports on how kinetic parameters depend on size, and these conflicts have not been explained so far. In this paper, taking the reaction of nano-ZnO (average diameters from 20.96 to 53.31 nm) with acrylic acid solution as a model system, the influence of particle size on the kinetic parameters was investigated. The observed regularities were consistent with those in most of the literature, but inconsistent with those in a few reports; the reasons for the conflicts are interpreted. They can be attributed to two factors: improper data processing with too few data points, and the difference between solid particles and porous particles. A general regularity of the size dependence of reaction kinetics for solid particles was obtained. It shows that as the size of the nanoparticles decreases, the rate constant and the reaction order increase, while the apparent activation energy and the pre-exponential factor decrease; moreover, the logarithm of the rate constant, the logarithm of the pre-exponential factor, and the apparent activation energy each depend linearly on the reciprocal of the particle size.

  20. Age-related patterns of drug use initiation among polydrug using regular psychostimulant users.

    Science.gov (United States)

    Darke, Shane; Kaye, Sharlene; Torok, Michelle

    2012-09-01

    To determine age-related patterns of drug use initiation, drug sequencing and treatment entry among regular psychostimulant users. Cross-sectional study of 269 regular psychostimulant users, administered a structured interview examining onset of use for major licit and illicit drugs. The mean age at first intoxication was not associated with age or gender. In contrast, younger age was associated with earlier ages of onset for all of the illicit drug classes. Each additional year of age was associated with a 4 month increase in onset age for methamphetamine, and 3 months for heroin. By the age of 17, those born prior to 1961 had, on average, used only tobacco and alcohol, whereas those born between 1986 and 1990 had used nine different drug classes. The period between initial use and the transition to regular use, however, was stable. Age was also negatively correlated with both age at initial injection and age at regular injecting. Onset sequences, however, remained stable. Consistent with the age-related patterns of drug use, each additional year of age was associated with a 0.47 year increase in the age at first treatment. While the age at first intoxication appeared stable, the trajectory through illicit drug use was substantially truncated. The data indicate that, at least among those who progress to regular illicit drug use, younger users are likely to be exposed to far broader polydrug use in their teens than has previously been the case. © 2012 Australasian Professional Society on Alcohol and other Drugs.

  1. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations

  2. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: Containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression.
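Containment of regular expressions is decidable, which is what a sound and complete axiomatization certifies syntactically. Below is a small decision-procedure sketch using Brzozowski derivatives rather than the paper's coinductive calculus; the smart constructors normalize only up to a simple congruence, which (an assumption here) is enough to make the state space finite for small examples:

```python
# Regular expressions as hashable tuples:
# ('empty',) = the empty set, ('eps',) = the empty string,
# ('chr', c), ('alt', frozenset_of_res), ('cat', r, s), ('star', r)
EMPTY = ('empty',)
EPS = ('eps',)

def chr_(c):
    return ('chr', c)

def alt(*rs):
    """Smart alternation: flatten, drop EMPTY, deduplicate."""
    items = set()
    for r in rs:
        if r[0] == 'alt':
            items |= r[1]
        elif r != EMPTY:
            items.add(r)
    if not items:
        return EMPTY
    if len(items) == 1:
        return next(iter(items))
    return ('alt', frozenset(items))

def cat(r, s):
    """Smart concatenation: EMPTY annihilates, EPS is the unit."""
    if r == EMPTY or s == EMPTY:
        return EMPTY
    if r == EPS:
        return s
    if s == EPS:
        return r
    return ('cat', r, s)

def star(r):
    if r in (EMPTY, EPS):
        return EPS
    if r[0] == 'star':
        return r
    return ('star', r)

def nullable(r):
    """Does the language of r contain the empty string?"""
    t = r[0]
    if t == 'eps':
        return True
    if t in ('empty', 'chr'):
        return False
    if t == 'alt':
        return any(nullable(x) for x in r[1])
    if t == 'cat':
        return nullable(r[1]) and nullable(r[2])
    return True  # star

def deriv(r, c):
    """Brzozowski derivative: the language of words w with c·w in L(r)."""
    t = r[0]
    if t in ('empty', 'eps'):
        return EMPTY
    if t == 'chr':
        return EPS if r[1] == c else EMPTY
    if t == 'alt':
        return alt(*(deriv(x, c) for x in r[1]))
    if t == 'cat':
        d = cat(deriv(r[1], c), r[2])
        if nullable(r[1]):
            d = alt(d, deriv(r[2], c))
        return d
    return cat(deriv(r[1], c), r)  # star

def contained(r, s, alphabet):
    """L(r) subseteq L(s): search the product of derivative automata for a
    state where r accepts but s does not."""
    seen, todo = set(), [(r, s)]
    while todo:
        pair = todo.pop()
        if pair in seen:
            continue
        seen.add(pair)
        a, b = pair
        if nullable(a) and not nullable(b):
            return False
        for c in alphabet:
            todo.append((deriv(a, c), deriv(b, c)))
    return True

a = chr_('a')
print(contained(star(cat(a, a)), star(a), "a"))  # True:  (aa)* is contained in a*
print(contained(star(a), star(cat(a, a)), "a"))  # False: 'a' is in a* but not (aa)*
```

The paper's coercion reading goes one step further: a containment proof is not just a yes-answer as above, but a function transforming parse trees of the smaller expression into parse trees of the larger one.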

  3. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.

    1980-01-01

    There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) First do all algebra exactly as in D = 4; (2) Then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher order loops are also discussed

  4. Healthcare costs attributable to secondhand smoke exposure at home for U.S. adults.

    Science.gov (United States)

    Yao, Tingting; Sung, Hai-Yen; Wang, Yingning; Lightwood, James; Max, Wendy

    2018-03-01

    To estimate healthcare costs attributable to secondhand smoke (SHS) exposure at home among nonsmoking adults (18+) in the U.S. We analyzed data on nonsmoking adults (N=67,735) from the 2000, 2005, and 2010 (the latest available data on SHS exposure at home) U.S. National Health Interview Surveys. This study was conducted from 2015 to 2017. We examined hospital nights, home care visits, doctor visits, and emergency room (ER) visits. For each, we analyzed the association of SHS exposure at home with healthcare utilization with a Zero-Inflated Poisson regression model controlling for socio-demographic and other risk characteristics. Excess healthcare utilization attributable to SHS exposure at home was determined and multiplied by unit costs derived from the 2014 Medical Expenditures Panel Survey to determine annual SHS-attributable healthcare costs. SHS exposure at home was positively associated with hospital nights and ER visits, but was not statistically associated with home care visits and doctor visits. Exposed adults had 1.28 times more hospital nights and 1.16 times more ER visits than non-exposed adults. Annual SHS-attributable healthcare costs totaled $4.6 billion (including $3.8 billion for hospital nights and $0.8 billion for ER visits, 2014 dollars) in 2000, $2.1 billion (including $1.8 billion for hospital nights and $0.3 billion for ER visits) in 2005, and $1.9 billion (including $1.6 billion for hospital nights and $0.4 billion for ER visits) in 2010. SHS-attributable costs remain high, but have fallen over time. Tobacco control efforts are needed to further reduce SHS exposure at home and associated healthcare costs. Copyright © 2017. Published by Elsevier Inc.
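The final costing step reduces to attributable-fraction arithmetic once the rate ratios and unit costs are estimated; a sketch with invented inputs (the utilization count and unit cost below are placeholders, not the study's figures):

```python
def shs_attributable_cost(rate_ratio, exposed_utilization, unit_cost):
    """Cost attributable to exposure among the exposed group.
    Attributable fraction among the exposed = (RR - 1) / RR, so the
    excess utilization is that fraction of the exposed group's total."""
    af_exposed = (rate_ratio - 1.0) / rate_ratio
    return exposed_utilization * af_exposed * unit_cost

# hypothetical example: RR = 1.28 for hospital nights among SHS-exposed adults,
# 100,000 observed nights in the exposed group, $2,000 per night
print(round(shs_attributable_cost(1.28, 100_000, 2000)))  # 43750000
```

The study itself derives excess utilization from a Zero-Inflated Poisson regression with covariate adjustment rather than this raw two-group formula, but the multiplication by survey-derived unit costs works the same way.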

  5. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization.

  6. Regular Single Valued Neutrosophic Hypergraphs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Malik

    2016-12-01

    Full Text Available In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.
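In the crisp special case (all membership degrees collapsed to 0/1), regularity reduces to equal vertex degrees; a sketch of that check, with the neutrosophic membership machinery omitted as a simplifying assumption:

```python
def degrees(vertices, edges):
    """Degree of a vertex in a hypergraph = number of hyperedges containing it."""
    return {v: sum(v in e for e in edges) for v in vertices}

def is_regular(vertices, edges):
    """Regular: every vertex has the same degree."""
    return len(set(degrees(vertices, edges).values())) <= 1

def is_uniform(edges):
    """Uniform: every hyperedge has the same size."""
    return len({len(e) for e in edges}) <= 1

V = {1, 2, 3, 4}
E = [{1, 2}, {2, 3}, {3, 4}, {4, 1}]   # a 4-cycle viewed as a 2-uniform hypergraph
print(is_regular(V, E), is_uniform(E))  # True True
```

In the single valued neutrosophic setting of the paper, each vertex carries truth, indeterminacy and falsity degrees, and the degree sums above are replaced by sums of those membership triples.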

  7. Regular-soda intake independent of weight status is associated with asthma among US high school students.

    Science.gov (United States)

    Park, Sohyun; Blanck, Heidi M; Sherry, Bettylou; Jones, Sherry Everett; Pan, Liping

    2013-01-01

    Limited research shows an inconclusive association between soda intake and asthma, potentially attributable to certain preservatives in sodas. This cross-sectional study examined the association between regular (nondiet)-soda intake and current asthma among a nationally representative sample of high school students. Analysis was based on the 2009 national Youth Risk Behavior Survey and included 15,960 students (grades 9 through 12) with data for both regular-soda intake and current asthma status. The outcome measure was current asthma (ie, told by doctor/nurse that they had asthma and still have asthma). The main exposure variable was regular-soda intake (ie, drank a can/bottle/glass of soda during the 7 days before the survey). Multivariable logistic regression was used to estimate the adjusted odds ratios for regular-soda intake with current asthma after controlling for age, sex, race/ethnicity, weight status, and current cigarette use. Overall, 10.8% of students had current asthma. In addition, 9.7% of students who did not drink regular soda had current asthma, and 14.7% of students who drank regular soda three or more times per day had current asthma. Compared with those who did not drink regular soda, odds of having current asthma were higher among students who drank regular soda two times per day (adjusted odds ratio=1.28; 95% CI 1.02 to 1.62) and three or more times per day (adjusted odds ratio=1.64; 95% CI 1.25 to 2.16). The association between high regular-soda intake and current asthma suggests efforts to reduce regular-soda intake among youth might have benefits beyond improving diet quality. However, this association needs additional research, such as a longitudinal examination. Published by Elsevier Inc.
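The adjusted odds ratios quoted above come from exponentiating fitted logistic-regression coefficients; a sketch of the back-transformation (the coefficient and standard error below are invented for illustration):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """A logistic-regression coefficient beta is a log odds ratio;
    exponentiate it and the Wald interval beta +/- z*se to get the
    odds ratio and its 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

orr, lo, hi = odds_ratio_ci(0.247, 0.118)   # invented coefficient and SE
print(round(orr, 2), round(lo, 2), round(hi, 2))  # 1.28 1.02 1.61
```

An interval whose lower bound exceeds 1 (as for the twice-daily soda drinkers here) is what justifies calling the association statistically significant at the 5% level.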

  8. 5-D interpolation with wave-front attributes

    Science.gov (United States)

    Xie, Yujiang; Gajewski, Dirk

    2017-11-01

    Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes encode structural information of subsurface features such as the dip and strike of a reflector. The wave-front attributes work on a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved alongside the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work, and we call it wave-front-attribute-based 5-D interpolation (5-D WABI), as the two problems mentioned above are addressed. Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals that

  9. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....

  10. Technical attributes, health attribute, consumer attributes and their roles in adoption intention of healthcare wearable technology.

    Science.gov (United States)

    Zhang, Min; Luo, Meifen; Nie, Rui; Zhang, Yan

    2017-12-01

    This paper aims to explore factors influencing the healthcare wearable technology adoption intention from perspectives of technical attributes (perceived convenience, perceived irreplaceability, perceived credibility and perceived usefulness), health attribute (health belief) and consumer attributes (consumer innovativeness, conspicuous consumption, informational reference group influence and gender difference). By integrating technology acceptance model, health belief model, snob effect and conformity and reference group theory, hypotheses and research model are proposed. The empirical investigation (N=436) collects research data through questionnaire. Results show that the adoption intention of healthcare wearable technology is influenced by technical attributes, health attribute and consumer attributes simultaneously. For technical attributes, perceived convenience and perceived credibility both positively affect perceived usefulness, and perceived usefulness influences adoption intention. The relation between perceived irreplaceability and perceived usefulness is only supported by males. For health attribute, health belief affects perceived usefulness for females. For consumer attributes, conspicuous consumption and informational reference group influence can significantly moderate the relation between perceived usefulness and adoption intention and the relation between consumer innovativeness and adoption intention respectively. What's more, consumer innovativeness significantly affects adoption intention for males. This paper aims to discuss technical attributes, health attribute and consumer attributes and their roles in the adoption intention of healthcare wearable technology. Findings may provide enlightenment to differentiate product developing and marketing strategies and provide some implications for clinical medicine. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  12. Effective field theory dimensional regularization

    International Nuclear Information System (INIS)

    Lehmann, Dirk; Prezeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.

  14. "Polite People" and Military Meekness: the Attributes of Military Ethics

    Directory of Open Access Journals (Sweden)

    Pavel V. Didov

    2016-12-01

    The article analyzes the phenomenon of "polite people" from the point of view of the history and theory of ethical thought. It identifies and specifies the ethical principles that form the basis of military courtesy. On the basis of the revealed regularities, the study argues that military ethics is impossible without certain power attributes, which constitute its core. With regard to the traditions of Russian warriors, it reveals the key role of Orthodox ethics and military meekness in their formation. The results can serve as material for educational activities aimed at building fighting spirit.

  15. Hierarchical regular small-world networks

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan

    2008-01-01

    Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ~2^√(log₂ N²), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. It suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)

  16. 75 FR 76006 - Regular Meeting

    Science.gov (United States)

    2010-12-07

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...

  17. General inverse problems for regular variation

    DEFF Research Database (Denmark)

    Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan

    2014-01-01

    Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...

  18. Continuum-regularized quantum gravity

    International Nuclear Information System (INIS)

    Chan Huesum; Halpern, M.B.

    1987-01-01

    The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)

  19. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  20. Geometric continuum regularization of quantum field theory

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1989-01-01

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs

  1. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    Science.gov (United States)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be 'modern least-squares'. The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
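
The bias described above is easy to see in the proximal (threshold) operators of the two penalty classes. The sketch below is illustrative only, not the thesis's own construction: soft thresholding, the prox of the ℓ1 norm, shrinks every surviving coefficient by λ, while firm thresholding, the prox of one parameterized non-convex penalty, leaves large coefficients unbiased. All function names and parameter values here are invented for illustration.

```python
import numpy as np

def soft_threshold(y, lam):
    """Prox of the l1 norm: every surviving value is shrunk by lam (bias)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def firm_threshold(y, lam, mu):
    """Prox of a parameterized non-convex penalty (firm thresholding, mu > lam):
    zeroes small values, interpolates in (lam, mu], leaves large values unbiased."""
    a = np.abs(y)
    return np.where(a <= lam, 0.0,
           np.where(a <= mu, np.sign(y) * mu * (a - lam) / (mu - lam), y))

y = np.array([0.5, 1.5, 4.0])
print(soft_threshold(y, 1.0))         # [0.  0.5 3. ]  -- large entry biased down
print(firm_threshold(y, 1.0, 2.0))    # [0.  1.  4. ]  -- large entry preserved
```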

  2. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    Science.gov (United States)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
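
As a minimal sketch of the underlying idea (not the CSR processing chain itself), Tikhonov regularization replaces an unstable least-squares solve with a damped one. The matrix and noise level below are invented toy values chosen to make the instability visible.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min_x ||Ax - b||^2 + alpha^2 ||x||^2 via the damped normal
    equations (A^T A + alpha^2 I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

# Toy ill-conditioned problem: the tiny singular value amplifies noise.
A = np.array([[1.0, 0.0],
              [0.0, 1e-4]])
b = np.array([1.0, 1e-4 + 1e-3])       # exact data [1, 1e-4] plus noise 1e-3

x_ls  = np.linalg.solve(A, b)          # second entry explodes to ~11
x_reg = tikhonov_solve(A, b, 1e-2)     # noisy component damped instead
print(x_ls, x_reg)
```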

  3. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of the regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
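
A rough sketch of the idea, using the common half-quadratic treatment of correntropy (the paper's own alternating algorithm may differ in detail): maximizing Gaussian correntropy amounts to an iteratively reweighted ridge solve in which samples with large residuals, i.e., likely label noise, receive exponentially small weights. All data and parameter values below are invented.

```python
import numpy as np

def mcc_fit(X, y, sigma=1.0, lam=0.1, iters=30):
    """Regularized maximum-correntropy regression, half-quadratic style:
    alternate (1) Gaussian weights that down-weight large residuals and
    (2) a weighted ridge solve. Outliers end up with weight ~ 0."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        r = y - X @ w
        p = np.exp(-r**2 / (2.0 * sigma**2))    # correntropy sample weights
        # p[:, None] * X applies the weights row-wise in the ridge solve
        w = np.linalg.solve(X.T @ (p[:, None] * X) + lam * np.eye(d),
                            X.T @ (p * y))
    return w

X = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
y = 2.0 * X[:, 0]                    # true slope 2
y[-3:] = -20.0                       # three grossly mislabeled samples
w_ls  = np.linalg.lstsq(X, y, rcond=None)[0]
w_mcc = mcc_fit(X, y)
print(w_ls, w_mcc)                   # least squares dragged negative; MCC near 2
```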

  5. Urinary 1-Hydroxypyrene Levels in Workers Exposed to Polycyclic Aromatic Hydrocarbon from Rubber Wood Burning

    Directory of Open Access Journals (Sweden)

    Thitiworn Choosong

    2014-06-01

    Conclusion: The urinary 1-OHP levels of workers exposed to PAHs were high. Accumulation of 1-OHP in the body was not clear, although the workers had long working hours with few days off over their working experience. Therefore, a regular day-off schedule and rotating shift work during high-production RSS periods should be arranged for RSS workers.

  6. The potential DNA toxic changes among workers exposed to antimony trioxide.

    Science.gov (United States)

    El Shanawany, Safaa; Foda, Nermine; Hashad, Doaa I; Salama, Naglaa; Sobh, Zahraa

    2017-05-01

    Occupational exposure to antimony has gained much interest when specific toxic effects were noticed among workers processing antimony. Thus, the aim of the present work was to investigate the potential DNA oxidative damage occurring among Egyptian workers occupationally exposed to antimony trioxide. The study was conducted on 25 subjects exposed to antimony trioxide while working in the polymerization process of polyester in Misrayon and Polyester Fiber Company, KafrEldawwar, Beheira, Egypt. Urinary antimony levels were assessed using inductively coupled plasma-optical emission spectrometry (ICP-OES) and considered as a biological exposure index. DNA damage and total oxidant capacity (TOC) were assessed using ELISA. DNA damage was detected in the form of increased apurinic/apyrimidinic (AP) sites among antimony trioxide-exposed workers compared to control subjects, but it could not be explained by oxidative mechanisms due to the lack of significant correlation between DNA damage and measured TOC. Antimony trioxide might have a genotoxic impact on occupationally exposed workers which could not be attributed to oxidative stress in the studied cases.

  7. Quantifying how smokers value attributes of electronic cigarettes.

    Science.gov (United States)

    Nonnemaker, James; Kim, Annice E; Lee, Youn Ok; MacMonegle, Anna

    2016-04-01

    Rates of electronic cigarette (e-cigarette) use have increased quickly among US adults (3.3% in 2010 to 8.5% in 2013) and youth (4.5% in 2013 to 13.4% in 2014). As state and local governments consider regulatory policies, understanding what smokers believe about e-cigarettes and how they value e-cigarettes is important. Using data from a convenience sample of Florida adult smokers (N=765), we investigated the value smokers place on specific attributes of e-cigarettes (availability of flavours, effectiveness of e-cigarettes as a cessation aid, healthier alternative to regular cigarettes, ability to use e-cigarettes in public places) by asking smokers how much they would be willing to pay for e-cigarettes with and without each of these attributes. For cigarette-only and dual users, losing the ability to use an e-cigarette as a quit aid and losing the harm reduction of an e-cigarette significantly reduced the price respondents were willing to pay for an e-cigarette. For cigarette-only users, not being able to use an e-cigarette indoors and losing flavours also significantly reduced the price respondents were willing to pay for an e-cigarette. Our results suggest that smokers value multiple attributes of e-cigarettes. Our valuation measures also appear to align with smokers' beliefs about e-cigarettes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  8. A FRAMEWORK FOR ATTRIBUTE-BASED COMMUNITY DETECTION WITH APPLICATIONS TO INTEGRATED FUNCTIONAL GENOMICS.

    Science.gov (United States)

    Yu, Han; Hageman Blair, Rachael

    2016-01-01

    Understanding community structure in networks has received considerable attention in recent years. Detecting and leveraging community structure holds promise for understanding and potentially intervening with the spread of influence. Network features of this type have important implications in a number of research areas, including marketing, social networks, and biology. However, an overwhelming majority of traditional approaches to community detection cannot readily incorporate information of node attributes. Integrating structural and attribute information is a major challenge. We propose a flexible iterative method, inverse regularized Markov Clustering (irMCL), for network clustering via the manipulation of the transition probability matrix (aka stochastic flow) corresponding to a graph. Similar to traditional Markov Clustering, irMCL iterates between "expand" and "inflate" operations, which aim to strengthen the intra-cluster flow while weakening the inter-cluster flow. Attribute information is directly incorporated into the iterative method through a sigmoid (logistic function) that naturally dampens attribute influence that is contradictory to the stochastic flow through the network. We demonstrate the advantages and flexibility of our approach using simulations and real data. We highlight an application that integrates a breast cancer gene expression data set and a functional network defined via KEGG pathways, revealing significant modules for survival.
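
For readers unfamiliar with the underlying machinery, here is a minimal sketch of plain Markov Clustering, the expand/inflate loop that irMCL modifies; the paper's attribute-driven sigmoid reweighting is deliberately omitted, and the graph is a toy example.

```python
import numpy as np

def mcl(adj, inflation=2.0, iters=50):
    """Plain Markov Clustering: 'expand' (matrix squaring) spreads flow,
    'inflate' (elementwise power + column renormalization) strengthens
    intra-cluster flow and weakens inter-cluster flow."""
    M = adj + np.eye(len(adj))            # self-loops stabilize the iteration
    M = M / M.sum(axis=0)                 # column-stochastic flow matrix
    for _ in range(iters):
        M = M @ M                         # expand
        M = M ** inflation                # inflate
        M = M / M.sum(axis=0)             # renormalize columns
    # each column's surviving mass points at its cluster's attractor(s)
    return [tuple(np.nonzero(M[:, j] > 1e-6)[0]) for j in range(len(adj))]

# Two triangles joined by a single bridge edge -> expect two clusters.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
labels = mcl(A)
print(labels)    # nodes 0-2 share one attractor, nodes 3-5 another
```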

  9. Regular moderate or intense exercise prevents depression-like behavior without change of hippocampal tryptophan content in chronically tryptophan-deficient and stressed mice.

    Directory of Open Access Journals (Sweden)

    Hosung Lee

    Regular exercise has an antidepressant effect in human subjects. Studies using animals have suggested that the antidepressant effect of exercise is attributable to an increase of brain 5-hydroxytryptamine (5-HT); however, the precise mechanism underlying the antidepressant action via exercise is unclear. In contrast, the effect of 5-HT on antidepressant activity has not been clarified, in part because the therapeutic response to antidepressant drugs has a time lag in spite of the rapid increase of brain 5-HT upon administration of these drugs. This study was designed to investigate the contribution of brain 5-HT to the antidepressant effect of exercise. Mice were fed a tryptophan-deficient diet and stressed using chronic unpredictable stress (CUS) for 4 weeks, with or without the performance of either moderate or intense exercise on a treadmill 3 days per week. The findings demonstrated that the onset of depression-like behavior is attributable not to chronic reduction of 5-HT but to chronic stress. Regular exercise, whether moderate or intense, prevents depression-like behavior, with an improvement of adult hippocampal cell proliferation and survival and without the recovery of 5-HT. Concomitantly, the mice that exercised showed increased hippocampal noradrenaline. Regular exercise prevents the impairment of short-term, but not long-term, memory in a 5-HT-reduced state. Together, these findings suggest that: (1) chronic reduction of brain 5-HT may not contribute to the onset of depression-like behavior; (2) regular exercise, whether moderate or intense, prevents the onset of chronic stress-induced depression-like behavior independent of brain 5-HT and dependent on brain noradrenaline; and (3) regular exercise prevents chronic tryptophan reduction-induced impairment of short-term, but not long-term, memory.

  10. Regularities of Multifractal Measures

    Indian Academy of Sciences (India)

    First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...

  11. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...
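
The paper adapts its regularization parameters by gradient-based minimization of the validation error; a much cruder stand-in that conveys the same principle is a grid search for the weight-decay parameter of a ridge model on a held-out set. Everything below (model, grid, data) is an invented illustration, not the paper's algorithm.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Weight-decay (L2-regularized) least-squares fit."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def pick_lambda(X_tr, y_tr, X_val, y_val, grid):
    """Choose the regularization strength that minimizes validation error."""
    errs = [np.mean((X_val @ ridge_fit(X_tr, y_tr, lam) - y_val) ** 2)
            for lam in grid]
    return grid[int(np.argmin(errs))]

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 10))
w_true = np.zeros(10)
w_true[0] = 1.0                      # only one informative input
y = X @ w_true + 0.5 * rng.normal(size=40)
grid = [0.0, 0.1, 1.0, 10.0]
lam = pick_lambda(X[:20], y[:20], X[20:], y[20:], grid)
print(lam)
```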

  12. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
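
The structure of such an estimator can be sketched as follows: constrain the condition number directly by clipping the sample eigenvalues into an interval [τ, κ_max·τ]. The paper selects τ by maximum likelihood; the heuristic τ used below is a simplification for illustration only.

```python
import numpy as np

def condreg(S, kappa_max):
    """Condition-number-constrained covariance estimate: clip the sample
    eigenvalues into [tau, kappa_max * tau], so cond(result) <= kappa_max.
    (The actual estimator chooses tau by maximum likelihood; this tau is
    a simple heuristic.)"""
    vals, vecs = np.linalg.eigh(S)
    tau = vals.max() / kappa_max
    clipped = np.clip(vals, tau, kappa_max * tau)
    return (vecs * clipped) @ vecs.T        # V diag(clipped) V^T

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 30))           # "large p small n": p=30 > n=12
S = np.cov(X, rowvar=False)             # rank-deficient sample covariance
S_hat = condreg(S, kappa_max=50.0)
print(np.linalg.cond(S_hat))            # bounded by ~50, unlike cond(S)
```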

  13. Objective Self Awareness, Self-Esteem and Causal Attributions for Success and Failure.

    Science.gov (United States)

    The investigation applied the theory of objective self-awareness (Duval and Wicklund, 1972) to the study of causal attributions that actors make for their past performance. A 2 (male vs. female subject) by 2 (success vs. failure) by 3 (objective self-awareness vs. control vs. time control) by 2 (high vs. low self-esteem subjects) design was employed. Objective self-awareness was manipulated by exposing subjects to their image on a wall...

  14. Let another praise you? The effects of source and attributional content on responses to group-directed praise.

    Science.gov (United States)

    Rabinovich, Anna; Morton, Thomas A; Crook, Michael; Travers, Claire

    2012-12-01

    Not all types of praise may be equally stimulating. Instead, positive feedback carries different meaning depending on the source that delivers it and the attributions for success that it contains. In the present study, source (in-group vs. out-group) of praise and its content (attributing success to internal vs. external causes) were experimentally manipulated. The results revealed that there was a significant interaction between source and content of praise on performance in a praise-related task. As predicted, participants exposed to out-group praise were motivated by external attributions for success rather than by internal attributions. Conversely, when praise originated from an in-group source, the attributional content of praise did not affect performance. This effect of source and content of praise on relevant behaviour was mediated by willingness to protect group image. Thus, responses to praise are contingent on what it implies about group success--corresponding to patterns demonstrated in previous work on group-directed criticism. ©2012 The British Psychological Society.

  16. Improving the Accuracy of Attribute Extraction using the Relatedness between Attribute Values

    Science.gov (United States)

    Bollegala, Danushka; Tani, Naoki; Ishizuka, Mitsuru

    Extracting attribute-values related to entities from web texts is an important step in numerous web related tasks such as information retrieval, information extraction, and entity disambiguation (namesake disambiguation). For example, for a search query that contains a personal name, we can not only return documents that contain that personal name, but if we have attribute-values such as the organization for which that person works, we can also suggest documents that contain information related to that organization, thereby improving the user's search experience. Despite numerous potential applications of attribute extraction, it remains a challenging task due to the inherent noise in web data -- often a single web page contains multiple entities and attributes. We propose a graph-based approach to select the correct attribute-values from a set of candidate attribute-values extracted for a particular entity. First, we build an undirected weighted graph in which, attribute-values are represented by nodes, and the edge that connects two nodes in the graph represents the degree of relatedness between the corresponding attribute-values. Next, we find the maximum spanning tree of this graph that connects exactly one attribute-value for each attribute-type. The proposed method outperforms previously proposed attribute extraction methods on a dataset that contains 5000 web pages.
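
The selection step can be sketched with a plain maximum spanning tree (Kruskal's algorithm run heaviest-edge-first on the relatedness graph); the paper's additional constraint of keeping exactly one attribute-value per attribute-type is omitted here, and the example nodes and weights are invented.

```python
def max_spanning_tree(nodes, edges):
    """Kruskal's algorithm on descending weights: greedily keep the most
    related pair of attribute-values unless it would close a cycle."""
    parent = {v: v for v in nodes}      # union-find forest
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path compression
            v = parent[v]
        return v
    tree = []
    for w, u, v in sorted(edges, reverse=True):   # heaviest edge first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Toy relatedness graph over candidate attribute-values (weights invented).
nodes = {"ACME Corp", "CEO", "New York", "violinist"}
edges = [(0.9, "ACME Corp", "CEO"), (0.2, "ACME Corp", "violinist"),
         (0.8, "CEO", "New York"), (0.1, "violinist", "New York")]
tree = max_spanning_tree(nodes, edges)
print(tree)   # keeps the three heaviest acyclic edges
```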

  17. Regular-, irregular-, and pseudo-character processing in Chinese: The regularity effect in normal adult readers

    Directory of Open Access Journals (Sweden)

    Dustin Kai Yan Lau

    2014-03-01

    Background: Unlike alphabetic languages, Chinese uses a logographic script. However, many characters' phonetic radicals have the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions, resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be read correctly through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants: Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli: Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement in Chinese) were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure: Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis: ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1 or between subject

  18. Attributability of health effects at low radiation doses

    International Nuclear Information System (INIS)

    Gonzalez, Abel

    2008-01-01

    Full text: A controversy still persists on whether health effects can be alleged from radiation exposure situations involving low radiation doses (e.g. below the international dose limits for the public). Arguments have evolved around the validity of the dose-response representation that is internationally used for radiation protection purposes, namely the so-called linear-non-threshold (LNT) model. The debate has been masked by the intrinsic randomness of radiation interaction at the cellular level and also by gaps in the relevant scientific knowledge on the development and expression of health effects. There has also been a vague use, abuse, and misuse of radiation-related risk concepts and quantities and their associated uncertainties. As a result, there is some ambiguity in the interpretation of the phenomena and a general lack of awareness of the implications for a number of risk-causation qualities, namely its attributes and characteristics. In particular, the LNT model has been used not only for protection purposes but also for blindly attributing actual effects to specific exposure situations. The latter has been discouraged as being a misuse of the model, but the supposed incorrectness has not been clearly proven. The paper will endeavour to demonstrate unambiguously the following thesis in relation to health effects due to low radiation doses: 1) Their existence is highly plausible. A number of epidemiological statistical assessments of sufficiently large exposed populations show that, under certain conditions, the prevalence of the effects increases with dose. From these assessments, it can be hypothesized that the occurrence of the effects at any dose, however small, appears decidedly worthy of belief. Strictly speaking, though, the evidence does not allow one to conclude that a threshold dose level does not exist either. In fact, a formal quantitative uncertainty analysis, combining the different uncertain components of estimated radiation-related risk, with and

  20. Regularity effect in prospective memory during aging

    Directory of Open Access Journals (Sweden)

    Geoffrey Blondelle

    2016-10-01

    Full Text Available Background: Regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting), binding, short-term memory, and retrospective episodic memory, to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities only involved planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical

  1. J-regular rings with injectivities

    OpenAIRE

    Shen, Liang

    2010-01-01

    A ring $R$ is called a J-regular ring if $R/J(R)$ is von Neumann regular, where $J(R)$ is the Jacobson radical of $R$. It is proved that if $R$ is J-regular, then (i) $R$ is right $n$-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) $R$ is right FP-injective if and only if $R$ is right $(J, R)$-FP-injective. Some known results are improved.
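
    For orientation (standard ring-theoretic facts, not taken from the paper), von Neumann regularity of $S = R/J(R)$ means that every element has an inner inverse:

```latex
% Von Neumann regularity of S = R/J(R):
\forall\, a \in S \;\; \exists\, b \in S : \quad a\,b\,a = a .
% For instance, every semilocal ring R is J-regular, since R/J(R) is then
% semisimple artinian, and semisimple rings are von Neumann regular.
```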

  2. Iterative Regularization with Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2007-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
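
    For intuition (an illustration constructed here, not taken from the paper), the regularizing role of the iteration count can be seen in a small experiment: on a symmetric discrete ill-posed problem, the MINRES error typically first decreases and then grows as the iterates begin to fit the noise, so early stopping acts as regularization. The matrix, spectrum, and noise level below are all invented for the demonstration.

```python
# Semi-convergence of MINRES on a synthetic symmetric ill-posed problem:
# the iteration number plays the role of the regularization parameter.
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)
n = 64
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = 10.0 ** np.linspace(0, -10, n)        # rapidly decaying spectrum
A = Q @ np.diag(eigs) @ Q.T                  # symmetric ill-conditioned matrix
x_true = Q @ (eigs ** 0.5)                   # a "smooth" exact solution
b = A @ x_true + 1e-6 * rng.standard_normal(n)

errors = []
def record(xk):
    errors.append(np.linalg.norm(xk - x_true) / np.linalg.norm(x_true))

minres(A, b, maxiter=60, callback=record)
best = int(np.argmin(errors))
print(f"best relative error {errors[best]:.3f} at iteration {best + 1}")
# After the best iteration, further iterations start fitting the noise,
# so stopping early yields the regularized solution.
```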

  3. Iterative regularization with minimum-residual methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2006-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.

  4. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
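
    The alternating scheme described above can be sketched in a few lines. This is a much-simplified toy version: the data, the k-NN graphs, the softmax-style weight update, and all parameters below are illustrative assumptions, not the authors' exact objective or experimental setup.

```python
# Toy sketch of multiple-graph-regularized ranking: combine several graph
# Laplacians with learned weights, alternating between a score step and a
# graph-weight step.
import numpy as np

rng = np.random.default_rng(1)
n, n_graphs = 30, 3

def knn_laplacian(X, k):
    # Symmetric k-NN affinity graph with Gaussian weights, and its Laplacian.
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:
            W[i, j] = W[j, i] = np.exp(-D[i, j] ** 2)
    return np.diag(W.sum(1)) - W

X = rng.standard_normal((n, 5))                  # synthetic "domain" features
laplacians = [knn_laplacian(X, k) for k in (3, 5, 8)]   # candidate graph models
y = np.zeros(n); y[0] = 1.0                      # query indicator vector

mu = np.full(n_graphs, 1.0 / n_graphs)           # graph weights
for _ in range(10):                              # alternating minimization
    L = sum(m * Lk for m, Lk in zip(mu, laplacians))
    f = np.linalg.solve(np.eye(n) + L, y)        # score step: (I + L) f = y
    smooth = np.array([f @ Lk @ f for Lk in laplacians])
    mu = np.exp(-smooth / smooth.mean())         # weight step (softmax-style)
    mu /= mu.sum()

ranking = np.argsort(-f)
print("top-5 ranked items for the query:", ranking[:5])
```

The score step solves a graph-regularized least-squares problem; the weight step favors graphs on which the current ranking scores are smooth.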

  5. Regular pipeline maintenance of gas pipeline using technical operational diagnostics methods

    Energy Technology Data Exchange (ETDEWEB)

    Volentic, J [Gas Transportation Department, Slovensky plynarensky priemysel, Slovak Gas Industry, Bratislava (Slovakia)

    1998-12-31

    Slovensky plynarensky priemysel (SPP) operated 17 487 km of gas pipelines in 1995. The length of the long-line pipelines reached 5 191 km; the distribution network was 12 296 km. The international transit system of long-line gas pipelines comprised 1 939 km of pipelines of various dimensions. The described scale of the transport and distribution system represents multibillion investments stored in the ground, which are exposed to environmental influences and to pipeline operational stresses. In spite of all the technical and maintenance arrangements that have to be performed on operating gas pipelines, gradual ageing takes place anyway, expressed in degradation processes both in the steel tube and in the anti-corrosion coating. Within a certain time horizon, a consistent and regular application of the methods and means of in-service technical diagnostics and rehabilitation of existing pipeline systems makes it possible to save substantial investment funds, postponing the need for funds for a complete or partial reconstruction or new construction of a specific gas section. The purpose of this presentation is to report on the implementation of the programme of in-service technical diagnostics of gas pipelines within the framework of regular maintenance of SPP s.p. Bratislava high pressure gas pipelines. (orig.) 6 refs.

  6. Regular pipeline maintenance of gas pipeline using technical operational diagnostics methods

    Energy Technology Data Exchange (ETDEWEB)

    Volentic, J. [Gas Transportation Department, Slovensky plynarensky priemysel, Slovak Gas Industry, Bratislava (Slovakia)

    1997-12-31

    Slovensky plynarensky priemysel (SPP) operated 17 487 km of gas pipelines in 1995. The length of the long-line pipelines reached 5 191 km; the distribution network was 12 296 km. The international transit system of long-line gas pipelines comprised 1 939 km of pipelines of various dimensions. The described scale of the transport and distribution system represents multibillion investments stored in the ground, which are exposed to environmental influences and to pipeline operational stresses. In spite of all the technical and maintenance arrangements that have to be performed on operating gas pipelines, gradual ageing takes place anyway, expressed in degradation processes both in the steel tube and in the anti-corrosion coating. Within a certain time horizon, a consistent and regular application of the methods and means of in-service technical diagnostics and rehabilitation of existing pipeline systems makes it possible to save substantial investment funds, postponing the need for funds for a complete or partial reconstruction or new construction of a specific gas section. The purpose of this presentation is to report on the implementation of the programme of in-service technical diagnostics of gas pipelines within the framework of regular maintenance of SPP s.p. Bratislava high pressure gas pipelines. (orig.) 6 refs.

  7. Regularity in an environment produces an internal torque pattern for biped balance control.

    Science.gov (United States)

    Ito, Satoshi; Kawasaki, Haruhisa

    2005-04-01

    In this paper, we present a control method for achieving biped static balance under unknown periodic external forces of which only the periods are known. In order to maintain static balance adaptively in an uncertain environment, it is essential to have information on the ground reaction forces. However, when the biped is exposed to a steady environment that provides an external force periodically, the uncertain factors in the regularity of that environment are gradually clarified through a learning process, and finally a torque pattern for the balancing motion is acquired. Consequently, static balance is maintained without feedback from the ground reaction forces and is achieved in a feedforward manner.
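
    The idea of acquiring a feedforward pattern over repeated periods of a disturbance with known period can be illustrated with a toy iterative-learning scheme. The plant, gains, and disturbance below are hypothetical choices for the sketch, not the paper's biped model or controller.

```python
# Toy sketch: learn a feedforward input that cancels a periodic disturbance
# of known period by refining the input pattern once per cycle, without
# using disturbance feedback inside a cycle.
import numpy as np

T, dt = 200, 0.01
t = np.arange(T) * dt
d = 0.5 * np.sin(2 * np.pi * t / (T * dt))   # unknown disturbance, known period

def run_cycle(u):
    """One period of a toy first-order plant v' = u + d - v, starting at rest."""
    v = 0.0
    err = np.empty(T)
    for i in range(T):
        err[i] = v                           # deviation from the set point 0
        v += dt * (u[i] + d[i] - v)
    return err

u = np.zeros(T)                              # feedforward pattern, refined per cycle
for cycle in range(1000):
    e = run_cycle(u)
    u -= np.roll(e, -1)                      # shift by one step: u[i] acts on e[i+1]

print(f"max residual deviation after learning: {np.abs(run_cycle(u)).max():.2e}")
```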

  8. Searchable attribute-based encryption scheme with attribute revocation in cloud storage.

    Science.gov (United States)

    Wang, Shangping; Zhao, Duqiao; Zhang, Yaling

    2017-01-01

    Attribute-based encryption (ABE) is an effective way to achieve flexible and secure access control to data; attribute revocation is an extension of attribute-based encryption, and keyword search is an indispensable part of cloud storage. The combination of the two has an important application in cloud storage. In this paper, we construct a searchable attribute-based encryption scheme with attribute revocation in cloud storage. The keyword search in our scheme is attribute-based with access control: when the search succeeds, the cloud server returns the corresponding ciphertext to the user, and the user can then decrypt the ciphertext. Besides, our scheme supports multi-keyword search, which makes it more practical. Under the decisional bilinear Diffie-Hellman exponent (q-BDHE) and decisional Diffie-Hellman (DDH) assumptions in the selective security model, we prove that our scheme is secure.

  9. Higher derivative regularization and chiral anomaly

    International Nuclear Information System (INIS)

    Nagahama, Yoshinori.

    1985-02-01

    A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)
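
    For orientation, a generic textbook form of higher-derivative regularization (the specific regulator analyzed in the paper may differ) inserts a momentum-suppressing factor into the kinetic term:

```latex
% Schematic higher-derivative regularization of a gauge kinetic term:
\mathcal{L}_{\mathrm{reg}}
  = -\tfrac{1}{4}\, F_{\mu\nu}\Bigl(1 + \tfrac{D^{2}}{\Lambda^{2}}\Bigr)^{n} F^{\mu\nu},
\qquad
\frac{1}{p^{2}} \;\longrightarrow\; \frac{1}{p^{2}\bigl(1 + p^{2}/\Lambda^{2}\bigr)^{n}},
% so propagators fall off faster at large momenta, improving ultraviolet
% convergence; the regulator is removed in the limit \Lambda \to \infty.
```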

  10. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  11. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-01-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  12. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Full Text Available Abstract Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  13. Chemical composition and sensory analysis of peanut pastes elaborated with high-oleic and regular peanuts from Argentina

    Energy Technology Data Exchange (ETDEWEB)

    Riveros, C. G.; Mestrallet, M. G.; Nepote, V.; Grosso, N. R.

    2009-07-01

    The objective of this work was to determine the chemical composition, sensory attributes and consumer acceptance of peanut pastes prepared with the high-oleic cultivar, Granoleico (GO-P), in comparison with the regular cultivar, Tegua (T-P), of peanuts grown in Argentina. GO-P had a higher oil content (50.91%) than T-P (48.95%). GO-P and T-P did not show differences in ash and carbohydrate contents. T-P exhibited a higher protein content (27.49%) than GO-P (26.68%). GO-P had significantly higher oleic and lower linoleic acid contents (78.50% and 4.60%, respectively) than T-P (45.80% and 33.30%, respectively). In addition, GO-P showed a higher eicosenoic acid and a lower palmitic acid percentage than T-P. The consumer acceptance analysis did not show significant differences between the GO-P and T-P samples. In the descriptive analysis, GO-P showed a higher intensity rating for the oiliness texture attribute than T-P. The other sensory attributes did not show significant variations between the peanut paste samples. GO-P and T-P differ significantly in fatty acid composition. However, there were no differences in consumer acceptance or in the descriptive analysis between the peanut paste samples, except for the oiliness attribute. (Author) 32 refs.

  14. 75 FR 53966 - Regular Meeting

    Science.gov (United States)

    2010-09-02

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...

  15. Work and family life of childrearing women workers in Japan: comparison of non-regular employees with short working hours, non-regular employees with long working hours, and regular employees.

    Science.gov (United States)

    Seto, Masako; Morimoto, Kanehisa; Maruyama, Soichiro

    2006-05-01

    This study assessed the working and family life characteristics, and the degree of domestic and work strain of female workers with different employment statuses and weekly working hours who are rearing children. Participants were the mothers of preschoolers in a large Japanese city. We classified the women into three groups according to the hours they worked and their employment conditions. The three groups were: non-regular employees working less than 30 h a week (n=136); non-regular employees working 30 h or more per week (n=141); and regular employees working 30 h or more a week (n=184). We compared among the groups the subjective values of work, financial difficulties, childcare and housework burdens, psychological effects, and strains such as work and family strain, work-family conflict, and work dissatisfaction. Regular employees were more likely to report job pressures and inflexible work schedules and to experience more strain related to work and family than non-regular employees. Non-regular employees were more likely to be facing financial difficulties. In particular, non-regular employees working longer hours tended to encounter socioeconomic difficulties and often lacked support from family and friends. Female workers with children may have different social backgrounds and different stressors according to their working hours and work status.

  16. Following Musical Shows: A Study with Focal Groups on Satisfaction of Musical Concerts Regular Visitors and Socialization between Them

    Directory of Open Access Journals (Sweden)

    Lúmia Massa Garcia Pires

    2017-06-01

    Full Text Available This article aimed to identify which attributes most significantly affect the satisfaction of regular visitors to musical concerts, and the socialization among them, at these kinds of events. To this end, we used a qualitative methodology, conducting focus groups. Among the main results of this study, we found that, regarding visitor satisfaction, the attributes that most influence the public are related to services (especially beverage supply, restroom cleanliness, and the lines formed inside the event), organization, show infrastructure, and the performing artists. Furthermore, considering the socialization of the visitors, we found that most respondents usually go to concerts together with other people, but some did not rule out attending a concert alone when a familiar artist is performing.

  17. Attributional processes in the learned helplessness paradigm: behavioral effects of global attributions.

    Science.gov (United States)

    Mikulincer, M

    1986-12-01

    Following the learned helplessness paradigm, I assessed in this study the effects of global and specific attributions for failure on the generalization of performance deficits in a dissimilar situation. Helplessness training consisted of experience with noncontingent failures on four cognitive discrimination problems attributed to either global or specific causes. Experiment 1 found that performance in a dissimilar situation was impaired following exposure to globally attributed failure. Experiment 2 examined the behavioral effects of the interaction between stable and global attributions of failure. Exposure to unsolvable problems resulted in reduced performance in a dissimilar situation only when failure was attributed to global and stable causes. Finally, Experiment 3 found that learned helplessness deficits were a product of the interaction of global and internal attribution. Performance deficits following unsolvable problems were recorded when failure was attributed to global and internal causes. Results were discussed in terms of the reformulated learned helplessness model.

  18. Health protection of persons occupationally exposed to ionising radiation in Croatia

    International Nuclear Information System (INIS)

    Zavalic, M.

    2005-01-01

    The aim of this study was to investigate the health condition of workers occupationally exposed to ionising radiation. The results for 1406 workers exposed to ionising radiation, who were regularly examined in 2004, were analysed using Statistica 5.0. The analysis included workers' case histories, frequency of illnesses and causes of temporary or permanent work disability. Of 1406 workers, 16 (1.13%) were found permanently disabled; in 11 the cause of disability was lens opacity, in 2 persistent thrombocytopenia, and in 2 a malignant tumour. Twenty-four workers were temporarily disabled, 5 of them due to pregnancy. Thrombocytopenia was found in 12 men and only one woman. Anaemia was found in 4 women; dicentric chromosomes were the cause of temporary disability in one person, and tuberculosis in one person. Medical examinations of Croatian workers confirm low occupational exposure to ionising radiation. With this level of exposure, the established lens impairments could not be characterised as occupational. The two malignant tumours, however, were recognised as occupational diseases. (author)

  19. Extending Attribution Theory: Considering Students' Perceived Control of the Attribution Process

    Science.gov (United States)

    Fishman, Evan J.; Husman, Jenefer

    2017-01-01

    Research in attribution theory has shown that students' causal thinking profoundly affects their learning and motivational outcomes. Very few studies, however, have explored how students' attribution-related beliefs influence the causal thought process. The present study used the perceived control of the attribution process (PCAP) model to examine…

  20. Changing US Attributes After CS-US Pairings Changes CS-Attribute-Assessments: Evidence for CS-US Associations in Attribute Conditioning.

    Science.gov (United States)

    Förderer, Sabine; Unkelbach, Christian

    2016-03-01

    Attribute Conditioning (AC) refers to people's changed assessments of stimuli's (CSs) attributes due to repeated pairing with stimuli (USs) possessing these attributes; for example, when an athletic person (US) is paired with a neutral person (CS), the neutral person is judged to be more athletic after the pairing. We hypothesize that this AC effect is due to CSs' associations with USs rather than direct associations with attributes. Three experiments test this hypothesis by changing US attributes after CS-US pairings. Experiments 1 and 2 conditioned athleticism by pairing neutral men (CSs) with athletic and non-athletic USs. Post-conditioning, USs' athleticism was reversed, which systematically influenced participants' assessment of CS athleticism. Experiment 3 conditioned athleticism and changed USs' musicality after CS-US pairings. This post-conditioning change affected musicality assessments of CSs but did not influence athleticism-assessments. The results indicate that AC effects are based on an associative CS-US-attribute structure. © 2016 by the Society for Personality and Social Psychology, Inc.

  1. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method over regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
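
    The projection idea can be sketched as a projected Landweber iteration (a toy example assembled here; the admissible set, operator, and parameters are illustrative, not the authors' motion-estimation setup): each iterate takes a gradient step on the data term and is then projected onto a convex set of regularized candidates, a norm ball in this sketch.

```python
# Projected Landweber iteration on a synthetic ill-posed problem: the
# projection onto a convex admissible set replaces the penalty term of
# regularized least squares.
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 40
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)            # fast-decaying singular values
A = (U[:, :n] * s) @ V.T                     # ill-conditioned forward operator
x_true = V @ (s ** 0.5)
x_true /= np.linalg.norm(x_true)
b = A @ x_true + 1e-6 * rng.standard_normal(m)

tau = np.linalg.norm(x_true)                 # radius of the admissible set
step = 1.0 / np.linalg.norm(A, 2) ** 2       # Landweber step size
x = np.zeros(n)
for _ in range(500):
    x = x - step * A.T @ (A @ x - b)         # gradient step on the data term
    nx = np.linalg.norm(x)
    if nx > tau:                             # projection onto the norm ball
        x *= tau / nx

print(f"relative error: {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3f}")
```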

  2. Geometric regularizations and dual conifold transitions

    International Nuclear Information System (INIS)

    Landsteiner, Karl; Lazaroiu, Calin I.

    2003-01-01

    We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)

  3. Increased gluconeogenesis in rats exposed to hyper-G stress

    International Nuclear Information System (INIS)

    Daligcon, B.C.; Oyama, J.; Hannak, K.

    1985-01-01

    The role of gluconeogenesis in the increase in plasma glucose and liver glycogen of rats exposed to hyper-G (radial acceleration) stress was determined. Overnight-fasted, male Sprague-Dawley rats (250-300 g) were injected i.p. with uniformly labeled ¹⁴C lactate, alanine, or glycerol (5 μCi/rat) and immediately exposed to 3.1 G for 0.25, 0.50, and 1.0 hr. ¹⁴C incorporation of the labeled substrates into plasma glucose and liver glycogen was measured and compared to noncentrifuged control rats injected in a similar manner. Significant increases in ¹⁴C incorporation of all three labeled substrates into plasma glucose were observed in centrifuged rats at all exposure periods; ¹⁴C incorporation into liver glycogen was significantly increased only at 0.50 and 1.0 hr. The i.p. administration (5 mg/100-g body wt) of 5-methoxyindole-2-carboxylic acid, a potent gluconeogenesis inhibitor, prior to centrifugation blocked the increase in plasma glucose and liver glycogen during the first hour of centrifugation. The increase in plasma glucose and liver glycogen was also abolished in adrenodemedullated rats exposed to centrifugation for 1.0 hr. Propranolol, a beta-adrenergic blocker, suppressed the increase in plasma glucose of rats exposed to centrifugation for 0.25 hr. From the results of this study, it is concluded that the initial, rapid rise in plasma glucose as well as the increase in liver glycogen of rats exposed to hyper-G stress can be attributed to an increased rate of gluconeogenesis, and that epinephrine plays a dominant role during the early stages of exposure to centrifugation. 11 references, 3 tables

  4. Adaptive regularization

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.

    1994-01-01

    Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory for iterative estimation of weight decay parameters. The basic idea is to do a gradient descent ...
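
    A minimal runnable analogue of the idea (the paper derives an asymptotic estimator of the generalization error; here a held-out validation set plays that role, and the linear model, data, and step sizes are invented for the sketch): tune the weight-decay parameter by gradient descent on the estimated generalization error.

```python
# Gradient descent on a validation-error estimate to tune the weight-decay
# (ridge) parameter of a linear model.
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 20
w_true = rng.standard_normal(p)
X = rng.standard_normal((n, p)); y = X @ w_true + 0.5 * rng.standard_normal(n)
Xv = rng.standard_normal((n, p)); yv = Xv @ w_true + 0.5 * rng.standard_normal(n)

def fit(lam):
    # Ridge (weight-decay) estimate for the linear model.
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def val_err(lam):
    r = Xv @ fit(lam) - yv
    return r @ r / len(yv)              # estimated generalization error

log_lam, lr, eps = 0.0, 0.5, 1e-4
start = val_err(np.exp(log_lam))
for _ in range(100):                    # gradient descent in log(lambda)
    g = (val_err(np.exp(log_lam + eps)) - val_err(np.exp(log_lam - eps))) / (2 * eps)
    log_lam -= lr * g

print(f"weight decay {np.exp(log_lam):.3g}: val MSE {val_err(np.exp(log_lam)):.3f} "
      f"(was {start:.3f} at lambda = 1)")
```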

  5. Effect of a postnatal high-fat diet exposure on puberty onset, estrous cycle regularity, and kisspeptin expression in female rats

    DEFF Research Database (Denmark)

    Lie, Maria Elena Klibo; Overgaard, Agnete; Mikkelsen, Jens D

    2013-01-01

    Kisspeptin, encoded by Kiss1, plays a key role in pubertal maturation and reproduction as a positive upstream regulator of the hypothalamic-pituitary-gonadal (HPG) axis. To examine the role of high-fat diet (HFD) on puberty onset, estrous cycle regularity, and kisspeptin expression, female rats were exposed to HFD in distinct postnatal periods. Three groups of rats were exposed to HFD containing 60% energy from fat during the pre-weaning period (postnatal day (PND) 1-16, HFD PND 1-16), the post-weaning period (HFD PND 21-34), or both periods (HFD PND 1-34). Puberty onset was evaluated, and the results indicated that postnatal HFD exposure induced irregular estrous cycles but had no effect on puberty onset or kisspeptin expression.

  6. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
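
    The diversification "pressure" of the L2 regularizer can be illustrated numerically. The sketch below substitutes a minimum-variance objective for the paper's expected-shortfall risk measure; under a budget constraint the regularized weights then have the standard closed form w proportional to (Sigma + lam*I)^{-1} 1:

```python
import numpy as np

# L2-regularized minimum-variance portfolio under a budget constraint:
# minimize w'Sw + lam*||w||^2 subject to sum(w) = 1, whose solution is
# proportional to (S + lam*I)^{-1} 1. Larger lam pushes the weights
# toward the equal-weight (fully diversified) portfolio.
rng = np.random.default_rng(1)
R = rng.normal(size=(60, 10))            # 60 noisy return observations, 10 assets
Sigma = np.cov(R, rowvar=False)          # unstable sample covariance

def weights(Sigma, lam):
    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma + lam * np.eye(len(ones)), ones)
    return w / w.sum()

w_raw = weights(Sigma, 0.0)              # concentrated, sample-dependent
w_reg = weights(Sigma, 1.0)              # closer to equal weights
print(np.round(w_raw, 2), np.round(w_reg, 2))
```

A standard optimality argument shows the norm of the solution is non-increasing in lam, so the regularized portfolio is always at least as diversified (in the L2 sense) as the raw one.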

  7. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  8. Tessellating the Sphere with Regular Polygons

    Science.gov (United States)

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.
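
    The restriction to triangles, squares, and pentagons follows from a short angle count (a standard argument, sketched here in outline):

```latex
% q regular p-gons meet at each vertex; on the sphere each interior angle
% strictly exceeds its planar value (p-2)\pi/p, so the q angles summing
% to 2\pi require
\frac{2\pi}{q} > \frac{(p-2)\pi}{p}
\quad\Longleftrightarrow\quad
\frac{1}{p} + \frac{1}{q} > \frac{1}{2},
% whose integer solutions with p, q \ge 3 are
(p,q) \in \{(3,3),\,(3,4),\,(3,5),\,(4,3),\,(5,3)\},
% i.e. only triangles (p=3), squares (p=4) and pentagons (p=5) can occur,
% corresponding to the five Platonic tessellations of the sphere.
```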

  9. Accretion onto some well-known regular black holes

    International Nuclear Information System (INIS)

    Jawad, Abdul; Shahzad, M.U.

    2016-01-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  10. Accretion onto some well-known regular black holes

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul; Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)

    2016-03-15

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  11. Accretion onto some well-known regular black holes

    Science.gov (United States)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.

  12. Diagrammatic methods in phase-space regularization

    International Nuclear Information System (INIS)

    Bern, Z.; Halpern, M.B.; California Univ., Berkeley

    1987-11-01

    Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the no-growth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)

  13. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  14. Temporal regularity of the environment drives time perception

    OpenAIRE

    van Rijn, H; Rhodes, D; Di Luca, M

    2016-01-01

    It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...

  15. Attribute And-Or Grammar for Joint Parsing of Human Pose, Parts and Attributes.

    Science.gov (United States)

    Park, Seyoung; Nie, Xiaohan; Zhu, Song-Chun

    2017-07-25

    This paper presents an attribute and-or grammar (A-AOG) model for jointly inferring human body pose and human attributes in a parse graph with attributes augmented to nodes in the hierarchical representation. In contrast to other popular methods in the current literature that train separate classifiers for poses and individual attributes, our method explicitly represents the decomposition and articulation of body parts, and accounts for the correlations between poses and attributes. The A-AOG model is an amalgamation of three traditional grammar formulations: (i) phrase structure grammar representing the hierarchical decomposition of the human body from whole to parts; (ii) dependency grammar modeling the geometric articulation by a kinematic graph of the body pose; and (iii) attribute grammar accounting for the compatibility relations between different parts in the hierarchy so that their appearances follow a consistent style. The parse graph outputs human detection, pose estimation, and attribute prediction simultaneously, which are intuitive and interpretable. We conduct experiments on two tasks on two datasets, and experimental results demonstrate the advantage of joint modeling in comparison with computing poses and attributes independently. Furthermore, our model obtains better performance over existing methods for both pose estimation and attribute prediction tasks.

  16. The uniqueness of the regularization procedure

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1981-01-01

    On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)

  17. Coupling regularizes individual units in noisy populations

    International Nuclear Information System (INIS)

    Ly Cheng; Ermentrout, G. Bard

    2010-01-01

    The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
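
    For the O-U case, the claim can be checked exactly from the stationary covariance, which solves the Lyapunov equation A S + S A^T + Q = 0. The sketch below (with illustrative parameters, not those of the paper) couples a unit-noise O-U process to a noisier one and finds a lower stationary variance for the quieter unit at moderate coupling:

```python
import numpy as np

# Two diffusively coupled O-U processes:
#   dx1 = (-x1 + c*(x2 - x1)) dt + s1 dW1
#   dx2 = (-x2 + c*(x1 - x2)) dt + s2 dW2
# The stationary covariance S solves A S + S A^T + Q = 0 (Lyapunov
# equation), solved here by vectorization with Kronecker products.
def stationary_cov(c, s1=1.0, s2=2.0):
    A = np.array([[-1.0 - c, c], [c, -1.0 - c]])
    Q = np.diag([s1**2, s2**2])
    M = np.kron(np.eye(2), A) + np.kron(A, np.eye(2))
    return np.linalg.solve(M, -Q.reshape(-1)).reshape(2, 2)

var_uncoupled = stationary_cov(0.0)[0, 0]   # = s1^2 / 2 = 0.5
var_coupled = stationary_cov(0.5)[0, 0]     # lower, despite the noisier partner
print(var_uncoupled, var_coupled)
```

With c = 0.5 the quiet unit's variance drops from 0.5 to 0.4375 even though its partner carries four times the noise power, matching the counterintuitive effect described in the abstract.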

  18. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
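
    A drastically simplified version of the learning idea, picking a single scalar lambda that minimizes the average reconstruction error over training pairs, can be sketched as follows (the paper's framework handles multiple parameters, general regularization matrices, and operator approximations; everything below is an illustrative toy):

```python
import numpy as np

# Learn a Tikhonov parameter from training data: given pairs (b_i, x_i)
# with b_i = A x_i + noise, pick the lambda minimizing the average error
# of x_lam(b) = argmin ||A x - b||^2 + lam*||x||^2 over the training set.
n = 30
A = np.diag(1.0 / (1.0 + np.arange(n)) ** 2)      # ill-conditioned forward model
rng = np.random.default_rng(2)
train = []
for _ in range(5):
    x = rng.normal(size=n)
    train.append((A @ x + 0.05 * rng.normal(size=n), x))

def tikhonov(b, lam):
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

lams = np.logspace(-8, 0, 40)
avg_err = np.array([np.mean([np.linalg.norm(tikhonov(b, lam) - x)
                             for b, x in train]) for lam in lams])
lam_star = lams[int(np.argmin(avg_err))]
print(lam_star)
```

Because the forward model is ill-conditioned, near-zero lambda amplifies the noise enormously, so the learned parameter is driven well away from the unregularized end of the grid.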

  19. 5 CFR 551.421 - Regular working hours.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Regular working hours. 551.421 Section... Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...

  20. Regular extensions of some classes of grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular

  1. Potassium ion influx measurements on cultured Chinese hamster cells exposed to 60-hertz electromagnetic fields

    International Nuclear Information System (INIS)

    Stevenson, A.P.; Tobey, R.A.

    1985-01-01

    Potassium ion influx was measured by monitoring 42KCl uptake by Chinese hamster ovary (CHO) cells grown in suspension culture and exposed in the culture medium to 60-Hz electromagnetic fields up to 2.85 V/m. In the presence of the field CHO cells exhibited two components of uptake, the same as previously observed for those grown under normal conditions; both these components of influx were decreased when compared to sham-exposed cells. Although decreases were consistently observed in exposed cells when plotted as log_e of uptake, the differences between the means of the calculated fluxes of exposed and sham-exposed cells were quite small (on the order of 4-7%). When standard deviations were calculated, there was no significant difference between these means; however, when time-paired uptake data were analyzed, the differences were found to be statistically significant. Cells exposed only to the magnetic field exhibited similar small decreases in influx rates when compared to sham-exposed cells, suggesting that the reduction in K+ uptake could be attributed to the magnetic field. Additionally, intracellular K+ levels were measured over a prolonged exposure period (96 h), and no apparent differences in intracellular K+ levels were observed between field-exposed and sham-exposed cultures. These results indicate that high-strength electric fields have a small effect on the rate of transport of potassium ions but no effect on long-term maintenance of intracellular K+.

  2. Regular non-twisting S-branes

    International Nuclear Information System (INIS)

    Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.

    2004-01-01

    We construct a family of time and angular dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general Lorentzian symmetry. Several generalizations of this regular solution are derived which include a charged S-brane and an additional dilatonic field. (author)

  3. Near-Regular Structure Discovery Using Linear Programming

    KAUST Repository

    Huang, Qixing

    2014-06-02

    Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.

  4. Regular Expression Matching and Operational Semantics

    Directory of Open Access Journals (Sweden)

    Asiri Rathnayake

    2011-08-01

    Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
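
    The lockstep idea, advancing a set of NFA states per input character instead of building a DFA first, can be sketched with a hard-coded machine (this toy fixes the NFA for a(b|c)* by hand rather than deriving it from the abstract machines of the paper):

```python
# Minimal lockstep simulation in the spirit of Thompson's construction:
# the NFA for a(b|c)* is hard-coded as a transition table, and matching
# advances a *set* of states per input character, interpreting the
# regular expression on the fly with no DFA built up front.
EPS = ""
# states: 0 -a-> 1; 1 -eps-> {2, 5}; 2 -b-> 4; 2 -c-> 4; 4 -eps-> {2, 5}
trans = {
    (0, "a"): {1},
    (1, EPS): {2, 5},
    (2, "b"): {4},
    (2, "c"): {4},
    (4, EPS): {2, 5},
}
ACCEPT = 5

def eps_closure(states):
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in trans.get((s, EPS), ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def match(s):
    cur = eps_closure({0})
    for ch in s:
        cur = eps_closure({t for st in cur for t in trans.get((st, ch), ())})
    return ACCEPT in cur

print(match("abcb"), match("ab a"))  # True False
```

Because the state set is recomputed per character, the worst-case cost is O(length of input x size of NFA), independent of any exponential DFA blow-up.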

  5. Tetravalent one-regular graphs of order 4p²

    DEFF Research Database (Denmark)

    Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan

    2014-01-01

    A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p², where p is a prime, are classified.

  6. Regularization and error assignment to unfolded distributions

    CERN Document Server

    Zech, Gunter

    2011-01-01

    The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.

  7. Hospital morphine preparation for abstinence syndrome in newborns exposed to buprenorphine or methadone.

    Science.gov (United States)

    Colombini, Nathalie; Elias, Riad; Busuttil, Muriel; Dubuc, Myriam; Einaudi, Marie-Ange; Bues-Charbit, Martine

    2008-06-01

    This study was undertaken to evaluate the adequacy of a hospital formulated oral morphine preparation for management of neonatal abstinence syndrome (NAS) and to compare clinical features in infants exposed to methadone or buprenorphine in utero. Between October 1998 and October 2004 all infants born to mothers treated with buprenorphine or methadone during pregnancy were enrolled into this prospective study. Morphine hydrochloride solution (0.2 mg/ml) was prepared without preservatives under a flow laminar air box (class 100). Morphine solution: quantitative and qualitative HPLC analysis and microbiological study at regular intervals during storage at 4 degrees C for 6 months. Maternal characteristics: age, opiate dose during pregnancy. Neonatal characteristics: gestational age at delivery, birth weight, Lipsitz scores. Morphine dose: daily morphine dose, maximum morphine dose, duration of NAS, and duration of treatment required to achieve stable Lipsitz scores below 4. Kruskal-Wallis test for comparison of median values. Microbiological and HPLC analysis showed that the morphine preparation remained stable for 6 months at 4 degrees C. Nine methadone-exposed infants and 13 buprenorphine-exposed infants were included in the study. All infants presented NAS requiring treatment with the morphine solution. Lipsitz scores at birth were significantly different in the methadone and buprenorphine groups (P methadone group required significantly higher doses of morphine preparation than the buprenorphine group during the first 38 days of treatment (P methadone-exposed infants (range 6-24 h) and within 48 h after birth in buprenorphine-exposed infants (range 24-168 h). Due to the possibility of delayed onset of NAS up to 7 days, infants born to mothers treated with buprenorphine should be kept in the hospital for an appropriate surveillance period. Treatment time was significantly longer (45 vs. 28 days) and the mean morphine doses were higher (1.7-fold) in methadone-exposed

  8. Evaluation of chromosomal aberrations in radiologists and medical radiographers chronically exposed to ionising radiation

    International Nuclear Information System (INIS)

    Kasuba, V.; Rozgaj, R.; Jazbec, A.

    2005-01-01

    Chromosomal aberrations are fairly reliable indicators of damage induced by ionising radiation. This study included 180 radiologists and medical radiographers (technicians) and 90 controls who were not occupationally exposed to ionising radiation. All exposed subjects were routinely monitored with film badge, and none was exposed to a radiation dose exceeding the limit for occupational exposure recommended by the International Commission on Radiological Protection (ICRP). Two hundred metaphases for each person were scored. The frequencies of acentric fragments, dicentrics, ring chromosomes and chromosomal exchanges were determined and compared to those obtained in the control group. Chromosome aberrations were analysed using Poisson regression for profession, age, sex, smoking and years of exposure. Age, smoking, diagnostic exposure to X-rays and occupation were found to correlate with the occurrence of acentric fragments. The influence of exposure duration on the frequency of acentric fragments was greater in medical radiographers than in radiologists. Smoking and sex were found to correlate with the occurrence of dicentric chromosomes, which were more common in men than in women. As chromosome aberrations exceeded the expected level with respect to the absorbed dose, our findings confirm the importance of chromosome analysis as a part of regular medical check-up of subjects occupationally exposed to ionising radiation. (author)

  9. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for total variation method, and TGV stands for total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.
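
    The role of TV regularization here can be illustrated in one dimension (a denoising toy, not an EIT reconstruction; the smoothing parameter eps and the step size are hand-picked):

```python
import numpy as np

# 1D total variation (TV) denoising toy, a stand-in for the EIT setting:
# minimize 0.5*||u - f||^2 + lam * sum_i |u_{i+1} - u_i|, with the
# absolute value smoothed by eps and minimized by plain gradient descent.
rng = np.random.default_rng(3)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant truth
f = clean + 0.1 * rng.normal(size=100)                # noisy measurement

lam, eps, lr = 0.5, 1e-3, 0.02
u = f.copy()
for _ in range(2000):
    d = np.diff(u)
    w = d / np.sqrt(d * d + eps)       # smoothed sign of each jump
    tv_grad = np.zeros_like(u)
    tv_grad[:-1] -= w
    tv_grad[1:] += w
    u -= lr * ((u - f) + lam * tv_grad)
print(np.linalg.norm(f - clean), np.linalg.norm(u - clean))
```

The TV penalty flattens the noise inside each constant region while leaving the jump essentially intact, which is exactly the edge-preserving behavior that motivates its use over plain quadratic smoothing.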

  10. Regular physical activity has differential association with reduced obesity among diverse youth in the United States.

    Science.gov (United States)

    Fradkin, Chris; Wallander, Jan L; Elliott, Marc N; Cuccaro, Paula; Schuster, Mark A

    2016-08-01

    This study examined whether daily or almost daily lower-intensity physical activity was associated with reduced obesity, among 4824 African American, Hispanic, and White youth assessed in fifth and seventh grades. Regular lower-intensity physical activity was associated with reduced obesity only among Hispanic and White males and only in seventh grade, and not among youth in fifth grade, females, or African American males or females. Findings from this study suggest that the reduced obesity risk generally attributed to physical activity may not be consistent across racial/ethnic and gender groups of early adolescents. © The Author(s) 2014.

  11. Application of Turchin's method of statistical regularization

    Science.gov (United States)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convolved with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.
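
    A minimal quadratic-prior stand-in for this kind of regularized deconvolution (Turchin's method proper places a Bayesian prior on smoothness; the toy below uses the closely related Tikhonov form with a second-difference operator and a hand-picked alpha):

```python
import numpy as np

# Deconvolution toy with a quadratic smoothness prior: data g = K f + noise,
# estimate  f* = argmin ||K f - g||^2 + alpha * ||D f||^2,
# where K is a smoothing "apparatus function" and D takes second differences.
n = 80
x = np.linspace(0.0, 1.0, n)
f_true = np.exp(-((x - 0.5) / 0.1) ** 2)
K = np.exp(-((x[:, None] - x[None, :]) / 0.05) ** 2)
K /= K.sum(axis=1, keepdims=True)
rng = np.random.default_rng(4)
g = K @ f_true + 0.01 * rng.normal(size=n)

D = np.diff(np.eye(n), 2, axis=0)                 # second-difference operator
def solve(alpha):
    return np.linalg.solve(K.T @ K + alpha * D.T @ D, K.T @ g)

f_naive = solve(1e-10)    # almost unregularized: noise is amplified wildly
f_reg = solve(1e-4)       # smoothness prior tames the ill-posedness
print(np.linalg.norm(f_naive - f_true), np.linalg.norm(f_reg - f_true))
```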

  12. On the regularized fermionic projector of the vacuum

    Science.gov (United States)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  13. On the regularized fermionic projector of the vacuum

    International Nuclear Information System (INIS)

    Finster, Felix

    2008-01-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed

  14. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  15. Mellin-Barnes regularization, Borel summation and the Bender-Wu asymptotics for the anharmonic oscillator

    International Nuclear Information System (INIS)

    Kowalenko, V.; Rawlinson, A.A.

    1998-01-01

    We introduce the numerical technique of Mellin-Barnes regularization, which can be used to evaluate both convergent and divergent series. The technique is shown to be numerically equivalent to the corresponding results obtained by Borel summation. Both techniques are then applied to the Bender-Wu formula, which represents an asymptotic expansion for the energy levels of the anharmonic oscillator. We find that this formula is unable to give accurate values for the ground state energy, particularly when the coupling is greater than 0.1. As a consequence, the inability of the Bender-Wu formula to yield exact values for the energy level of the anharmonic oscillator cannot be attributed to its asymptotic nature. (authors)
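
    Borel summation of a divergent series can be demonstrated on the classic Euler example (our illustration, not the Bender-Wu expansion itself):

```python
import math
import numpy as np

# Borel summation of the divergent Euler series sum_n (-1)^n n! x^n:
# the Borel transform sums to 1/(1 + x t), so the Borel sum is the
# convergent integral  S(x) = int_0^inf exp(-t) / (1 + x t) dt.
x = 0.1
dt = 1e-4
t = np.arange(0.0, 60.0, dt)
borel = np.sum(np.exp(-t) / (1.0 + x * t)) * dt   # simple rectangle rule

# Optimal truncation of the asymptotic series (stop near n ~ 1/x) agrees
# with the Borel sum to roughly the size of the smallest term.
partial = sum((-1) ** n * math.factorial(n) * x ** n for n in range(10))
print(borel, partial)
```

The optimally truncated series and the Borel sum agree to about n! x^n at n = 10, i.e. a few parts in ten thousand here, while adding further terms of the divergent series would only make the partial sums worse.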

  16. Spatially-Variant Tikhonov Regularization for Double-Difference Waveform Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory; Huang, Lianjie [Los Alamos National Laboratory; Zhang, Zhigang [Los Alamos National Laboratory

    2011-01-01

    Double-difference waveform inversion is a potential tool for quantitative monitoring of geologic carbon storage. It jointly inverts time-lapse seismic data for changes in reservoir geophysical properties. Due to the ill-posedness of waveform inversion, it is a great challenge to obtain reservoir changes accurately and efficiently, particularly when using time-lapse seismic reflection data. Regularization techniques can be utilized to address the issue of ill-posedness. The regularization parameter controls the smoothness of inversion results. A constant regularization parameter is normally used in waveform inversion, and an optimal regularization parameter has to be selected. The resulting inversion results are a trade-off among regions with different smoothness or noise levels; therefore the images are over-regularized in some regions and under-regularized in others. In this paper, we employ a spatially-variant parameter in the Tikhonov regularization scheme used in double-difference waveform tomography to improve the inversion accuracy and robustness. We compare the results obtained using a spatially-variant parameter with those obtained using a constant regularization parameter and those produced without any regularization. We observe that, utilizing a spatially-variant regularization scheme, the target regions are well reconstructed while the noise is reduced in the other regions. We show that the spatially-variant regularization scheme provides the flexibility to regularize local regions based on the a priori information without increasing computational costs and the computer memory requirement.
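
    The benefit of a spatially-variant parameter can be caricatured in one dimension (a denoising toy with the discontinuity location assumed known, not a waveform inversion): lowering the smoothness weight near an expected jump preserves the jump while still damping noise elsewhere, which no single constant parameter can do.

```python
import numpy as np

# Quadratic smoothing with a spatially-variant weight alpha_i:
#   u* = argmin ||u - f||^2 + sum_i alpha_i (u_{i+1} - u_i)^2,
# solved in closed form via (I + D^T diag(alpha) D) u = f.
n = 100
clean = np.concatenate([np.zeros(50), np.ones(50)])
rng = np.random.default_rng(6)
f = clean + 0.1 * rng.normal(size=n)

D = np.diff(np.eye(n), 1, axis=0)               # first-difference operator

def smooth(alpha):
    return np.linalg.solve(np.eye(n) + D.T @ np.diag(alpha) @ D, f)

alpha_const = np.full(n - 1, 5.0)               # one parameter: blurs the jump
alpha_var = alpha_const.copy()
alpha_var[49] = 0.01                            # relax smoothing at the edge
u_const, u_var = smooth(alpha_const), smooth(alpha_var)
print(np.linalg.norm(u_const - clean), np.linalg.norm(u_var - clean))
```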

  17. A keyword searchable attribute-based encryption scheme with attribute update for cloud storage.

    Science.gov (United States)

    Wang, Shangping; Ye, Jian; Zhang, Yaling

    2018-01-01

Ciphertext-policy attribute-based encryption (CP-ABE) is a data encryption primitive that is well suited to cloud storage because of its fine-grained access control. Keyword-based searchable encryption enables users to quickly find interesting data stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, combining an attribute-based encryption scheme with a keyword searchable encryption scheme. The new scheme supports attribute update: when a user's attribute needs to be updated, only the secret-key components related to that attribute must be refreshed, while, with the help of the cloud server, other users' secret keys and the ciphertexts associated with that attribute need not be updated. In addition, we outsource the operations with high computational cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven semantically secure against chosen ciphertext-policy and chosen-plaintext attacks in the generic bilinear group model, and semantically secure against chosen-keyword attacks under the bilinear Diffie-Hellman (BDH) assumption.

  18. Manifold Regularized Correlation Object Tracking.

    Science.gov (United States)

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2018-05-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.

  19. From recreational to regular drug use

    DEFF Research Database (Denmark)

    Järvinen, Margaretha; Ravn, Signe

    2011-01-01

    This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...

  20. Regular variation on measure chains

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Vitovec, J.

    2010-01-01

    Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475
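For readers unfamiliar with the terminology, the classical (Karamata) notion that this record extends to measure chains/time scales reads, in standard notation:

```latex
f \in \mathrm{RV}(\varrho)
\quad\Longleftrightarrow\quad
\lim_{x\to\infty} \frac{f(\lambda x)}{f(x)} = \lambda^{\varrho}
\quad \text{for every } \lambda > 0 .
```

Regularly varying sequences are defined analogously with $n \to \infty$ along the integers.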

  1. New regular black hole solutions

    International Nuclear Information System (INIS)

    Lemos, Jose P. S.; Zanchin, Vilson T.

    2011-01-01

    In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, the exterior region is Reissner-Nordstroem and there is a charged thin-layer in-between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.

  2. Manifold Regularized Correlation Object Tracking

    OpenAIRE

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2017-01-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...

  3. On geodesics in low regularity

    Science.gov (United States)

    Sämann, Clemens; Steinbauer, Roland

    2018-02-01

    We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.

  4. Manifold Regularized Reinforcement Learning.

    Science.gov (United States)

    Li, Hongliang; Liu, Derong; Wang, Ding

    2018-04-01

    This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.

  5. Strictness Analysis for Attribute Grammars

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    1992-01-01

interpretation of attribute grammars. The framework is used to construct a strictness analysis for attribute grammars. Results of the analysis enable us to transform an attribute grammar such that attributes are evaluated during parsing, if possible. The analysis is proved correct by relating it to a fixpoint semantics for attribute grammars. An implementation of the analysis is discussed and some extensions to the analysis are mentioned.

  6. Laplacian manifold regularization method for fluorescence molecular tomography

    Science.gov (United States)

    He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei

    2017-04-01

    Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods allow for utilizing the sparsity nature of the target distribution. However, in addition to sparsity, the spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and in vivo experiment demonstrate that the proposed Gradient projection-resolved Laplacian manifold regularization method for the joint model performed better than the comparative algorithm for ℓ1 minimization method in both spatial aggregation and location accuracy.
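A minimal sketch of the joint model's structure (not the authors' FMT solver): proximal gradient (ISTA) applied to a least-squares term plus an ℓ1 term and a graph-Laplacian penalty. The chain-graph Laplacian, data, and all parameters below are invented for the demo.

```python
import numpy as np

# Invented instance of   min_x 0.5*||A x - b||^2 + lam2 * x^T L x + lam1 * ||x||_1
rng = np.random.default_rng(1)
n, m = 40, 60
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[10:15] = 1.0                                 # sparse and spatially contiguous
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Chain-graph Laplacian: penalizes differences between neighboring entries.
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1

lam1, lam2 = 0.05, 0.1
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 2 * lam2 * np.linalg.norm(L, 2))

x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b) + 2 * lam2 * (L @ x)   # gradient of the smooth part
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)  # soft-threshold prox
```

The ℓ1 prox enforces sparsity while the Laplacian term rewards spatially smooth support, which is the combination the abstract advocates.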

  7. Learning Sparse Visual Representations with Leaky Capped Norm Regularizers

    OpenAIRE

    Wangni, Jianqiao; Lin, Dahua

    2017-01-01

Sparsity-inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper we investigate the use of non-convex regularizations for this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly than those above, and therefore imposes strong sparsity and...

  8. Adaptive regularization of noisy linear inverse problems

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue

    2006-01-01

In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value under the posterior and the prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.
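In symbols (our notation, paraphrasing the abstract), the optimal strength $\alpha$ of a regularization function $R(\theta)$ is the one at which the posterior and prior expectations coincide:

```latex
\mathbb{E}_{p(\theta \mid D, \alpha)}\!\left[ R(\theta) \right]
\;=\;
\mathbb{E}_{p(\theta \mid \alpha)}\!\left[ R(\theta) \right].
```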

  9. The attribute measurement technique

    International Nuclear Information System (INIS)

    MacArthur, Duncan W.; Langner, Diana; Smith, Morag; Thron, Jonathan; Razinkov, Sergey; Livke, Alexander

    2010-01-01

Any verification measurement performed on potentially classified nuclear material must satisfy two seemingly contradictory constraints. First and foremost, no classified information can be released. At the same time, the monitoring party must have confidence in the veracity of the measurement. An information barrier (IB) is included in the measurement system to protect the potentially classified information while allowing sufficient information transfer for the monitoring party to gain confidence that the material being measured is consistent with the host's declarations concerning that material. The attribute measurement technique incorporates an IB and addresses both concerns by measuring several attributes of the nuclear material and displaying unclassified results through green (indicating that the material does possess the specified attribute) and red (indicating that the material does not possess the specified attribute) lights. The attribute measurement technique has been implemented in the AVNG, an attribute measuring system described in other presentations at this conference. In this presentation, we will discuss four techniques used in the AVNG: (1) the IB, (2) the attribute measurement technique, (3) the use of open and secure modes to increase confidence in the displayed results, and (4) the joint design as a method for addressing both host and monitor needs.

  10. Exclusion of children with intellectual disabilities from regular ...

    African Journals Online (AJOL)

This study investigated why teachers exclude children with intellectual disability (ID) from regular classrooms in Nigeria. Participants were 169 regular teachers randomly selected from Oyo and Ogun states. A questionnaire was used to collect data. Results revealed that 57.4% of regular teachers could not cope with children with ID ...

  11. Exploring Attribution Theory and Bias

    Science.gov (United States)

    Robinson, Jessica A.

    2017-01-01

    Courses: This activity can be used in a wide range of classes, including interpersonal communication, introduction to communication, and small group communication. Objectives: After completing this activity, students should be able to: (1) define attribution theory, personality attribution, situational attribution, and attribution bias; (2)…

  12. On infinite regular and chiral maps

    OpenAIRE

    Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán

    2015-01-01

    We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.

  13. 29 CFR 779.18 - Regular rate.

    Science.gov (United States)

    2010-07-01

    ... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  14. Attributing illness to food

    DEFF Research Database (Denmark)

    Batz, M. B.; Doyle, M. P.; Morris, J. G.

    2005-01-01

Identification and prioritization of effective food safety interventions require an understanding of the relationship between food and pathogen from farm to consumption. Critical to this cause is food attribution, the capacity to attribute cases of foodborne disease to the food vehicle or other source responsible for illness. A wide variety of food attribution approaches and data are used around the world, including the analysis of outbreak data, case-control studies, microbial subtyping and source tracking methods, and expert judgment, among others. The Food Safety Research Consortium sponsored the Food Attribution Data Workshop in October 2003 to discuss the virtues and limitations of these approaches and to identify future options for collecting food attribution data in the United States. We summarize workshop discussions and identify challenges that affect progress in this critical component...

  15. Continuum regularized Yang-Mills theory

    International Nuclear Information System (INIS)

    Sadun, L.A.

    1987-01-01

Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions.
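For reference, the unregularized Parisi-Wu Langevin equation on which stochastic quantization is based has the standard form (the thesis's regulator is then applied on top of this equation):

```latex
\frac{\partial \phi(x,t)}{\partial t}
= - \frac{\delta S[\phi]}{\delta \phi(x,t)} + \eta(x,t),
\qquad
\langle \eta(x,t)\,\eta(x',t') \rangle
= 2\,\delta^{d}(x - x')\,\delta(t - t'),
```

where $t$ is the fictitious stochastic time and $\eta$ is Gaussian white noise.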

  16. Hearing impairment among workers exposed to excessive levels of noise in ginning industries

    Directory of Open Access Journals (Sweden)

    Kamalesh J Dube

    2011-01-01

Full Text Available Cotton ginning workers have a risk of hearing loss due to excessive noise levels in the workplace environment. In this study, estimates of typical sound levels prevailing in the workplace environment and their effects on the hearing ability of the exposed workers were made among cotton ginning workers. Data on self-reported health status were collected by a questionnaire survey at 10 cotton ginning industries located in Jalgaon district of Maharashtra state, India. The cotton ginning workers were exposed to continuous noise levels between 89 and 106 dBA. The hearing ability of the subjects was assessed by pure-tone audiometry. The results of audiometry show mild, moderate and moderately severe degrees of hearing impairment among the cotton ginning workers. The data generated during the study show that hearing loss was significantly associated with the period of exposure to workplace noise (P < 0.0001). The prevalence of audiometric hearing impairment, defined as a threshold average greater than 25 dB hearing level, was 96% for the binaural low-frequency average, 97% for the binaural mid-frequency average and 94% for the binaural high-frequency average in the cotton ginning workers. We recommend the compulsory use of personal protective equipment such as ear plugs by the cotton ginning workers in the workplace environment. Regular maintenance of ginning and pressing machinery will avoid the emission of excessive noise in the workplace environment of cotton gins. A regular periodic medical examination is necessary to measure the impact of workplace noise on the health of cotton ginning workers.

  17. Smoking-Attributable Mortality, Morbidity, and Economic Costs (SAMMEC) - Smoking-Attributable Expenditures (SAE)

    Data.gov (United States)

    U.S. Department of Health & Human Services — 2005-2009. SAMMEC - Smoking-Attributable Mortality, Morbidity, and Economic Costs. Smoking-attributable expenditures (SAEs) are excess health care expenditures...

  18. Regularity effect in prospective memory during aging

    OpenAIRE

    Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique

    2016-01-01

Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...

  19. 20 CFR 226.14 - Employee regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...

  20. Smoking-Attributable Mortality, Morbidity, and Economic Costs (SAMMEC) - Smoking-Attributable Mortality (SAM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — 2005-2009. SAMMEC - Smoking-Attributable Mortality, Morbidity, and Economic Costs. Smoking-attributable mortality (SAM) is the number of deaths caused by cigarette...

  1. Regular algebra and finite machines

    CERN Document Server

    Conway, John Horton

    2012-01-01

World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regular algebras, context-free languages, commutative regular alg

  2. 39 CFR 6.1 - Regular meetings, annual meeting.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...

  3. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-01-01

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  5. Automating InDesign with Regular Expressions

    CERN Document Server

    Kahrel, Peter

    2006-01-01

    If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.
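InDesign's GREP dialect has its own extensions, but the kind of find-and-change that plain search cannot handle can be illustrated with Python's `re` module; the sample text and pattern below are invented:

```python
import re

# Swap "Lastname, Firstname" pairs into "Firstname Lastname" using
# capture groups and backreferences -- impossible with literal search/replace.
text = "Conway, John; Kleene, Stephen"
fixed = re.sub(r"(\w+), (\w+)", r"\2 \1", text)
print(fixed)  # → John Conway; Stephen Kleene
```

In InDesign GREP the same idea is written with `$1`/`$2` in the change field instead of `\1`/`\2`.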

  6. Optimal behaviour can violate the principle of regularity.

    Science.gov (United States)

    Trimmer, Pete C

    2013-07-22

Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled as irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory--based on axioms including transitivity, regularity and the independence of irrelevant alternatives--is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision.

  7. Dimensional regularization in configuration space

    International Nuclear Information System (INIS)

    Bollini, C.G.; Giambiagi, J.J.

    1995-09-01

Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum-space Green functions. For this transformation, Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs

  8. Matrix regularization of 4-manifolds

    OpenAIRE

    Trzetrzelewski, M.

    2012-01-01

    We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...

  9. Regular Breakfast and Blood Lead Levels among Preschool Children

    Directory of Open Access Journals (Sweden)

    Needleman Herbert

    2011-04-01

Full Text Available Abstract Background Previous studies have shown that fasting increases lead absorption in the gastrointestinal tract of adults. Regular meals/snacks are recommended as a nutritional intervention for lead poisoning in children, but epidemiological evidence of links between fasting and blood lead levels (B-Pb) is rare. The purpose of this study was to examine the association between eating a regular breakfast and B-Pb among children using data from the China Jintan Child Cohort Study. Methods Parents completed a questionnaire regarding children's breakfast-eating habit (regular or not), demographics, and food frequency. Whole blood samples were collected from 1,344 children for the measurement of B-Pb and micronutrients (iron, copper, zinc, calcium, and magnesium). B-Pb and other measures were compared between children with and without a regular breakfast. Linear regression modeling was used to evaluate the association between regular breakfast and log-transformed B-Pb. The association between regular breakfast and risk of lead poisoning (B-Pb ≥ 10 μg/dL) was examined using logistic regression modeling. Results Median B-Pb among children who ate breakfast regularly and those who did not were 6.1 μg/dL and 7.2 μg/dL, respectively. Eating breakfast was also associated with greater blood zinc levels. Adjusting for other relevant factors, the linear regression model revealed that eating breakfast regularly was significantly associated with lower B-Pb (beta = -0.10 units of log-transformed B-Pb) compared with children who did not eat breakfast regularly, p = 0.02. Conclusion The present study provides some initial human data supporting the notion that eating a regular breakfast might reduce B-Pb in young children. To our knowledge, this is the first human study exploring the association between breakfast frequency and B-Pb in young children.
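The modeling step the abstract reports can be sketched on synthetic data: regress a log-transformed outcome on a binary breakfast indicator. The data below are simulated with an invented effect size of -0.10 on the log scale, merely to mirror the shape of the reported analysis.

```python
import numpy as np

# Simulated cohort: binary "eats breakfast regularly" indicator and
# log-transformed blood lead with an invented -0.10 effect.
rng = np.random.default_rng(42)
n = 1344
breakfast = rng.integers(0, 2, size=n)            # 1 = regular breakfast
log_bpb = 1.9 - 0.10 * breakfast + 0.3 * rng.standard_normal(n)

# Ordinary least squares: intercept + breakfast indicator.
X = np.column_stack([np.ones(n), breakfast])
beta, *_ = np.linalg.lstsq(X, log_bpb, rcond=None)
```

The fitted `beta[1]` estimates the difference in mean log B-Pb between the two groups; a real analysis would also adjust for the covariates the paper mentions.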

  10. On the equivalence of different regularization methods

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1985-01-01

The R-circumflex operation, preceded by the regularization procedure, is discussed. Some arguments are given according to which the results may depend on the method of regularization introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)

  11. Accreting fluids onto regular black holes via Hamiltonian approach

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)

    2017-08-15

We investigate the accretion of test fluids onto regular black holes such as Kehagias-Sfetsos black holes and regular black holes with a Dagum distribution function. We analyze the accretion process when different test fluids are falling onto these regular black holes. The accreting fluid is classified through its equation of state according to the features of the regular black holes. The behavior of the fluid flow and the existence of sonic points are examined for these regular black holes. It is noted that the three-velocity depends on the critical points and the equation-of-state parameter on the phase space. (orig.)

  12. Biased Dropout and Crossmap Dropout: Learning towards effective Dropout regularization in convolutional neural network.

    Science.gov (United States)

    Poernomo, Alvin; Kang, Dae-Ki

    2018-08-01

Training a deep neural network with a large number of parameters often leads to overfitting. Recently, Dropout has been introduced as a simple, yet effective regularization approach to combat overfitting in such models. Although Dropout has shown remarkable results in many deep neural network cases, its actual effect on CNNs has not been thoroughly explored. Moreover, training a Dropout model significantly increases training time, as it takes longer to converge than a non-Dropout model with the same architecture. To deal with these issues, we propose Biased Dropout and Crossmap Dropout, two novel extensions of Dropout based on the behavior of hidden units in a CNN model. Biased Dropout divides the hidden units in a certain layer into two groups based on their magnitude and applies a different Dropout rate to each group appropriately. Hidden units with higher activation values, which contribute more to the network's final performance, are retained with a lower Dropout rate, while units with lower activation values are exposed to a higher Dropout rate to compensate. The second approach is Crossmap Dropout, an extension of regular Dropout in the convolution layer. The feature maps in a convolution layer are strongly correlated with one another, particularly at identical pixel locations across feature maps. Crossmap Dropout tries to maintain this important correlation while breaking the correlation between adjacent pixels with respect to all feature maps, by applying the same Dropout mask to all feature maps, so that all pixels or units in equivalent positions in each feature map are either dropped or active during training. Our experiment with various benchmark datasets shows that our approaches provide better generalization than regular Dropout. Moreover, our Biased Dropout takes less time to converge during the training phase, suggesting that assigning noise appropriately in
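The Biased Dropout idea described above can be sketched in a few lines; the split threshold (median magnitude) and the two rates below are invented, and this is not the authors' implementation:

```python
import numpy as np

def biased_dropout(h, p_low=0.7, p_high=0.2, rng=None):
    """Drop low-magnitude activations more aggressively than high-magnitude ones."""
    if rng is None:
        rng = np.random.default_rng(0)
    high = np.abs(h) >= np.median(np.abs(h))   # split units into two groups
    p = np.where(high, p_high, p_low)          # per-unit drop probability
    keep = rng.random(h.shape) >= p
    return h * keep / (1.0 - p)                # inverted-Dropout rescaling

h = np.array([0.05, 2.3, -0.01, 1.7, 0.4, -3.0])
out = biased_dropout(h)
```

The inverted-Dropout rescaling keeps the expected activation unchanged per unit, so no scaling is needed at test time.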

  13. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig

    2017-10-18

This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2-regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
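Per the abstract, the BPR solution converges to that of ℓ2-regularized (Tikhonov/ridge) least squares. The sketch below shows only that standard ridge closed form on invented data, and how the choice of regularizer trades off fit against shrinkage; the BPR-specific parameter selection is not reproduced.

```python
import numpy as np

# Invented overdetermined system with mild noise.
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.1 * rng.standard_normal(20)

def ridge(A, b, lam):
    """Closed-form l2-regularized least squares: (A^T A + lam I)^-1 A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Reconstruction error as the regularizer is swept over several decades.
errs = {lam: np.linalg.norm(ridge(A, b, lam) - x_true)
        for lam in (0.0, 0.1, 1.0, 10.0)}
```

Methods like BPR, the L-curve, or GCV differ precisely in how they pick the value of `lam` from this family.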

  14. Attribution style, theory and empirical findings

    OpenAIRE

    Krohn, Charlotte

    2017-01-01

Master in Learning in Complex Systems. Attribution theory is a long-standing and widely discussed theory that addresses how individuals explain the causes of events. People attribute events of success and failure individually. Previous studies indicate that performance in sporting events may be improved by changing individuals' attribution style. Article one describes attribution and attribution theory, reviewing the state of the art. The article addresses the most important findings within attribution ...

  15. MRI reconstruction with joint global regularization and transform learning.

    Science.gov (United States)

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach improves MRI reconstruction performance when compared to algorithms which use either the patchwise transform learning or the global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.
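
    The flavour of a cost that mixes patchwise and global terms can be sketched on a 1-D toy signal. Everything here is a stand-in: the paper learns the patch transform and jointly optimizes the reconstruction, whereas this sketch uses a fixed FFT as the patch transform and only evaluates the mixed cost.

    ```python
    import numpy as np

    def mixed_cost(x, A, y, lam_patch, lam_global, patch_size=4):
        """Toy composite objective: data fidelity + patchwise sparsity under a
        fixed transform + a global smoothness penalty. The paper *learns* the
        sparsifying transform; here it is a fixed FFT for illustration."""
        fidelity = np.sum((A @ x - y) ** 2)
        patches = x.reshape(-1, patch_size)                       # non-overlapping patches
        patchwise = np.sum(np.abs(np.fft.fft(patches, axis=1)))   # l1 in transform domain
        global_reg = np.sum(np.diff(x) ** 2)                      # discrete-gradient smoothness
        return fidelity + lam_patch * patchwise + lam_global * global_reg

    # A constant signal has zero gradient penalty and energy only in the
    # DC bin of each patch: 8 (fidelity) + 0.5 * 8 (patchwise) + 0 = 12.
    cost = mixed_cost(np.ones(8), np.eye(8), np.zeros(8), 0.5, 1.0)
    ```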

  16. The health status of grandchildren of subjects occupationally exposed to chronic radiation. Communication 2. Morphofunctional parameters

    International Nuclear Information System (INIS)

    Petrushkina, N.P.; Musatkova, O.B.

    1996-01-01

    The study was aimed at investigating the parameters of physical development, specific features in the development of psychomotor habits, and peripheral blood parameters in children aged 0 to 7 years who are grandchildren of exposed individuals. A dynamic follow-up of physical and psychomotor development, as well as regular check-ups of peripheral blood, was carried out in 877 grandchildren of test subjects occupationally exposed to chronic radiation before conception. Multifactorial analysis did not show a correlation between deviations in the physical development of children in the studied cohort and exposure of their grandparents and/or parents. Factors other than radiation (poor health status of the mother, gestosis) did influence the studied parameters. The mean levels of hemoglobin, red cells, platelets, and leukocytes in the test group were virtually the same as in controls and coincided with published data.

  17. Strictly-regular number system and data structures

    DEFF Research Database (Denmark)

    Elmasry, Amr Ahmed Abd Elmoneim; Jensen, Claus; Katajainen, Jyrki

    2010-01-01

    We introduce a new number system that we call the strictly-regular system, which efficiently supports the operations: digit-increment, digit-decrement, cut, concatenate, and add. Compared to other number systems, the strictly-regular system has distinguishable properties. It is superior to the re...

  18. Analysis of regularized Navier-Stokes equations, 2

    Science.gov (United States)

    Ou, Yuh-Roung; Sritharan, S. S.

    1989-01-01

    A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions was studied. Local as well as global invariant manifolds were found. Regularity properties of these manifolds are analyzed.

  19. Regularities, Natural Patterns and Laws of Nature

    Directory of Open Access Journals (Sweden)

    Stathis Psillos

    2014-02-01

    Full Text Available  The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology.  Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.

  20. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
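
    The ridge remedy for near-singular predictor correlations can be illustrated numerically. This is a sketch of the regularization step only, with toy correlation values; PLSc itself also constructs the consistent correlations that feed into this solve.

    ```python
    import numpy as np

    def ridge_path_coefficients(R_xx, r_xy, lam):
        """Path coefficients from predictor correlations R_xx and
        predictor-outcome correlations r_xy, with a ridge penalty lam
        added to the diagonal before solving."""
        p = R_xx.shape[0]
        return np.linalg.solve(R_xx + lam * np.eye(p), r_xy)

    # Two almost perfectly correlated latent predictors: the unregularized
    # solve produces huge, opposite-signed coefficients; the ridge solve
    # produces moderate, interpretable ones.
    R_xx = np.array([[1.0, 0.999],
                     [0.999, 1.0]])
    r_xy = np.array([0.5, 0.4])

    beta_ols = ridge_path_coefficients(R_xx, r_xy, 0.0)
    beta_ridge = ridge_path_coefficients(R_xx, r_xy, 0.1)
    ```

    The shrinkage toward stable estimates is exactly the loss-of-power/accuracy trade the abstract describes.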

  1. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

    Full Text Available Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.

  2. Regularization of the Boundary-Saddle-Node Bifurcation

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2018-01-01

    Full Text Available In this paper we treat a particular class of planar Filippov systems which consist of two smooth systems that are separated by a discontinuity boundary. In such systems one vector field undergoes a saddle-node bifurcation while the other vector field is transversal to the boundary. The boundary-saddle-node (BSN) bifurcation occurs at a critical value when the saddle-node point is located on the discontinuity boundary. We derive a local topological normal form for the BSN bifurcation and study its local dynamics by applying the classical Filippov’s convex method and a novel regularization approach. In fact, by the regularization approach a given Filippov system is approximated by a piecewise-smooth continuous system. Moreover, the regularization process produces a singular perturbation problem where the original discontinuous set becomes a center manifold. Thus, the regularization enables us to make use of the established theories for continuous systems and slow-fast systems to study the local behavior around the BSN bifurcation.

  3. Estimate of Venous Thromboembolism and Related-Deaths Attributable to the Use of Combined Oral Contraceptives in France

    OpenAIRE

    Tricotel, Aurore; Raguideau, Fanny; Collin, Cédric; Zureik, Mahmoud

    2014-01-01

    PURPOSE: To estimate the number of venous thromboembolic events and related-premature mortality (including immediate in-hospital lethality) attributable to the use of combined oral contraceptives in women aged 15 to 49 years-old between 2000 and 2011 in France. METHODS: French data on sales of combined oral contraceptives and on contraception behaviours from two national surveys conducted in 2000 and 2010 were combined to estimate the number of exposed women according to contraceptives genera...

  4. Low-Complexity Regularization Algorithms for Image Deblurring

    KAUST Repository

    Alanazi, Abdulrahman

    2016-11-01

    Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in the RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product so as to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problems by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square root regularized total variation (SRTV). Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM and the restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. However, we developed algorithms that also work
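
    The separable-PSF shortcut mentioned in the abstract rests on a standard Kronecker identity, vec(B_r X B_cᵀ) = (B_c ⊗ B_r) vec(X), which lets one apply the 2-D blur with two small matrix products instead of forming the large Kronecker matrix. The matrices below are random stand-ins for the row/column blur factors, purely to check the identity:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    B_r = rng.standard_normal((4, 4))   # blur factor along rows
    B_c = rng.standard_normal((5, 5))   # blur factor along columns
    X = rng.standard_normal((4, 5))     # image

    # Explicit 20x20 Kronecker matrix applied to the column-major vectorization...
    big = np.kron(B_c, B_r) @ X.flatten(order="F")
    # ...equals two small matrix products, never forming the big matrix.
    small = (B_r @ X @ B_c.T).flatten(order="F")
    print(np.allclose(big, small))  # → True
    ```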

  5. Improvements in GRACE Gravity Fields Using Regularization

    Science.gov (United States)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in the regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or

  6. Cluster Based Vector Attribute Filtering

    NARCIS (Netherlands)

    Kiwanuka, Fred N.; Wilkinson, Michael H.F.

    2016-01-01

    Morphological attribute filters operate on images based on properties or attributes of connected components. Until recently, attribute filtering was based on a single global threshold on a scalar property to remove or retain objects. A single threshold struggles in case no single property or

  7. Deterministic automata for extended regular expressions

    Directory of Open Access Journals (Sweden)

    Syzdykov Mirzakhmet

    2017-12-01

    Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of “overriding” the source NFA (an NFA not built with the subset-construction rules) is used. Past work described only the algorithm for the AND-operator (the intersection of regular languages); in this paper the construction for the MINUS-operator (and the complement) is shown.
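
    The textbook product construction for the AND-operator (intersection of regular languages) can be sketched as follows. Note the paper's NFA-"overriding" method differs in detail; the two example DFAs below are illustrative.

    ```python
    from itertools import product

    def product_dfa(dfa1, dfa2, alphabet):
        """Classic product construction: a DFA accepting L(dfa1) ∩ L(dfa2).
        Each DFA is (states, start, accepting, delta) with
        delta[(state, symbol)] -> state."""
        Q1, s1, F1, d1 = dfa1
        Q2, s2, F2, d2 = dfa2
        states = set(product(Q1, Q2))
        delta = {((p, q), a): (d1[(p, a)], d2[(q, a)])
                 for (p, q) in states for a in alphabet}
        accepting = {(p, q) for (p, q) in states if p in F1 and q in F2}
        return states, (s1, s2), accepting, delta

    def accepts(dfa, word):
        _, state, accepting, delta = dfa
        for a in word:
            state = delta[(state, a)]
        return state in accepting

    # Example over {a, b}: "even number of a's" ∩ "ends with b".
    even_a = ({0, 1}, 0, {0}, {(0, "a"): 1, (1, "a"): 0, (0, "b"): 0, (1, "b"): 1})
    ends_b = ({0, 1}, 0, {1}, {(0, "a"): 0, (1, "a"): 0, (0, "b"): 1, (1, "b"): 1})
    both = product_dfa(even_a, ends_b, "ab")
    print(accepts(both, "aab"))   # → True  (two a's, ends with b)
    print(accepts(both, "ab"))    # → False (one a)
    ```

    Subtraction and complement follow the same pattern with different accepting-set rules.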

  8. Regularities of intermediate adsorption complex relaxation

    International Nuclear Information System (INIS)

    Manukova, L.A.

    1982-01-01

    The experimental data characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N2 system at 77 K are given. The molecular-beam method has been used in the investigation. Analytical expressions are obtained for the change, during relaxation, of the full and specific rates of transition from the intermediate state into the ''non-reversible'' state, of desorption into the gas phase, and of accumulation of particles in the intermediate state.

  9. Regularities in development of surface cracks in low-alloy steel under asymmetric cyclic bending

    International Nuclear Information System (INIS)

    Letunov, V.I.; Shul'ginov, B.S.; Plundrova, I.; Vajnshtok, V.A.; Kramarenko, I.V.

    1985-01-01

    Semielliptical cracks in low-alloy 09G2 and 12GN2MFAYu steels are studied for regularities of their growth. It is shown that, at the preset ΔK and R values, the growth rate of a semielliptical crack under cyclic bending is higher at the maximally depressed point of the front than at the point on the specimen surface. A decrease of the 1/C parameter with growth of the semielliptical crack is experimentally established and is attributed to the increasing difference between ΔK at the maximally depressed point of the crack front (φ = 0) and at the point on the specimen surface (φ = π/2). Experiments have proved the correctness of the previously established formulas for calculating the stress-intensity factor of semielliptical surface cracks under bending.

  10. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin

    2014-01-01

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse

  11. Estimation of health effects of long-term chronic exposure of the low level radiation among children exposed in consequence of the disaster at the Chernobyl nuclear power plant

    International Nuclear Information System (INIS)

    Bomko, E.I.; Romanneko, A.E.; Bomko, A.A.

    1997-01-01

    The effects of low-level doses have been studied for a long time within the framework of the biological effects of radiation exposure. Estimation of the dose levels received by the Ukrainian population exposed as a consequence of the Chernobyl accident indicated that one of the critical populations exposed to low-level radiation was children residing in areas contaminated with radionuclides. The purpose of this work is to reveal regularities in the morbidity and mortality of the critical populations subjected to long-term chronic exposure to low-level radiation as a consequence of the Chernobyl accident.

  12. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
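
    The regularizing effect of combination coefficients on ranking scores can be sketched in closed form. This is our illustration of the smoothing step only: the weights W are assumed given (the paper learns them jointly via sparse coding), and the closed-form solve stands in for the paper's iterative algorithm.

    ```python
    import numpy as np

    def smooth_scores(W, y, lam):
        """Regularize initial relevance scores y so strongly connected objects
        get similar ranking scores:
        minimize ||f - y||^2 + lam * f' L f  =>  f = (I + lam L)^{-1} y,
        where L is the graph Laplacian of the (symmetrized) weights W."""
        W = (W + W.T) / 2
        L = np.diag(W.sum(axis=1)) - W
        n = len(y)
        return np.linalg.solve(np.eye(n) + lam * L, y)

    # Three objects: 0 and 1 strongly linked, 2 isolated.
    W = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
    y = np.array([1.0, 0.0, 0.0])
    f = smooth_scores(W, y, 10.0)
    # Regularization propagates object 0's score to its neighbour 1,
    # while the isolated object 2 stays at 0.
    ```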

  13. Belief attribution despite verbal interference.

    Science.gov (United States)

    Forgeot d'Arc, Baudouin; Ramus, Franck

    2011-05-01

    False-belief (FB) tasks have been widely used to study the ability of individuals to represent the content of their conspecifics' mental states (theory of mind). However, the cognitive processes involved are still poorly understood, and it remains particularly debated whether language and inner speech are necessary for the attribution of beliefs to other agents. We present a completely nonverbal paradigm consisting of silent animated cartoons in five closely related conditions, systematically teasing apart different aspects of scene analysis and allowing the assessment of the attribution of beliefs, goals, and physical causation. In order to test the role of language in belief attribution, we used verbal shadowing as a dual task to inhibit inner speech. Data on 58 healthy adults indicate that verbal interference decreases overall performance, but has no specific effect on belief attribution. Participants remained able to attribute beliefs despite heavy concurrent demands on their verbal abilities. Our results are most consistent with the hypothesis that belief attribution is independent from inner speech.

  14. 20 CFR 226.35 - Deductions from regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Deductions from regular annuity rate. 226.35... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...

  15. Inventory of MRI applications and workers exposed to MRI-related electromagnetic fields in the Netherlands.

    Science.gov (United States)

    Schaap, Kristel; Christopher-De Vries, Yvette; Slottje, Pauline; Kromhout, Hans

    2013-12-01

    This study aims to characterise and quantify the population that is occupationally exposed to electromagnetic fields (EMF) from magnetic resonance imaging (MRI) devices and to identify factors that determine the probability and type of exposure. A questionnaire survey was used to collect information about scanners, procedures, historical developments and employees working with or near MRI scanners in clinical and research MRI departments in the Netherlands. Data were obtained from 145 MRI departments. A rapid increase in the use of MRI and field strength of the scanners was observed and quantified. The strongest magnets were employed by academic hospitals and research departments. Approximately 7000 individuals were reported to be working inside an MRI scanner room and were thus considered to have high probability of occupational exposure to static magnetic fields (SMF). Fifty-four per cent was exposed to SMF at least one day per month. The largest occupationally exposed group were radiographers (n ~ 1700). Nine per cent of the 7000 involved workers were regularly present inside a scanner room during image acquisition, when exposure to additional types of EMF is considered a possibility. This practice was most prevalent among workers involved in scanning animals. The data illustrate recent trends and historical developments in magnetic resonance imaging and provide an extensive characterisation of the occupationally exposed population. A considerable number of workers are potentially exposed to MRI-related EMF. Type and frequency of potential exposure depend on the job performed, as well as the type of workplace. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  16. Inventory of MRI applications and workers exposed to MRI-related electromagnetic fields in the Netherlands

    Energy Technology Data Exchange (ETDEWEB)

    Schaap, Kristel; Christopher-De Vries, Yvette; Slottje, Pauline; Kromhout, Hans, E-mail: h.kromhout@uu.nl

    2013-12-01

    Objective: This study aims to characterise and quantify the population that is occupationally exposed to electromagnetic fields (EMF) from magnetic resonance imaging (MRI) devices and to identify factors that determine the probability and type of exposure. Materials and methods: A questionnaire survey was used to collect information about scanners, procedures, historical developments and employees working with or near MRI scanners in clinical and research MRI departments in the Netherlands. Results: Data were obtained from 145 MRI departments. A rapid increase in the use of MRI and field strength of the scanners was observed and quantified. The strongest magnets were employed by academic hospitals and research departments. Approximately 7000 individuals were reported to be working inside an MRI scanner room and were thus considered to have high probability of occupational exposure to static magnetic fields (SMF). Fifty-four per cent was exposed to SMF at least one day per month. The largest occupationally exposed group were radiographers (n ∼ 1700). Nine per cent of the 7000 involved workers were regularly present inside a scanner room during image acquisition, when exposure to additional types of EMF is considered a possibility. This practice was most prevalent among workers involved in scanning animals. Conclusion: The data illustrate recent trends and historical developments in magnetic resonance imaging and provide an extensive characterisation of the occupationally exposed population. A considerable number of workers are potentially exposed to MRI-related EMF. Type and frequency of potential exposure depend on the job performed, as well as the type of workplace.

  17. Regularization theory for ill-posed problems selected topics

    CERN Document Server

    Lu, Shuai

    2013-01-01

    This monograph is a valuable contribution to the highly topical and extremely productive field of regularisation methods for inverse and ill-posed problems. The author is an internationally outstanding and accepted mathematician in this field. In his book he offers a well-balanced mixture of basic and innovative aspects. He demonstrates new, differentiated viewpoints, and important examples for applications. The book demonstrates the current developments in the field of regularization theory, such as multiparameter regularization and regularization in learning theory. The book is written for graduate and PhDs

  18. 20 CFR 226.34 - Divorced spouse regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Divorced spouse regular annuity rate. 226.34... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...

  19. Chimeric mitochondrial peptides from contiguous regular and swinger RNA.

    Science.gov (United States)

    Seligmann, Hervé

    2016-01-01

    Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying by 24 DNA's protein coding potential. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcriptions, detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1000). Among 186 peptides with > 8 residues for each regular and swinger parts, regular parts of eleven chimeric peptides correspond to six among the thirteen recognized, mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. Present results strengthen hypotheses that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.
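
    The bijective exchange rules described in the abstract are simple enough to sketch directly: symmetric rules are involutions (applying them twice restores the sequence), asymmetric rules are 3-cycles (applying them three times does). The rule names and example sequence below are illustrative.

    ```python
    def swinger(seq, rule):
        """Apply a bijective nucleotide exchange rule to a sequence string;
        nucleotides absent from the rule (here T) are left unchanged."""
        return "".join(rule.get(n, n) for n in seq)

    sym_AC = {"A": "C", "C": "A"}              # one of the 9 symmetric rules, A <-> C
    asym_ACG = {"A": "C", "C": "G", "G": "A"}  # one of the 14 asymmetric rules, A -> C -> G -> A

    print(swinger("GATTACA", sym_AC))          # → GCTTCAC

    # A chimeric sequence, as in the abstract: regular prefix, swinger suffix.
    chimera = "GATT" + swinger("ACA", asym_ACG)   # → "GATT" + "CGC"
    ```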

  20. Fast and compact regular expression matching

    DEFF Research Database (Denmark)

    Bille, Philip; Farach-Colton, Martin

    2008-01-01

    We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem, using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way.

  1. Dimensional regularization and analytical continuation at finite temperature

    International Nuclear Information System (INIS)

    Chen Xiangjun; Liu Lianshou

    1998-01-01

    The relationship between dimensional regularization and analytical continuation of infrared divergent integrals at finite temperature is discussed and a method of regularization of infrared divergent integrals and infrared divergent sums is given

  2. Regular and conformal regular cores for static and rotating solutions

    Energy Technology Data Exchange (ETDEWEB)

    Azreg-Aïnou, Mustapha

    2014-03-07

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  3. Regular and conformal regular cores for static and rotating solutions

    International Nuclear Information System (INIS)

    Azreg-Aïnou, Mustapha

    2014-01-01

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  4. Evolutionary Influences on Attribution and Affect

    Directory of Open Access Journals (Sweden)

    Jennie Brown

    2017-12-01

    Full Text Available Evolutionary theory was applied to Reeder and Brewer's schematic theory and Trafimow's affect theory to extend this area of research with five new predictions involving affect and ability attributions, comparing morality and ability attributions, gender differences, and reaction times for affect and attribution ratings. The design included a 2 (Trait Dimension Type: HR, PR) × 2 (Behavior Type: morality, ability) × 2 (Valence: positive, negative) × 2 (Replication: original, replication) × 2 (Sex: female or male actor) × 2 (Gender: female or male participant) × 2 (Order: attribution portion first, affect portion first) mixed design. All factors were within participants except the order and participant gender. Participants were presented with 32 different scenarios in which an actor engaged in a concrete behavior after which they made attributions and rated their affect in response to the behavior. Reaction times were measured during attribution and affect ratings. In general, the findings from the experiment supported the new predictions. Affect was related to attributions for both morality and ability related behaviors. Morality related behaviors received more extreme attribution and affect ratings than ability related behaviors. Female actors received stronger attribution and affect ratings for diagnostic morality behaviors compared to male actors. Male and female actors received similar attribution and affect ratings for diagnostic ability behaviors. Diagnostic behaviors were associated with lower reaction times than non-diagnostic behaviors. These findings demonstrate the utility of evolutionary theory in creating new hypotheses and empirical findings in the domain of attribution.

  5. Physical growth and psychomotor development of infants exposed to antiepileptic drugs in utero.

    Science.gov (United States)

    Arulmozhi, T; Dhanaraj, M; Rangaraj, R; Vengatesan, A

    2006-03-01

    To evaluate the physical growth and psychomotor development of infants born to women with epilepsy on regular Anti Epileptic Drugs (AEDs). Govt. Stanley Medical College and Hospital, Tertiary care referral centre, Chennai. Open prospective cohort study with a control group. Consecutive women with epilepsy who were on regular anticonvulsants were followed up from their first trimester. Their babies were examined at birth and anthropometric measurements including anterior fontanelle size were noted. They were followed up till one year and periodically evaluated at the 1st, 6th and 12th month of age. Developmental testing using the Griffith scale was done at the 2nd, 6th and 12th month. An equal number of control babies were also studied using the same scale for one year at the specified intervals. The results in both groups were compared. 30 babies were enrolled in each of the case and control groups. The AEDs received by the mothers with epilepsy were Phenytoin, Carbamazepine, and Sodium valproate. At birth and the 1st month the weight, head circumference and length of case and control babies were equal. At the 6th and 12th month a reduction in the above 3 parameters was noted in the case babies (P < 0.01). The area of the anterior fontanelle (AF) was larger in the study group, particularly in those exposed to phenytoin in utero (P < 0.001). In the case babies a reduction in the sitting, prone and erect progression of the locomotor scores was observed at the 2nd month (P < 0.001). Prone progression alone improved by the 12th month and the other two remained less than the control (P < 0.001). No difference was observed in reaching behaviour and personal/social scores in both groups. Infants exposed to Phenytoin monotherapy had a negative impact on sitting progression. Among infants exposed to AEDs in utero, physical growth was equal to that of controls at birth but reduced at the 6th and 12th month, probably due to extraneous factors. The locomotor scores showed reduction in all areas at the 2nd, 6th and 12th month except

  6. Low-rank matrix approximation with manifold regularization.

    Science.gov (United States)

    Zhang, Zhenyue; Zhao, Keke

    2013-07-01

    This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization into the factorization. Superior to the graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal and closed-form solutions. A direct algorithm (for data with a small number of points) and an alternating iterative algorithm with inexact inner iteration (for large-scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.
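
    As a concrete, much-simplified illustration of this class of models (not the paper's algorithm), the sketch below alternates exact updates for the objective ||X - U V^T||^2 + lam * tr(V^T L V), where L is a graph Laplacian built on the data points; the data, affinity graph, rank, and lam are all invented for the example.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)

# Two noisy clusters in R^5; points are the columns of X.
n, d, k, lam = 40, 5, 2, 0.1
centers = rng.normal(size=(d, 2))
X = np.repeat(centers, n // 2, axis=1) + 0.1 * rng.normal(size=(d, n))

# Dense Gaussian affinities (a kNN graph in practice) and the Laplacian.
D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
W = np.exp(-D2)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

def loss(U, V):
    return np.linalg.norm(X - U @ V.T) ** 2 + lam * np.trace(V.T @ L @ V)

# Alternating exact minimization: each step has a closed-form solution,
# so the objective is non-increasing.
V = rng.normal(size=(n, k))
losses = []
for _ in range(30):
    U = X @ V @ np.linalg.inv(V.T @ V)           # least-squares update of U
    # V solves the Sylvester equation  lam*L V + V (U^T U) = X^T U
    V = solve_sylvester(lam * L, U.T @ U, X.T @ U)
    losses.append(loss(U, V))

assert losses[-1] <= losses[0]
```

    Each half-step is the exact minimizer with the other factor fixed, which is why the loss sequence is monotone; the paper's contribution (globally optimal, closed-form solutions) goes beyond this alternating sketch.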

  7. Regularity criteria for incompressible magnetohydrodynamics equations in three dimensions

    International Nuclear Information System (INIS)

    Lin, Hongxia; Du, Lili

    2013-01-01

    In this paper, we give some new global regularity criteria for three-dimensional incompressible magnetohydrodynamics (MHD) equations. More precisely, we provide some sufficient conditions in terms of the derivatives of the velocity or pressure, for the global regularity of strong solutions to 3D incompressible MHD equations in the whole space, as well as for periodic boundary conditions. Moreover, the regularity criterion involving three of the nine components of the velocity gradient tensor is also obtained. The main results generalize the recent work by Cao and Wu (2010 Two regularity criteria for the 3D MHD equations J. Diff. Eqns 248 2263–74) and the analysis in part is based on the works by Cao C and Titi E (2008 Regularity criteria for the three-dimensional Navier–Stokes equations Indiana Univ. Math. J. 57 2643–61; 2011 Global regularity criterion for the 3D Navier–Stokes equations involving one entry of the velocity gradient tensor Arch. Rational Mech. Anal. 202 919–32) for 3D incompressible Navier–Stokes equations. (paper)

  8. Partial shading of Cabernet Sauvignon and Shiraz vines altered wine color and mouthfeel attributes, but increased exposure had little impact.

    Science.gov (United States)

    Joscelyne, Venetia L; Downey, Mark O; Mazza, Marica; Bastian, Susan E P

    2007-12-26

    Few studies have investigated the impact of vine shading on the sensory attributes of the resultant wine. This study examines the effects of canopy exposure levels on phenolic composition plus aroma, flavor, and mouthfeel aspects in wine. Wines were made from Cabernet Sauvignon and Shiraz grapes (Vitis vinifera L.) subjected to different levels of canopy exposure in a commercial vineyard in the Sunraysia region, Victoria, Australia. Canopy exposure treatments included control (standard vineyard practice), exposed (achieved with a foliage wire 600 mm above the top cordon), highly exposed (using a foliage wire with leaf plucking in the fruit zone), and shaded treatment (using 70% shade-cloth). Spectral and descriptive analyses showed that levels of anthocyanins, other phenolics, and perceived astringency were lower in wines made from shaded fruit; however, the reverse was generally not observed in wines of exposed and highly exposed fruit. Descriptive analysis also showed wines from the shaded fruit were different from other treatments for a number of flavor and aroma characters. These findings have implications for vineyard management practices.

  9. Regular-fat dairy and human health

    DEFF Research Database (Denmark)

    Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas

    2016-01-01

    In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular-fat dairy products and human health. In an effort to......, cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted....

  10. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded

  11. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    Science.gov (United States)

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  12. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The best known among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix, so that the modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that aim at minimizing the estimated data error, the two proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard) and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
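
    COPRA itself is not reproduced here. As background, the sketch below shows the plain regularized least-squares estimator that such methods build on, on an invented ill-conditioned problem; the regularization parameter is chosen by a simple sweep against the (here known) true signal, which is only possible in simulation, not COPRA's selection rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian model matrix with a near-collinear column -> ill-conditioning.
m, n = 50, 30
A = rng.normal(size=(m, n)) / np.sqrt(m)
A[:, -1] = A[:, 0] + 1e-6 * rng.normal(size=m)
x_true = rng.normal(size=n)
y = A @ x_true + 0.1 * rng.normal(size=m)

def rls(lam):
    """Regularized least squares: x = (A^T A + lam*I)^(-1) A^T y."""
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Near-zero lam approximates ordinary LS, which is unstable here;
# sweeping lam (with the true signal known) finds a better estimate.
mse = lambda x: np.mean((x - x_true) ** 2)
mse_ls = mse(rls(1e-12))
mse_reg = min(mse(rls(lam)) for lam in [1e-12, *np.logspace(-6, 1, 30)])
assert mse_reg <= mse_ls
```

    The point of methods like COPRA is to pick lam without access to x_true, targeting (approximately) the same MSE criterion the sweep evaluates directly here.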

  13. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  14. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.

  15. Three regularities of recognition memory: the role of bias.

    Science.gov (United States)

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
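
    The likelihood-ratio account of the Mirror Effect can be illustrated with a standard equal-variance Gaussian signal-detection sketch (the strength parameters below are illustrative, not taken from the experiments): when the decision criterion is placed at likelihood ratio 1, strengthening memory raises hits and lowers false alarms simultaneously.

```python
from scipy.stats import norm

# Equal-variance Gaussian model: new items ~ N(0,1), old items ~ N(d,1).
# A likelihood-ratio criterion LR = 1 is equivalent to a strength criterion
# at x = d/2, where the two densities cross.
def rates(d):
    c = d / 2.0
    hit = 1 - norm.cdf(c, loc=d)   # P(old item exceeds the criterion)
    fa = 1 - norm.cdf(c, loc=0)    # P(new item exceeds the criterion)
    return hit, fa

hit_weak, fa_weak = rates(1.0)      # weaker memory condition
hit_strong, fa_strong = rates(2.0)  # stronger memory condition

# Mirror Effect: the stronger condition has MORE hits AND FEWER false alarms.
assert hit_strong > hit_weak and fa_strong < fa_weak
```

    A fixed-strength criterion (the "bias" the paper analyzes) would not move with d, which is exactly how bias can obscure the Mirror Effect while leaving the other regularities intact.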

  16. Customers' attributional judgments towards complaint handling in airline service: a confirmatory study based on attribution theory.

    Science.gov (United States)

    Chiou, Wen-Bin

    2007-06-01

    Besides flight safety, complaint handling plays a crucial role in airline service. Based upon Kelley's attribution theory, in the present study customers' attributions were examined under different conditions of complaint handling by the airlines. There were 531 passengers (216 women; ages 21 to 63 years, M = 41.5, SD = 11.1) with experiences of customer complaints who were recruited while awaiting boarding. Participants received one hypothetical scenario of three attributional conditions about complaint handling and then reported their attributional judgments. The findings indicated that the passengers were most likely to attribute the company's complaint handling to unconditional compliance when the airline company reacted to customer complaints under low distinctiveness, high consistency, and when consensus among the airlines was low. On the other hand, most passengers attributed the company's complaint handling to conditional compliance under the conditions in which distinctiveness, consistency, and consensus were all high. The results provide further insights into how different policies of complaint management affect customers' attributions. Future directions and managerial implications are also discussed.

  17. Method of transferring regular shaped vessel into cell

    International Nuclear Information System (INIS)

    Murai, Tsunehiko.

    1997-01-01

    The present invention concerns a method of transferring regular shaped vessels from a non-contaminated area into a contaminated cell. A passage hole allowing the regular shaped vessels to pass in the longitudinal direction is formed in a partitioning wall at the bottom of the contaminated cell. A plurality of regular shaped vessels are stacked in multiple stages in the vertical direction from the non-contaminated area below the passage hole and are urged through it, being transferred successively into the contaminated cell. As a result, since the passage hole is kept substantially closed by the regular shaped vessels during transfer, radiation and contaminated materials are prevented from escaping from the contaminated cell to the non-contaminated area. Since there is no need to open and close an isolation door frequently, transfer workability is improved remarkably. In addition, since a sealing member for sealing the gap between a regular shaped vessel passing through the passage hole and the partitioning wall at the bottom is disposed in the passage hole, contaminated materials in the contaminated cell are prevented from discharging through the gap to the non-contaminated area. (N.H.)

  18. Key attributes of expert NRL referees.

    Science.gov (United States)

    Morris, Gavin; O'Connor, Donna

    2017-05-01

    Experiential knowledge of elite National Rugby League (NRL) referees was investigated to determine the key attributes contributing to expert officiating performance. Fourteen current first-grade NRL referees were asked to identify the key attributes they believed contributed to their expert refereeing performance. The modified Delphi method involved a 3-round process of an initial semi-structured interview followed by 2 questionnaires to reach a consensus of opinion. The data revealed 25 attributes rated as most important in underpinning expert NRL refereeing performance. The results illustrate the significance of the cognitive category, with the top 6 ranked attributes all being cognitive skills. Of these, the referees ranked decision-making accuracy as the most important attribute, followed by reading the game, communication, game understanding, game management and knowledge of the rules. Player rapport, positioning and teamwork were the top-ranked game-skill attributes underpinning performance excellence. Expert referees also highlighted a number of psychological attributes (e.g., concentration, composure and mental toughness) that were significant to performance. Only 2 physiological attributes (fitness, aerobic endurance) were identified as significant to elite officiating performance. In summary, expert consensus was attained, successfully providing a hierarchy of the most significant attributes of expert NRL refereeing performance.

  19. Automatic Constraint Detection for 2D Layout Regularization.

    Science.gov (United States)

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.

  20. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong

    2015-09-18

    In this paper, we address the problem of constraint detection for layout regularization. As the layout we consider a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user-created content, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. We evaluate the proposed framework on a variety of input layouts from different applications; the results demonstrate that our method has superior performance to the state of the art.
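
    A toy version of the idea, far simpler than the paper's formulation and with invented coordinates, threshold, and weight: alignment constraints are "detected" by a distance threshold and enforced as quadratic penalties, so the regularized layout is the solution of one linear system.

```python
import numpy as np

# Observed left edges of five layout elements; two "should-align" clusters.
x_obs = np.array([10.0, 10.4, 9.7, 30.2, 29.9])
tol, w = 1.0, 100.0

# Minimize sum (x - x_obs)^2 + w * sum over detected pairs (x_i - x_j)^2,
# a quadratic program whose optimum solves H x = x_obs.
n = len(x_obs)
H = np.eye(n)
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
         if abs(x_obs[i] - x_obs[j]) < tol]          # constraint detection
for i, j in pairs:
    H[i, i] += w; H[j, j] += w
    H[i, j] -= w; H[j, i] -= w
x = np.linalg.solve(H, x_obs)

# Each cluster collapses to (almost) a common coordinate.
assert abs(x[0] - x[1]) < 0.01 and abs(x[3] - x[4]) < 0.01
```

    The penalty matrix is identity plus w times the graph Laplacian of the detected pairs, so within each detected group the edges shrink toward their mean while the group mean is preserved exactly.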

  1. Lavrentiev regularization method for nonlinear ill-posed problems

    International Nuclear Information System (INIS)

    Kinh, Nguyen Van

    2002-10-01

    In this paper we are concerned with the Lavrentiev regularization method for reconstructing solutions x_0 of nonlinear ill-posed problems F(x) = y_0, where instead of y_0 only noisy data y_δ ∈ X with ||y_δ - y_0|| ≤ δ are given, and F: X → X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method, solutions x_α^δ are obtained by solving the singularly perturbed nonlinear operator equation F(x) + α(x - x*) = y_δ with some initial guess x*. Assuming certain conditions on the operator F and the smoothness of the element x* - x_0, we derive stability estimates which show that the accuracy of the regularized solutions is order optimal, provided that the regularization parameter α has been chosen properly. (author)
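
    A one-dimensional sketch of the method, with an invented monotone F and noise level (the paper's setting is an accretive operator on a Banach space, and its parameter-choice analysis is not reproduced here):

```python
from scipy.optimize import brentq

# F(x) = x^3 is monotone increasing on R (the scalar analogue of accretive).
F = lambda x: x ** 3
x0, x_star = 1.0, 0.0        # true solution and initial guess
delta = 1e-2
y_delta = F(x0) - delta      # noisy data with |y_delta - y0| <= delta

def lavrentiev(alpha):
    # Solve the singularly perturbed equation F(x) + alpha*(x - x_star) = y_delta.
    g = lambda x: F(x) + alpha * (x - x_star) - y_delta
    return brentq(g, -10.0, 10.0)   # g is monotone, so the root is unique

x_alpha = lavrentiev(alpha=delta)   # simple a priori choice alpha ~ delta
assert abs(x_alpha - x0) < 0.1
```

    Adding the term alpha*(x - x_star) makes the perturbed operator strongly monotone, which is what restores stable solvability for noisy data; the trade-off in alpha (stability versus perturbation of the solution) is what the paper's order-optimality result quantifies.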

  2. Online Manifold Regularization by Dual Ascending Procedure

    Directory of Open Access Journals (Sweden)

    Boliang Sun

    2013-01-01

    Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is the key to transferring manifold regularization from the offline to the online setting in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves the way for the design and analysis of online manifold regularization algorithms.

  3. Degradation of creep properties in a long-term thermally exposed nickel base superalloy

    International Nuclear Information System (INIS)

    Zrnik, J.; Strunz, P.; Vrchovinsky, V.; Muransky, O.; Novy, Z.; Wiedenmann, A.

    2004-01-01

    When exposed for long times at elevated temperatures of 430 and 650 deg. C, the nickel base superalloy EI 698 VD can experience a significant decrease in creep resistance. The creep degradation of nickel base superalloys is generally attributed to microstructural instability during prolonged high-temperature exposure. In this article, the creep-life data generated on long-term thermally exposed nickel base superalloy EI 698 VD were related to the local microstructural changes observed using SEM and TEM analysis techniques. While structure analysis provided supporting evidence concerning the changes associated with grain boundary carbide precipitation, no persuasive evidence of a morphological and/or dimensional gamma prime change was found. To clarify the role of the gamma prime precipitates in the creep degradation of the alloy, the SANS (small-angle neutron scattering) experiment was crucial in characterizing the bulk-averaged gamma prime morphology and its size distribution with respect to the period of thermal exposure.

  4. Multiple Authorities Attribute-Based Verification Mechanism for Blockchain Mircogrid Transactions

    Directory of Open Access Journals (Sweden)

    Sarmadullah Khan

    2018-05-01

    Full Text Available Recently, advancements in energy distribution models have fulfilled the needs of microgrids in finding a suitable energy distribution model between producer and consumer without the need for a central controlling authority. Most energy distribution models deal with energy transactions and losses without considering security aspects such as information tampering. The transaction data could be accessible online to keep track of the energy distribution between the consumer and producer (e.g., online payment records and supplier profiles). However, this data is prone to modification and misuse if a consumer moves from one producer to another. Blockchain is considered to be one solution that allows users to exchange energy-related data and keep track of it without exposing it to modification. In this paper, electrical transactions embedded in the blockchain are validated using the signatures of multiple producers based on their assigned attributes. These signatures are verified and endorsed by the consumers satisfying those attributes without revealing any information. The public and private keys for these consumers are generated by the producers, and the endorsement procedure using these keys ensures that these consumers are authorized. This approach does not need any central authority. To resist collision attacks, producers are given a secret pseudorandom function seed. The comparative analysis shows the efficiency of the proposed approach over existing ones.

  5. Regular graph construction for semi-supervised learning

    International Nuclear Information System (INIS)

    Vega-Oliveros, Didier A; Berton, Lilian; Eberle, Andre Mantini; Lopes, Alneu de Andrade; Zhao, Liang

    2014-01-01

    Semi-supervised learning (SSL) stands out for using a small amount of labeled points for data clustering and classification. In this scenario, graph-based methods allow the analysis of local and global characteristics of the available data by identifying classes or groups regardless of the data distribution and representing submanifolds in Euclidean space. Most of the methods used in the literature for SSL classification do not worry about graph construction. However, regular graphs can obtain better classification accuracy compared to traditional methods such as k-nearest neighbor (kNN), since kNN favors the generation of hubs and is not appropriate for high-dimensionality data. Nevertheless, the methods commonly used for generating regular graphs have high computational cost. We tackle this problem by introducing an alternative method for the generation of regular graphs with better runtime performance compared to the methods usually found in the area. Our technique is based on the preferential selection of vertices according to some topological measures, like closeness, generating at the end of the process a regular graph. Experiments using the local and global consistency method for label propagation show that our method provides better or equal classification rates in comparison with kNN
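
    The evaluation setting the abstract describes (graph construction followed by label propagation) can be sketched as follows. Note this uses a plain symmetrized kNN graph, not the paper's regular-graph construction, and all data, graph, and propagation parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two separated Gaussian clusters in the plane; one labeled point per class.
n = 20
X = np.vstack([rng.normal([0, 0], 0.2, (n, 2)),
               rng.normal([4, 0], 0.2, (n, 2))])
Y = np.zeros((2 * n, 2))
Y[0, 0] = Y[n, 1] = 1.0              # the only two labeled points

# Symmetrized kNN graph (the paper argues regular graphs behave better;
# plain kNN is used here only to keep the sketch short).
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
k = 6
W = np.zeros_like(D2)
for i in range(2 * n):
    for j in np.argsort(D2[i])[1:k + 1]:
        W[i, j] = W[j, i] = 1.0

# Local and global consistency propagation: F = (I - a*S)^(-1) Y,
# with S the symmetrically normalized adjacency and a in (0, 1).
d = W.sum(1)
S = W / np.sqrt(np.outer(d, d))
F_scores = np.linalg.solve(np.eye(2 * n) - 0.9 * S, Y)
pred = F_scores.argmax(1)

assert (pred[:n] == 0).all() and (pred[n:] == 1).all()
```

    With well-separated clusters the graph splits into two components, so each label can only diffuse within its own cluster and every point is classified correctly; hub formation and high-dimensional data are where this plain kNN construction degrades.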

  6. Analysis of Auditors' Perceptions of the Determinants of Audit Fees Based on Client Attributes, Auditor Attributes, and Engagement Attributes

    Directory of Open Access Journals (Sweden)

    Aman Faturachman

    2013-04-01

    Full Text Available This research aims to determine auditors' perceptions of the determining factors of audit fees based on client attributes, auditor attributes, and engagement attributes at public accountant firms in Bandung. In this research, the indicators used to characterize the client attributes are size, complexity, inherent risk, profitability, leverage and liquidity, and industry; the indicators for the auditor attributes are the auditor's specialization, audit tenure, and location; and the indicators for the engagement attributes are audit problems, audit report lag, busy season, and number of reports. The method used in this research is descriptive. The population is public accountants in Bandung; saturated sampling of qualified firms yielded 11 public accountant offices. SmartPLS ver 2.0 M3 was used for the statistical analysis. Based on the loading factors and the bootstrapping method, the results are as follows: first, auditors' perception of the client-attribute determinants of the audit fee, ordered from most important to least important, is size, complexity, profitability, inherent risk, industry, and leverage and liquidity; second, the perception of the auditor-attribute determinants, from most important to least important, is audit tenure, location, and specialization; and third, the perception of the engagement-attribute determinants, from most important to least important, is audit report lag, busy season, audit problems, and number of reports.

  7. Labeled experimental choice design for estimating attribute and availability cross effects with N attributes and specific brand attribute levels

    DEFF Research Database (Denmark)

    Nguyen, Thong Tien

    2011-01-01

    Experimental designs are required for widely used techniques in marketing research, especially for preference-based conjoint analysis and discrete-choice studies. Ideally, marketing researchers prefer orthogonal designs because this technique gives uncorrelated parameter estimates. However, an orthogonal design is not available for every situation. Instead, an efficient design based on a computerized design algorithm is always available. This paper presents a method of efficient design for estimating brand models having attribute and availability cross effects. The paper gives a framework for implementing designs that is efficient enough to estimate a model with N brands, where each brand has K attributes and each brand attribute has specific levels. The paper also illustrates an example from a food consumption study.

  8. Effective or ineffective: attribute framing and the human papillomavirus (HPV) vaccine.

    Science.gov (United States)

    Bigman, Cabral A; Cappella, Joseph N; Hornik, Robert C

    2010-12-01

    To experimentally test whether presenting logically equivalent, but differently valenced effectiveness information (i.e. attribute framing) affects perceived effectiveness of the human papillomavirus (HPV) vaccine, vaccine-related intentions and policy opinions. A survey-based experiment (N=334) was fielded in August and September 2007 as part of a larger ongoing web-enabled monthly survey, the Annenberg National Health Communication Survey. Participants were randomly assigned to read a short passage about the HPV vaccine that framed vaccine effectiveness information in one of five ways. Afterward, they rated the vaccine and related opinion questions. Main statistical methods included ANOVA and t-tests. On average, respondents exposed to positive framing (70% effective) rated the HPV vaccine as more effective and were more supportive of vaccine mandate policy than those exposed to the negative frame (30% ineffective) or the control frame. Mixed valence frames showed some evidence for order effects; phrasing that ended by emphasizing vaccine ineffectiveness showed similar vaccine ratings to the negative frame. The experiment finds that logically equivalent information about vaccine effectiveness not only influences perceived effectiveness, but can in some cases influence support for policies mandating vaccine use. These framing effects should be considered when designing messages. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
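
    The kind of between-frame contrast reported (positive "70% effective" versus negative "30% ineffective" frames compared by t-test) can be mimicked on simulated ratings; the means, spreads, and group sizes below are invented, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical 1-7 effectiveness ratings under the two attribute frames.
pos = np.clip(rng.normal(5.2, 1.2, 120), 1, 7)  # "70% effective" frame
neg = np.clip(rng.normal(4.3, 1.2, 110), 1, 7)  # "30% ineffective" frame

# Welch's t-test on the two independent groups.
res = stats.ttest_ind(pos, neg, equal_var=False)
assert res.statistic > 0   # positive frame rated more effective here
assert res.pvalue < 0.05
```

    In the actual study the same logical information produced a reliable mean difference of this kind; the simulation only illustrates the statistical contrast, not the observed effect size.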

  9. Effective or ineffective: Attribute framing and the human papillomavirus (HPV) vaccine

    Science.gov (United States)

    Bigman, Cabral A.; Cappella, Joseph N.; Hornik, Robert C.

    2010-01-01

    Objectives To experimentally test whether presenting logically equivalent, but differently valenced effectiveness information (i.e. attribute framing) affects perceived effectiveness of the human papillomavirus (HPV) vaccine, vaccine related intentions and policy opinions. Method A survey-based experiment (N= 334) was fielded in August and September 2007 as part of a larger ongoing web-enabled monthly survey, the Annenberg National Health Communication Survey. Participants were randomly assigned to read a short passage about the HPV vaccine that framed vaccine effectiveness information in one of five ways. Afterward, they rated the vaccine and related opinion questions. Main statistical methods included ANOVA and t-tests. Results On average, respondents exposed to positive framing (70% effective) rated the HPV vaccine as more effective and were more supportive of vaccine mandate policy than those exposed to the negative frame (30% ineffective) or the control frame. Mixed valence frames showed some evidence for order effects; phrasing that ended by emphasizing vaccine ineffectiveness showed similar vaccine ratings to the negative frame. Conclusions The experiment finds that logically equivalent information about vaccine effectiveness not only influences perceived effectiveness, but can in some cases influence support for policies mandating vaccine use. Practice implications These framing effects should be considered when designing messages. PMID:20851560

  10. Semantic attributes based texture generation

    Science.gov (United States)

    Chi, Huifang; Gan, Yanhai; Qi, Lin; Dong, Junyu; Madessa, Amanuel Hirpa

    2018-04-01

    Semantic attributes are commonly used for texture description. They can describe the information of a texture, such as patterns, textons, distributions, brightness, and so on. Generally speaking, semantic attributes are more concrete descriptors than perceptual features. Therefore, it is practical to generate texture images from semantic attributes. In this paper, we propose to generate high-quality texture images from semantic attributes. Over the last two decades, several works have addressed texture synthesis and generation, most of them focusing on example-based texture synthesis and procedural texture generation. Semantic-attribute-based texture generation still deserves more attention. Gan et al. proposed a useful joint model for perception-driven texture generation. However, perceptual features are non-objective spatial statistics used by humans to distinguish different textures in pre-attentive situations. To give more descriptive information about texture appearance, semantic attributes, which are more in line with human description habits, are desired. In this paper, we use a sigmoid cross-entropy loss in an auxiliary model to provide enough information to the generator. Consequently, the discriminator is released from the relatively intractable mission of figuring out the joint distribution of condition vectors and samples. To demonstrate the validity of our method, we compare our method to Gan et al.'s method on generating textures in experiments on PTD and DTD. All experimental results show that our model can generate textures from semantic attributes.
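
    The sigmoid cross-entropy loss mentioned for the auxiliary model can be sketched directly; the logits and binary attribute labels below are made up, and the numerically stable form shown is the standard one, not necessarily the paper's implementation.

```python
import numpy as np

# Sigmoid cross entropy over a multi-label semantic-attribute vector:
# each attribute is an independent binary prediction.
def sigmoid_xent(logits, targets):
    # Numerically stable form: max(z, 0) - z*t + log(1 + exp(-|z|))
    z, t = logits, targets
    return np.maximum(z, 0) - z * t + np.log1p(np.exp(-np.abs(z)))

logits = np.array([2.0, -1.0, 0.0])   # hypothetical attribute scores
targets = np.array([1.0, 0.0, 1.0])   # hypothetical binary attribute labels
loss = sigmoid_xent(logits, targets).mean()
assert loss > 0
```

    The stable form is algebraically identical to -(t*log(sigmoid(z)) + (1-t)*log(1-sigmoid(z))) but avoids overflow for large |z|, which matters when the auxiliary classifier's logits are unbounded.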

  11. Family caregivers' attributions about care-recipient behaviour: does caregiver relationship satisfaction mediate the attribution-distress relationship?

    Science.gov (United States)

    Hui, Siu-Kuen Azor; Elliott, Timothy R; Martin, Roy; Uswatte, Gitendra

    2011-09-01

    The relations of caregiver attributions about the care-recipient's problem behaviour to caregiving relationship satisfaction and caregiver distress were examined. This is a cross-sectional study. Seventy-five family caregivers of individuals diagnosed with various disabling health conditions were recruited and interviewed. Caregiver attributions (internality, intentionality, responsibility, and controllability), caregiving relationship satisfaction, and caregiver distress variables were measured. Structural equation techniques tested an a priori model relating the latent constructs of caregiver attributions and caregiver relationship satisfaction to caregiver distress. Maladaptive caregiver attributions (i.e., more trait-like, higher intentionality, higher responsibility, and higher controllability) about care-recipients' problem behaviours predicted lower caregiving relationship satisfaction, which in turn was predictive of higher caregiver distress. Unexpectedly, caregiver attributions were not directly related to caregiver distress. However, attributions had an indirect effect on distress through relationship satisfaction. Younger caregivers experienced higher caregiver distress. Caregivers' explanations about the care-recipient's problem behaviour are indicative of their satisfaction in the relationship with the care recipient, and poor caregiving relationship satisfaction is predictive of caregiver distress. Caregiver attributions and relationship quality may be considered in interventions with family caregivers. ©2010 The British Psychological Society.

  12. Physical model of dimensional regularization

    Energy Technology Data Exchange (ETDEWEB)

    Schonfeld, Jonathan F.

    2016-12-15

    We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)

  13. Privacy Protection on Multiple Sensitive Attributes

    Science.gov (United States)

    Li, Zhen; Ye, Xiaojun

    In recent years, a privacy model called k-anonymity has gained popularity for microdata releasing. As the microdata may contain multiple sensitive attributes about an individual, the protection of multiple sensitive attributes has become an important problem. Different from the existing models for a single sensitive attribute, the extra associations among multiple sensitive attributes should be investigated. Two kinds of disclosure scenarios may happen because of logical associations. The Q&S Diversity is checked to prevent the foregoing disclosure risks, with an α Requirement definition used to ensure the diversity requirement. At last, a two-step greedy generalization algorithm is used to carry out the multiple sensitive attributes processing, which deals with quasi-identifiers and sensitive attributes respectively. We reduce the overall distortion by the measure of Masking SA.
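
    The diversity requirement above can be illustrated with a minimal sketch (this is generic l-diversity checking over several sensitive attributes, not the paper's Q&S Diversity or α Requirement; the record layout and helper name are invented for illustration): every equivalence class sharing the same generalized quasi-identifier values must contain at least l distinct values for each sensitive attribute.

```python
from collections import defaultdict

def diverse_enough(records, qi_keys, sa_keys, l):
    """Check that every equivalence class (rows sharing the same generalized
    quasi-identifier values) has at least l distinct values per sensitive attribute."""
    classes = defaultdict(list)
    for r in records:
        classes[tuple(r[k] for k in qi_keys)].append(r)
    for group in classes.values():
        for sa in sa_keys:
            if len({r[sa] for r in group}) < l:
                return False
    return True

# Toy released table: zip/age are generalized quasi-identifiers,
# disease/income are two sensitive attributes.
records = [
    {"zip": "100**", "age": "2*", "disease": "flu",  "income": "low"},
    {"zip": "100**", "age": "2*", "disease": "cold", "income": "high"},
    {"zip": "200**", "age": "3*", "disease": "flu",  "income": "low"},
    {"zip": "200**", "age": "3*", "disease": "flu",  "income": "high"},
]
print(diverse_enough(records, ["zip", "age"], ["disease", "income"], 2))  # False
```

    The second equivalence class fails because all of its disease values are identical, so an anonymizer would need to generalize further before release.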

  14. Information-theoretic semi-supervised metric learning via entropy regularization.

    Science.gov (United States)

    Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi

    2014-08-01

    We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH could be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even under noisy environments.

  15. Fluctuations of quantum fields via zeta function regularization

    International Nuclear Information System (INIS)

    Cognola, Guido; Zerbini, Sergio; Elizalde, Emilio

    2002-01-01

    Explicit expressions for the expectation values and the variances of some observables, which are bilinear quantities in the quantum fields on a D-dimensional manifold, are derived making use of zeta function regularization. It is found that the variance, related to the second functional variation of the effective action, requires a further regularization and that the relative regularized variance turns out to be 2/N, where N is the number of the fields, thus being independent of the dimension D. Some illustrating examples are worked through. The issue of the stress tensor is also briefly addressed

  16. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
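
    The sparsity term in iterative reconstructions of this kind is typically handled by a proximal shrinkage step on the transform coefficients. Below is a minimal sketch of soft-thresholding, the generic l1 proximal operator, not the authors' curvelet/ADMM implementation:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1: shrink each coefficient toward zero
    by t, zeroing anything smaller than t in magnitude."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Small coefficients (likely noise) are removed; large ones survive, shrunk by t.
coeffs = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
print(soft_threshold(coeffs, 1.0))
```

    In an ADMM loop this shrinkage alternates with a data-fidelity update; suppressing small transform coefficients while keeping large directional ones is what produces the noise reduction with feature preservation described above.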

  17. Attributable risk and potential impact of interventions to reduce household air pollution associated with under-five mortality in South Asia.

    Science.gov (United States)

    Naz, Sabrina; Page, Andrew; Agho, Kingsley Emwinyore

    2018-01-01

    Solid fuel use is the major source of household air pollution (HAP) and accounts for a substantial burden of morbidity and mortality in low- and middle-income countries. To evaluate and compare childhood mortality attributable to HAP in four South Asian countries, a series of Demographic and Health Survey (DHS) datasets for Bangladesh, India, Nepal and Pakistan were used for analysis. Estimates of relative risk and exposure prevalence relating to use of cooking fuel and under-five mortality were used to calculate population attributable fractions (PAFs) for each country. Potential impact fractions (PIFs) were also calculated for theoretical scenarios based on published interventions aiming to reduce exposure prevalence. There was an increased risk of under-five mortality in those exposed to cooking fuel compared to those not exposed in the four South Asian countries (OR = 1.30, 95% CI = 1.07-1.57, P = 0.007). Combined PAF estimates for South Asia found that 66% (95% CI: 43.1-81.5%) of the 13,290 estimated cases of under-five mortality were attributable to HAP. Joint PIF estimates (assuming achievable reductions in HAP reported in intervention studies conducted in South Asia) indicate that 47% of neonatal and 43% of under-five mortality cases associated with HAP could be avoided in the four South Asian countries studied. Eliminating household exposure to cooking-fuel smoke through targeted intervention strategies (such as cooking in a separate kitchen or improved cook stoves) could substantially reduce under-five mortality in South Asian countries.
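
    The attributable- and impact-fraction arithmetic can be sketched with Levin's standard formulas (the prevalence and risk figures below are hypothetical illustrations, not the study's survey-weighted estimates):

```python
def paf(prevalence, rr):
    """Levin's population attributable fraction:
    PAF = p(RR - 1) / (1 + p(RR - 1))."""
    return prevalence * (rr - 1.0) / (1.0 + prevalence * (rr - 1.0))

def pif(prevalence, target_prevalence, rr):
    """Potential impact fraction for a counterfactual exposure prevalence p':
    PIF = (p(RR - 1) - p'(RR - 1)) / (1 + p(RR - 1))."""
    excess_now = prevalence * (rr - 1.0)
    excess_then = target_prevalence * (rr - 1.0)
    return (excess_now - excess_then) / (1.0 + excess_now)

# Hypothetical figures: 80% of households exposed, relative risk 1.30,
# and an intervention that cuts exposure prevalence to 40%.
print(round(paf(0.80, 1.30), 3))        # 0.194
print(round(pif(0.80, 0.40, 1.30), 3))  # 0.097
```

    The study's much larger combined PAF reflects near-universal exposure and country-specific weighting, which this two-line illustration does not attempt to reproduce.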

  18. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    Science.gov (United States)

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic programming, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programmings. The experimental results of semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  19. Regularity and chaos in cavity QED

    International Nuclear Information System (INIS)

    Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G

    2017-01-01

    The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (PR) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)

  20. Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears

    Science.gov (United States)

    Chen, Sau-Chin; Hu, Jon-Fan

    2015-01-01

    Although regularity refers to the compatibility between pronunciation of character and sound of phonetic component, it has been suggested as being part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how regularity effect is amplified or reduced by neighborhood characteristics and reveals the…

  1. Attributional and relational processing in pigeons

    Directory of Open Access Journals (Sweden)

    Dennis eGarlick

    2011-02-01

    Full Text Available Six pigeons were trained using a matching-to-sample procedure where sample and rewarded comparisons matched on both attributional (color) and relational (horizontal or vertical orientation) dimensions. Probes then evaluated the pigeons' preference for comparisons that varied in these dimensions. A strong preference was found for the attribute of color. The discrimination did not transfer to novel colors, however, suggesting that a general color rule had not been learned. Further, when color could not be used to guide responding, some influence of other attributional cues such as shape, but not relational cues, was found. We conclude that pigeons based their performance on attributional properties of, but not on relational properties between, elements in our matching-to-sample procedure. Future studies should examine other attributes to compare attributional versus relational processing.

  2. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); School of Life Sciences and Technology, Xidian University, Xi'an 710071 (China)]

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l₂ data fidelity and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. 
Simulation experiments were used to illustrate why multispectral data were used

  3. Gluon attributes

    International Nuclear Information System (INIS)

    Weiler, T.

    1981-01-01

    An overview is presented of the attributes of gluons, deducible from experimental data. Particular attention is given to the photon-gluon fusion model of charm leptoproduction. The agreement with QCD and theoretical prejudice is qualitatively good

  4. Matrix regularization of embedded 4-manifolds

    International Nuclear Information System (INIS)

    Trzetrzelewski, Maciej

    2012-01-01

    We consider products of two 2-manifolds such as S²×S², embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N), i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N²×N² matrix representations of the 4-algebra (and, as a byproduct, of the 3-algebra, which makes the regularization of S³ also possible).

  5. Optimal Tikhonov Regularization in Finite-Frequency Tomography

    Science.gov (United States)

    Fang, Y.; Yao, Z.; Zhou, Y.

    2017-12-01

    The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory, which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed, and regularizations such as damping and smoothing are often applied to analyze the tradeoff between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional tradeoff analysis using surface wave dispersion measurements from global as well as regional studies.
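
    Once the SVD of the sensitivity matrix is in hand, the Tikhonov-damped solution is just a filtered spectral expansion. A generic sketch (the synthetic matrices and the chosen alpha are illustrative, not from the study):

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """Tikhonov-regularized solution via SVD filter factors:
    x = sum_i f_i * (u_i . b) / s_i * v_i, with f_i = s_i^2 / (s_i^2 + alpha^2).
    Equivalent to solving (A^T A + alpha^2 I) x = A^T b."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + alpha**2)
    return Vt.T @ (f * (U.T @ b) / s)

# Hypothetical ill-conditioned system with rapidly decaying singular values.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) @ np.diag(10.0 ** -np.arange(10))
x_true = np.ones(10)
b = A @ x_true + 1e-6 * rng.standard_normal(20)
x_reg = tikhonov_svd(A, b, alpha=1e-3)
print(np.linalg.norm(x_reg))  # stays bounded even though A is ill-conditioned
```

    Sweeping alpha and recording misfit against model norm from the same SVD is exactly the tradeoff analysis the abstract contrasts with the empirical Bayes choice.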

  6. Spanning Tree Based Attribute Clustering

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Jorge, Cordero Hernandez

    2009-01-01

    Attribute clustering has been previously employed to detect statistical dependence between subsets of variables. We propose a novel attribute clustering algorithm motivated by research on complex networks, called the Star Discovery algorithm. The algorithm partitions and indirectly discards inconsistent edges from a maximum spanning tree by selecting appropriate initial modes, thereby generating stable clusters. It discovers sound clusters through simple graph operations and achieves significant computational savings. We compare the Star Discovery algorithm against earlier attribute clustering...

  7. Fractional Regularization Term for Variational Image Registration

    Directory of Open Access Journals (Sweden)

    Rafael Verdú-Monedero

    2009-01-01

    Full Text Available Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, and is applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a real gradual transition from a diffusion registration to a curvature registration, which is best suited to some applications and is not possible in the spatial domain. Results with 3D actual images show the validity of this approach.

  8. An Attribute-Based Access Control with Efficient and Secure Attribute Revocation for Cloud Data Sharing Service

    Institute of Scientific and Technical Information of China (English)

    Nyamsuren Vaanchig; Wei Chen; Zhi-Guang Qin

    2017-01-01

    Nowadays, there is the tendency to outsource data to cloud storage servers for data sharing purposes. In fact, this makes access control for the outsourced data a challenging issue. Ciphertext-policy attribute-based encryption (CP-ABE) is a promising cryptographic solution for this challenge. It gives the data owner (DO) direct control on access policy and enforces the access policy cryptographically. However, the practical application of CP-ABE in the data sharing service also has its own inherent challenge with regard to attribute revocation. To address this challenge, we proposed an attribute-revocable CP-ABE scheme by taking advantages of the over-encryption mechanism and CP-ABE scheme and by considering the semi-trusted cloud service provider (CSP) that participates in decryption processes to issue decryption tokens for authorized users. We further presented the security and performance analysis in order to assess the effectiveness of the scheme. As compared with the existing attribute-revocable CP-ABE schemes, our attribute-revocable scheme is reasonably efficient and more secure to enable attribute-based access control over the outsourced data in the cloud data sharing service.

  9. Reducing errors in the GRACE gravity solutions using regularization

    Science.gov (United States)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. 
A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4
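
    Generating the L-curve itself is cheap once a (possibly approximate) bidiagonalization or SVD is available, because the residual and solution norms have closed forms in the Tikhonov filter factors. A small generic sketch (a textbook construction, not the Lanczos-based GRACE implementation; problem sizes and data are hypothetical):

```python
import numpy as np

def lcurve_points(A, b, alphas):
    """Residual-norm / solution-norm pairs of Tikhonov solutions for each alpha,
    computed from a single SVD so the whole L-curve costs little extra."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    out_of_range = np.linalg.norm(b - U @ beta)  # residual part no alpha can remove
    res, sol = [], []
    for a in alphas:
        f = s**2 / (s**2 + a**2)  # Tikhonov filter factors
        sol.append(np.linalg.norm(f * beta / s))
        res.append(np.hypot(np.linalg.norm((1.0 - f) * beta), out_of_range))
    return np.array(res), np.array(sol)

# Hypothetical small ill-posed problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8)) @ np.diag(2.0 ** -np.arange(8))
b = A @ np.ones(8) + 1e-4 * rng.standard_normal(30)
res, sol = lcurve_points(A, b, alphas=10.0 ** np.linspace(-6, 0, 25))
```

    Plotting log(res) against log(sol) and picking the corner of the resulting "L" gives the regularization parameter; the residual grows and the solution norm shrinks monotonically as alpha increases, which is what produces the L shape.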

  10. Likelihood ratio decisions in memory: three implied regularities.

    Science.gov (United States)

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
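
    For Gaussian strength distributions the likelihood-ratio decision axis can be written down explicitly; a minimal sketch (the parameter values are illustrative, not fitted to any of the reviewed datasets):

```python
import math

def log_likelihood_ratio(x, mu_old, sigma_old, mu_new=0.0, sigma_new=1.0):
    """Log likelihood ratio of 'old' vs 'new' for memory strength x under
    Gaussian strength distributions; respond 'old' when it exceeds 0."""
    def logpdf(v, mu, sigma):
        return -0.5 * ((v - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))
    return logpdf(x, mu_old, sigma_old) - logpdf(x, mu_new, sigma_new)

# Equal-variance case: the log LR is linear in x and crosses 0 exactly
# midway between the two means.
print(log_likelihood_ratio(1.0, mu_old=2.0, sigma_old=1.0))        # 0.0
print(log_likelihood_ratio(1.5, mu_old=2.0, sigma_old=1.0) > 0.0)  # True
```

    Placing the criterion on this axis, rather than on raw strength, is what yields the mirror, variance, and z-ROC length regularities the abstract describes.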

  11. Low-Rank Matrix Factorization With Adaptive Graph Regularizer.

    Science.gov (United States)

    Lu, Gui-Fu; Wang, Yong; Zou, Jian

    2016-05-01

    In this paper, we present a novel low-rank matrix factorization algorithm with adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms the state-of-the-art low-rank matrix factorization methods.

  12. What causes breast cancer? A systematic review of causal attributions among breast cancer survivors and how these compare to expert-endorsed risk factors.

    Science.gov (United States)

    Dumalaon-Canaria, Jo Anne; Hutchinson, Amanda D; Prichard, Ivanka; Wilson, Carlene

    2014-07-01

    The aim of this paper was to review published research that analyzed causal attributions for breast cancer among women previously diagnosed with breast cancer. These attributions were compared with risk factors identified by published scientific evidence in order to determine the level of agreement between cancer survivors' attributions and expert opinion. A comprehensive search for articles, published between 1982 and 2012, reporting studies on causal attributions for breast cancer among patients and survivors was undertaken. Of 5,135 potentially relevant articles, 22 studies met the inclusion criteria. Two additional articles were sourced from reference lists of included studies. Results indicated a consistent belief among survivors that their own breast cancer could be attributed to family history, environmental factors, stress, fate, or chance. Lifestyle factors were less frequently identified, despite expert health information highlighting the importance of these factors in controlling and modifying cancer risk. This review demonstrated that misperceptions about the contribution of modifiable lifestyle factors to the risk of breast cancer have remained largely unchanged over the past 30 years. The findings of this review indicate that beliefs about the causes of breast cancer among affected women are not always consistent with the judgement of experts. Breast cancer survivors did not regularly identify causal factors supported by expert consensus such as age, physical inactivity, breast density, alcohol consumption, and reproductive history. Further research examining psychological predictors of attributions and the impact of cancer prevention messages on adjustment and well-being of cancer survivors is warranted.

  13. Online Manifold Regularization by Dual Ascending Procedure

    OpenAIRE

    Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui

    2013-01-01

    We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transferring manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches...

  14. Degree-regular triangulations of torus and Klein bottle

    Indian Academy of Sciences (India)

    A triangulation of a connected closed surface is called degree-regular if each of its vertices has the same degree. ... In [5], Datta and Nilakantan classified all the degree-regular triangulations of closed surfaces on at most 11 vertices.

  15. Attribution Theory and Crisis Intervention Therapy.

    Science.gov (United States)

    Skilbeck, William M.

    It was proposed that existing therapeutic procedures may influence attributions about emotional states. Therefore an attributional analysis of crisis intervention, a model of community-based, short-term consultation, was presented. This analysis suggested that crisis intervention provides attributionally-relevant information about both the source…

  16. Technology, attributions, and emotions in post-secondary education: An application of Weiner's attribution theory to academic computing problems.

    Science.gov (United States)

    Maymon, Rebecca; Hall, Nathan C; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia

    2018-01-01

    As technology becomes increasingly integrated with education, research on the relationships between students' computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner's (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study.

  17. Abstract Interpretation and Attribute Grammars

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    The objective of this thesis is to explore the connections between abstract interpretation and attribute grammars as frameworks in program analysis. Abstract interpretation is a semantics-based program analysis method. A large class of data flow analysis problems can be expressed as non-standard … is presented in the thesis. Methods from abstract interpretation can also be used in correctness proofs of attribute grammars. This proof technique introduces a new class of attribute grammars based on domain theory. This method is illustrated with examples…
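
The idea of data-flow analysis as a non-standard semantics can be illustrated with a minimal sign-analysis sketch (a hypothetical example, not taken from the thesis): arithmetic is evaluated over an abstract sign domain instead of concrete integers.

```python
# A minimal abstract-interpretation sketch: evaluate arithmetic over the
# sign domain {NEG, ZERO, POS, TOP} instead of concrete integers.
NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

def alpha(n):
    """Abstraction: map a concrete integer to its sign."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def add(a, b):
    """Abstract addition on signs; TOP means 'unknown sign'."""
    if ZERO in (a, b):
        return b if a == ZERO else a
    if a == b:
        return a          # pos+pos=pos, neg+neg=neg
    return TOP            # pos+neg could be anything

def mul(a, b):
    """Abstract multiplication on signs."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

# Soundness check on a sample: the abstract result must cover the concrete one.
x, y = 3, -7
assert mul(alpha(x), alpha(y)) == alpha(x * y)   # neg
assert add(alpha(x), alpha(x)) == alpha(x + x)   # pos
print(add(alpha(x), alpha(y)))  # 'top' -- the sign of 3 + (-7) is not determined
```

The loss of precision at `add(POS, NEG)` is exactly the kind of approximation an abstract interpretation trades for computability.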

  18. The relationship between lifestyle regularity and subjective sleep quality

    Science.gov (United States)

    Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.

    2003-01-01

    In previous work we have developed a diary instrument, the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity, and a questionnaire instrument, the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant negative correlation (rho = -0.4): subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, in which the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.

  19. Attribute Learning for SAR Image Classification

    Directory of Open Access Journals (Sweden)

    Chu He

    2017-04-01

    This paper presents a classification approach based on attribute learning for high spatial resolution Synthetic Aperture Radar (SAR) images. To explore the representative and discriminative attributes of SAR images, first, an iterative unsupervised algorithm is designed to cluster in the low-level feature space, where the maximum edge response and the ratio of mean to variance are included; a cross-validation step is applied to prevent overfitting. Second, the most discriminative clustering centers are selected to construct an attribute dictionary. Using the attribute dictionary, a representation vector describing certain categories in the SAR image can be generated, which in turn is used to perform the classification task. The experiments conducted on TerraSAR-X images indicate that the learned attributes have strong visual semantics, characterized by bright and dark spots, stripes, or their combinations. The classification method based on these learned attributes achieves better results.

  20. Borderline personality disorder and regularly drinking alcohol before sex.

    Science.gov (United States)

    Thompson, Ronald G; Eaton, Nicholas R; Hu, Mei-Chen; Hasin, Deborah S

    2017-07-01

    Drinking alcohol before sex increases the likelihood of engaging in unprotected intercourse, having multiple sexual partners and becoming infected with sexually transmitted infections. Borderline personality disorder (BPD), a complex psychiatric disorder characterised by pervasive instability in emotional regulation, self-image, interpersonal relationships and impulse control, is associated with substance use disorders and sexual risk behaviours. However, no study has examined the relationship between BPD and drinking alcohol before sex in the USA. This study examined the association between BPD and regularly drinking before sex in a nationally representative adult sample. Participants were 17,491 sexually active drinkers from Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions. Logistic regression models estimated effects of BPD diagnosis, specific borderline diagnostic criteria and BPD criterion count on the likelihood of regularly (mostly or always) drinking alcohol before sex, adjusted for controls. Borderline personality disorder diagnosis doubled the odds of regularly drinking before sex [adjusted odds ratio (AOR) = 2.26; confidence interval (CI) = 1.63, 3.14]. Of nine diagnostic criteria, impulsivity in areas that are self-damaging remained a significant predictor of regularly drinking before sex (AOR = 1.82; CI = 1.42, 2.35). The odds of regularly drinking before sex increased by 20% for each endorsed criterion (AOR = 1.20; CI = 1.14, 1.27). DISCUSSION AND CONCLUSIONS: This is the first study to examine the relationship between BPD and regularly drinking alcohol before sex in the USA. Substance misuse treatment should assess regularly drinking before sex, particularly among patients with BPD, and BPD treatment should assess risk at the intersection of impulsivity, sexual behaviour and substance use.
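
The adjusted odds ratios above are exponentiated logistic-regression coefficients; a small illustrative computation (the AOR values come from the abstract, the rest of the setup is hypothetical):

```python
import numpy as np

# Hypothetical illustration: an adjusted odds ratio is exp(beta) for the
# fitted logistic-regression coefficient beta of the predictor.
beta_bpd = np.log(2.26)              # coefficient implied by AOR = 2.26
assert abs(np.exp(beta_bpd) - 2.26) < 1e-9

# "Odds increased by 20% per endorsed criterion": on the model's
# multiplicative scale, endorsing k criteria multiplies the odds by 1.20**k.
k = 3
print(round(1.20 ** k, 3))           # 1.728
```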

  1. Generalized Bregman distances and convergence rates for non-convex regularization methods

    International Nuclear Information System (INIS)

    Grasmair, Markus

    2010-01-01

    We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^(1/p) holds if the regularization term has a slightly faster growth at zero than |t|^p.
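
For reference, the convex baseline that this abstract generalizes, Tikhonov regularization of a linear operator equation, can be sketched as follows (a finite-dimensional NumPy toy, not the paper's infinite-dimensional Hilbert-space setting):

```python
import numpy as np

# Tikhonov regularization of A x = b:
#   min ||A x - b||^2 + delta * ||x||^2
# whose closed-form minimizer is x = (A^T A + delta I)^{-1} A^T b.
def tikhonov(A, b, delta):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + delta * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
b = A @ x_true

# With a small regularization parameter and noise-free data the exact
# solution is recovered up to a perturbation of order delta.
x_hat = tikhonov(A, b, 1e-8)
assert np.allclose(x_hat, x_true, atol=1e-5)
```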

  2. Eryptosis in lead-exposed workers

    International Nuclear Information System (INIS)

    Aguilar-Dorado, Itzel-Citlalli; Hernández, Gerardo; Quintanar-Escorza, Martha-Angelica; Maldonado-Vega, María; Rosas-Flores, Margarita; Calderón-Salinas, José-Víctor

    2014-01-01

    Eryptosis is a physiological phenomenon in which old and damaged erythrocytes are removed from circulation. Erythrocytes incubated with lead have exhibited major eryptosis. In the present work we found evidence of high levels of eryptosis in lead exposed workers possibly via oxidation. Blood samples were taken from 40 male workers exposed to lead (mean blood lead concentration 64.8 μg/dl) and non-exposed workers (4.2 μg/dl). The exposure to lead produced an intoxication characterized by 88.3% less δ-aminolevulinic acid dehydratase (δALAD) activity in lead exposed workers with respect to non-lead exposed workers. An increment of oxidation in lead exposed workers was characterized by 2.4 times higher thiobarbituric acid-reactive substance (TBARS) concentration and 32.8% lower reduced/oxidized glutathione (GSH/GSSG) ratio. Oxidative stress in erythrocytes of lead exposed workers is expressed in 192% higher free calcium concentration [Ca2+]i and 1.6 times higher μ-calpain activity with respect to non-lead exposed workers. The adenosine triphosphate (ATP) concentration was not significantly different between the two worker groups. No externalization of phosphatidylserine (PS) was found in non-lead exposed workers (< 0.1%), but lead exposed workers showed 2.82% externalization. Lead intoxication induces eryptosis possibly through a molecular pathway that includes oxidation, depletion of reduced glutathione (GSH), increment of [Ca2+]i, μ-calpain activation and externalization of PS in erythrocytes. Identifying molecular signals that induce eryptosis in lead intoxication is necessary to understand its physiopathology and chronic complications. - Graphical abstract: Fig. 1. (A) Blood lead concentration (PbB) and (B) phosphatidylserine externalization on erythrocyte membranes of non-lead exposed (□) and lead exposed workers (■). Values are mean ± SD. *Significantly different (P < 0.001). - Highlights: • Erythrocytes of lead exposed workers showed higher PS

  4. Increased frequency of micronucleated exfoliated cells among humans exposed in vivo to mobile telephone radiations

    International Nuclear Information System (INIS)

    Manoj Kumar Sharma; Abhay Singh Yadav

    2007-01-01

    Complete text of publication follows. Health concerns have been raised following the enormous increase in the use of wireless mobile telephones throughout the world. This investigation was undertaken to find out whether mobile phone radiation causes any in vivo effects on the frequency of micronucleated exfoliated cells in exposed subjects. A total of 109 subjects, including 85 regular mobile phone users (exposed) and 24 non-users (controls), participated in this study. Exfoliated cells were obtained by swabbing the buccal mucosa of exposed subjects as well as sex- and age-matched controls. One thousand exfoliated cells were screened from each individual for nuclear anomalies including micronuclei (MN), karyolysis (KL), karyorrhexis (KH), broken egg (BE) and bi-nucleated (BN) cells. The average daily duration of exposure to mobile phone radiation was 61.26 minutes, with an overall average exposure duration of 2.35 years in exposed subjects, who showed 9.84±0.745 MNC (micronucleated cells) and 10.72±0.889 TMN (total micronuclei), compared with zero exposure and averages of 3.75±0.774 MNC and 4.00±0.808 TMN in controls. The means differ significantly for MNC and TMN at the 0.01% level of significance. For all other nuclear anomalies (KL, KH, BE and BN cells) the differences in means were statistically nonsignificant. A positive correlation was found between the frequencies of MNC and TMN and the duration of exposure.

  5. Breast ultrasound tomography with total-variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lianjie [Los Alamos National Laboratory]; Li, Cuiping [Karmanos Cancer Institute]; Duric, Neb [Karmanos Cancer Institute]

    2009-01-01

    Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers, which acquires both reflection and transmission ultrasound signals, has been designed and built at the Karmanos Cancer Institute. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate the applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients, and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.
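
The edge-preserving behavior of TV regularization versus Tikhonov smoothing can be illustrated on a 1-D toy signal (a simplified sketch, not the CURE reconstruction algorithm; the smoothed-TV gradient descent and all parameters are illustrative assumptions):

```python
import numpy as np

def denoise(y, penalty, lam=1.0, steps=5000, lr=0.005, eps=1e-3):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum(phi(diff(x)))."""
    x = y.copy()
    for _ in range(steps):
        d = np.diff(x)
        if penalty == "tv":
            w = d / np.sqrt(d * d + eps)   # phi(d) = sqrt(d^2 + eps), smoothed |d|
        else:                              # "tikhonov": phi(d) = d^2
            w = 2.0 * d
        g = x - y                          # gradient of the data term
        g[:-1] -= lam * w                  # divergence of the penalty gradient
        g[1:] += lam * w
        x -= lr * g
    return x

# Piecewise-constant signal with one jump, plus a small smooth perturbation.
y = np.concatenate([np.zeros(20), np.ones(20)]) + 0.05 * np.sin(np.arange(40))
tv, tik = denoise(y, "tv"), denoise(y, "tikhonov")

# TV preserves the sharp edge; the quadratic (Tikhonov) penalty blurs it.
assert tv[20] - tv[19] > tik[20] - tik[19]
```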

  6. Manufacture of Regularly Shaped Sol-Gel Pellets

    Science.gov (United States)

    Leventis, Nicholas; Johnston, James C.; Kinder, James D.

    2006-01-01

    An extrusion batch process for manufacturing regularly shaped sol-gel pellets has been devised as an improved alternative to a spray process that yields irregularly shaped pellets. The aspect ratio of regularly shaped pellets can be controlled more easily, while regularly shaped pellets pack more efficiently. In the extrusion process, a wet gel is pushed out of a mold and chopped repetitively into short, cylindrical pieces as it emerges from the mold. The pieces are collected and can be either (1) dried at ambient pressure to xerogel, (2) solvent exchanged and dried under ambient pressure to ambigels, or (3) supercritically dried to aerogel. Advantageously, the extruded pellets can be dropped directly in a cross-linking bath, where they develop a conformal polymer coating around the skeletal framework of the wet gel via reaction with the cross linker. These pellets can be dried to mechanically robust X-Aerogel.

  7. Regularization and Complexity Control in Feed-forward Networks

    OpenAIRE

    Bishop, C. M.

    1995-01-01

    In this paper we consider four alternative approaches to complexity control in feed-forward networks based respectively on architecture selection, regularization, early stopping, and training with noise. We show that there are close similarities between these approaches and we argue that, for most practical applications, the technique of regularization should be the method of choice.

  8. Manifold regularization for sparse unmixing of hyperspectral images.

    Science.gov (United States)

    Liu, Junmin; Zhang, Chunxia; Zhang, Jiangshe; Li, Huirong; Gao, Yuelin

    2016-01-01

    Recently, sparse unmixing has been successfully applied to spectral mixture analysis of remotely sensed hyperspectral images. Based on the assumption that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance, unmixing each mixed pixel in the scene amounts to finding an optimal subset of signatures in a very large spectral library, which is cast into the framework of sparse regression. However, traditional sparse regression models, such as collaborative sparse regression, ignore the intrinsic geometric structure in the hyperspectral data. In this paper, we propose a novel model, called manifold regularized collaborative sparse regression, by introducing a manifold regularization to the collaborative sparse regression model. The manifold regularization utilizes a graph Laplacian to incorporate the locally geometrical structure of the hyperspectral data. An algorithm based on the alternating direction method of multipliers has been developed for the manifold regularized collaborative sparse regression model. Experimental results on both simulated and real hyperspectral data sets have demonstrated the effectiveness of our proposed model.
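
The manifold-regularization ingredient, a graph Laplacian built from the data, can be sketched as follows (a minimal illustration with hypothetical "pixel" vectors, not the paper's ADMM solver):

```python
import numpy as np

# k-NN graph Laplacian L = D - W: the regularizer trace(X L X^T) penalizes
# abundance maps that differ between spectrally similar pixels.
def knn_graph_laplacian(Y, k=2, sigma=1.0):
    n = Y.shape[0]
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                # skip self
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                               # symmetrize
    return np.diag(W.sum(1)) - W

# Two well-separated clusters of hypothetical pixel vectors.
Y = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
L = knn_graph_laplacian(Y)
assert np.allclose(L.sum(1), 0)          # Laplacian rows sum to zero

# A map that is constant on each cluster incurs a (near-)zero manifold penalty.
x = np.array([1.0, 1.0, 3.0, 3.0])
assert x @ L @ x < 1e-6
```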

  9. Regularization dependence on phase diagram in Nambu–Jona-Lasinio model

    International Nuclear Information System (INIS)

    Kohyama, H.; Kimura, D.; Inagaki, T.

    2015-01-01

    We study the regularization dependence of meson properties and the phase diagram of quark matter by using the two-flavor Nambu–Jona-Lasinio model. The model also has a parameter dependence within each regularization, so we explicitly give the model parameters for some sets of input observables and then investigate their effect on the phase diagram. We find that the location, and even the existence, of the critical end point depends strongly on the regularization method and the model parameters. The regularization scheme and parameters must therefore be considered carefully when investigating the QCD critical end point in effective model studies.
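
The kind of scheme dependence described here can be mimicked numerically: a toy divergent momentum integral takes different finite values under a sharp three-momentum cutoff and a soft Gaussian form factor (illustrative only; the integrand and scales are assumptions, not the NJL gap equation):

```python
import numpy as np

def integrand(p, M=0.3):
    """Toy divergent vacuum-energy-like integrand p^2 / sqrt(p^2 + M^2)."""
    return p * p / np.sqrt(p * p + M * M)

p = np.linspace(0.0, 10.0, 200001)

# Scheme 1: sharp three-momentum cutoff at Lambda = 1.
sharp = np.trapz(np.where(p <= 1.0, integrand(p), 0.0), p)
# Scheme 2: soft Gaussian form factor with scale 0.7.
soft = np.trapz(integrand(p) * np.exp(-(p / 0.7) ** 2), p)

# Both schemes render the integral finite, but with different values, so any
# derived quantity (e.g. a critical point's location) is scheme dependent.
assert np.isfinite(sharp) and np.isfinite(soft)
assert abs(sharp - soft) > 0.05
```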

  10. Ca2+ promoted the low transformation efficiency of plasmid DNA exposed to PAH contaminants.

    Directory of Open Access Journals (Sweden)

    Fuxing Kang

    The effects of interactions between genetic materials and polycyclic aromatic hydrocarbons (PAHs) on gene expression in the extracellular environment remain to be elucidated, and little information is currently available on the effect of ionic strength on the transformation of plasmid DNA exposed to PAHs. Phenanthrene and pyrene were used as representative PAHs to evaluate the transformation of plasmid DNA after PAH exposure and to determine the role of Ca(2+) during the transformation. Plasmid DNA exposed to the test PAHs demonstrated low transformation efficiency. In the absence of PAHs, the transformation efficiency was 4.7 log units; however, the efficiency decreased to 3.72-3.14 log units with phenanthrene/pyrene exposures of 50 µg·L(-1). The addition of Ca(2+) enhanced the low transformation efficiency of DNA exposed to PAHs. Based on the co-sorption of Ca(2+) and phenanthrene/pyrene by DNA, we employed Fourier-transform infrared spectroscopy (FTIR), X-ray photoelectron spectroscopy (XPS), and mass spectrometry (MS) to determine the mechanisms involved in PAH-induced DNA transformation. The observed low transformation efficiency of DNA exposed to either phenanthrene or pyrene can be attributed to a broken hydrogen bond in the double helix caused by planar PAHs. Added Ca(2+) formed strong electrovalent bonds with "-POO(-)-" groups in the DNA, weakening the interaction between PAHs and DNA based on weak molecular forces. This decreased the damage of PAHs to hydrogen bonds in double-stranded DNA by isolating DNA molecules from PAHs and consequently enhanced the transformation efficiency of DNA exposed to PAH contaminants. The findings provide insight into the effects of anthropogenic trace PAHs on DNA transfer in natural environments.

  11. Gender Attributions of Science and Academic Attributes: AN Examination of Undergraduate Science, Mathematics, and Technology Majors

    Science.gov (United States)

    Hughes, W. Jay

    Questionnaire data (n = 297) examined the relationship between gender attributions of science and academic attributes for undergraduate science, mathematics, and technology majors from the perspective of gender schema theory. Female and male respondents perceived that (a) the role of scientist was sex typed as masculine, (b) their majors were more valuable for members of their gender than for those of the opposite gender, and (c) their majors were more valuable for themselves than for members of their gender in general. Androgynous attributions of scientists and the value of one's major for women predicted value for oneself, major confidence, and career confidence, and masculine attributions of scientists predicted class participation for female respondents. Feminine attributions of scientists predicted graduate school intent; value for women predicted major confidence and subjective achievement, and value for men predicted value for oneself, course confidence, and career confidence for male respondents.

  12. Habitat selection in a rocky landscape: experimentally decoupling the influence of retreat site attributes from that of landscape features.

    Directory of Open Access Journals (Sweden)

    Benjamin M Croak

    Organisms selecting retreat sites may evaluate not only the quality of the specific shelter, but also the proximity of that site to resources in the surrounding area. Distinguishing between habitat selection at these two spatial scales is complicated by co-variation among microhabitat factors (i.e., the attributes of individual retreat sites often correlate with their proximity to landscape features). Disentangling this co-variation may facilitate the restoration or conservation of threatened systems. To experimentally examine the role of landscape attributes in determining retreat-site quality for saxicolous ectotherms, we deployed 198 identical artificial rocks in open (sun-exposed) sites on sandstone outcrops in southeastern Australia, and recorded faunal usage of those retreat sites over the next 29 months. Several landscape-scale attributes were associated with occupancy of experimental rocks, but different features were important for different species. For example, endangered broad-headed snakes (Hoplocephalus bungaroides) preferred retreat sites close to cliff edges, flat rock spiders (Hemicloea major) preferred small outcrops, and velvet geckos (Oedura lesueurii) preferred rocks close to the cliff edge with higher-than-average sun exposure. Standardized retreat sites can provide robust experimental data on the effects of landscape-scale attributes on retreat site selection, revealing interspecific divergences among sympatric taxa that use similar habitats.

  13. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
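
A minimal sketch of the multiscale-kernel idea (using kernel ridge regression as a stand-in for the paper's ranking loss; all data and parameters are illustrative assumptions):

```python
import numpy as np

# Combine Gaussian kernels at several scales into one kernel and fit a
# regularized predictor via the representer theorem:
#   f(x) = sum_i alpha_i K(x, x_i).
def multiscale_kernel(X1, X2, scales=(0.5, 1.0, 2.0)):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sum(np.exp(-d2 / (2 * s ** 2)) for s in scales)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, 60)
y = np.sin(X)                                          # smooth target function
K = multiscale_kernel(X, X)
alpha = np.linalg.solve(K + 0.1 * np.eye(len(X)), y)   # regularized fit

X_test = np.linspace(-3, 3, 50)
y_hat = multiscale_kernel(X_test, X) @ alpha

# The multiscale fit tracks the target closely across the whole interval.
err = np.max(np.abs(y_hat - np.sin(X_test)))
assert err < 0.3
```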

  14. Top-down attention affects sequential regularity representation in the human visual system.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-08-01

    Recent neuroscience studies using visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the visual sensory system, have shown that although sequential regularities embedded in successive visual stimuli can be automatically represented in the visual sensory system, the existence of a sequential regularity itself does not guarantee that the sequential regularity will be automatically represented. In the present study, we investigated the effects of top-down attention on sequential regularity representation in the visual sensory system. Our results showed that a sequential regularity (SSSSD) embedded in a modified oddball sequence, in which infrequent deviant (D) and frequent standard stimuli (S) differing in luminance were regularly presented (SSSSDSSSSDSSSSD...), was represented in the visual sensory system only when participants attended the sequential regularity in luminance, but not when participants ignored the stimuli or simply attended the dimension of luminance per se. This suggests that top-down attention affects sequential regularity representation in the visual sensory system and that top-down attention is a prerequisite for particular sequential regularities to be represented.

  15. Regularized Discriminant Analysis: A Large Dimensional Study

    KAUST Repository

    Yang, Xiaoke

    2018-04-28

    In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis are assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for the application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that only depends on the data statistical parameters and dimensions. This result not only reveals mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the underlying statistics are unknown. We benchmark the performance of our proposed consistent estimator against the classical estimator on synthetic data. The observations demonstrate that the general estimator outperforms others in terms of mean squared error (MSE).
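
A small sketch of the RDA family (an illustrative shrinkage form; the thesis's exact parameterization may differ): each class covariance is shrunk toward the identity before applying the Gaussian discriminant rule.

```python
import numpy as np

def rda_fit(X, y, gamma=0.5):
    """Per-class mean and shrunk covariance: S_reg = (1-gamma)*S + gamma*I."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        S = np.cov(Xc, rowvar=False)
        S_reg = (1 - gamma) * S + gamma * np.eye(X.shape[1])
        params[c] = (Xc.mean(0), np.linalg.inv(S_reg),
                     np.linalg.slogdet(S_reg)[1])
    return params

def rda_predict(params, X):
    """Quadratic Gaussian discriminant scores with the regularized covariances."""
    scores = []
    for c, (mu, P, logdet) in sorted(params.items()):
        d = X - mu
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, P, d) + logdet))
    return np.argmax(np.array(scores), axis=0)

rng = np.random.default_rng(0)
X0 = rng.standard_normal((100, 2)) + [-2, 0]
X1 = rng.standard_normal((100, 2)) + [2, 0]
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 100)
acc = np.mean(rda_predict(rda_fit(X, y), X) == y)
assert acc > 0.95
```

With gamma = 1 the covariance becomes the identity and the rule reduces to a nearest-mean-like classifier; gamma = 0 recovers plain QDA.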

  16. Adaptive Regularization of Neural Networks Using Conjugate Gradient

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Andersen et al. (1997) and Larsen et al. (1996, 1997) suggested a regularization scheme which iteratively adapts regularization parameters by minimizing validation error using simple gradient descent. In this contribution we present an improved algorithm based on the conjugate gradient technique. … Numerical experiments with feedforward neural networks successfully demonstrate improved generalization ability and lower computational cost…
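
The underlying idea, choosing regularization parameters by minimizing validation error, can be sketched with ridge regression and a simple grid scan (a gradient-free stand-in for the paper's conjugate-gradient update; the data are synthetic assumptions):

```python
import numpy as np

def ridge(Xtr, ytr, lam):
    """Closed-form ridge solution (A^T A + lam I)^{-1} A^T y."""
    n = Xtr.shape[1]
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n), Xtr.T @ ytr)

rng = np.random.default_rng(2)
X = rng.standard_normal((80, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.5 * rng.standard_normal(80)
Xtr, ytr, Xva, yva = X[:40], y[:40], X[40:], y[40:]

# Select the regularization parameter that minimizes validation error.
lams = 10.0 ** np.arange(-4, 3)
val_err = [np.mean((Xva @ ridge(Xtr, ytr, l) - yva) ** 2) for l in lams]
best = lams[int(np.argmin(val_err))]
assert best in lams
assert min(val_err) <= val_err[-1]   # chosen lam beats the over-smoothed extreme
```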

  17. 20 CFR 226.33 - Spouse regular annuity rate.

    Science.gov (United States)

    2010-04-01

    20 CFR 226.33 (Employees' Benefits; Computing Employee, Spouse, and Divorced Spouse Annuities; Computing a Spouse or Divorced Spouse Annuity), Spouse regular annuity rate: The final tier I and tier II rates, from §§ 226.30 and 226.32, are...

  18. Paranormal belief and attributional style.

    Science.gov (United States)

    Dudley, R T; Whisnand, E A

    2000-06-01

    52 college students completed Tobacyk's 1988 Revised Paranormal Belief Scale and the 1982 Attributional Style Questionnaire of Peterson, Semmel, von Baeyer, Abramson, Metalsky, and Seligman. Analysis showed significantly more depressive attributional styles among high scorers on paranormal phenomena than among low scorers.

  19. Female non-regular workers in Japan: their current status and health.

    Science.gov (United States)

    Inoue, Mariko; Nishikitani, Mariko; Tsurugano, Shinobu

    2016-12-07

    The participation of women in the Japanese labor force is characterized by its M-shaped curve, which reflects decreased employment rates during child-rearing years. Although this M-shaped curve is now improving, the majority of women in employment are likely to fall into the category of non-regular workers. Based on a review of previous Japanese studies of the health of non-regular workers, we found that non-regular female workers experienced greater psychological distress, poorer self-rated health, a higher smoking rate, and less access to preventive medicine than regular workers did. However, despite the large number of non-regular workers, there is limited research regarding their health. In contrast, several studies in Japan concluded that regular workers also had worse health conditions due to the additional responsibility and longer work hours associated with the job, housekeeping, and child rearing. The health of non-regular workers might be threatened by the effects of precarious employment status, lower income, a weaker safety net, outdated social norms regarding non-regular workers, and difficulty in achieving a work-life balance. A sector-wide social approach that considers the life course is needed to protect female workers' health and well-being; promotion of occupational health programs alone is insufficient.

  1. PET regularization by envelope guided conjugate gradients

    International Nuclear Information System (INIS)

    Kaufman, L.; Neumaier, A.

    1996-01-01

    The authors propose a new way to iteratively solve large scale ill-posed problems and in particular the image reconstruction problem in positron emission tomography by exploiting the relation between Tikhonov regularization and multiobjective optimization to obtain iteratively approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows us to adjust the regularization parameter adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations

  2. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.; Franek, M.; Schonlieb, C.-B.

    2012-01-01

    for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations

  3. Regularization of Nonmonotone Variational Inequalities

    International Nuclear Information System (INIS)

    Konnov, Igor V.; Ali, M.S.S.; Mazurkevich, E.O.

    2006-01-01

    In this paper we extend the Tikhonov-Browder regularization scheme from monotone to a rather general class of nonmonotone multivalued variational inequalities. We show that its convergence conditions hold for some classes of perfectly and nonperfectly competitive economic equilibrium problems.

  4. Interval matrices: Regularity generates singularity

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří; Shary, S.P.

    2018-01-01

    Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords : interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016

  5. The hearing of rural workers exposed to noise and pesticides.

    Science.gov (United States)

    Sena, Tereza R R; Dourado, Solano S F; Lima, Lucas V; Antoniolli, Ângelo R

    2018-01-01

    In work environments, different physical and chemical agents that may pose a risk to workers' hearing health coexist. In this context, occupational hearing loss stands out. It has mostly been attributed to noise exposure alone, although other agents, such as pesticides, may also contribute to occupational hearing loss. In this report, we present two cases of rural workers exposed to pesticides and to the intense noise generated by an adapted rudimentary vehicle. The noise measured in this vehicle ranged from 88.3 dBA to 93.4 dBA. Pure-tone audiometry, distortion product otoacoustic emissions, and high-frequency audiometry tests were performed. This report is unusual because of the short time of exposure to noise and pesticides and the hearing loss found, indicating a synergy between those agents.

  6. Maturation of social attribution skills in typically developing children: an investigation using the social attribution task.

    Science.gov (United States)

    Hu, Zhouyi; Chan, Raymond C K; McAlonan, Grainne M

    2010-02-03

    The assessment of social attribution skills in children can potentially identify and quantify developmental difficulties related to autism spectrum disorders and related conditions. However, relatively little is known about how these skills develop in typically developing children. Therefore the present study aimed to map the trajectory of social attribution skill acquisition in typically developing children from a young age. In the conventional social attribution task (SAT) participants ascribe feelings to moving shapes and describe their interaction in social terms. However, this format requires that participants understand both that an inanimate shape is symbolic and that its action is social in nature. This may be challenging for young children, and may be a potential confounder in studies of children with developmental disorders. Therefore we developed a modified SAT (mSAT) using animate figures (e.g. animals) to simplify the task. We used the SAT and mSAT to examine social attribution skill development in 154 healthy children (76 boys, 78 girls), ranging in age from 6 to 13 years and investigated the relationship between social attribution ability and executive function. The mSAT revealed a steady improvement in social attribution skills from the age of 6 years, and a significant advantage for girls compared to boys. In contrast, children under the age of 9 years performed at baseline on the conventional format and there were no gender differences apparent. Performance on neither task correlated with executive function after controlling for age and verbal IQ, suggesting that social attribution ability is independent of cognitive functioning. The present findings indicate that the mSAT is a sensitive measure of social attribution skills from a young age. This should be carefully considered when choosing assessments for young children and those with developmental disorders.

  7. Maturation of social attribution skills in typically developing children: an investigation using the social attribution task

    Directory of Open Access Journals (Sweden)

    Chan Raymond CK

    2010-02-01

    Full Text Available Abstract Background The assessment of social attribution skills in children can potentially identify and quantify developmental difficulties related to autism spectrum disorders and related conditions. However, relatively little is known about how these skills develop in typically developing children. Therefore the present study aimed to map the trajectory of social attribution skill acquisition in typically developing children from a young age. Methods In the conventional social attribution task (SAT) participants ascribe feelings to moving shapes and describe their interaction in social terms. However, this format requires that participants understand both that an inanimate shape is symbolic and that its action is social in nature. This may be challenging for young children, and may be a potential confounder in studies of children with developmental disorders. Therefore we developed a modified SAT (mSAT) using animate figures (e.g. animals) to simplify the task. We used the SAT and mSAT to examine social attribution skill development in 154 healthy children (76 boys, 78 girls), ranging in age from 6 to 13 years and investigated the relationship between social attribution ability and executive function. Results The mSAT revealed a steady improvement in social attribution skills from the age of 6 years, and a significant advantage for girls compared to boys. In contrast, children under the age of 9 years performed at baseline on the conventional format and there were no gender differences apparent. Performance on neither task correlated with executive function after controlling for age and verbal IQ, suggesting that social attribution ability is independent of cognitive functioning. The present findings indicate that the mSAT is a sensitive measure of social attribution skills from a young age. This should be carefully considered when choosing assessments for young children and those with developmental disorders.

  8. Eryptosis in lead-exposed workers.

    Science.gov (United States)

    Aguilar-Dorado, Itzel-Citlalli; Hernández, Gerardo; Quintanar-Escorza, Martha-Angelica; Maldonado-Vega, María; Rosas-Flores, Margarita; Calderón-Salinas, José-Víctor

    2014-12-01

    Eryptosis is a physiological phenomenon in which old and damaged erythrocytes are removed from circulation. Erythrocytes incubated with lead have exhibited major eryptosis. In the present work we found evidence of high levels of eryptosis in lead exposed workers, possibly via oxidation. Blood samples were taken from 40 male workers exposed to lead (mean blood lead concentration 64.8μg/dl) and non-exposed workers (4.2μg/dl). The exposure to lead produced an intoxication characterized by 88.3% less δ-aminolevulinic acid dehydratase (δALAD) activity in lead exposed workers with respect to non-lead exposed workers. An increment of oxidation in lead exposed workers was characterized by 2.4 times higher thiobarbituric acid-reactive substance (TBARS) concentration and a 32.8% lower reduced/oxidized glutathione (GSH/GSSG) ratio. Oxidative stress in erythrocytes of lead exposed workers is expressed in 192% higher free calcium concentration [Ca(2+)]i and 1.6 times higher μ-calpain activity with respect to non-lead exposed workers. The adenosine triphosphate (ATP) concentration was not significantly different between the two worker groups. No externalization of phosphatidylserine (PS) was found in non-lead exposed workers, whereas lead exposed workers showed 2.82% externalization. Lead intoxication induces eryptosis possibly through a molecular pathway that includes oxidation, depletion of reduced glutathione (GSH), increment of [Ca(2+)]i, μ-calpain activation and externalization of PS in erythrocytes. Identifying molecular signals that induce eryptosis in lead intoxication is necessary to understand its physiopathology and chronic complications. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Regular perturbations in a vector space with indefinite metric

    International Nuclear Information System (INIS)

    Chiang, C.C.

    1975-08-01

    The Klein space is discussed in connection with practical applications. Some lemmas are presented which are to be used for the discussion of regular self-adjoint operators. The criteria for the regularity of perturbed operators are given. (U.S.)

  10. Regular Generalized Star Star closed sets in Bitopological Spaces

    OpenAIRE

    K. Kannan; D. Narasimhan; K. Chandrasekhara Rao; R. Ravikumar

    2011-01-01

    The aim of this paper is to introduce the concepts of τ1τ2-regular generalized star star closed sets, τ1τ2-regular generalized star star open sets and study their basic properties in bitopological spaces.

  11. Solution path for manifold regularized semisupervised classification.

    Science.gov (United States)

    Wang, Gang; Wang, Fei; Chen, Tao; Yeung, Dit-Yan; Lochovsky, Frederick H

    2012-04-01

    Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time consuming to obtain since they require substantial human labeling efforts. On the other hand, unlabeled data are often relatively easy to collect. Semisupervised learning addresses this problem by using large quantities of unlabeled data with labeled data to build better learning algorithms. In this paper, we use the manifold regularization approach to formulate the semisupervised learning problem where a regularization framework which balances a tradeoff between loss and penalty is established. We investigate different implementations of the loss function and identify the methods which have the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that can fit the entire path of solutions for every value of the hyperparameter. Its computational complexity after preprocessing is quadratic only in the number of labeled examples rather than the total number of labeled and unlabeled examples.
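
    As a rough illustration of the manifold regularization framework the abstract describes (a loss on labeled points balanced against an ambient penalty and a graph-Laplacian penalty on all points), the sketch below implements a minimal Laplacian-regularized least squares classifier for one fixed hyperparameter setting. It does not compute the authors' full solution path, and all function and parameter names are hypothetical.

```python
import numpy as np

def lap_rls(X, y_lab, n_lab, gam_a, gam_i, sigma=1.0):
    """Minimal Laplacian-regularized least squares sketch.
    Labeled points come first in X; y_lab holds their +1/-1 labels.
    gam_a weights the ambient (RKHS norm) penalty, gam_i the
    manifold (graph Laplacian) penalty."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))        # RBF kernel on all points
    W = K                                    # reuse kernel as graph weights
    L = np.diag(W.sum(axis=1)) - W           # graph Laplacian
    J = np.zeros((n, n))
    J[:n_lab, :n_lab] = np.eye(n_lab)        # selects the labeled rows
    y = np.zeros(n)
    y[:n_lab] = y_lab
    # Closed-form expansion coefficients over labeled + unlabeled points.
    alpha = np.linalg.solve(J @ K + gam_a * np.eye(n) + gam_i * L @ K, y)

    def predict(Z):
        kz = np.exp(-((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
                    / (2 * sigma**2))
        return kz @ alpha
    return predict

# Two tight clusters with one labeled point each; the rest unlabeled.
rng = np.random.default_rng(1)
a = rng.normal([0.0, 0.0], 0.2, size=(8, 2))
b_ = rng.normal([6.0, 6.0], 0.2, size=(8, 2))
X = np.vstack([a[:1], b_[:1], a[1:], b_[1:]])
f = lap_rls(X, np.array([1.0, -1.0]), 2, gam_a=0.05, gam_i=0.05)
preds = np.sign(f(X[2:]))   # unlabeled points inherit cluster labels
```

    Sweeping the hyperparameter here would mean re-solving the linear system for each value; the abstract's contribution is precisely an algorithm that traces the whole path of solutions without that brute-force sweep.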

  12. (2+1)-dimensional regular black holes with nonlinear electrodynamics sources

    Directory of Open Access Journals (Sweden)

    Yun He

    2017-11-01

    Full Text Available On the basis of two requirements, namely the avoidance of the curvature singularity and recovery of the Maxwell theory as the weak-field limit of the nonlinear electrodynamics, we find two restricted conditions on the metric function of a (2+1)-dimensional regular black hole in general relativity coupled with nonlinear electrodynamics sources. Using these two conditions, we obtain a general approach to constructing (2+1)-dimensional regular black holes. In this manner, we construct four (2+1)-dimensional regular black holes as examples. We also study the thermodynamic properties of the regular black holes and verify the first law of black hole thermodynamics.

  13. Assessment of Different Remote Sensing Data for Forest Structural Attributes Estimation in the Hyrcanian forests

    Energy Technology Data Exchange (ETDEWEB)

    Nourian, N.; Shataee-Joibary, S.; Mohammadi, J.

    2016-07-01

    Aim of the study: The objective of the study was the comparative assessment of various spatial resolutions of optical satellite imagery, including Landsat-TM, ASTER, and Quickbird data, to estimate the forest structure attributes of the Hyrcanian forests, Golestan province, northern Iran. Material and methods: The 112 square plots with an area of 0.09 ha were measured using a random cluster sampling method, and stand volume, basal area, and tree stem density were computed from the measured data. After geometric and atmospheric corrections of the images, the spectral attributes of the original and different synthetic bands were extracted for modelling. The statistical modelling was performed using the CART algorithm. Performance of the models was assessed on the unused validation plots using RMSE and bias measures. Main results: The results showed that the Quickbird models for stand volume, basal area, and tree stem density performed better than those based on ASTER and TM data. However, estimations from ASTER and TM imagery gave broadly similar results for all three parameters. Research highlights: This study showed that high-resolution satellite data are more useful for estimating forest structure attributes in the Hyrcanian broadleaved forests than medium-resolution images, leaving image costs aside. However, since most medium-resolution data, such as ASTER and TM, ETM+, or OLI images, are freely available, these data can be used with only slightly poorer results. (Author)

  14. Attributable risk of carpal tunnel syndrome according to industry and occupation in a general population.

    Science.gov (United States)

    Roquelaure, Yves; Ha, Catherine; Nicolas, Guillaume; Pélier-Cady, Marie-Christine; Mariot, Camille; Descatha, Alexis; Leclerc, Annette; Raimbeau, Guy; Goldberg, Marcel; Imbernon, Ellen

    2008-09-15

    An epidemiologic surveillance network for carpal tunnel syndrome (CTS) was set up in the general population of a French region to assess the proportion of CTS cases attributable to work in high-risk industries and occupations. Cases of CTS occurring among patients ages 20-59 years living in the Maine and Loire region were included prospectively from 2002 to 2004. Medical and occupation history was gathered by mailed questionnaire for 815 women and 320 men. Age-adjusted relative risks of CTS and the attributable risk fractions of CTS among exposed persons (AFEs) were computed in relation to industry sectors and occupation categories. Twenty-one industry sectors and 8 occupational categories for women and 10 sectors and 6 occupational categories for men were characterized by a significant excess risk of CTS. High AFE values were observed in the manufacturing (42-93% for both sexes), construction (66% for men), and personal service industries (66% for women) and in the trade and commerce sectors (49% for women). High AFE values were observed in lower-grade white-collar occupations for women (43-67%) and blue-collar occupations for men (60-74%) and women (48-88%). The attributable proportions of CTS cases among workers employed in industry sectors and occupation categories identified at high risk of CTS varied between 36% and 93%.
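
    The AFE values reported above follow, for a given age-adjusted relative risk, from the standard epidemiological definition AFE = (RR − 1)/RR. The abstract does not state the formula, so the helper below (its name included) is an illustrative assumption based on that standard definition.

```python
def afe(relative_risk):
    """Attributable fraction among the exposed,
    AFE = (RR - 1) / RR (standard epidemiological formula).
    Returns 0 when there is no excess risk (RR <= 1)."""
    if relative_risk <= 1.0:
        return 0.0
    return (relative_risk - 1.0) / relative_risk

# A relative risk of 2.5 implies that 60% of cases among the
# exposed are attributable to the exposure.
print(round(afe(2.5), 2))  # 0.6
```

    Reading the formula backwards also explains the reported ranges: an AFE of 93% corresponds to a relative risk of roughly 14, while an AFE of 36% corresponds to a relative risk of about 1.6.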

  15. Regularity of the Maxwell equations in heterogeneous media and Lipschitz domains

    KAUST Repository

    Bonito, Andrea

    2013-12-01

    This note establishes regularity estimates for the solution of the Maxwell equations in Lipschitz domains with non-smooth coefficients and minimal regularity assumptions. The argumentation relies on elliptic regularity estimates for the Poisson problem with non-smooth coefficients. © 2013 Elsevier Ltd.

  16. Sensory Attribute Identification Time Cannot Explain the Common Temporal Limit of Binding Different Attributes and Modalities

    Directory of Open Access Journals (Sweden)

    Waka Fujisaki

    2011-10-01

    Full Text Available An informative performance measure of the brain's integration across different sensory attributes/modalities is the critical temporal rate of feature alternation (between, e.g., red and green) beyond which observers could not identify the feature value specified by a timing signal from another attribute (e.g., a pitch change). Interestingly, this limit, which we called the critical crowding frequency (CCF), is fairly low and nearly constant (∼2.5 Hz) regardless of the combination of attributes and modalities (Fujisaki & Nishida, 2010, IMRF). One may consider that the CCF reflects the processing time required for the brain to identify the specified feature value on the fly. According to this idea, the similarity in CCF could be ascribed to the similarity in identification time for the attributes we used (luminance, color, orientation, pitch, vibration). To test this idea, we estimated the identification time of each attribute from [Go/No-Go choice reaction time – simple reaction time]. In disagreement with the prediction, we found significant differences among attributes (e.g., ∼160 ms for orientation, ∼70 ms for pitch). The results are more consistent with our proposal (Fujisaki & Nishida, Proc Roy Soc B) that the CCF reflects the common rate limit of specifying what happens when (timing-content binding) by a central, presumably postdictive, mechanism.

  17. Parental Health Attributions of Childhood Health and Illness: Development of the Pediatric Cultural Health Attributions Questionnaire (Pedi-CHAQ).

    Science.gov (United States)

    Vaughn, Lisa M; McLinden, Daniel J; Shellmer, Diana; Baker, Raymond C

    2011-01-01

    The causes attributed to childhood health and illness across cultures (cultural health attributions) are key factors that are now more frequently identified as affecting the health outcomes of children. Research suggests that the causes attributed to an event such as illness are thought to affect subsequent motivation, emotional response, decision making, and behavior. To date, there is no measure of health attributions appropriate for use with parents of pediatric patients. Using the Many-Facets approach to Rasch analysis, this study assesses the psychometrics of a newly developed instrument, the Pediatric Health Attributions Questionnaire (Pedi-CHAQ), a measure designed to assess the cultural health attributions of parents in diverse communities. Results suggest acceptable Rasch model statistics of fit and reliability for the Pedi-CHAQ. A shortened version of the questionnaire was developed as a result of this study and next steps are discussed.

  18. Regularized forecasting of chaotic dynamical systems

    International Nuclear Information System (INIS)

    Bollt, Erik M.

    2017-01-01

    While local models of dynamical systems have been highly successful in using extensive data sets observing even a chaotic dynamical system to produce useful forecasts, there is a typical problem, as follows. With the k-nearest neighbors (kNN) method, local observations occur due to recurrences in a chaotic system, and this allows local models to be built by regression to low-dimensional polynomial approximations of the underlying system, estimating a Taylor series. This has been a popular approach, particularly in the context of scalar data observations represented by time-delay embedding methods. However, such local models can allow for spatial discontinuities of forecasts when considered globally, meaning jumps in predictions, because the collected near neighbors vary from point to point. The source of these discontinuities is that the set of near neighbors varies discontinuously with respect to the position of the sample point, and so therefore does the model built from the near neighbors. It is possible to utilize local information inferred from near neighbors as usual, but at the same time to impose a degree of regularity on a global scale. We present here a new global perspective extending the general local modeling concept. In so doing, we show how this perspective allows us to impose presumed regularity into the model by invoking Tikhonov regularization theory, since this classic perspective of optimization in ill-posed problems naturally balances fitting an objective against some prior assumed form of the result, such as continuity or derivative regularity. This all reduces to matrix manipulations, which we demonstrate on a simple data set, with the implication that it may find much broader application.

  19. Graph theoretical ordering of structures as a basis for systematic searches for regularities in molecular data

    International Nuclear Information System (INIS)

    Randic, M.; Wilkins, C.L.

    1979-01-01

    Selected molecular data on alkanes have been reexamined in a search for general regularities in isomeric variations. In contrast to the prevailing approaches concerned with fitting data by searching for optimal parameterization, the present work is primarily aimed at establishing trends, i.e., searching for relative magnitudes and their regularities among the isomers. Such an approach is complementary to curve fitting or correlation seeking procedures. It is particularly useful when there are incomplete data which allow trends to be recognized but no quantitative correlation to be established. One proceeds by first ordering structures. One way is to consider molecular graphs and enumerate paths of different length as the basic graph invariant. It can be shown that, for several thermodynamic molecular properties, the numbers of paths of length two (p2) and length three (p3) are critical. Hence, an ordering based on p2 and p3 indicates possible trends and behavior for many molecular properties, some of which relate to others, some of which do not. By considering a grid graph derived by attributing to each isomer the coordinates (p2, p3) and connecting points along the coordinate axes, one obtains a simple presentation useful for isomer structural interrelations. This skeletal frame is one upon which possible trends for different molecular properties may be conveniently represented. The significance of the results and their conceptual value is discussed. 16 figures, 3 tables
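
    For tree-shaped molecular graphs such as alkane skeletons, the path counts the abstract relies on can be computed directly from vertex degrees via standard graph identities; this is an illustration of those identities, not code from the paper, and the function name and graph encoding are assumptions.

```python
def path_counts(adj):
    """Count paths of length two (p2) and three (p3) in a simple
    acyclic graph given as an adjacency dict {atom: set_of_neighbors}.
    For trees: p2 = sum over vertices of C(deg, 2), and
    p3 = sum over edges (u, v) of (deg(u) - 1) * (deg(v) - 1)."""
    deg = {v: len(ns) for v, ns in adj.items()}
    p2 = sum(d * (d - 1) // 2 for d in deg.values())
    edges = {frozenset((u, v)) for u, ns in adj.items() for v in ns}
    p3 = sum((deg[u] - 1) * (deg[v] - 1)
             for u, v in (tuple(e) for e in edges))
    return p2, p3

# n-butane (chain C1-C2-C3-C4) has two paths of length 2 and one of
# length 3; isobutane (a central carbon with three methyls) has three
# paths of length 2 and none of length 3.
n_butane = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
isobutane = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(path_counts(n_butane), path_counts(isobutane))  # (2, 1) (3, 0)
```

    The isomer pair above already shows the ordering idea: the two C4H10 isomers are distinguished by their (p2, p3) coordinates alone.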

  20. Forcing absoluteness and regularity properties

    NARCIS (Netherlands)

    Ikegami, D.

    2010-01-01

    For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.

  1. Arithmetic properties of $\\ell$-regular overpartition pairs

    OpenAIRE

    NAIKA, MEGADAHALLI SIDDA MAHADEVA; SHIVASHANKAR, CHANDRAPPA

    2017-01-01

    In this paper, we investigate the arithmetic properties of $\\ell$-regular overpartition pairs. Let $\\overline{B}_{\\ell}(n)$ denote the number of $\\ell$-regular overpartition pairs of $n$. We will prove the number of Ramanujan-like congruences and infinite families of congruences modulo 3, 8, 16, 36, 48, 96 for $\\overline{B}_3(n)$ and modulo 3, 16, 64, 96 for $\\overline{B}_4(n)$. For example, we find that for all nonnegative integers $\\alpha$ and $n$, $\\overline{B}_{3}(3^{\\alpha}(3n+2))\\equiv ...

  2. Chaos regularization of quantum tunneling rates

    International Nuclear Information System (INIS)

    Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward

    2011-01-01

    Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics the tunneling rates fluctuate greatly with eigenenergies of the states sometimes by over two orders of magnitude. Contrarily, shapes that lead to completely chaotic trajectories lead to tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.

  3. Regularization Tools Version 3.0 for Matlab 5.2

    DEFF Research Database (Denmark)

    Hansen, Per Christian

    1999-01-01

    This communication describes Version 3.0 of Regularization Tools, a Matlab package for analysis and solution of discrete ill-posed problems.

  4. Spiking Regularity and Coherence in Complex Hodgkin–Huxley Neuron Networks

    International Nuclear Information System (INIS)

    Zhi-Qiang, Sun; Ping, Xie; Wei, Li; Peng-Ye, Wang

    2010-01-01

    We study the effects of the strength of coupling between neurons on the spiking regularity and coherence in a complex network with randomly connected Hodgkin–Huxley neurons driven by colored noise. It is found that for the given topology realization and colored noise correlation time, there exists an optimal strength of coupling, at which the spiking regularity of the network reaches the best level. Moreover, when the temporal regularity reaches the best level, the spatial coherence of the system has already increased to a relatively high level. In addition, for the given number of neurons and noise correlation time, the values of average regularity and spatial coherence at the optimal strength of coupling are nearly independent of the topology realization. Furthermore, there exists an optimal value of colored noise correlation time at which the spiking regularity can reach its best level. These results may be helpful for understanding of the real neuron world. (cross-disciplinary physics and related areas of science and technology)

  5. Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications

    Science.gov (United States)

    Chaki, Sagar; Gurfinkel, Arie

    2010-01-01

    We develop a learning-based automated Assume-Guarantee (AG) reasoning framework for verifying omega-regular properties of concurrent systems. We study the applicability of non-circular (AG-NC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning-based automated AG paradigm. We show how existing approaches for automated AG reasoning are special instances of LAG. We develop two learning algorithms for a class of systems, called infinite regular systems, that combine finite and infinite behaviors. We show that for infinite regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for infinite regular, and omega-regular, systems using both AG-NC and AG-C as proof rules.

  6. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential

  7. Subcortical processing of speech regularities underlies reading and music aptitude in children

    Science.gov (United States)

    2011-01-01

    Background Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. Methods We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Results Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. Conclusions These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input

  8. Subcortical processing of speech regularities underlies reading and music aptitude in children.

    Science.gov (United States)

    Strait, Dana L; Hornickel, Jane; Kraus, Nina

    2011-10-17

    Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input. 

  9. Subcortical processing of speech regularities underlies reading and music aptitude in children

    Directory of Open Access Journals (Sweden)

    Strait Dana L

    2011-10-01

Full Text Available Abstract Background Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. Methods We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Results Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. Conclusions These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input.

  10. Surface-based prostate registration with biomechanical regularization

    Science.gov (United States)

    van de Ven, Wendy J. M.; Hu, Yipeng; Barentsz, Jelle O.; Karssemeijer, Nico; Barratt, Dean; Huisman, Henkjan J.

    2013-03-01

    Adding MR-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected on ultrasound by using MRUS registration. A common approach is to use surface-based registration. We hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular surface-based registration method. We developed a novel method by extending a surface-based registration with finite element (FE) simulation to better predict internal deformation of the prostate. For each of six patients, a tetrahedral mesh was constructed from the manual prostate segmentation. Next, the internal prostate deformation was simulated using the derived radial surface displacement as boundary condition. The deformation field within the gland was calculated using the predicted FE node displacements and thin-plate spline interpolation. We tested our method on MR guided MR biopsy imaging data, as landmarks can easily be identified on MR images. For evaluation of the registration accuracy we used 45 anatomical landmarks located in all regions of the prostate. Our results show that the median target registration error of a surface-based registration with biomechanical regularization is 1.88 mm, which is significantly different from 2.61 mm without biomechanical regularization. We can conclude that biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration when comparing it to regular surface-based registration.
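The interpolation step at the core of this method, turning sparse FE node displacements into a dense deformation field, can be sketched in isolation. The node positions and displacements below are synthetic stand-ins, and SciPy's `RBFInterpolator` is used as a generic thin-plate spline implementation rather than the authors' actual code:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Sparse "FE node" positions (mm) and their predicted displacements (mm).
nodes = rng.uniform(0.0, 40.0, size=(50, 3))
displacements = 0.1 * nodes + rng.normal(scale=0.05, size=(50, 3))

# Thin-plate spline interpolant mapping any point to a displacement vector;
# with zero smoothing it reproduces the node displacements exactly.
tps = RBFInterpolator(nodes, displacements, kernel="thin_plate_spline")

# Evaluate the dense deformation field on a coarse grid inside the gland.
grid = np.stack(np.meshgrid(*[np.linspace(5, 35, 4)] * 3), axis=-1).reshape(-1, 3)
field = tps(grid)
print(field.shape)  # (64, 3): one displacement vector per grid point
```

In the paper's pipeline the FE simulation supplies physically plausible node displacements, and the spline only spreads them smoothly through the gland.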

  11. Regularization in Matrix Relevance Learning

    NARCIS (Netherlands)

    Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael

In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can

  12. Attributional Style and Depression in Multiple Sclerosis

    Science.gov (United States)

    Arnett, Peter A.

    2013-01-01

    Several etiologic theories have been proposed to explain depression in the general population. Studying these models and modifying them for use in the multiple sclerosis (MS) population may allow us to better understand depression in MS. According to the reformulated learned helplessness (LH) theory, individuals who attribute negative events to internal, stable, and global causes are more vulnerable to depression. This study differentiated attributional style that was or was not related to MS in 52 patients with MS to test the LH theory in this population and to determine possible differences between illness-related and non-illness-related attributions. Patients were administered measures of attributional style, daily stressors, disability, and depressive symptoms. Participants were more likely to list non-MS-related than MS-related causes of negative events on the Attributional Style Questionnaire (ASQ), and more-disabled participants listed significantly more MS-related causes than did less-disabled individuals. Non-MS-related attributional style correlated with stress and depressive symptoms, but MS-related attributional style did not correlate with disability or depressive symptoms. Stress mediated the effect of non-MS-related attributional style on depressive symptoms. These results suggest that, although attributional style appears to be an important construct in MS, it does not seem to be related directly to depressive symptoms; rather, it is related to more perceived stress, which in turn is related to increased depressive symptoms. PMID:24453767

  13. Left regular bands of groups of left quotients

    International Nuclear Information System (INIS)

    El-Qallali, A.

    1988-10-01

A semigroup S which has a left regular band of groups as a semigroup of left quotients is shown to be the semigroup which is a left regular band of right reversible cancellative semigroups. An alternative characterization is provided by using spined products. These results are applied to the case where S is a superabundant semigroup whose set of idempotents forms a left normal band. (author). 13 refs

  14. Sparsity regularization for parameter identification problems

    International Nuclear Information System (INIS)

    Jin, Bangti; Maass, Peter

    2012-01-01

The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems in recent years, and Tikhonov functionals with ℓp-penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓp sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some

  15. Rotating Hayward’s regular black hole as particle accelerator

    International Nuclear Information System (INIS)

    Amir, Muhammed; Ghosh, Sushant G.

    2015-01-01

Recently, Bañados, Silk and West (BSW) demonstrated that the extremal Kerr black hole can act as a particle accelerator with arbitrarily high center-of-mass energy (E_CM) when the collision takes place near the horizon. The rotating Hayward’s regular black hole, apart from mass (M) and angular momentum (a), has a new parameter g (g>0 is a constant) that provides a deviation from the Kerr black hole. We demonstrate that for each g, with M=1, there exist critical a_E and r_H^E which correspond to a regular extremal black hole with degenerate horizons, and a_E decreases whereas r_H^E increases with increase in g. For a < a_E, one instead has a regular non-extremal black hole with outer and inner horizons. We apply the BSW process to the rotating Hayward’s regular black hole, for different g, and demonstrate numerically that the E_CM diverges in the vicinity of the horizon for the extremal cases, thereby suggesting that a rotating regular black hole can also act as a particle accelerator and thus in turn provide a suitable framework for Planck-scale physics. For a non-extremal case, there always exists a finite upper bound for the E_CM, which increases with the deviation parameter g.
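For orientation, the role of the deviation parameter g and the BSW energy can be written out explicitly. The mass function below is one common convention for the Hayward regular metric, and the E_CM expression is the standard BSW formula; neither is quoted from this abstract:

```latex
% Hayward mass function replacing the constant Kerr mass M (one common convention):
%   m(r) -> M for r >> g, and the curvature stays finite at r = 0.
m(r) = \frac{M r^3}{r^3 + g^3}, \qquad g > 0 .

% BSW center-of-mass energy for two colliding particles of rest mass m_0
% with four-velocities u_1^\mu and u_2^\mu:
E_{CM}^2 = 2 m_0^2 \left( 1 - g_{\mu\nu}\, u_1^{\mu} u_2^{\nu} \right) .
```

Setting g = 0 recovers the Kerr case, which is why g is read as the deviation parameter.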

  16. Regularity of difference equations on Banach spaces

    CERN Document Server

    Agarwal, Ravi P; Lizama, Carlos

    2014-01-01

This work introduces readers to the topic of maximal regularity for difference equations. The authors systematically present the method of maximal regularity, outlining basic linear difference equations along with relevant results. They address recent advances in the field, as well as basic semigroup and cosine operator theories in the discrete setting. The authors also identify some open problems that readers may wish to take up for further research. This book is intended for graduate students and researchers in the area of difference equations, particularly those with advanced knowledge of and interest in functional analysis.

  17. Attribute-Based Digital Signature System

    NARCIS (Netherlands)

    Ibraimi, L.; Asim, Muhammad; Petkovic, M.

    2011-01-01

    An attribute-based digital signature system comprises a signature generation unit (1) for signing a message (m) by generating a signature (s) based on a user secret key (SK) associated with a set of user attributes, wherein the signature generation unit (1) is arranged for combining the user secret

  18. Gamma regularization based reconstruction for low dose CT

    International Nuclear Information System (INIS)

    Zhang, Junfeng; Chen, Yang; Hu, Yining; Luo, Limin; Shu, Huazhong; Li, Bicao; Liu, Jin; Coatrieux, Jean-Louis

    2015-01-01

Reducing the radiation in computerized tomography is today a major concern in radiology. Low dose computerized tomography (LDCT) offers a sound way to deal with this problem. However, more severe noise in the reconstructed CT images is observed under low dose scan protocols (e.g. lowered tube current or voltage values). In this paper we propose a Gamma regularization based algorithm for LDCT image reconstruction. This solution is flexible and provides a good balance between the regularizations based on the ℓ0-norm and ℓ1-norm. We evaluate the proposed approach using the projection data from simulated phantoms and scanned Catphan phantoms. Qualitative and quantitative results show that the Gamma regularization based reconstruction can perform better in both edge-preserving and noise suppression when compared with other norms. (paper)
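The abstract does not define the Gamma penalty itself, but the two regularizers it balances between act, in their simplest proximal form, as soft thresholding (ℓ1) and hard thresholding (ℓ0). A minimal sketch of that contrast, on a toy coefficient vector:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 penalty t*||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hard_threshold(x, t):
    """Proximal operator of the l0 penalty t*||x||_0: keep entries above sqrt(2t)."""
    return np.where(np.abs(x) > np.sqrt(2.0 * t), x, 0.0)

x = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])
print(soft_threshold(x, 1.0))  # large entries shrunk toward zero, small ones zeroed
print(hard_threshold(x, 1.0))  # large entries kept exactly, small ones zeroed
```

Soft thresholding biases large coefficients; hard thresholding keeps them intact but is non-convex. A penalty interpolating between the two trades off these behaviors, which is the balance the abstract refers to.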

  19. Regularization of plurisubharmonic functions with a net of good points

    OpenAIRE

    Li, Long

    2017-01-01

The purpose of this article is to present a new regularization technique of quasi-plurisubharmonic functions on a compact Kähler manifold. The idea is to regularize the function on local coordinate balls first, and then glue each piece together. Therefore, all the higher order terms in the complex Hessian of this regularization vanish at the center of each coordinate ball, and all the centers build a delta-net of the manifold eventually.

  20. Optimal Embeddings of Distance Regular Graphs into Euclidean Spaces

    NARCIS (Netherlands)

    F. Vallentin (Frank)

    2008-01-01

In this paper we give a lower bound for the least distortion embedding of a distance regular graph into Euclidean space. We use the lower bound for finding the least distortion for Hamming graphs, Johnson graphs, and all strongly regular graphs. Our technique involves semidefinite

  1. Technology, attributions, and emotions in post-secondary education: An application of Weiner’s attribution theory to academic computing problems

    Science.gov (United States)

    Hall, Nathan C.; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia

    2018-01-01

    As technology becomes increasingly integrated with education, research on the relationships between students’ computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner’s (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study. PMID:29529039

  2. 76 FR 3629 - Regular Meeting

    Science.gov (United States)

    2011-01-20

... Meeting SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held at the offices of the Farm... meeting of the Board will be open to the

  3. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  4. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
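A minimal sketch of the quantity this regularizer maximizes: the mutual information between a continuous classification response and the class label, estimated here via Gaussian entropy estimates. The Gaussian model and the toy data are assumptions for illustration; the paper's exact entropy estimator is not given in the abstract:

```python
import numpy as np

def gaussian_entropy(x):
    """Differential entropy of x under a fitted Gaussian: 0.5*log(2*pi*e*var)."""
    return 0.5 * np.log(2.0 * np.pi * np.e * np.var(x))

def mutual_information(responses, labels):
    """I(response; label) = H(response) - sum_c p(c) * H(response | label = c)."""
    h_marginal = gaussian_entropy(responses)
    h_conditional = 0.0
    for c in np.unique(labels):
        p_c = np.mean(labels == c)
        h_conditional += p_c * gaussian_entropy(responses[labels == c])
    return h_marginal - h_conditional

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=2000)

# A discriminative response separates the classes; a random one does not.
good = labels * 2.0 + 0.3 * rng.standard_normal(2000)
bad = rng.standard_normal(2000)

print(mutual_information(good, labels) > mutual_information(bad, labels))  # True
```

A classifier whose response carries more information about the label scores higher, which is why the term is added (with a sign flip) to the training objective.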

  5. Risk of cancer among in utero children exposed to A-bomb radiation, 1950-84

    International Nuclear Information System (INIS)

    Yoshimoto, Yasuhiko; Kato, Hiroo; Schull, W.J.

    1990-01-01

This study examines the risk of cancer (incidence) over a period of 40 years among the in utero exposed survivors of the atomic bombing of Hiroshima and Nagasaki, and adds eight years of follow-up to a previous report which was confined to mortality. Only two cases of childhood cancer were observed among these survivors in the first 14 years of life; both had been heavily exposed. Subsequent cancers have all been of the adult type. Not only did the observed cancers occur earlier in the ≥ 0.30 Gy dose group than in the 0 Gy dose group, but the incidence continues to increase and the crude cumulative incidence rate, 40 years after the A-bombing, is 3.9-fold greater in the ≥ 0.30 Gy group. In the observation period 1950-84, based on the absorbed dose to the mother's uterus, as estimated by the Dosimetry System 1986 (DS86), the relative risk of cancer at 1 Gy is 3.77 with a 95% confidence interval of 1.14-13.48. For the entire ≥ 0.01 Gy dose group the average excess risk per 10⁴ person-year-gray is 6.57 (0.07-14.49) and the estimated attributable risk is 40.9% (2.9%-90.2%). These results, when viewed in the perspective of fetal doses, suggest that susceptibility to radiation-induced cancers is higher in pre- than in postnatally exposed survivors (at least those exposed as adults). However, definitive conclusions must await further follow-up studies. (author)

  6. Organizational Attributes, Market Growth, and Product Innovation

    NARCIS (Netherlands)

    Song, Michael; Chen, Yan

    2014-01-01

    Extensive research has shown that organizational attributes affect product innovation. Extending this literature, this article delimits two general categories of organizational attributes and relates them to product innovation. Organizational attributes can be either control oriented or flexibility

  7. High Rates of All-cause and Gastroenteritis-related Hospitalization Morbidity and Mortality among HIV-exposed Indian Infants

    Directory of Open Access Journals (Sweden)

    Tripathy Srikanth

    2011-07-01

    Full Text Available Abstract Background HIV-infected and HIV-exposed, uninfected infants experience a high burden of infectious morbidity and mortality. Hospitalization is an important metric for morbidity and is associated with high mortality, yet, little is known about rates and causes of hospitalization among these infants in the first 12 months of life. Methods Using data from a prevention of mother-to-child transmission (PMTCT trial (India SWEN, where HIV-exposed breastfed infants were given extended nevirapine, we measured 12-month infant all-cause and cause-specific hospitalization rates and hospitalization risk factors. Results Among 737 HIV-exposed Indian infants, 93 (13% were HIV-infected, 15 (16% were on HAART, and 260 (35% were hospitalized 381 times by 12 months of life. Fifty-six percent of the hospitalizations were attributed to infections; gastroenteritis was most common accounting for 31% of infectious hospitalizations. Gastrointestinal-related hospitalizations steadily increased over time, peaking around 9 months. The 12-month all-cause hospitalization, gastroenteritis-related hospitalization, and in-hospital mortality rates were 906/1000 PY, 229/1000 PY, and 35/1000 PY respectively among HIV-infected infants and 497/1000 PY, 107/1000 PY, and 3/1000 PY respectively among HIV-exposed, uninfected infants. Advanced maternal age, infant HIV infection, gestational age, and male sex were associated with higher all-cause hospitalization risk while shorter duration of breastfeeding and abrupt weaning were associated with gastroenteritis-related hospitalization. Conclusions HIV-exposed Indian infants experience high rates of all-cause and infectious hospitalization (particularly gastroenteritis and in-hospital mortality. HIV-infected infants are nearly 2-fold more likely to experience hospitalization and 10-fold more likely to die compared to HIV-exposed, uninfected infants. The combination of scaling up HIV PMTCT programs and implementing proven health

  8. Fundamental(ist) attribution error: Protestants are dispositionally focused.

    Science.gov (United States)

    Li, Yexin Jessica; Johnson, Kathryn A; Cohen, Adam B; Williams, Melissa J; Knowles, Eric D; Chen, Zhansheng

    2012-02-01

    Attribution theory has long enjoyed a prominent role in social psychological research, yet religious influences on attribution have not been well studied. We theorized and tested the hypothesis that Protestants would endorse internal attributions to a greater extent than would Catholics, because Protestantism focuses on the inward condition of the soul. In Study 1, Protestants made more internal, but not external, attributions than did Catholics. This effect survived controlling for Protestant work ethic, need for structure, and intrinsic and extrinsic religiosity. Study 2 showed that the Protestant-Catholic difference in internal attributions was significantly mediated by Protestants' greater belief in a soul. In Study 3, priming religion increased belief in a soul for Protestants but not for Catholics. Finally, Study 4 found that experimentally strengthening belief in a soul increased dispositional attributions among Protestants but did not change situational attributions. These studies expand the understanding of cultural differences in attributions by demonstrating a distinct effect of religion on dispositional attributions.

  9. The equivalence problem for LL- and LR-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus; Gecsec, F.

It will be shown that the equivalence problem for LL-regular grammars is decidable. Apart from extending the known result for LL(k) grammar equivalence to LL-regular grammar equivalence, we obtain an alternative proof of the decidability of LL(k) equivalence. The equivalence problem for LL-regular

  10. 'Regular' and 'emergency' repair

    International Nuclear Information System (INIS)

    Luchnik, N.V.

    1975-01-01

    Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)

  11. Processing SPARQL queries with regular expressions in RDF databases

    Science.gov (United States)

    2011-01-01

    Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users’ requests for extracting information from the RDF data as well as the lack of users’ knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns. PMID:21489225

  12. Processing SPARQL queries with regular expressions in RDF databases.

    Science.gov (United States)

    Lee, Jinsoo; Pham, Minh-Duc; Lee, Jihwan; Han, Wook-Shin; Cho, Hune; Yu, Hwanjo; Lee, Jeong-Hoon

    2011-03-29

    As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from the RDF data as well as the lack of users' knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.
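The semantics of a regex-constrained SPARQL query can be illustrated with the naive evaluation strategy such frameworks improve on: bind the candidate solutions, then post-filter them with the regular expression. The triples and prefix below are toy stand-ins, not actual Uniprot data:

```python
import re

# Toy RDF triples (subject, predicate, object), mimicking a Uniprot-like store.
triples = [
    ("up:P12345", "up:name", "Serum albumin"),
    ("up:P67890", "up:name", "Insulin receptor"),
    ("up:P11111", "up:name", "Insulin"),
]

# SPARQL:  SELECT ?s WHERE { ?s up:name ?name . FILTER regex(?name, "^Insulin") }
# Naive evaluation: enumerate all bindings for the triple pattern, then
# post-filter each binding with the regular expression.
pattern = re.compile(r"^Insulin")
matches = [s for s, p, o in triples if p == "up:name" and pattern.search(o)]
print(matches)  # → ['up:P67890', 'up:P11111']
```

Scanning every binding like this is exactly what becomes expensive at scale; the proposed framework instead pushes the regular expression into the query plan, guided by its cost model.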

  13. Genetic effects in children exposed in prenatal period to ionizing radiation after the Chornobyl nuclear power plant accident.

    Science.gov (United States)

    Stepanova, Ye I; Vdovenko, V Yu; Misharina, Zh A; Kolos, V I; Mischenko, L P

    2016-12-01

To study the genetic effects in children exposed to radiation in utero as a result of the Chornobyl nuclear power plant accident, accounting for the total radiation doses and equivalent radiation doses to the red bone marrow. Incidence of minor developmental anomalies was studied in children exposed to radiation in utero (study group) and in the control group (1144 subjects surveyed in total). Cytogenetic tests using the method of differential G-banding of chromosomes were conducted in 60 children of both the study and control groups (10-12-year-olds) and repeatedly in 39 adolescents (15-17-year-olds). A direct correlation was found between the number of minor developmental anomalies and fetal radiation dose, and an inverse one with fetal gestational age at the time of radiation exposure. Incidence of chromosomal damage in somatic cells of 10-12-year-old children exposed prenatally was associated with the radiation dose to the red bone marrow. The repeated testing revealed that an increased level of chromosomal aberrations persisted in a third of the adolescents. Persons exposed to ionizing radiation in the prenatal period should therefore be considered a carcinogenic risk group, due to the persistently increased levels of chromosome damage. This article is a part of a Special Issue entitled "The Chornobyl Nuclear Accident: Thirty Years After".

  14. Fluid queues and regular variation

    NARCIS (Netherlands)

    O.J. Boxma (Onno)

    1996-01-01

    This paper considers a fluid queueing system, fed by $N$ independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index $\zeta$. We show that its fat tail

  15. Bit-coded regular expression parsing

    DEFF Research Database (Denmark)

    Nielsen, Lasse; Henglein, Fritz

    2011-01-01

    the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli's greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...

  16. Rejuvenation of service exposed ammonia cracker tubes of cast Alloy 625 and their re-use

    Energy Technology Data Exchange (ETDEWEB)

    Singh, J.B., E-mail: jbsingh@barc.gov.in [Mechanical Metallurgy Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Verma, A. [Mechanical Metallurgy Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Jaiswal, D.M.; Kumar, N.; Patel, R.D. [Heavy Water Board, Department of Atomic Energy, Anushakti Nagar, Mumbai 400094 (India); Chakravartty, J.K. [Mechanical Metallurgy Division, Bhabha Atomic Research Centre, Mumbai 400085 (India)

    2015-09-17

    This study is an extension of a previous study undertaken to rejuvenate ammonia cracker tubes of Alloy 625 that had been service exposed in heavy water plants for their full service life of 100,000 h. The service exposure caused significant microstructural modifications and deterioration in mechanical properties, and a solution annealing treatment of 2 h at 1160 °C restored all properties to levels similar to those of the virgin alloy. The present study reports the evolution of microstructure and mechanical properties of a fully service exposed centrifugally cast Alloy 625 tube that was put into service again for 55,000 h after receiving the rejuvenation treatment. During the second service, the microstructural modifications, increase in strength and loss of ductility were in line with the work reported earlier. However, it was encouraging to observe that the degraded properties after the second service life remained within the bounds of those of the virgin and fully service exposed tubes. The good performance of the rejuvenated tube during the second service life is attributed to good control of operating parameters, which limited the precipitation of grain boundary carbides during the first service life; such precipitation would otherwise have had a direct bearing on premature failure of the tubes during their second service life.

  17. Closedness type regularity conditions in convex optimization and beyond

    Directory of Open Access Journals (Sweden)

    Sorin-Mihai Grad

    2016-09-01

    The closedness type regularity conditions have proven during the last decade to be viable alternatives to their more restrictive interiority type counterparts, in both convex optimization and different areas where they were successfully applied. In this review article we de- and reconstruct some closedness type regularity conditions formulated by means of epigraphs and subdifferentials, respectively, for general optimization problems in order to stress that they arise naturally when dealing with such problems. The results are then specialized for constrained and unconstrained convex optimization problems. We also hint towards other classes of optimization problems where closedness type regularity conditions were successfully employed and discuss other possible applications of them.

  18. Imbalance of positive and negative links induces regularity

    International Nuclear Information System (INIS)

    Kamal, Neeraj Kumar; Sinha, Sudeshna

    2011-01-01

    Research highlights: → We consider the behaviour of a random weighted network with chaotic neuronal dynamics at the nodes. → We investigate the effect of the balance of positive and negative links on dynamical regularity. → We find that when the connections are predominantly excitatory or inhibitory, one obtains a spatiotemporal fixed point. → However, when the links are balanced, the chaotic nature of the nodal dynamics of the uncoupled case is preserved. → Further we observe that larger network size leads to greater spatiotemporal regularity. - Abstract: We investigate the effect of the interplay of positive and negative links on the dynamical regularity of a random weighted network with neuronal dynamics at the nodes. We investigate how the mean J̄ and the variance of the weights of links influence the spatiotemporal regularity of this dynamical network. We find that when the connections are predominantly positive (i.e. the links are mostly excitatory, with J̄ > 0) the spatiotemporal fixed point is stable. A similar trend is observed when the connections are predominantly negative (i.e. the links are mostly inhibitory, with J̄ < 0). However, when the positive and negative feedback is quite balanced (namely, when the mean of the connection weights is close to zero) one observes spatiotemporal chaos. That is, the balance of excitatory and inhibitory connections preserves the chaotic nature of the uncoupled case. To be brought to an inactive state one needs one type of connection (either excitatory or inhibitory) to dominate. Further we observe that larger network size leads to greater spatiotemporal regularity. We rationalize our observations through mean field analysis of the network dynamics.

  19. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    Science.gov (United States)

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
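
    The soft-thresholded SVD iteration at the heart of Soft-Impute can be sketched in a few lines of NumPy (a minimal sketch of the basic iteration only, not the authors' scalable low-rank implementation; the problem size and λ below are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Build a small rank-2 matrix and hide roughly 30% of its entries.
    U, V = rng.standard_normal((20, 2)), rng.standard_normal((2, 15))
    M = U @ V
    mask = rng.random(M.shape) > 0.3            # True where an entry is observed

    def soft_impute(M, mask, lam=0.1, n_iter=200):
        """Basic Soft-Impute: iterate Z <- SVD soft-threshold of the matrix
        that takes observed entries from M and missing entries from Z."""
        Z = np.zeros_like(M)
        for _ in range(n_iter):
            filled = np.where(mask, M, Z)
            u, s, vt = np.linalg.svd(filled, full_matrices=False)
            s = np.maximum(s - lam, 0.0)        # soft-threshold singular values
            Z = (u * s) @ vt
        return Z

    Z = soft_impute(M, mask)
    err = np.linalg.norm((Z - M)[mask]) / np.linalg.norm(M[mask])
    print(round(err, 3))
    ```

    The paper's contribution is making each SVD cheap by exploiting the "sparse plus low-rank" structure of `filled`; the dense SVD above is only for clarity.
    
    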

  20. Poisson image reconstruction with Hessian Schatten-norm regularization.

    Science.gov (United States)

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l_p norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.

  1. A projection-based approach to general-form Tikhonov regularization

    DEFF Research Database (Denmark)

    Kilmer, Misha E.; Hansen, Per Christian; Espanol, Malena I.

    2007-01-01

    We present a projection-based iterative algorithm for computing general-form Tikhonov regularized solutions to the problem min_x ||Ax - b||_2^2 + lambda^2 ||Lx||_2^2, where the regularization matrix L is not the identity. Our algorithm is designed for the common case where lambda is not known a priori...
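
    For small problems, the general-form Tikhonov objective above can be minimized directly by stacking A and λL into one least-squares system (a minimal NumPy sketch, not the authors' projection-based iterative algorithm; the first-difference L and the value of λ are illustrative choices):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 10))
    b = rng.standard_normal(30)
    # L: first-difference operator, a common non-identity regularization matrix.
    L = np.eye(9, 10, k=1) - np.eye(9, 10)
    lam = 0.5

    # min_x ||Ax - b||^2 + lam^2 ||Lx||^2 is the same as the stacked problem
    # min_x || [A; lam*L] x - [b; 0] ||^2, solvable by ordinary least squares.
    A_aug = np.vstack([A, lam * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

    # Cross-check against the normal equations (A^T A + lam^2 L^T L) x = A^T b.
    x_ne = np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)
    print(np.allclose(x, x_ne))
    ```

    The paper's point is precisely that this direct approach does not scale when λ must be searched over; their projection keeps L in the iteration instead.
    
    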

  2. Sparse regularization for force identification using dictionaries

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries including Db6 wavelets, Sym4 wavelets and cubic B-spline functions can also accurately identify both the single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
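
    A minimal stand-in for the l1-regularized step is plain ISTA (iterative soft-thresholding), which shares the soft-threshold structure of SpaRSA but omits its adaptive step-size rule; the toy "force" coefficient vector and all parameters below are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((60, 100))          # transfer matrix (toy)
    x_true = np.zeros(100)
    x_true[[5, 30, 77]] = [2.0, -1.5, 1.0]      # sparse coefficients of the force
    b = A @ x_true + 0.01 * rng.standard_normal(60)

    def ista(A, b, lam=0.1, n_iter=1000):
        """Plain ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x - step * A.T @ (A @ x - b)                          # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
        return x

    x = ista(A, b)
    print(np.flatnonzero(np.abs(x) > 0.1))      # recovered support
    ```

    The l1 penalty drives most coefficients exactly to zero, which is how the method "adaptively determines" the number of active basis functions.
    
    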

  3. Regular black holes: electrically charged solutions, Reissner-Nordstroem outside a De Sitter core

    Energy Technology Data Exchange (ETDEWEB)

    Lemos, Jose P.S. [Universidade Tecnica de Lisboa (CENTRA/IST/UTL) (Portugal). Instituto Superior Tecnico. Centro Multidisciplinar de Astrofisica; Zanchin, Vilson T. [Universidade Federal do ABC (UFABC), Santo Andre, SP (Brazil). Centro de Ciencias Naturais e Humanas

    2011-07-01

    The understanding of the inside of a black hole is of crucial importance in order to have the correct picture of a black hole as a whole. The singularities that lurk inside of the usual black hole solutions are things to avoid. Their substitution by a regular part is of great interest, the process generating regular black holes. In the present work regular black hole solutions are found within general relativity coupled to Maxwell's electromagnetism and charged matter. We show that there are objects which correspond to regular charged black holes, whose interior region is de Sitter, whose exterior region is Reissner-Nordstroem, and the boundary between both regions is made of an electrically charged spherically symmetric coat. There are several solutions: the regular nonextremal black holes with a null matter boundary, the regular nonextremal black holes with a timelike matter boundary, the regular extremal black holes with a timelike matter boundary, and the regular overcharged stars with a timelike matter boundary. The main physical and geometrical properties of such charged regular solutions are analyzed. (author)

  4. Attributes and descriptors for building performance evaluation

    Directory of Open Access Journals (Sweden)

    S. Gopikrishnan

    2017-12-01

    In order to obtain the right feedback on levels of satisfaction with respect to these attributes, there is a need to have appropriate descriptors for incorporation in a survey instrument. This paper identifies attributes that indicate building performance and provides simple descriptions of these attributes, based on which items can be generated for a questionnaire. Such items can enable any user/occupant to easily understand the characteristics of these attributes and offer objective feedback during a questionnaire survey.

  5. Buildings exposed to fire

    International Nuclear Information System (INIS)

    1987-01-01

    The 24 lectures presented to the colloquium cover the following subject fields: (1) Behaviour of structural components exposed to fire; (2) Behaviour of building materials exposed to fire; (3) Thermal processes; (4) Safety related, theoretical studies. (PW)

  6. On the MSE Performance and Optimization of Regularized Problems

    KAUST Repository

    Alrashdi, Ayed

    2016-11-01

    The amount of data that is measured, transmitted/received, and stored has increased dramatically in recent years, so today we are in the world of big data. Fortunately, in many applications we can take advantage of structures and patterns in the data to overcome the curse of dimensionality. The most well-known structures include sparsity, low-rankness and block sparsity, arising in a wide range of applications such as machine learning, medical imaging, signal processing, social networks and computer vision. This has also led to a specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured. The structure can be captured by a regularizer function. This gives rise to a potential interest in regularized inverse problems, where the process of reconstructing the structured signal can be modeled as a regularized problem. This thesis particularly focuses on finding the optimal regularization parameter for such problems, including ridge regression, LASSO, square-root LASSO and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT), which has recently been used to precisely predict performance errors.
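
    A simple empirical counterpart to the tuning problem described above is choosing the ridge regularization weight by minimizing held-out MSE over a grid (illustrative synthetic data; the CGMT analysis in the thesis predicts the optimal value analytically rather than by search):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, p = 80, 20
    X = rng.standard_normal((n, p))
    w = rng.standard_normal(p)
    y = X @ w + 0.5 * rng.standard_normal(n)    # noisy linear observations

    X_tr, y_tr = X[:60], y[:60]                 # training split
    X_va, y_va = X[60:], y[60:]                 # validation split

    def ridge(X, y, lam):
        """Closed-form ridge estimate (X^T X + lam I)^{-1} X^T y."""
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    # Grid search: pick the regularizer weight minimizing held-out MSE.
    lams = np.logspace(-3, 3, 25)
    mses = [np.mean((X_va @ ridge(X_tr, y_tr, lam) - y_va) ** 2) for lam in lams]
    best = lams[int(np.argmin(mses))]
    print(best)
    ```

    When noise variance and signal structure are unknown, such a search is the practical fallback that the thesis' precise MSE characterization aims to replace.
    
    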

  7. The neural substrates of impaired finger tapping regularity after stroke.

    Science.gov (United States)

    Calautti, Cinzia; Jones, P Simon; Guincestre, Jean-Yves; Naccarato, Marcello; Sharma, Nikhil; Day, Diana J; Carpenter, T Adrian; Warburton, Elizabeth A; Baron, Jean-Claude

    2010-03-01

    Not only finger tapping speed, but also tapping regularity can be impaired after stroke, contributing to reduced dexterity. The neural substrates of impaired tapping regularity after stroke are unknown. Previous work suggests damage to the dorsal premotor cortex (PMd) and prefrontal cortex (PFCx) affects externally-cued hand movement. We tested the hypothesis that these two areas are involved in impaired post-stroke tapping regularity. In 19 right-handed patients (15 men/4 women; age 45-80 years; purely subcortical in 16) partially to fully recovered from hemiparetic stroke, tri-axial accelerometric quantitative assessment of tapping regularity and BOLD fMRI were obtained during fixed-rate auditory-cued index-thumb tapping, in a single session 10-230 days after stroke. A strong random-effect correlation between tapping regularity index and fMRI signal was found in contralesional PMd such that the worse the regularity the stronger the activation. A significant correlation in the opposite direction was also present within contralesional PFCx. Both correlations were maintained if maximal index tapping speed, degree of paresis and time since stroke were added as potential confounds. Thus, the contralesional PMd and PFCx appear to be involved in the impaired ability of stroke patients to fingertap in pace with external cues. The findings for PMd are consistent with repetitive TMS investigations in stroke suggesting a role for this area in affected-hand movement timing. The inverse relationship with tapping regularity observed for the PFCx and the PMd suggests these two anatomically-connected areas negatively co-operate. These findings have implications for understanding the disruption and reorganization of the motor systems after stroke.

  8. Regularizations: different recipes for identical situations

    International Nuclear Information System (INIS)

    Gambin, E.; Lobo, C.O.; Battistel, O.A.

    2004-03-01

    We present a discussion in which the choice of the regularization procedure and the routing of the internal line momenta are put at the same level of arbitrariness in the analysis of Ward identities involving simple and well-known problems in QFT: the complex self-interacting scalar field and two simple models where the SVV and AVV processes are pertinent. We show that, in all these problems, the conditions for the preservation of symmetry relations are expressed in terms of the same combination of divergent Feynman integrals, which are evaluated in the context of a very general calculational strategy concerning the manipulations and calculations involving divergences. Within the adopted strategy, all the arbitrariness intrinsic to the problem is maintained in the final results and, consequently, a perfect map can be obtained with the corresponding results of the traditional regularization techniques. We show that, when we require a universal interpretation for the arbitrariness involved, in order to be consistent with all stated physical constraints, a strong condition is imposed on regularizations which automatically eliminates the ambiguities associated with the routing of the internal line momenta of loops. The conclusion is clean and sound: the association between ambiguities and unavoidable symmetry violations in Ward identities cannot be maintained if a unique recipe is required for identical situations in the evaluation of divergent physical amplitudes. (author)

  9. Regular transport dynamics produce chaotic travel times.

    Science.gov (United States)

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  10. The Impact of Computerization on Regular Employment (Japanese)

    OpenAIRE

    SUNADA Mitsuru; HIGUCHI Yoshio; ABE Masahiro

    2004-01-01

    This paper uses micro data from the Basic Survey of Japanese Business Structure and Activity to analyze the effects of companies' introduction of information and telecommunications technology on employment structures, especially regular versus non-regular employment. Firstly, examination of trends in the ratio of part-time workers recorded in the Basic Survey shows that part-time worker ratios in manufacturing firms are rising slightly, but that companies with a high proportion of part-timers...

  11. Lattice regularized chiral perturbation theory

    International Nuclear Information System (INIS)

    Borasoy, Bugra; Lewis, Randy; Ouimet, Pierre-Philippe A.

    2004-01-01

    Chiral perturbation theory can be defined and regularized on a spacetime lattice. A few motivations are discussed here, and an explicit lattice Lagrangian is reviewed. A particular aspect of the connection between lattice chiral perturbation theory and lattice QCD is explored through a study of the Wess-Zumino-Witten term

  12. Empirical laws, regularity and necessity

    NARCIS (Netherlands)

    Koningsveld, H.

    1973-01-01

    In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject.

    I am referring especially to two well-known views, viz. the regularity and

  13. Inclusion Professional Development Model and Regular Middle School Educators

    Science.gov (United States)

    Royster, Otelia; Reglin, Gary L.; Losike-Sedimo, Nonofo

    2014-01-01

    The purpose of this study was to determine the impact of a professional development model on regular education middle school teachers' knowledge of best practices for teaching inclusive classes and attitudes toward teaching these classes. There were 19 regular education teachers who taught the core subjects. Findings for Research Question 1…

  14. SOA: A Quality Attribute Perspective

    Science.gov (United States)

    2011-06-23

    ... in software engineering from CMU. Agenda (SEI webinar, June 2011, Twitter #seiwebinar, © 2011 Carnegie Mellon University): Service-Oriented Architecture and Software Architecture: Review; Service-Orientation and Quality Attributes; Summary and Future Challenges.

  15. Regularization ambiguities in loop quantum gravity

    International Nuclear Information System (INIS)

    Perez, Alejandro

    2006-01-01

    One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation which is free of ultraviolet divergences. However, ambiguities associated with the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of a UV problem - the existence of well-behaved regularizations of the constraints - is intimately linked with the ambiguities arising in the quantum theory. Among these ambiguities is the one associated with the SU(2) unitary representation used in the diffeomorphism covariant 'point-splitting' regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and, here, it is referred to as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory that conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity exhibiting the existence of spurious solutions for higher representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions - due to the difficulties associated with the definition of the physical inner product - it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we find

  16. Processing SPARQL queries with regular expressions in RDF databases

    Directory of Open Access Journals (Sweden)

    Cho Hune

    2011-03-01

    Background: As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from the RDF data as well as the lack of users' knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results: In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions: Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.

  17. Pairing renormalization and regularization within the local density approximation

    International Nuclear Information System (INIS)

    Borycki, P.J.; Dobaczewski, J.; Nazarewicz, W.; Stoitsov, M.V.

    2006-01-01

    We discuss methods used in mean-field theories to treat pairing correlations within the local density approximation. Pairing renormalization and regularization procedures are compared in spherical and deformed nuclei. Both prescriptions give fairly similar results, although the theoretical motivation, simplicity, and stability of the regularization procedure make it a method of choice for future applications

  18. Strong Bisimilarity and Regularity of Basic Parallel Processes is PSPACE-Hard

    DEFF Research Database (Denmark)

    Srba, Jirí

    2002-01-01

    We show that the problem of checking whether two processes definable in the syntax of Basic Parallel Processes (BPP) are strongly bisimilar is PSPACE-hard. We also demonstrate that there is a polynomial time reduction from the strong bisimilarity checking problem of regular BPP to the strong regularity (finiteness) checking of BPP. This implies that strong regularity of BPP is also PSPACE-hard.

  19. Contour Propagation With Riemannian Elasticity Regularization

    DEFF Research Database (Denmark)

    Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.

    2011-01-01

    Purpose/Objective(s): Adaptive techniques allow for correction of spatial changes during the time course of fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations and volumetric changes, was used. Regularization parameters were defined... on the original delineation, and tissue deformation in the time course between scans forms a better starting point than rigid propagation. There was no significant difference between locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...

  20. Review of epidemiological studies of human populations exposed to ionizing radiation

    International Nuclear Information System (INIS)

    Rao, B.S.

    2002-01-01

    Epidemiological studies undertaken in many radiation exposed cohorts have played an important role in the quantification of radiation risk. Follow-up of nearly 100,000 A-bomb survivors by the Radiation Effects Research Foundation (RERF) constitutes the most comprehensive human epidemiological study. The study population covered both sexes, different age groups and dose ranges from a few mSv to 2-3 Sv. Among the nearly 90,000 cohort members, as of 1990, 54% are alive. Among these, 35,000 were exposed as children at age < 20 years. Nearly 20% of the mortalities (8,040) were due to cancer. It was estimated from the analysis of these data that among the cancers observed in the LSS cohorts, 425±45 cases (335 solid cancers + 90 leukaemias) were attributable to radiation exposure. Assuming a value of two for the DDREF, ICRP 60 (1991) estimated a cancer risk of 5% per Sv for low dose and low dose rate exposure conditions. There have been a number of efforts to study human populations exposed to low level radiation. Epidemiological studies on nuclear workers from the USA, UK and Canada, constituting 95,673 workers spanning 2,124,526 person-years, were reported by Cardis et al. (1995). The total number of deaths was 15,825, of which 3,976 were cancer mortalities. The excess relative risk for all cancers excluding leukaemia is -0.07 per Sv (-0.4 to +0.3) and for leukaemia (excluding CLL) is 2.18 (0.1 to 5.7). Epidemiological studies in the high background radiation areas (HBRA) of Yangjiang, China and coastal Kerala showed no detectable increase in the incidence of cancers or of any genetic disorders. Epidemiological studies in human populations exposed to elevated background radiation for several generations did not show any increase in genetic disorders. Recent information on the background incidence of monogenic disorders in human populations and the recoverability factor of induced genetic changes suggests a risk much lower than the earlier ICRP estimates. Many other epidemiological studies of

  1. Capped Lp approximations for the composite L0 regularization problem

    OpenAIRE

    Li, Qia; Zhang, Na

    2017-01-01

    The composite L0 function serves as a sparse regularizer in many applications. The algorithmic difficulty caused by the composite L0 regularization (the L0 norm composed with a linear mapping) is usually bypassed through approximating the L0 norm. We consider in this paper capped Lp approximations with $p>0$ for the composite L0 regularization problem. For each $p>0$, the capped Lp function converges to the L0 norm pointwisely as the approximation parameter tends to infinity. We point out tha...
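
    The pointwise convergence mentioned in the abstract is easy to see numerically with one common capped-Lp surrogate (the exact functional form used by the authors may differ; the form and parameter names below are illustrative assumptions):

    ```python
    def capped_lp(x, p=0.5, alpha=10.0):
        """One common capped-Lp surrogate: min(alpha * |x|^p, 1).
        As alpha -> infinity it tends pointwise to the L0 indicator,
        which is 0 at x = 0 and 1 everywhere else."""
        return min(alpha * abs(x) ** p, 1.0)

    # As the approximation parameter alpha grows, nonzero inputs saturate at 1
    # while x = 0 always maps to 0, recovering the L0 "norm" in the limit.
    for alpha in (1.0, 100.0, 1e6):
        vals = [capped_lp(x, alpha=alpha) for x in (0.0, 0.01, 2.0)]
        print(alpha, [round(v, 3) for v in vals])
    ```

    Summing `capped_lp` over a vector's entries gives the sparse regularizer that approximates the (composite) L0 count.
    
    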

  2. Fluid queues and regular variation

    NARCIS (Netherlands)

    Boxma, O.J.

    1996-01-01

    This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail gives rise to an even

  3. Further investigation on "A multiplicative regularization for force reconstruction"

    Science.gov (United States)

    Aucejo, M.; De Smet, O.

    2018-05-01

    We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.

  4. Regularization method for solving the inverse scattering problem

    International Nuclear Information System (INIS)

    Denisov, A.M.; Krylov, A.S.

    1985-01-01

    The inverse scattering problem for the radial Schroedinger equation, which consists in determining the potential from the scattering phase, is considered. The problem of restoring the potential from a phase specified with fixed error on a finite range is solved by the regularization method based on minimization of Tikhonov's smoothing functional. The regularization method is used to solve the problem of restoring the neutron-proton potential from the scattering phases. The determined potentials are given in the table.

  5. Variational analysis of regular mappings theory and applications

    CERN Document Server

    Ioffe, Alexander D

    2017-01-01

    This monograph offers the first systematic account of (metric) regularity theory in variational analysis. It presents new developments alongside classical results and demonstrates the power of the theory through applications to various problems in analysis and optimization theory. The origins of metric regularity theory can be traced back to a series of fundamental ideas and results of nonlinear functional analysis and global analysis centered around problems of existence and stability of solutions of nonlinear equations. In variational analysis, regularity theory goes far beyond the classical setting and is also concerned with non-differentiable and multi-valued operators. The present volume explores all basic aspects of the theory, from the most general problems for mappings between metric spaces to those connected with fairly concrete and important classes of operators acting in Banach and finite dimensional spaces. Written by a leading expert in the field, the book covers new and powerful techniques, whic...

  6. Iterative regularization in intensity-modulated radiation therapy optimization

    International Nuclear Information System (INIS)

    Carlsson, Fredrik; Forsgren, Anders

    2006-01-01

    A common way to solve intensity-modulated radiation therapy (IMRT) optimization problems is to use a beamlet-based approach. The approach is usually employed in a three-step manner: first a beamlet-weight optimization problem is solved, then the fluence profiles are converted into step-and-shoot segments, and finally postoptimization of the segment weights is performed. A drawback of beamlet-based approaches is that beamlet-weight optimization problems are ill-conditioned and have to be regularized in order to produce smooth fluence profiles that are suitable for conversion. The purpose of this paper is twofold: first, to explain the suitability of solving beamlet-based IMRT problems by a BFGS quasi-Newton sequential quadratic programming method with diagonal initial Hessian estimate, and second, to empirically show that beamlet-weight optimization problems should be solved in relatively few iterations when using this optimization method. The explanation of the suitability is based on viewing the optimization method as an iterative regularization method. In iterative regularization, the optimization problem is solved approximately by iterating long enough to obtain a solution close to the optimal one, but terminating before too much noise occurs. Iterative regularization requires an optimization method that initially proceeds in smooth directions and makes rapid initial progress. Solving ten beamlet-based IMRT problems with dose-volume objectives and bounds on the beamlet-weights, we find that the considered optimization method fulfills the requirements for performing iterative regularization. After segment-weight optimization, the treatments obtained using 35 beamlet-weight iterations outperform the treatments obtained using 100 beamlet-weight iterations, both in terms of objective value and of target uniformity. We conclude that iterating too long may in fact deteriorate the quality of the deliverable plan
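The semiconvergence behaviour that motivates early termination can be reproduced on a generic ill-posed least-squares problem. A hedged numpy sketch using Landweber iteration (a stand-in for illustration, not the BFGS quasi-Newton SQP method of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Synthetic ill-posed problem with known SVD: A = U diag(s) V^T
# (the singular-value decay and noise level are assumed for illustration).
s = 1.0 / np.arange(1, n + 1)                      # decaying singular values
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(s) @ V.T
u_true = V @ (1.0 / np.arange(1, n + 1))           # true model
m = A @ u_true + 1e-2 * rng.standard_normal(n)     # noisy data

# Landweber iteration u <- u + w A^T (m - A u): early iterations reduce
# the error (smooth directions first); iterating too long fits the noise.
w = 1.0 / s[0] ** 2
u = np.zeros(n)
errors = []
for k in range(20000):
    u = u + w * A.T @ (m - A @ u)
    errors.append(np.linalg.norm(u - u_true))
best = int(np.argmin(errors))
print(best, errors[best], errors[-1])              # semiconvergence
```

With these assumed numbers the reconstruction error first decreases and then deteriorates, mirroring the paper's observation that fewer beamlet-weight iterations can outperform more.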

  7. Out-of-hospital cardiac arrest attributable to sunshine: a nationwide, retrospective, observational study.

    Science.gov (United States)

    Onozuka, Daisuke; Hagihara, Akihito

    2017-04-01

    To investigate the population attributable risk of out-of-hospital cardiac arrest (OHCA) from non-optimal sunshine duration and the relative contribution of daily sunshine hours. National registry data on all cases of OHCA that occurred between 2005 and 2014 in the 47 Japanese prefectures were obtained. We examined the relationship between daily duration of sunshine and OHCA risk for each prefecture in Japan using a Poisson regression model combined with a distributed lag non-linear model, adjusting for confounding factors. The estimated associations for each prefecture were pooled at the nationwide level using a multivariate random-effects meta-analysis. A total of 658 742 cases of OHCA of presumed cardiac origin met our inclusion criteria. The minimum-morbidity sunshine duration varied from the 21st percentile in Okayama to the 99th percentile in Hokkaido, Gifu, and Hyogo. Overall, 5.78% [95% empirical confidence interval (eCI): 3.57-7.16] of the OHCA cases were attributable to daily sunshine duration. The attributable fraction for short sunshine duration (below the minimum-morbidity sunshine duration) was 4.18% (95% eCI: 2.64-5.38), whereas that for long sunshine duration (above the minimum-morbidity sunshine duration) was 1.59% (95% eCI: 0.81-2.21). Daily sunshine duration was responsible for a substantial OHCA burden, and a greater number of OHCA cases occurred in patients who were only exposed to sunshine for short periods of time each day. Our findings suggest that public health efforts to reduce the OHCA burden should take sunshine level into account.
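A common estimator for such attributable fractions weights daily case counts by (RR - 1)/RR. A minimal sketch with hypothetical numbers (this is the generic formula, not necessarily the study's exact computation):

```python
import numpy as np

def attributable_fraction(cases, rr):
    """Population attributable fraction from daily case counts and the
    daily relative risk versus the minimum-morbidity exposure level.

    Uses the standard case-weighted (RR - 1) / RR formula; this is the
    generic estimator, not necessarily the study's exact computation.
    """
    cases = np.asarray(cases, dtype=float)
    rr = np.asarray(rr, dtype=float)
    return float(np.sum(cases * (rr - 1.0) / rr) / np.sum(cases))

# Hypothetical daily data for illustration
daily_cases = [12, 15, 9, 20, 14]
daily_rr = [1.00, 1.08, 1.02, 1.15, 1.05]
print(attributable_fraction(daily_cases, daily_rr))
```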

  8. Correction of engineering servicing regularity of transporttechnological machines in operational process

    Science.gov (United States)

    Makarova, A. N.; Makarov, E. I.; Zakharov, N. S.

    2018-03-01

    In this article, the issue of correcting engineering servicing regularity on the basis of actual dependability data of cars in operation is considered. The purpose of the research is to increase the dependability of transport-technological machines by correcting engineering servicing regularity. The subject of the research is the mechanism by which engineering servicing regularity influences the reliability measure. On the basis of an analysis of previous research, a method of nonparametric estimation of the car failure measure from actual time-to-failure data was chosen. The possibility of describing the dependence of the failure measure on engineering servicing regularity by various mathematical models is considered. It is shown that the exponential model is the most appropriate for that purpose. The obtained results can be used as a separate method of engineering servicing regularity correction that takes certain operational conditions into account, as well as for improving the technical-economic and economic-stochastic methods. Thus, on the basis of the conducted research, a method for correcting the engineering servicing regularity of transport-technological machines in the operational process was developed. The use of this method will allow decreasing the number of failures.
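The exponential dependence of the failure measure on the servicing interval can be fitted with a log-linear least-squares sketch (all data values below are hypothetical illustrations, not from the article):

```python
import numpy as np

# Hypothetical observations (illustrative values, not from the article):
# failure measure lam observed at different servicing intervals tau.
tau = np.array([5.0, 10.0, 15.0, 20.0, 25.0])   # servicing interval, 10^3 km
lam = np.array([0.11, 0.19, 0.36, 0.64, 1.18])  # failures per 10^3 km

# Exponential model lam = a * exp(b * tau), fitted log-linearly.
b, log_a = np.polyfit(tau, np.log(lam), 1)
a = float(np.exp(log_a))
print(a, b)   # fitted parameters; b > 0: sparser servicing -> more failures

lam_hat = a * np.exp(b * tau)   # model prediction at the observed intervals
```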

  9. Analysis of regularized inversion of data corrupted by white Gaussian noise

    International Nuclear Information System (INIS)

    Kekkonen, Hanne; Lassas, Matti; Siltanen, Samuli

    2014-01-01

    Tikhonov regularization is studied in the case of a linear pseudodifferential operator as the forward map and additive white Gaussian noise as the measurement error. The measurement model for an unknown function u(x) is m(x) = Au(x) + δε(x), where δ > 0 is the noise magnitude. If ε were an L^2-function, Tikhonov regularization would give the estimate T_α(m) = arg min_{u ∈ H^r} { ||Au - m||_{L^2}^2 + α||u||_{H^r}^2 } for u, where α = α(δ) is the regularization parameter. Here penalization of the Sobolev norm ||u||_{H^r} covers the cases of standard Tikhonov regularization (r = 0) and first-derivative penalty (r = 1). Realizations of white Gaussian noise are almost never in L^2, but do belong to H^s with probability one if s < 0 is small enough. A modification of Tikhonov regularization theory is presented, covering the case of white Gaussian measurement noise. Furthermore, the convergence of regularized reconstructions to the correct solution as δ → 0 is proven in appropriate function spaces using microlocal analysis. The convergence of the related finite-dimensional problems to the infinite-dimensional problem is also analysed. (paper)
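The standard (r = 0) Tikhonov estimate has a direct finite-dimensional analogue. A hedged numpy sketch on a discretized smoothing forward map (the kernel and parameters are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete analogue of m = A u + delta * eps, with an ill-conditioned
# Gaussian smoothing kernel as the forward map (kernel width assumed).
n = 100
t = np.linspace(0.0, 1.0, n)
A = np.exp(-50.0 * (t[:, None] - t[None, :]) ** 2) / n
u_true = np.sin(2.0 * np.pi * t)
delta = 1e-3
m = A @ u_true + delta * rng.standard_normal(n)

def tikhonov(A, m, alpha):
    """Standard (r = 0) Tikhonov estimate: argmin ||Au - m||^2 + alpha*||u||^2."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ m)

u_naive = np.linalg.solve(A, m)     # unregularized: dominated by noise
u_reg = tikhonov(A, m, alpha=1e-6)
print(np.linalg.norm(u_naive - u_true), np.linalg.norm(u_reg - u_true))
```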

  10. 47 CFR 76.614 - Cable television system regular monitoring.

    Science.gov (United States)

    2010-10-01

    ...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... in these bands of 20 uV/m or greater at a distance of 3 meters. During regular monitoring, any leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in the...

  11. Geostatistical regularization operators for geophysical inverse problems on irregular meshes

    Science.gov (United States)

    Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA

    2018-05-01

    Irregular meshes allow complicated subsurface structures to be included in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are defined using only the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculating geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results than the anisotropic smoothness constraints.
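The idea of deriving a regularization operator from a correlation model via eigendecomposition can be sketched as follows (the exponential correlation model, the anisotropy ranges, and the use of C^(-1/2) as the operator are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(3)
cells = rng.uniform(0, 100, size=(30, 2))      # irregular 2-D cell centres

# Anisotropic exponential correlation model encoding a priori geology
# (correlation lengths are illustrative assumptions).
ax, az = 30.0, 10.0
dx = (cells[:, None, 0] - cells[None, :, 0]) / ax
dz = (cells[:, None, 1] - cells[None, :, 1]) / az
C = np.exp(-np.sqrt(dx**2 + dz**2))            # covariance matrix

# Eigendecomposition of C; W ~ C^(-1/2) acts as a whitening-style
# regularization operator: ||W m|| penalizes models inconsistent with C.
eigval, eigvec = np.linalg.eigh(C)
W = eigvec @ np.diag(1.0 / np.sqrt(np.maximum(eigval, 1e-10))) @ eigvec.T
print(eigval.min(), eigval.max())
```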

  12. Expressiveness modulo Bisimilarity of Regular Expressions with Parallel Composition (Extended Abstract

    Directory of Open Access Journals (Sweden)

    Jos C. M. Baeten

    2010-11-01

    The languages accepted by finite automata are precisely the languages denoted by regular expressions. In contrast, finite automata may exhibit behaviours that cannot be described by regular expressions up to bisimilarity. In this paper, we consider extensions of the theory of regular expressions with various forms of parallel composition and study the effect on expressiveness. First we prove that adding pure interleaving to the theory of regular expressions strictly increases its expressiveness up to bisimilarity. Then, we prove that replacing the operation for pure interleaving by ACP-style parallel composition gives a further increase in expressiveness. Finally, we prove that the theory of regular expressions with ACP-style parallel composition and encapsulation is expressive enough to express all finite automata up to bisimilarity. Our results extend the expressiveness results obtained by Bergstra, Bethke and Ponse for process algebras with (the binary variant of) Kleene's star operation.

  13. Assessing brand image through communalities and asymmetries in brand-to-attribute and attribute-to-brand associations

    NARCIS (Netherlands)

    Torres, Anna; Bijmolt, Tammo H. A.

    2009-01-01

    Brand image is a key component of customer-based brand equity, and refers to the associations a consumer holds in memory. Such associations are often directional; one should distinguish between brand-to-attribute and attribute-to-brand associations. Information on these associations arises from two

  14. Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation

    International Nuclear Information System (INIS)

    Bardsley, Johnathan M; Goldes, John

    2009-01-01

    In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We will present three statistically motivated methods for choosing the regularization parameter, and numerical examples will be presented to illustrate their effectiveness

  15. Chord length distributions between hard disks and spheres in regular, semi-regular, and quasi-random structures

    International Nuclear Information System (INIS)

    Olson, Gordon L.

    2008-01-01

    In binary stochastic media in two- and three-dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution

  16. Chord length distributions between hard disks and spheres in regular, semi-regular, and quasi-random structures

    Energy Technology Data Exchange (ETDEWEB)

    Olson, Gordon L. [Computer and Computational Sciences Division (CCS-2), Los Alamos National Laboratory, 5 Foxglove Circle, Madison, WI 53717 (United States)], E-mail: olson99@tds.net

    2008-11-15

    In binary stochastic media in two- and three-dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution.

  17. Zero-One Law for Regular Languages and Semigroups with Zero

    OpenAIRE

    Sin'ya, Ryoma

    2015-01-01

    A regular language has the zero-one law if its asymptotic density converges to either zero or one. We prove that the class of all zero-one languages is closed under Boolean operations and quotients. Moreover, we prove that a regular language has the zero-one law if and only if its syntactic monoid has a zero element. Our proof gives both algebraic and automata characterisations of the zero-one law for regular languages, and it leads to the following two corollaries: (i) There is an O(n log n) alg...
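The zero-one law can be observed numerically: a DFA whose syntactic monoid has a zero element (here realized as a sink state reached by the factor "aa") yields asymptotic density one. A small self-contained sketch:

```python
# DFA over {a, b} accepting words that contain the factor "aa".
# State 2 is an absorbing (sink) state, so the zero-one law applies.
delta = {(0, 'a'): 1, (0, 'b'): 0,
         (1, 'a'): 2, (1, 'b'): 0,
         (2, 'a'): 2, (2, 'b'): 2}

def density(n):
    """Fraction of length-n words accepted, by dynamic programming over states."""
    counts = {0: 1, 1: 0, 2: 0}
    for _ in range(n):
        new = {0: 0, 1: 0, 2: 0}
        for state, c in counts.items():
            for ch in 'ab':
                new[delta[(state, ch)]] += c
        counts = new
    return counts[2] / 2 ** n

for n in (5, 10, 20, 40):
    print(n, density(n))   # tends to 1: almost every long word contains "aa"
```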

  18. Catalytic micromotor generating self-propelled regular motion through random fluctuation

    Science.gov (United States)

    Yamamoto, Daigo; Mukai, Atsushi; Okita, Naoaki; Yoshikawa, Kenichi; Shioi, Akihisa

    2013-07-01

    Most current studies on nano/microscale motors that generate regular motion have adopted the strategy of fabricating a composite of different materials. In this paper, we report that a simple object made solely of platinum generates regular motion driven by a catalytic chemical reaction with hydrogen peroxide. Depending on the morphological symmetry of the catalytic particles, a rich variety of random and regular motions is observed. The experimental trend is well reproduced by a simple theoretical model that takes into account the anisotropic viscous effect on the self-propelled active Brownian fluctuation.
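The qualitative link between morphological asymmetry and regular motion can be illustrated with a generic overdamped active Brownian model (an illustrative sketch with assumed parameters, not the specific model of the paper): a nonzero angular drift bends the random walk into near-circular motion.

```python
import numpy as np

rng = np.random.default_rng(2)

def active_brownian(v0=1.0, omega=0.0, d_rot=0.05, dt=0.01, steps=20000):
    """Overdamped 2-D active Brownian particle (illustrative model).

    omega is a constant angular drift standing in for morphological
    asymmetry: omega = 0 gives an irregular persistent random walk,
    omega != 0 bends the trajectory into near-circular regular motion.
    """
    theta = 0.0
    pos = np.zeros((steps, 2))
    for i in range(1, steps):
        theta += omega * dt + np.sqrt(2.0 * d_rot * dt) * rng.standard_normal()
        pos[i] = pos[i - 1] + v0 * dt * np.array([np.cos(theta), np.sin(theta)])
    return pos

random_path = active_brownian(omega=0.0)    # irregular, wanders far
regular_path = active_brownian(omega=2.0)   # tight, nearly circular orbit
print(np.linalg.norm(random_path[-1]), np.linalg.norm(regular_path[-1]))
```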

  19. Globals of Completely Regular Monoids

    Institute of Scientific and Technical Information of China (English)

    Wu Qian-qian; Gan Ai-ping; Du Xian-kun

    2015-01-01

    An element of a semigroup S is called irreducible if it cannot be expressed as a product of two elements in S both distinct from itself. In this paper we show that the class C of all completely regular monoids with irreducible identity elements satisfies the strong isomorphism property and so it is globally determined.

  20. Source attribution of human campylobacteriosis at the point of exposure by combining comparative exposure assessment and subtype comparison based on comparative genomic fingerprinting.

    Directory of Open Access Journals (Sweden)

    André Ravel

    Human campylobacteriosis is a common zoonosis with a significant burden in many countries. Its prevention is difficult because humans can be exposed to Campylobacter through various exposures: foodborne, waterborne or by contact with animals. This study aimed at attributing campylobacteriosis to sources at the point of exposure. It combined comparative exposure assessment and microbial subtype comparison with subtypes defined by comparative genomic fingerprinting (CGF). It used isolates from clinical cases and from eight potential exposure sources (chicken, cattle and pig manure, retail chicken, beef, pork and turkey meat, and surface water) collected within a single sentinel site of an integrated surveillance system for enteric pathogens in Canada. Overall, 1518 non-human isolates and 250 isolates from domestically-acquired human cases were subtyped and their subtype profiles analyzed for source attribution using two attribution models modified to include exposure. Exposure values were obtained from a concurrent comparative exposure assessment study undertaken in the same area. Based on CGF profiles, attribution was possible for 198 (79%) human cases. Both models provide comparable figures: chicken meat was the most important source (65-69% of attributable cases) whereas exposure to cattle (manure) ranked second (14-19% of attributable cases), the other sources being minor (including beef meat). In comparison with other attributions conducted at the point of production, the study highlights the fact that Campylobacter transmission from cattle to humans is rarely meat borne, calling for a closer look at local transmission from cattle to prevent campylobacteriosis, in addition to increasing safety along the chicken supply chain.

  1. Source attribution of human campylobacteriosis at the point of exposure by combining comparative exposure assessment and subtype comparison based on comparative genomic fingerprinting.

    Science.gov (United States)

    Ravel, André; Hurst, Matt; Petrica, Nicoleta; David, Julie; Mutschall, Steven K; Pintar, Katarina; Taboada, Eduardo N; Pollari, Frank

    2017-01-01

    Human campylobacteriosis is a common zoonosis with a significant burden in many countries. Its prevention is difficult because humans can be exposed to Campylobacter through various exposures: foodborne, waterborne or by contact with animals. This study aimed at attributing campylobacteriosis to sources at the point of exposure. It combined comparative exposure assessment and microbial subtype comparison with subtypes defined by comparative genomic fingerprinting (CGF). It used isolates from clinical cases and from eight potential exposure sources (chicken, cattle and pig manure, retail chicken, beef, pork and turkey meat, and surface water) collected within a single sentinel site of an integrated surveillance system for enteric pathogens in Canada. Overall, 1518 non-human isolates and 250 isolates from domestically-acquired human cases were subtyped and their subtype profiles analyzed for source attribution using two attribution models modified to include exposure. Exposure values were obtained from a concurrent comparative exposure assessment study undertaken in the same area. Based on CGF profiles, attribution was possible for 198 (79%) human cases. Both models provide comparable figures: chicken meat was the most important source (65-69% of attributable cases) whereas exposure to cattle (manure) ranked second (14-19% of attributable cases), the other sources being minor (including beef meat). In comparison with other attributions conducted at the point of production, the study highlights the fact that Campylobacter transmission from cattle to humans is rarely meat borne, calling for a closer look at local transmission from cattle to prevent campylobacteriosis, in addition to increasing safety along the chicken supply chain.
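A proportional attribution model of this kind can be sketched as follows: each human case of a given subtype is shared among sources in proportion to exposure-weighted subtype frequencies (all numbers below are hypothetical; the study's two models are more elaborate):

```python
import numpy as np

sources = ["chicken meat", "cattle manure", "surface water"]
# Hypothetical subtype frequencies per source (rows: sources, cols: subtypes)
freq = np.array([[0.6, 0.1, 0.3],
                 [0.2, 0.7, 0.1],
                 [0.1, 0.1, 0.8]])
exposure = np.array([5.0, 1.0, 0.5])   # assumed relative exposure weights
human_cases = np.array([50, 30, 20])   # human cases per subtype (hypothetical)

w = freq * exposure[:, None]           # exposure-weighted subtype frequencies
share = w / w.sum(axis=0)              # P(source | subtype)
attributed = share @ human_cases       # expected cases per source
for s, a in zip(sources, attributed):
    print(s, round(float(a), 1))
```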

  2. Discriminative power of visual attributes in dermatology.

    Science.gov (United States)

    Giotis, Ioannis; Visser, Margaretha; Jonkman, Marcel; Petkov, Nicolai

    2013-02-01

    Visual characteristics such as color and shape of skin lesions play an important role in the diagnostic process. In this contribution, we quantify the discriminative power of such attributes using an information-theoretical approach. We estimate the probability of occurrence of each attribute as a function of the skin diseases. We use the distribution of this probability across the studied diseases and its entropy to define the discriminative power of the attribute. The discriminative power has a maximum value for attributes that occur (or do not occur) for only one disease and a minimum value for those which are equally likely to be observed among all diseases. Verrucous surface, red and brown colors, and the presence of more than 10 lesions are among the most informative attributes. A ranking of attributes is also carried out and used together with a naive Bayesian classifier, yielding results that confirm the soundness of the proposed method. The proposed measure is proven to be a reliable way of assessing the discriminative power of dermatological attributes, and it also helps generate a condensed dermatological lexicon. Therefore, it can be of added value to the manual or computer-aided diagnostic process.
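The entropy-based definition can be sketched directly (normalizing by the maximum entropy log(K) is an assumption for illustration; the paper defines the measure precisely):

```python
import numpy as np

def discriminative_power(p):
    """Entropy-based discriminative power of an attribute.

    p: occurrence probabilities of the attribute across K diseases.
    Returns 1 when the attribute occurs for only one disease and 0 when
    it is equally likely for all diseases. The normalization by the
    maximum entropy log(K) is an illustrative assumption.
    """
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    nz = p[p > 0]
    entropy = -(nz * np.log(nz)).sum()
    return 1.0 - entropy / np.log(len(p))

print(discriminative_power([1.0, 0.0, 0.0, 0.0]))     # maximally discriminative
print(discriminative_power([0.25, 0.25, 0.25, 0.25])) # uninformative
```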

  3. Regularization of divergent integrals

    OpenAIRE

    Felder, Giovanni; Kazhdan, David

    2016-01-01

    We study the Hadamard finite part of divergent integrals of differential forms with singularities on submanifolds. We give formulae for the dependence of the finite part on the choice of regularization and express them in terms of a suitable local residue map. The cases where the submanifold is a complex hypersurface in a complex manifold and where it is a boundary component of a manifold with boundary, arising in string perturbation theory, are treated in more detail.

  4. Thin accretion disk around regular black hole

    Directory of Open Access Journals (Sweden)

    QIU Tianqi

    2014-08-01

    Penrose's cosmic censorship conjecture says that naked singularities do not exist in nature. So, it seems reasonable to further conjecture that not even a singularity exists in nature. In this paper, a regular black hole without a singularity is studied in detail, especially its thin accretion disk, energy flux, radiation temperature and accretion efficiency. It is found that the interaction of the regular black hole is stronger than that of the Schwarzschild black hole. Furthermore, the thin accretion disk loses energy more efficiently as the mass of the black hole decreases. These particular properties may be used to distinguish between the black holes.

  5. Sex-dependent impact of early-life stress and adult immobilization in the attribution of incentive salience in rats.

    Science.gov (United States)

    Fuentes, Silvia; Carrasco, Javier; Hatto, Abigail; Navarro, Juan; Armario, Antonio; Monsonet, Manel; Ortiz, Jordi; Nadal, Roser

    2018-01-01

    Early life stress (ELS) induces long-term effects on later functioning and interacts with further exposure to other stressors in adulthood to shape our responsiveness to reward-related cues. The attribution of incentive salience to food-related cues may be modulated by previous and current exposures to stressors in a sex-dependent manner. We hypothesized from human data that exposure to a traumatic (severe) adult stressor would decrease the attribution of incentive salience to reward-associated cues, especially in females, because these effects are modulated by previous ELS. To study these factors in Long-Evans rats, we used restriction of nesting material as an ELS model and concurrently evaluated maternal care. In adulthood, the offspring of both sexes were exposed to acute immobilization (IMO), and several days after, a Pavlovian conditioning procedure was used to assess the incentive salience of food-related cues. Some rats developed more attraction to the cue predictive of reward (sign-tracking) and others were attracted to the location of the reward itself, the food-magazine (goal-tracking). Several dopaminergic markers were evaluated by in situ hybridization. The results showed that ELS increased maternal care and decreased body weight gain (only in females). Regarding incentive salience, in absolute control animals, females presented slightly greater sign-tracking behavior than males. Non-ELS male rats exposed to IMO showed a bias towards goal-tracking, whereas in females, IMO produced a bias towards sign-tracking. Animals of both sexes not exposed to IMO displayed an intermediate phenotype. ELS in IMO-treated females was able to reduce sign-tracking and decrease tyrosine hydroxylase expression in the ventral tegmental area and dopamine D1 receptor expression in the accumbens shell. Although the predicted greater decrease in females in sign-tracking after IMO exposure was not corroborated by the data, the results highlight the idea that sex is an

  6. Sex-dependent impact of early-life stress and adult immobilization in the attribution of incentive salience in rats.

    Directory of Open Access Journals (Sweden)

    Silvia Fuentes

    Early life stress (ELS) induces long-term effects on later functioning and interacts with further exposure to other stressors in adulthood to shape our responsiveness to reward-related cues. The attribution of incentive salience to food-related cues may be modulated by previous and current exposures to stressors in a sex-dependent manner. We hypothesized from human data that exposure to a traumatic (severe) adult stressor would decrease the attribution of incentive salience to reward-associated cues, especially in females, because these effects are modulated by previous ELS. To study these factors in Long-Evans rats, we used restriction of nesting material as an ELS model and concurrently evaluated maternal care. In adulthood, the offspring of both sexes were exposed to acute immobilization (IMO), and several days after, a Pavlovian conditioning procedure was used to assess the incentive salience of food-related cues. Some rats developed more attraction to the cue predictive of reward (sign-tracking) and others were attracted to the location of the reward itself, the food-magazine (goal-tracking). Several dopaminergic markers were evaluated by in situ hybridization. The results showed that ELS increased maternal care and decreased body weight gain (only in females). Regarding incentive salience, in absolute control animals, females presented slightly greater sign-tracking behavior than males. Non-ELS male rats exposed to IMO showed a bias towards goal-tracking, whereas in females, IMO produced a bias towards sign-tracking. Animals of both sexes not exposed to IMO displayed an intermediate phenotype. ELS in IMO-treated females was able to reduce sign-tracking and decrease tyrosine hydroxylase expression in the ventral tegmental area and dopamine D1 receptor expression in the accumbens shell. Although the predicted greater decrease in females in sign-tracking after IMO exposure was not corroborated by the data, the results highlight the idea that sex

  7. The Attribute for Hydrocarbon Prediction Based on Attenuation

    International Nuclear Information System (INIS)

    Hermana, Maman; Harith, Z Z T; Sum, C W; Ghosh, D P

    2014-01-01

    Hydrocarbon prediction is a crucial issue in the oil and gas industry. Currently, the prediction of pore fluid and lithology is based on amplitude interpretation, which has the potential to produce pitfalls under certain reservoir conditions. Motivated by this fact, this work is directed at identifying other attributes that can be used to reduce the pitfalls of amplitude interpretation. Several seismic attributes were examined, and the studies showed that attenuation is a better attribute for hydrocarbon prediction. Theoretically, the attenuation mechanism of wave propagation is associated with the movement of fluid in the pores; hence the existence of hydrocarbon in the pores is represented directly by an attenuation attribute. In this paper we evaluated the feasibility of the quality-factor ratio of P-wave and S-wave (Qp/Qs) as a hydrocarbon indicator using well data, and we developed a new attribute based on attenuation for hydrocarbon prediction: the Normalized Energy Reduction Stack (NERS). To achieve these goals, this work was divided into three main parts: estimating Qp/Qs from well log data, testing the new attribute on synthetic data, and applying the new attribute to real data from the Malay Basin. The results show that Qp/Qs is a better hydrocarbon indicator than Poisson's ratio and Lambda over Mu. The curve, trend analysis and contrast of Qp/Qs are more powerful at distinguishing pore fluids than Poisson's ratio and Lambda over Mu. The NERS attribute was successful in distinguishing hydrocarbon from brine in synthetic data. Applied to real data from the Malay Basin, the NERS attribute is qualitatively conformable with the structure and the location where gas is predicted. The quantitative interpretation of this attribute for hydrocarbon prediction needs to be investigated further.

  8. Hamilton-Jacobi theorems for regular reducible Hamiltonian systems on a cotangent bundle

    Science.gov (United States)

    Wang, Hong

    2017-09-01

    In this paper, some formulations of Hamilton-Jacobi equations for Hamiltonian systems and regular reduced Hamiltonian systems are given. First, an important lemma is proved as a modification of the corresponding result of Abraham and Marsden (1978), so that two types of geometric Hamilton-Jacobi theorem can be proved for a Hamiltonian system on the cotangent bundle of a configuration manifold, using the symplectic form and the dynamical vector field. These results are then generalized to the regular reducible Hamiltonian system with symmetry and momentum map, using the reduced symplectic form and the reduced dynamical vector field. The Hamilton-Jacobi theorems are proved, and two types of Hamilton-Jacobi equations, for the regular point reduced Hamiltonian system and the regular orbit reduced Hamiltonian system, are obtained. As an application of the theoretical results, the regular point reducible Hamiltonian system on a Lie group is considered, and two types of Lie-Poisson Hamilton-Jacobi equation for the regular point reduced system are given. In particular, the Type I and Type II Lie-Poisson Hamilton-Jacobi equations are shown for the regular point reduced rigid body and heavy top systems, respectively.

  9. Noncognitive Attributes in Physician Assistant Education.

    Science.gov (United States)

    Brenneman, Anthony E; Goldgar, Constance; Hills, Karen J; Snyder, Jennifer H; VanderMeulen, Stephane P; Lane, Steven

    2018-03-01

    Physician assistant (PA) admissions processes have typically given more weight to cognitive attributes than to noncognitive ones, both because a high level of cognitive ability is needed for a career in medicine and because cognitive factors are easier to measure. However, there is a growing consensus across the health professions that noncognitive attributes such as emotional intelligence, empathy, and professionalism are important for success in clinical practice and optimal care of patients. There is also some evidence that a move toward more holistic admissions practices, including evaluation of noncognitive attributes, can have a positive effect on diversity. The need for these noncognitive attributes in clinicians is being reinforced by changes in the US health care system, including shifting patient demographics and a growing emphasis on team-based care and patient satisfaction, and the need for clinicians to help patients interpret complex medical information. The 2016 Physician Assistant Education Association Stakeholder Summit revealed certain behavioral and affective qualities that employers of PAs value and sometimes find lacking in new graduates. Although there are still gaps in the evidence base, some tools and technologies currently exist to more accurately measure noncognitive variables. We propose some possible strategies and tools that PA programs can use to formalize the way they select for noncognitive attributes. Since PA programs have, on average, only 27 months to educate students, programs may need to focus more resources on selecting for these attributes than teaching them.

  10. Attributes of patient-centered primary care associated with the public perception of good healthcare quality in Brazil, Colombia, Mexico and El Salvador.

    Science.gov (United States)

    Doubova, Svetlana V; Guanais, Frederico C; Pérez-Cuevas, Ricardo; Canning, David; Macinko, James; Reich, Michael R

    2016-09-01

    This study evaluated primary care attributes of patient-centered care associated with the public perception of good quality in Brazil, Colombia, Mexico and El Salvador. We conducted a secondary data analysis of a Latin American survey on public perceptions and experiences with healthcare systems. The primary care attributes examined were access, coordination, provider-patient communication, provision of health-related information and emotional support. A double-weighted multiple Poisson regression with robust variance model was performed. The study included between 1500 and 1503 adults in each country. The results identified four significant gaps in the provision of primary care: not all respondents had a regular place of care or a regular primary care doctor (Brazil 35.7%, Colombia 28.4%, Mexico 22% and El Salvador 45.4%). The communication with the primary care clinic was difficult (Brazil 44.2%, Colombia 41.3%, Mexico 45.1% and El Salvador 56.7%). There was a lack of coordination of care (Brazil 78.4%, Colombia 52.3%, Mexico 48% and El Salvador 55.9%). Also, there was a lack of information about healthy diet (Brazil 21.7%, Colombia 32.9%, Mexico 16.9% and El Salvador 20.8%). The public's perception of good quality was variable (Brazil 67%, Colombia 71.1%, Mexico 79.6% and El Salvador 79.5%). The primary care attributes associated with the perception of good quality were a primary care provider 'who knows relevant information about a patient's medical history', 'solves most of the health problems', 'spends enough time with the patient', 'coordinates healthcare' and a 'primary care clinic that is easy to communicate with'. In conclusion, the public has a positive perception of the quality of primary care, although it has unfulfilled expectations; further efforts are necessary to improve the provision of patient-centered primary care services in these four Latin American countries. © The Author 2016. Published by Oxford University Press. All rights reserved.

  11. State-level Medicaid expenditures attributable to smoking.

    Science.gov (United States)

    Armour, Brian S; Finkelstein, Eric A; Fiebelkorn, Ian C

    2009-07-01

    Medicaid recipients are disproportionately affected by tobacco-related disease because their smoking prevalence is approximately 53% greater than that of the overall US adult population. This study estimates state-level smoking-attributable Medicaid expenditures. We used state-level and national data and a 4-part econometric model to estimate the fraction of each state's Medicaid expenditures attributable to smoking. These fractions were multiplied by state-level Medicaid expenditure estimates obtained from the Centers for Medicare and Medicaid Services to estimate smoking-attributable expenditures. The smoking-attributable fraction for all states was 11.0% (95% confidence interval, 0.4%-17.0%). Medicaid smoking-attributable expenditures ranged from $40 million (Wyoming) to $3.3 billion (New York) in 2004 and totaled $22 billion nationwide. Cigarette smoking accounts for a sizeable share of annual state Medicaid expenditures. To reduce smoking prevalence among recipients and the growth rate in smoking-attributable Medicaid expenditures, state health departments and state health plans such as Medicaid are encouraged to provide free or low-cost access to smoking cessation counseling and medication.
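
    The accounting step this record describes is a simple multiplication of a smoking-attributable fraction (SAF) by total expenditures; a minimal sketch follows. The $200 billion base is inferred from the abstract's 11% fraction and $22 billion total, not stated in the study.

```python
# Minimal sketch of the accounting step: smoking-attributable expenditures
# equal the smoking-attributable fraction (SAF) times total expenditures.
# The $200 billion base below is inferred, not taken from the study.

def smoking_attributable_expenditure(saf, total_expenditure):
    """Share of expenditures attributable to smoking."""
    if not 0.0 <= saf <= 1.0:
        raise ValueError("SAF must be a fraction between 0 and 1")
    return saf * total_expenditure

# The abstract's 11% national SAF and $22 billion total are consistent
# with roughly $200 billion of total Medicaid expenditures.
national = smoking_attributable_expenditure(0.11, 200e9)
print(f"${national / 1e9:.0f} billion")  # → $22 billion
```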

  12. Attribution bias and social anxiety in schizophrenia

    Directory of Open Access Journals (Sweden)

    Amelie M. Achim

    2016-06-01

    Studies of attribution biases in schizophrenia have produced mixed results, whereas such biases have been reported more consistently in people with anxiety disorders. Anxiety comorbidities are frequent in schizophrenia, in particular social anxiety disorder, which could influence patterns of attribution bias. The objective of the present study was thus to determine whether individuals with schizophrenia and a comorbid social anxiety disorder (SZ+) show distinct attribution biases compared with individuals with schizophrenia without social anxiety (SZ−) and healthy controls. Attribution biases were assessed with the Internal, Personal, and Situational Attributions Questionnaire in 41 individuals with schizophrenia and 41 healthy controls. Results revealed the lack of the normal externalizing bias in SZ+, whereas SZ− did not differ significantly from healthy controls on this dimension. The personalizing bias was not influenced by social anxiety but was instead linked with delusions, with a greater personalizing bias in individuals with current delusions. Future studies of attribution biases in schizophrenia should carefully document symptom presentation, including social anxiety.

  13. Circuit complexity of regular languages

    Czech Academy of Sciences Publication Activity Database

    Koucký, Michal

    2009-01-01

    Roč. 45, č. 4 (2009), s. 865-879 ISSN 1432-4350 R&D Projects: GA ČR GP201/07/P276; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : regular languages * circuit complexity * upper and lower bounds Subject RIV: BA - General Mathematics Impact factor: 0.726, year: 2009

  14. Turán type inequalities for regular Coulomb wave functions

    OpenAIRE

    Baricz, Árpád

    2015-01-01

    Turán, Mitrinović-Adamović and Wilker type inequalities are deduced for regular Coulomb wave functions. The proofs are based on a Mittag-Leffler expansion for the regular Coulomb wave function, which may be of independent interest. Moreover, some complete monotonicity results concerning the Coulomb zeta functions and some interlacing properties of the zeros of Coulomb wave functions are given.

  15. Image deblurring using a perturbation-based regularization approach

    KAUST Repository

    Alanazi, Abdulrahman

    2017-11-02

    The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.
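
    As a rough illustration of the regularized linear least-squares formulation this record describes, here is a 1-D Tikhonov deconvolution sketch. The paper's perturbation matrix and bootstrap-based parameter selection are not reproduced; the blur kernel, noise level, and regularization weight below are arbitrary choices.

```python
import numpy as np

# 1-D Tikhonov deconvolution: x_hat = argmin ||H x - y||^2 + lam ||x||^2,
# solved in closed form via the normal equations. Plain Tikhonov only --
# the paper's perturbation-based enhancement is not reproduced here.

rng = np.random.default_rng(0)
n = 64

# Circulant blur matrix: a symmetric 5-tap moving average (arbitrary kernel).
kernel = [0.1, 0.2, 0.4, 0.2, 0.1]
H = np.zeros((n, n))
for i in range(n):
    for k, w in enumerate(kernel, start=-2):
        H[i, (i + k) % n] = w

x_true = np.zeros(n)
x_true[20:40] = 1.0                       # piecewise-constant "edge" signal
y = H @ x_true + 0.01 * rng.standard_normal(n)

lam = 1e-3                                # illustrative regularization weight
x_hat = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(round(float(rel_err), 3))
```

    Larger `lam` suppresses noise amplification at the cost of blurring edges; the paper's contribution is precisely a more principled way to stabilize the ill-conditioned system and pick this parameter.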

  16. Centered Differential Waveform Inversion with Minimum Support Regularization

    KAUST Repository

    Kazei, Vladimir

    2017-05-26

    Time-lapse full-waveform inversion has two major challenges. The first is the reconstruction of a reference model (the baseline model for most approaches). The second is inversion for the time-lapse changes in the parameters. The common-model approach utilizes the information contained in all available data sets to build a better reference model for time-lapse inversion. Differential (double-difference) waveform inversion reduces the artifacts introduced into estimates of time-lapse parameter changes by imperfect inversion for the baseline-reference model. We propose centered differential waveform inversion (CDWI), which combines these two approaches in order to benefit from both of their features. We apply minimum support regularization, commonly used with electromagnetic methods of geophysical exploration. We test the CDWI method on a synthetic dataset with random noise and show that minimum support regularization provides better resolution of velocity changes than total variation and Tikhonov regularizations in time-lapse full-waveform inversion.

  17. Manifestly scale-invariant regularization and quantum effective operators

    CERN Document Server

    Ghilencea, D.M.

    2016-01-01

    Scale invariant theories are often used to address the hierarchy problem; however, the regularization of their quantum corrections introduces a dimensionful coupling (dimensional regularization) or scale (Pauli-Villars, etc.) which breaks this symmetry explicitly. We show how to avoid this problem and study the implications of a manifestly scale invariant regularization in (classical) scale invariant theories. We use a dilaton-dependent subtraction function $\\mu(\\sigma)$ which after spontaneous breaking of scale symmetry generates the usual DR subtraction scale $\\mu(\\langle\\sigma\\rangle)$. One consequence is that "evanescent" interactions generated by scale invariance of the action in $d=4-2\\epsilon$ (but vanishing in $d=4$) give rise to new, finite quantum corrections. We find a (finite) correction $\\Delta U(\\phi,\\sigma)$ to the one-loop scalar potential for $\\phi$ and $\\sigma$, beyond the Coleman-Weinberg term. $\\Delta U$ is due to an evanescent correction ($\\propto\\epsilon$) to the field-dependent masses (of...

  18. Image deblurring using a perturbation-based regularization approach

    KAUST Repository

    Alanazi, Abdulrahman; Ballal, Tarig; Masood, Mudassir; Al-Naffouri, Tareq Y.

    2017-01-01

    The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.

  19. Enhancing Low-Rank Subspace Clustering by Manifold Regularization.

    Science.gov (United States)

    Liu, Junmin; Chen, Yijun; Zhang, JiangShe; Xu, Zongben

    2014-07-25

    Recently, the low-rank representation (LRR) method has achieved great success in subspace clustering (SC), which aims to cluster data points that lie in a union of low-dimensional subspaces. Given a set of data points, LRR seeks the lowest-rank representation among the many possible linear combinations of the bases in a given dictionary or in terms of the data itself. However, LRR considers only the global Euclidean structure, while the local manifold structure, which is often important for many real applications, is ignored. In this paper, to exploit the local manifold structure of the data, a manifold regularization characterized by a Laplacian graph is incorporated into LRR, leading to our proposed Laplacian-regularized LRR (LapLRR). An efficient optimization procedure, based on the alternating direction method of multipliers (ADMM), is developed for LapLRR. Experimental results on synthetic and real data sets demonstrate that the performance of LRR is enhanced by the manifold regularization.

  20. Wavelet domain image restoration with adaptive edge-preserving regularization.

    Science.gov (United States)

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.

  1. Fuzzy Linguistic Optimization on Multi-Attribute Machining

    Directory of Open Access Journals (Sweden)

    Tian-Syung Lan

    2010-06-01

    Most existing multi-attribute optimization research for the modern CNC (computer numerical control) turning industry was either accomplished within certain manufacturing circumstances or achieved through numerous equipment operations. A general deduction optimization scheme is therefore deemed necessary for the industry. In this paper, four parameters (cutting depth, feed rate, speed, tool nose runoff) with three levels (low, medium, high) are considered to optimize the multi-attribute (surface roughness, tool wear, and material removal rate) finish turning. Through FAHP (Fuzzy Analytic Hierarchy Process) with eighty intervals for each attribute, the weight of each attribute is evaluated from the paired-comparison matrix constructed by expert judgment. Additionally, twenty-seven fuzzy control rules using a trapezoid membership function with seventeen linguistic grades for each attribute are constructed. Considering thirty input and eighty output intervals, defuzzification using the center of gravity is then completed. TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) is moreover utilized to integrate and evaluate the multiple machining attributes for the Taguchi experiment, from which the optimum general deduction parameters can be obtained. A confirmation experiment for the optimum general deduction parameters was furthermore performed on an ECOCA-3807 CNC lathe. It is shown that the attributes from the fuzzy linguistic optimization parameters are all significantly improved compared with those from the benchmark. This paper not only proposes a general deduction optimization scheme using an orthogonal array, but also contributes a satisfactory fuzzy linguistic approach for multiple CNC turning attributes with profound insight.
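
    The TOPSIS step used in this record can be sketched as follows. The decision matrix and weights are invented for illustration; only the three machining attributes (surface roughness, tool wear, material removal rate) come from the abstract.

```python
import numpy as np

# Minimal TOPSIS ranking sketch. Roughness and wear are costs (smaller is
# better); material removal rate is a benefit (larger is better).
# All numbers are hypothetical.

# rows = candidate parameter settings, cols = attributes
X = np.array([
    [1.2, 0.30, 95.0],
    [0.9, 0.45, 80.0],
    [1.5, 0.25, 110.0],
])
weights = np.array([0.4, 0.3, 0.3])        # assumed FAHP-style weights
benefit = np.array([False, False, True])   # cost, cost, benefit

# 1) vector-normalize each column, 2) apply attribute weights
V = weights * X / np.linalg.norm(X, axis=0)

# 3) ideal and anti-ideal points per attribute
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 4) closeness coefficient: distance to anti-ideal over total distance
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
best = int(np.argmax(closeness))
print(best, np.round(closeness, 3))
```

    The alternative with the largest closeness coefficient is ranked best; in the paper this integration is performed over the Taguchi experiment's fuzzy-weighted attributes rather than a raw matrix like the one above.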

  2. A Chance for Attributable Agency.

    Science.gov (United States)

    Briegel, Hans J; Müller, Thomas

    Can we sensibly attribute some of the happenings in our world to the agency of some of the things around us? We do this all the time, but there are conceptual challenges purporting to show that attributable agency, and specifically one of its most important subspecies, human free agency, is incoherent. We address these challenges in a novel way: rather than merely rebutting specific arguments, we discuss a concrete model that we claim positively illustrates attributable agency in an indeterministic setting. The model, recently introduced by one of the authors in the context of artificial intelligence, shows that an agent with a sufficiently complex memory organization can employ indeterministic happenings in a meaningful way. We claim that these considerations successfully counter arguments against the coherence of libertarian (indeterminism-based) free will.

  3. Estimating the incidence of lung cancer attributable to occupational exposure in Iran

    Directory of Open Access Journals (Sweden)

    Mousavi-Jarrahi Yasaman

    2009-05-01

    Objective The aim of this study was to estimate the fraction of lung cancer incidence in Iran attributable to occupational exposure to well-established lung carcinogens, including silica, cadmium, nickel, arsenic, chromium, diesel fumes, beryllium, and asbestos. Methods Nationwide exposure to each of these carcinogens was estimated using workforce data from the Iranian population census of 1995, available from the International Labor Organization (ILO) website. The prevalence of exposure to carcinogens in each industry was estimated using exposure data from the CAREX (CARcinogen EXposure) database, an international occupational carcinogen information system kept and maintained by the European Union. The magnitude of the relative risk of lung cancer for each carcinogen was estimated from local and international literature. Using the Levin modified population attributable risk (incidence fraction), lung cancer incidence (as estimated by the Tehran Population-Based Cancer Registry) attributable to workplace exposure to carcinogens was estimated. Results The total workforce in Iran according to the 1995 census comprised 12,488,020 men and 677,469 women. Agriculture is the largest sector, with 25% of the male and 0.27% of the female workforce. After applying the CAREX exposure estimate to each sector, the proportion exposed to lung carcinogens was 0.08% for male workers and 0.02% for female workers. Assuming a relative risk of 1.9 (95% CI 1.7–2.1) for high exposure and 1.3 (95% CI 1.2–1.4) for low exposure, and employing the Levin modified formula, the fraction of lung cancer attributed to carcinogens in the workplace was 1.5% (95% CI 1.2–1.9) for females and 12% (95% CI 10–15) for males. These fractions correspond to an estimated incidence of 1.3 and 0.08 cases of lung cancer per 100,000 population for males and females, respectively. Conclusion The incidence of lung cancer due to occupational exposure is low in
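
    The Levin formula named in the abstract can be written PAF = p(RR − 1) / (1 + p(RR − 1)); a minimal sketch with illustrative values follows (the study itself combines several exposure strata, which is not reproduced here).

```python
# Levin's population attributable fraction (PAF):
#   PAF = p * (RR - 1) / (1 + p * (RR - 1)),
# where p is the prevalence of exposure and RR the relative risk.
# The inputs below are illustrative only.

def levin_paf(p, rr):
    """Levin's attributable fraction for exposure prevalence p, relative risk rr."""
    excess = p * (rr - 1.0)
    return excess / (1.0 + excess)

# Sanity checks: no exposure, or no excess risk, gives no attributable fraction.
assert levin_paf(0.0, 1.9) == 0.0
assert levin_paf(0.3, 1.0) == 0.0

# Illustrative values: 10% of the population exposed at RR = 1.9.
print(round(levin_paf(0.10, 1.9), 3))  # → 0.083
```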

  4. BER analysis of regularized least squares for BPSK recovery

    KAUST Repository

    Ben Atitallah, Ismail; Thrampoulidis, Christos; Kammoun, Abla; Al-Naffouri, Tareq Y.; Hassibi, Babak; Alouini, Mohamed-Slim

    2017-01-01

    This paper investigates the problem of recovering an n-dimensional BPSK signal x_0 ∈ {−1, 1}^n from an m-dimensional measurement vector y = Ax_0 + z, where A and z are assumed to be Gaussian with iid entries. We consider two variants of decoders based on regularized least squares followed by hard thresholding: the case where the convex relaxation is from {−1, 1}^n to ℝ^n, and the box-constrained case where the relaxation is to [−1, 1]^n. For both cases, we derive an exact expression for the bit error probability when n and m grow simultaneously large at a fixed ratio. For the box-constrained case, we show that there exists a critical value of the SNR, above which the optimal regularizer is zero. On the other hand, regularization can further improve the performance of the box relaxation at low to moderate SNR regimes. We also prove that the optimal regularizer in the bit-error-rate sense for the unboxed case is nothing but the MMSE detector.

  5. BER analysis of regularized least squares for BPSK recovery

    KAUST Repository

    Ben Atitallah, Ismail

    2017-06-20

    This paper investigates the problem of recovering an n-dimensional BPSK signal x_0 ∈ {−1, 1}^n from an m-dimensional measurement vector y = Ax_0 + z, where A and z are assumed to be Gaussian with iid entries. We consider two variants of decoders based on regularized least squares followed by hard thresholding: the case where the convex relaxation is from {−1, 1}^n to ℝ^n, and the box-constrained case where the relaxation is to [−1, 1]^n. For both cases, we derive an exact expression for the bit error probability when n and m grow simultaneously large at a fixed ratio. For the box-constrained case, we show that there exists a critical value of the SNR, above which the optimal regularizer is zero. On the other hand, regularization can further improve the performance of the box relaxation at low to moderate SNR regimes. We also prove that the optimal regularizer in the bit-error-rate sense for the unboxed case is nothing but the MMSE detector.
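
    The decoder family analyzed in this record, regularized least squares followed by hard thresholding, can be sketched as below. The dimensions, noise level, and ridge regularizer value are illustrative choices; the paper's asymptotic BER expressions and the box-constrained variant are not reproduced.

```python
import numpy as np

# Regularized least squares + hard thresholding for BPSK recovery from
# y = A x0 + z. Unboxed relaxation to R^n with a ridge penalty, then sign().

rng = np.random.default_rng(1)
n, m = 64, 128
A = rng.standard_normal((m, n)) / np.sqrt(m)   # iid Gaussian measurements
x0 = rng.choice([-1.0, 1.0], size=n)           # BPSK signal
sigma = 0.1
y = A @ x0 + sigma * rng.standard_normal(m)

lam = sigma**2                       # illustrative ridge regularizer
x_ls = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
x_hat = np.sign(x_ls)                # hard-threshold onto {-1, +1}

ber = float(np.mean(x_hat != x0))
print(ber)
```

    At this (high) SNR the decoder recovers every bit; the regimes the paper characterizes are the low-to-moderate SNR ones, where the choice of regularizer materially changes the BER.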

  6. Lifshitz anomalies, Ward identities and split dimensional regularization

    Energy Technology Data Exchange (ETDEWEB)

    Arav, Igal; Oz, Yaron; Raviv-Moshe, Avia [Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University,55 Haim Levanon street, Tel-Aviv, 69978 (Israel)

    2017-03-16

    We analyze the structure of the stress-energy tensor correlation functions in Lifshitz field theories and construct the corresponding anomalous Ward identities. We develop a framework for calculating the anomaly coefficients that employs a split dimensional regularization and the pole residues. We demonstrate the procedure by calculating the free scalar Lifshitz scale anomalies in 2+1 spacetime dimensions. We find that the analysis of the regularization dependent trivial terms requires a curved spacetime description without a foliation structure. We discuss potential ambiguities in Lifshitz scale anomaly definitions.

  7. Lifshitz anomalies, Ward identities and split dimensional regularization

    International Nuclear Information System (INIS)

    Arav, Igal; Oz, Yaron; Raviv-Moshe, Avia

    2017-01-01

    We analyze the structure of the stress-energy tensor correlation functions in Lifshitz field theories and construct the corresponding anomalous Ward identities. We develop a framework for calculating the anomaly coefficients that employs a split dimensional regularization and the pole residues. We demonstrate the procedure by calculating the free scalar Lifshitz scale anomalies in 2+1 spacetime dimensions. We find that the analysis of the regularization dependent trivial terms requires a curved spacetime description without a foliation structure. We discuss potential ambiguities in Lifshitz scale anomaly definitions.

  8. Anonymous Credential Schemes with Encrypted Attributes

    NARCIS (Netherlands)

    Guajardo Merchan, J.; Mennink, B.; Schoenmakers, B.

    2011-01-01

    In anonymous credential schemes, users obtain credentials on certain attributes from an issuer, and later show these credentials to a relying party anonymously and without fully disclosing the attributes. In this paper, we introduce the notion of (anonymous) credential schemes with encrypted

  9. Total variation regularization for fMRI-based prediction of behavior

    Science.gov (United States)

    Michel, Vincent; Gramfort, Alexandre; Varoquaux, Gaël; Eger, Evelyn; Thirion, Bertrand

    2011-01-01

    While medical imaging typically provides massive amounts of data, the extraction of relevant information for predictive diagnosis remains a difficult challenge. Functional MRI (fMRI) data, that provide an indirect measure of task-related or spontaneous neuronal activity, are classically analyzed in a mass-univariate procedure yielding statistical parametric maps. This analysis framework disregards some important principles of brain organization: population coding, distributed and overlapping representations. Multivariate pattern analysis, i.e., the prediction of behavioural variables from brain activation patterns better captures this structure. To cope with the high dimensionality of the data, the learning method has to be regularized. However, the spatial structure of the image is not taken into account in standard regularization methods, so that the extracted features are often hard to interpret. More informative and interpretable results can be obtained with the ℓ1 norm of the image gradient, a.k.a. its Total Variation (TV), as regularization. We apply for the first time this method to fMRI data, and show that TV regularization is well suited to the purpose of brain mapping while being a powerful tool for brain decoding. Moreover, this article presents the first use of TV regularization for classification. PMID:21317080
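
    As a toy illustration of why a Total Variation penalty suits signals with jumps and edges, here is a 1-D smoothed-TV denoiser implemented by gradient descent. This is a generic sketch, not the paper's fMRI decoding method; lam, eps, the step size and the iteration count are arbitrary choices.

```python
import numpy as np

# 1-D smoothed Total Variation (TV) denoising by gradient descent on
#   0.5 * ||x - y||^2 + lam * sum_i sqrt((x[i+1] - x[i])^2 + eps).

rng = np.random.default_rng(2)
n = 100
truth = np.where(np.arange(n) < 50, 0.0, 1.0)     # one sharp jump
y = truth + 0.1 * rng.standard_normal(n)

def smoothed_tv_denoise(y, lam=0.3, eps=1e-2, steps=500, lr=0.05):
    x = y.copy()
    for _ in range(steps):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)              # derivative of smoothed |d|
        g = x - y                                 # gradient of the data term
        g[:-1] -= lam * w                         # chain rule through x[i]
        g[1:] += lam * w                          # ... and through x[i+1]
        x -= lr * g
    return x

x_hat = smoothed_tv_denoise(y)
tv = lambda v: float(np.abs(np.diff(v)).sum())
print(tv(x_hat) < tv(y))
```

    The penalty charges for the total amount of variation, not for how it is distributed, so a single sharp jump costs no more than a slow ramp of the same height; this is why TV flattens noise while preserving the edge.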

  10. Analytic regularization of the Yukawa model at finite temperature

    International Nuclear Information System (INIS)

    Malbouisson, A.P.C.; Svaiter, N.F.; Svaiter, B.F.

    1996-07-01

    The one-loop fermionic contribution to the scalar effective potential in the temperature-dependent Yukawa model is analysed. In order to regularize the model, a mix of dimensional and analytic regularization procedures is used. A general expression for the fermionic contribution in arbitrary spacetime dimension is found, and in D = 3 this contribution is finite. (author). 19 refs

  11. On Regularity Criteria for the Two-Dimensional Generalized Liquid Crystal Model

    Directory of Open Access Journals (Sweden)

    Yanan Wang

    2014-01-01

    We establish regularity criteria for the two-dimensional generalized liquid crystal model. It turns out that the global existence results satisfy our regularity criteria naturally.

  12. Development and Validation of the Masculine Attributes Questionnaire.

    Science.gov (United States)

    Cho, Junhan; Kogan, Steven M

    2017-07-01

    The present study describes the development and validation of the Masculine Attributes Questionnaire (MAQ). The purpose of this study was to develop a theoretically and empirically grounded measure of masculine attributes for sexual health research with African American young men. Consistent with Whitehead's theory, the MAQ items were hypothesized to comprise two components representing reputation-based and respect-based attributes. The sample included 505 African American men aged 19 to 22 years (M = 20.29, SD = 1.10) living in resource-poor communities in the rural South. Convergent and discriminant validity of the MAQ were assessed by examining the associations of masculinity attributes with psychosocial factors. Criterion validity was assessed by examining the extent to which the MAQ subscales predicted sexual risk behavior outcomes. Consistent with study hypotheses, the MAQ was composed of (a) reputation-based attributes oriented toward sexual prowess, toughness, and authority-defying behavior and (b) respect-based attributes oriented toward economic independence, socially approved levels of hard work and education, and committed romantic relationships. Reputation-based attributes were associated positively with street code and negatively related to academic orientation, vocational engagement, and self-regulation, whereas respect-based attributes were associated positively with academic and vocational orientations and self-regulation. Finally, reputation-based attributes predicted sexual risk behaviors including concurrent sexual partnerships, multiple sexual partners, marijuana use, and incarceration, net of the influence of respect-based attributes. The development of the MAQ provides a new measure that permits systematic quantitative investigation of the associations between African American men's masculinity ideology and sexual risk behavior.

  13. 17 CFR 270.10b-1 - Definition of regular broker or dealer.

    Science.gov (United States)

    2010-04-01

    ... COMMISSION (CONTINUED), RULES AND REGULATIONS, INVESTMENT COMPANY ACT OF 1940, § 270.10b-1 Definition of regular broker or dealer: The term regular broker or dealer of an investment company shall mean: (a) One... (17 CFR, Commodity and Securities Exchanges, vol. 3, 2010-04-01.)

  14. Regularity conditions of the field on a toroidal magnetic surface

    International Nuclear Information System (INIS)

    Bouligand, M.

    1985-06-01

    We show that a field B which is derived from an analytic canonical potential on an ordinary toroidal surface is regular on this surface when the potential satisfies an elliptic equation (owing to the conservative field) subject to certain conditions of regularity of its coefficients. [fr]

  15. Regularized plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin

    2013-09-22

    A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity model. Both synthetic and field results show that: 1) LSM with a reflectivity model common for all the plane-wave gathers provides the best image when the migration velocity model is accurate, but it is more sensitive to the velocity errors, 2) the regularized plane-wave LSM is more robust in the presence of velocity errors, and 3) LSM achieves both computational and IO saving by plane-wave encoding compared to shot-domain LSM for the models tested.

  16. Income inequality and alcohol attributable harm in Australia

    Directory of Open Access Journals (Sweden)

    Chikritzhs Tanya N

    2009-02-01

    Background There is little research on the relationship between key socioeconomic variables and alcohol-related harms in Australia. The aim of this research was to examine the relationship between income inequality and the rates of alcohol-attributable hospitalisation and death at a local-area level in Australia. Method We conducted a cross-sectional ecological analysis at a Local Government Area (LGA) level of associations between data on alcohol-caused harms and income inequality data, after adjusting for socioeconomic disadvantage and remoteness of LGAs. The main outcome measures used were matched rate ratios for four measures of alcohol-caused harm: acute (primarily related to the short-term consequences of drinking) and chronic (primarily related to the long-term consequences of drinking) alcohol-attributable hospitalisation, and acute and chronic alcohol-attributable death. Matching was undertaken using control conditions (non-alcohol-attributable) at an LGA level. Results A total of 885 alcohol-attributable deaths and 19,467 alcohol-attributable hospitalisations across all LGAs were available for analysis. After weighting by the total number of cases in each LGA, the matched rate ratios of acute and chronic alcohol-attributable hospitalisation and chronic alcohol-attributable death were associated with the squared centred Gini coefficients of LGAs. This relationship was evident after adjusting for socioeconomic disadvantage and remoteness of LGAs. For both measures of hospitalisation the relationship was curvilinear; increases in income inequality were initially associated with declining rates of hospitalisation, followed by large increases as the Gini coefficient increased beyond 0.15. The pattern for chronic alcohol-attributable death was similar, but without the initial decrease. There was no association between income inequality and acute alcohol-attributable death, probably due to the relatively small number of these types of death.

  17. Selection of key terrain attributes for SOC model

    DEFF Research Database (Denmark)

    Greve, Mogens Humlekrog; Adhikari, Kabindra; Chellasamy, Menaka

    As an important component of the global carbon pool, soil organic carbon (SOC) plays an important role in the global carbon cycle. The SOC pool is basic information for global warming research and for the sustainable use of land resources. Digital terrain attributes are often use...... was selected, a total of 2,514,820 data-mining models were constructed from 71 different grids, from 12 m to 2304 m, and 22 attributes (21 attributes derived from the DTM plus the original elevation). The relative importance and usage of each attribute in every model were calculated. Comprehensive impact rates of each attribute...

  18. Zirconium-barrier cladding attributes

    International Nuclear Information System (INIS)

    Rosenbaum, H.S.; Rand, R.A.; Tucker, R.P.; Cheng, B.; Adamson, R.B.; Davies, J.H.; Armijo, J.S.; Wisner, S.B.

    1987-01-01

    This metallurgical study of Zr-barrier fuel cladding evaluates the importance of three salient attributes: (1) metallurgical bond between the zirconium liner and the Zircaloy substrate, (2) liner thickness (roughly 10% of the total cladding wall), and (3) softness (purity). The effect that each of these attributes has on the pellet-cladding interaction (PCI) resistance of the Zr-barrier fuel was studied by a combination of analytical model calculations and laboratory experiments using an expanding mandrel technique. Each of the attributes is shown to contribute to PCI resistance. The effect of the zirconium liner on fuel behavior during off-normal events in which steam comes in contact with the zirconium surface was studied experimentally. Simulations of loss-of-coolant accident (LOCA) showed that the behavior of Zr-barrier cladding is virtually indistinguishable from that of conventional Zircaloy cladding. If steam contacts the zirconium liner surface through a cladding perforation and the fuel rod is operated under normal power conditions, the zirconium liner is oxidized more rapidly than is Zircaloy, but the oxidation rate returns to the rate of Zircaloy oxidation when the oxide phase reaches the zirconium-Zircaloy metallurgical bond

  19. Regular and context-free nominal traces

    DEFF Research Database (Denmark)

    Degano, Pierpaolo; Ferrari, Gian-Luigi; Mezzetti, Gianluca

    2017-01-01

    Two kinds of automata are presented, for recognising new classes of regular and context-free nominal languages. We compare their expressive power with analogous proposals in the literature, showing that they express novel classes of languages. Although many properties of classical languages hold ...

  20. Stabilization, pole placement, and regular implementability

    NARCIS (Netherlands)

    Belur, MN; Trentelman, HL

    In this paper, we study control by interconnection of linear differential systems. We give necessary and sufficient conditions for regular implementability of a-given linear, differential system. We formulate the problems of stabilization and pole placement as problems of finding a suitable,

  1. The effect of regular medication on the outcome of paracetamol poisoning

    DEFF Research Database (Denmark)

    Schmidt, L E; Dalhoff, K

    2002-01-01

    BACKGROUND: Patients admitted with paracetamol overdose frequently receive one or more types of regular medication that may affect the outcome of the paracetamol intoxication. AIM: To describe the use of regular medication in patients with paracetamol poisoning and to evaluate its effects...... on morbidity and mortality. METHODS: Seven hundred and thirty-seven consecutive patients admitted with paracetamol poisoning were studied and the use of regular medication was recorded. The relative risk of hepatic encephalopathy, death or liver transplantation, severe hepatic dysfunction and severe...... hepatocellular injury was evaluated by multivariate analysis. RESULTS: Regular medication was received by 332 patients (45%). Medication with benzodiazepines (105 cases), antidepressants (100 cases), neuroleptics (75 cases), paracetamol (58 cases), oral contraceptives (51 cases), beta-agonists (40 cases), opioid...

  2. Regularity dimension of sequences and its application to phylogenetic tree reconstruction

    International Nuclear Information System (INIS)

    Pham, Tuan D.

    2012-01-01

    The concept of dimension is a central development of chaos theory for studying nonlinear dynamical systems. Different types of dimensions have been derived to interpret different geometrical or physical observations. Approximate entropy and its modified methods have been introduced for studying the regularity and complexity of time-series data in physiology and biology. Here, the concepts of power laws and entropy measures are adopted to develop the regularity dimension of sequences, which models a mathematical relationship between the frequency with which information about signal regularity changes and the scale of observation. The proposed regularity dimension is applied to reconstruct phylogenetic trees using mitochondrial DNA (mtDNA) sequences for the family Hominidae, which can be validated against the hypothesized evolutionary relationships between organisms.
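The regularity dimension builds on entropy measures of the approximate-entropy family. The sketch below implements classic approximate entropy (the Pincus definition) for a 1-D sequence; it is the entropy ingredient only, not the paper's scale-dependent regularity dimension, and the tolerance and embedding parameters are conventional defaults, not the paper's choices.

```python
import numpy as np

def approx_entropy(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D sequence.

    Lower values indicate a more regular (self-similar) signal.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)

    def phi(m):
        # All length-m templates of the sequence.
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Fraction of templates within tolerance r, averaged in log space.
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# A periodic signal is more regular than white noise of the same length.
rng = np.random.default_rng(0)
periodic = np.sin(np.linspace(0, 20 * np.pi, 300))
noisy = rng.standard_normal(300)
ap_periodic = approx_entropy(periodic)
ap_noisy = approx_entropy(noisy)
```

Computing such an entropy at several observation scales and fitting a power law to the result is, roughly, the step the regularity dimension adds on top.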

  3. EXPOSE-R2: The Astrobiological ESA Mission on Board of the International Space Station

    Directory of Open Access Journals (Sweden)

    Elke Rabbow

    2017-08-01

    On July 23, 2014, the Progress cargo spacecraft 56P was launched from Baikonur to the International Space Station (ISS), carrying EXPOSE-R2, the third ESA (European Space Agency) EXPOSE facility and the second EXPOSE on the outside platform of the Russian Zvezda module, with four international astrobiological experiments into space. More than 600 biological samples of archaea, bacteria (as biofilms and in planktonic form), lichens, fungi, plant seeds, Triops eggs and mosses, and 150 samples of organic compounds were exposed to the harsh space environment and to parameters similar to those on the Mars surface. Radiation dosimeters distributed over the whole facility complemented the scientific payload. Three extravehicular activities later, the chemical samples were returned to Earth on March 2, 2016, with Soyuz 44S, having spent 588 days in space. The biological samples arrived back later, on June 18, 2016, with 45S, after a total duration in space of 531 days. The exposure of the samples to Low Earth Orbit vacuum lasted for 531 days and was divided into two parts: protected against solar irradiation during the first 62 days, followed by exposure to solar radiation during the subsequent 469 days. In parallel to the space mission, a Mission Ground Reference (MGR) experiment with flight-identical hardware and a complete flight-identical set of samples was performed at the premises of DLR (German Aerospace Center) in Cologne by MUSC (Microgravity User Support Center), according to the mission data either downloaded from the ISS (temperature data, facility status, inner pressure status) or provided by RedShift Design and Engineering BVBA, Belgium (calculated ultraviolet radiation fluence data). In this paper, the EXPOSE-R2 facility, the experimental samples, mission parameters, environmental parameters, and the overall mission and MGR sequences are described, building the background for the research papers of the individual experiments, their analysis and results.

  4. Quality attributes for mobile applications

    OpenAIRE

    Fernandes, João M.; Ferreira, André Leite

    2016-01-01

    A mobile application is a type of software application developed to run on a mobile device. The chapter discusses the main characteristics of mobile devices, since they have a great impact on mobile applications. It also presents the classification of mobile applications according to two main types: native and web-based applications. Finally, this chapter identifies the most relevant types of quality attributes for mobile applications. It shows that the relevant quality attributes for mobile ...

  5. Detection and attribution of observed impacts

    NARCIS (Netherlands)

    Cramer, W.; Yohe, G.W.; Auffhammer, M.; Huggel, C.; Molau, U.; Dias, M.A.F.S.; Leemans, R.

    2014-01-01

    This chapter synthesizes the scientific literature on the detection and attribution of observed changes in natural and human systems in response to observed recent climate change. For policy makers and the public, detection and attribution of observed impacts will be a key element to determine the

  6. Towards incorporating affective computing to virtual rehabilitation; surrogating attributed attention from posture for boosting therapy adaptation

    Science.gov (United States)

    Rivas, Jesús J.; Heyer, Patrick; Orihuela-Espina, Felipe; Sucar, Luis Enrique

    2015-01-01

    Virtual rehabilitation (VR) is a novel motor rehabilitation therapy in which the rehabilitation exercises occur through interaction with bespoke virtual environments. These virtual environments dynamically adapt their activity to match the therapy progress. Adaptation should be guided by the cognitive and emotional state of the patient, neither of which is directly observable. Here, we present our first steps towards inferring non-observable attentional state from unobtrusively observable seated posture, so that this knowledge can later be exploited by a VR platform to modulate its behaviour. The space of seated postures was discretized, and 648 pictures of acted representations were exposed to crowd-evaluation to determine the attributed state of attention. A semi-supervised classifier based on Naïve Bayes with structural improvement was learnt to unfold a predictive relation between posture and attributed attention. Internal validity was established following a 2×5 cross-fold strategy. From 4959 crowd votes, classification accuracy reached a promising 96.29% (µ±σ = 87.59±6.59) and F-measure reached 82.35% (µ±σ = 69.72±10.50). With the afforded rate of classification, we believe it is safe to claim posture as a reliable proxy for attributed attentional state. It follows that unobtrusive posture monitoring can be exploited to guide intelligent adaptation in a virtual rehabilitation platform. This study further helps to identify critical aspects of posture permitting inference of attention.
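The study's classifier is a semi-supervised Naïve Bayes with structural improvement. As a much simpler illustration of the underlying idea only, here is a plain categorical Naïve Bayes over discretized posture features; the feature names, values, and toy labels below are invented for illustration and are not the study's coding scheme.

```python
import numpy as np
from collections import defaultdict

def train_nb(samples, labels, alpha=1.0):
    """Fit a categorical Naive Bayes model with add-alpha smoothing."""
    classes = sorted(set(labels))
    n_feats = len(samples[0])
    counts = {c: [defaultdict(float) for _ in range(n_feats)] for c in classes}
    priors = {c: 0 for c in classes}
    for x, y in zip(samples, labels):
        priors[y] += 1
        for j, v in enumerate(x):
            counts[y][j][v] += 1
    return classes, priors, counts, alpha, len(samples)

def predict_nb(model, x):
    """Return the class maximizing the (log) posterior for sample x."""
    classes, priors, counts, alpha, n = model
    best, best_lp = None, -np.inf
    for c in classes:
        lp = np.log(priors[c] / n)
        for j, v in enumerate(x):
            vocab = len(counts[c][j]) + 1  # crude smoothing vocabulary size
            lp += np.log((counts[c][j][v] + alpha) / (priors[c] + alpha * vocab))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Toy data: (trunk_lean, head_tilt) -> crowd-attributed attention.
X = [("upright", "forward"), ("upright", "forward"), ("slumped", "down"),
     ("slumped", "down"), ("upright", "down"), ("slumped", "forward")]
y = ["attentive", "attentive", "inattentive", "inattentive", "attentive", "inattentive"]
model = train_nb(X, y)
pred = predict_nb(model, ("upright", "forward"))
pred2 = predict_nb(model, ("slumped", "down"))
```

The structural improvement in the paper relaxes the feature-independence assumption that this plain version makes; the semi-supervised aspect (learning from unlabeled postures) is also omitted here.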

  7. Regularities of radiation heredity

    International Nuclear Information System (INIS)

    Skakov, M.K.; Melikhov, V.D.

    2001-01-01

    Regularities of radiation heredity in metals and alloys are analyzed. It is concluded that irradiation causes thermodynamically irreversible changes in the structure of materials. Possible ways in which radiation effects are inherited through high-temperature transformations in the materials are proposed. The phenomenon of radiation heredity may be put to practical use to control the structure of liquid metal and, correspondingly, the structure of the ingot via preliminary radiation treatment of the charge. Concentration microheterogeneities in the material defect structure induced by preliminary irradiation represent the genetic factor of radiation heredity.

  8. Regularity of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

    2010-01-01

    "Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t

  9. Development of the Attributed Dignity Scale.

    Science.gov (United States)

    Jacelon, Cynthia S; Dixon, Jane; Knafl, Kathleen A

    2009-07-01

    A sequential, multi-method approach to instrument development beginning with concept analysis, followed by (a) item generation from qualitative data, (b) review of items by expert and lay person panels, (c) cognitive appraisal interviews, (d) pilot testing, and (e) evaluating construct validity was used to develop a measure of attributed dignity in older adults. The resulting positively scored, 23-item scale has three dimensions: Self-Value, Behavioral Respect-Self, and Behavioral Respect-Others. Item-total correlations in the pilot study ranged from 0.39 to 0.85. Correlations between the Attributed Dignity Scale (ADS) and both Rosenberg's Self-Esteem Scale (0.17) and Crowne and Marlowe's Social Desirability Scale (0.36) were modest and in the expected direction, indicating attributed dignity is a related but independent concept. Next steps include testing the ADS with a larger sample to complete factor analysis, test-retest stability, and further study of the relationships between attributed dignity and other concepts.

  10. State fusion entropy for continuous and site-specific analysis of landslide stability changing regularities

    Science.gov (United States)

    Liu, Yong; Qin, Zhimeng; Hu, Baodan; Feng, Shuai

    2018-04-01

    Stability analysis is of great significance to landslide hazard prevention, especially dynamic stability. However, it is difficult for many existing stability analysis methods to analyse continuous landslide stability and its changing regularities under a uniform criterion, owing to unique landslide geological conditions. Based on the relationship between displacement monitoring data, deformation states and landslide stability, a state fusion entropy method is herein proposed to derive landslide instability through a comprehensive multi-attribute entropy analysis of deformation states, which are defined by a proposed joint clustering method combining K-means and a cloud model. Taking the Xintan landslide as a detailed case study, cumulative state fusion entropy presents an obvious increasing trend after the landslide entered the accelerative deformation stage, and historical maxima coincide closely with landslide macroscopic deformation behaviours at key time nodes. Reasonable results are also obtained in its application to several other landslides in the Three Gorges Reservoir in China. Combined with field surveys, state fusion entropy may serve for assessing landslide stability and judging landslide evolutionary stages.
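The core quantity is an entropy over discretized deformation states, fused across monitored attributes. Below is a minimal sketch assuming equal weighting of attributes and states already labeled; the paper's K-means/cloud-model clustering that produces those state labels, and its specific fusion scheme, are not reproduced here.

```python
import numpy as np

def shannon_entropy(labels, n_states):
    """Shannon entropy (in nats) of a discrete state-label sequence."""
    p = np.bincount(labels, minlength=n_states) / len(labels)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def state_fusion_entropy(attribute_states, n_states):
    """Equal-weight average entropy across monitored attributes.

    `attribute_states`: one label array per attribute, e.g. discretized
    displacement states at several survey points over one epoch.
    """
    return float(np.mean([shannon_entropy(s, n_states) for s in attribute_states]))

# Toy example: a stable epoch (one dominant deformation state) versus an
# accelerating epoch in which the states become mixed.
stable = [np.zeros(30, dtype=int), np.zeros(30, dtype=int)]
accelerating = [np.array([0] * 10 + [1] * 10 + [2] * 10),
                np.array([0] * 5 + [1] * 15 + [2] * 10)]
h_stable = state_fusion_entropy(stable, 3)
h_accel = state_fusion_entropy(accelerating, 3)
```

Accumulating such per-epoch values over time gives the cumulative curve whose increasing trend the abstract associates with the accelerative deformation stage.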

  11. Interpersonal reactivity and the attribution of emotional reactions.

    Science.gov (United States)

    Haas, Brian W; Anderson, Ian W; Filkowski, Megan M

    2015-06-01

    The ability to identify the cause of another person's emotional reaction is an important component associated with improved success of social relationships and survival. Although many studies have investigated the mechanisms involved in emotion recognition, very little is currently known regarding the processes involved during emotion attribution decisions. Research on complementary "emotion understanding" mechanisms, including empathy and theory of mind, has demonstrated that emotion understanding decisions are often made through relatively emotion- or cognitive-based processing streams. The current study was designed to investigate the behavioral and brain mechanisms involved in emotion attribution decisions. We predicted that dual processes, emotional and cognitive, are engaged during emotion attribution decisions. Sixteen healthy adults completed the Interpersonal Reactivity Index to characterize individual differences in tendency to make emotion- versus cognitive-based interpersonal decisions. Participants then underwent functional MRI while making emotion attribution decisions. We found neuroimaging evidence that emotion attribution decisions engage a similar brain network as other forms of emotion understanding. Further, we found evidence in support of a dual processes model involved during emotion attribution decisions. Higher scores of personal distress were associated with quicker emotion attribution decisions and increased anterior insula activity. Conversely, higher scores in perspective taking were associated with delayed emotion attribution decisions and increased prefrontal cortex and premotor activity. These findings indicate that the making of emotion attribution decisions relies on dissociable emotional and cognitive processing streams within the brain. (c) 2015 APA, all rights reserved.

  12. Parenting Attributions and Attitudes in Cross-Cultural Perspective

    Science.gov (United States)

    Bornstein, Marc H.; Putnick, Diane L.; Lansford, Jennifer E.

    2011-01-01

    Objective This article used the Parenting Across Cultures Project to evaluate similarities and differences in mean levels and relative agreement between mothers' and fathers' attributions and attitudes in parenting in 9 countries. Design Mothers and fathers reported their perceptions of causes of successes and failures in caregiving and their progressive versus authoritarian childrearing attitudes. Gender and cultural similarities and differences in parents' attributions and attitudes in 9 countries were analyzed: China, Colombia, Italy, Jordan, Kenya, the Philippines, Sweden, Thailand, and the United States. Results Although mothers and fathers did not differ in any attribution, mothers reported more progressive parenting attitudes and modernity of childrearing attitudes than did fathers, and fathers reported more authoritarian attitudes than did mothers. Country differences also emerged in all attributions and attitudes that were examined. Mothers' and fathers' attributions and their attitudes were moderately correlated, but parenting attitudes were more highly correlated between parents than were attributions. Conclusions We draw connections among the findings across the 9 countries and outline implications for understanding similarities and differences in mothers' and fathers' parenting attributions and attitudes. PMID:21927591

  13. RES: Regularized Stochastic BFGS Algorithm

    Science.gov (United States)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
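The flavor of RES can be sketched on a synthetic least-squares problem: stochastic gradient differences feed a BFGS-style curvature estimate B, while small delta and gamma terms keep B and the descent direction well conditioned. This is a loose sketch of the idea under those assumptions, not the paper's exact update or step-size rules.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic objective f(w) = (1/2n)||A w - b||^2, accessed only through
# mini-batch stochastic gradients.
n, d = 2000, 5
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
b = A @ w_true

def stoch_grad(w, batch):
    Ab = A[batch]
    return Ab.T @ (Ab @ w - b[batch]) / len(batch)

delta, gamma = 1e-3, 1e-2        # regularization of curvature / direction
w = np.zeros(d)
err0 = np.linalg.norm(w - w_true)
B = np.eye(d)
g = stoch_grad(w, rng.integers(0, n, 32))
for t in range(300):
    step = 2.0 / (t + 10)                          # diminishing step size
    w_new = w - step * np.linalg.solve(B + gamma * np.eye(d), g)
    batch = rng.integers(0, n, 32)
    g_new = stoch_grad(w_new, batch)
    v = w_new - w                                  # variable variation
    r = g_new - stoch_grad(w, batch) - delta * v   # regularized gradient variation
    if v @ r > 1e-12:                              # curvature condition
        B = (B + np.outer(r, r) / (v @ r)
             - np.outer(B @ v, B @ v) / (v @ B @ v)
             + delta * np.eye(d))
    w, g = w_new, g_new
err = np.linalg.norm(w - w_true)
```

Note that both gradients in the curvature pair are evaluated on the same mini-batch, which is what makes the gradient difference a meaningful curvature signal despite the stochasticity.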

  14. A Sim(2) invariant dimensional regularization

    Directory of Open Access Journals (Sweden)

    J. Alfaro

    2017-09-01

    We introduce a Sim(2) invariant dimensional regularization of loop integrals. Then we can compute the one-loop quantum corrections to the photon self-energy, electron self-energy and vertex in the Electrodynamics sector of the Very Special Relativity Standard Model (VSRSM).

  15. Irreducible descriptive sets of attributes for information systems

    KAUST Repository

    Moshkov, Mikhail

    2010-01-01

    The maximal consistent extension Ext(S) of a given information system S consists of all objects corresponding to attribute values from S which are consistent with all true and realizable rules extracted from the original information system S. An irreducible descriptive set for the considered information system S is a minimal (relative to the inclusion) set B of attributes which defines exactly the set Ext(S) by means of true and realizable rules constructed over attributes from the considered set B. We show that there exists only one irreducible descriptive set of attributes. We present a polynomial algorithm for this set construction. We also study relationships between the cardinality of irreducible descriptive set of attributes and the number of attributes in S. The obtained results will be useful for the design of concurrent data models from experimental data. © 2010 Springer-Verlag.

  16. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    Science.gov (United States)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
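The combination described, Tikhonov regularization with a second-order difference operator as the regularization matrix, can be sketched on a toy smoothing kernel standing in for the light-scattering kernel of a PSD retrieval. The kernel, noise level, and regularization parameter below are illustrative assumptions, not the paper's optimized values.

```python
import numpy as np

def second_diff_matrix(n):
    """Second-order difference operator L of shape (n-2, n)."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def tikhonov(K, y, lam, L):
    """Solve min ||Kx - y||^2 + lam * ||Lx||^2 via the normal equations."""
    return np.linalg.solve(K.T @ K + lam * (L.T @ L), K.T @ y)

# Toy ill-conditioned kernel: a Gaussian blur matrix standing in for the
# scattering kernel relating the size distribution to the measurement.
n = 60
t = np.linspace(-1, 1, n)
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.02)
x_true = np.exp(-((t - 0.2) ** 2) / 0.05)        # smooth "size distribution"
rng = np.random.default_rng(2)
y = K @ x_true + 1e-3 * rng.standard_normal(n)   # noisy measurement

L = second_diff_matrix(n)
x_reg = tikhonov(K, y, 1e-4, L)
# Near-unregularized least squares, for comparison (tiny ridge only to
# keep the normal equations numerically solvable).
x_naive = np.linalg.solve(K.T @ K + 1e-12 * np.eye(n), K.T @ y)

err_reg = np.linalg.norm(x_reg - x_true)
err_naive = np.linalg.norm(x_naive - x_true)
```

The second-difference operator penalizes curvature rather than amplitude, which is why it suits smooth distributions where the identity-matrix regularizer of standard Tikhonov can over-shrink.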

  17. Improved Conflict Detection for Graph Transformation with Attributes

    Directory of Open Access Journals (Sweden)

    Géza Kulcsár

    2015-04-01

    In graph transformation, a conflict describes a situation where two alternative transformations cannot be arbitrarily serialized. When enriching graphs with attributes, existing conflict detection techniques typically report a conflict whenever at least one of two transformations manipulates a shared attribute. In this paper, we propose an improved, less conservative condition for static conflict detection of graph transformation with attributes by explicitly taking the semantics of the attribute operations into account. The proposed technique is based on symbolic graphs, which extend the traditional notion of graphs by logic formulas used for attribute handling. The approach is proven complete, i.e., any potential conflict is guaranteed to be detected.

  18. 12 CFR 407.3 - Procedures applicable to regularly scheduled meetings.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Procedures applicable to regularly scheduled meetings. 407.3 Section 407.3 Banks and Banking EXPORT-IMPORT BANK OF THE UNITED STATES REGULATIONS GOVERNING PUBLIC OBSERVATION OF EX-IM BANK MEETINGS § 407.3 Procedures applicable to regularly scheduled...

  19. The Temporal Dynamics of Regularity Extraction in Non-Human Primates

    Science.gov (United States)

    Minier, Laure; Fagot, Joël; Rey, Arnaud

    2016-01-01

    Extracting the regularities of our environment is one of our core cognitive abilities. To study the fine-grained dynamics of the extraction of embedded regularities, a method combining the advantages of the artificial language paradigm (Saffran, Aslin, & Newport, 1996) and the serial response time task (Nissen & Bullemer,…

  20. Quality assurance for multileaf collimator with radiographic film exposed by slit beam

    International Nuclear Information System (INIS)

    Ma Jinli; Jiang Guoliang; Fu Xiaolong; Liao Yuan; Wu Kailiang; Zhou Lijun

    2004-01-01

    Objective: To evaluate the role of Kodak X-OMAT-V film exposed by slit beam in the check of various leaf positions of a multileaf collimator (MLC), and to check the status of the Varian 26 leaf-pair MLC in the Department of Radiation Oncology in Shanghai Cancer Hospital affiliated to Fudan University. Methods: At first, position errors of different sizes were produced for different leaves so as to determine the minimal leaf position error that could be seen on film. Then, exposure conditions including the exposure dose and source-to-film distance were varied to find the optimal one. Finally, a Kodak X-OMAT-V film was exposed with a leaf sequence file designated randomly by a physicist with leaf position errors of different sizes. After the film was developed, two doctors and two physicists were invited to observe it on a blind basis, in order to determine the sensitivity and specificity of the film in the check of leaf positions. Ultimately, leaf positions of the Varian 26 leaf-pair MLC were checked, whereby the leaf motor status and the carriage stability were checked indirectly. Results: Leaf position errors no less than 0.2 mm could be found using Kodak X-OMAT-V film under the following conditions: source-to-film distance 100 cm, exposure dose 25 MU, which were considered the optimal exposure conditions. The sensitivity and specificity of this method were 73.4% and 96.4%. No MLC leaf position errors of more than 0.2 mm were detected. Thus, it was deemed that all leaf motors of the Varian 26 leaf-pair MLC were well in gear and the carriages were stable. Conclusions: MLC leaf position errors can be detected by Kodak X-OMAT-V film exposed by slit beam with high accuracy, but the ability to find leaf position errors with the naked eye may vary from person to person. It is proposed that the Kodak X-OMAT-V film exposed by slit beam be used to check the MLC leaf positions, i.e. the leaf motor status and carriage stability, at regular

  1. Association of Diet With Skin Histological Features in UV-B-Exposed Mice.

    Science.gov (United States)

    Bhattacharyya, Tapan K; Hsia, Yvonne; Weeks, David M; Dixon, Tatiana K; Lepe, Jessica; Thomas, J Regan

    2017-09-01

    Long-term exposure to solar radiation produces deleterious photoaging of the skin. It is not known if diet can influence skin photoaging. To study the influence of a calorie-restricted diet and an obesity diet in mice exposed to long-term UV-B irradiation to assess if there is an association between diet and histopathological response to UV-B irradiation. In this animal model study in an academic setting, the dorsal skin of SKH1 hairless mice receiving normal, calorie-restricted, and obesity diets was exposed to UV-B irradiation 3 times a week for 10 weeks and were compared with corresponding controls. The mice were placed in the following groups, with 8 animals in each group: (1) intact control (C) with regular diet and no UV-B exposure, (2) intact control with UV-B exposure (CR), (3) calorie-restricted diet (CrC), (4) calorie-restricted diet with UV-B exposure (CrR), (5) obesity diet (OC), and (6) obesity diet with UV-B exposure (OR). The experiment was conducted during October through December 2013. Tissue processing and histological analysis were completed in 2016. Histomorphometric analysis was performed on paraffin-embedded skin sections stained by histological and immunohistochemical methods for estimation of epidermal thickness, epidermal proliferating cell nuclear antigen index, collagen I, elastic fibers, fibroblasts, mast cells, dermal cellularity, and adipose layer ratio. Changes in wrinkles were noted. Hairless female mice (age range, 6-8 weeks) were obtained. With a normal diet, changes from UV-B irradiation occurred in epidermal thickness, epidermal proliferating cell nuclear antigen index, collagen I, elastic fibers, fibroblasts, and mast cells, which were modestly influenced by an obesity diet. Calorie restriction influenced the skin in nonirradiated control animals, with higher values for most variables. After UV-B exposure in animals with calorie restriction, epidermal thickness was increased, but other variables were unaffected. Animals

  2. Regularity of p(·)-superharmonic functions, the Kellogg property and semiregular boundary points

    Science.gov (United States)

    Adamowicz, Tomasz; Björn, Anders; Björn, Jana

    2014-11-01

    We study various boundary and inner regularity questions for $p(\cdot)$-(super)harmonic functions in Euclidean domains. In particular, we prove the Kellogg property and introduce a classification of boundary points for $p(\cdot)$-harmonic functions into three disjoint classes: regular, semiregular and strongly irregular points. Regular and especially semiregular points are characterized in many ways. The discussion is illustrated by examples. Along the way, we present a removability result for bounded $p(\cdot)$-harmonic functions and give some new characterizations of $W^{1,p(\cdot)}_0$ spaces. We also show that $p(\cdot)$-superharmonic functions are lower semicontinuously regularized, and characterize them in terms of lower semicontinuously regularized supersolutions.

  3. In vivo genotoxicity assessment in rats exposed to Prestige-like oil by inhalation.

    Science.gov (United States)

    Valdiglesias, Vanessa; Kiliç, Gözde; Costa, Carla; Amor-Carro, Óscar; Mariñas-Pardo, Luis; Ramos-Barbón, David; Méndez, Josefina; Pásaro, Eduardo; Laffon, Blanca

    2012-01-01

    One of the largest oil spill disasters in recent times was the accident of the oil tanker Prestige in front of the Galician coast in 2002. Thousands of people participated in the cleanup of the contaminated areas, being exposed to a complex mixture of toxic substances. Acute and prolonged respiratory symptoms and genotoxic effects were reported, although environmental exposure measurements were restricted to current determinations, such that attribution of effects observed to oil exposure is difficult to establish. The aim of this study was to analyze peripheral blood leukocytes (PBL) harvested from a rat model of subchronic exposure to a fuel oil with similar characteristics to that spilled by the Prestige tanker, in order to determine potential genotoxic effects under strictly controlled, in vivo exposure. Wistar Han and Brown Norway rats were exposed to the oil for 3 wk, and micronucleus test (MN) and comet assay, standard and modified with 8-oxoguanine DNA glycosylase (OGG1) enzyme, were employed to assess genotoxicity 72 h and 15 d after the last exposure. In addition, the potential effects of oil exposure on DNA repair capacity were determined by means of mutagen sensitivity assay. Results obtained from this study showed that inhalation oil exposure induced DNA damage in both Brown Norway and Wistar Han rats, especially in those animals evaluated 15 d after exposure. Although alterations in the DNA repair responses were noted, the sensitivity to oil substances varied depending on rat strain. Data support previous positive genotoxicity results reported in humans exposed to Prestige oil during cleanup tasks.

  4. Regular black holes in Einstein-Gauss-Bonnet gravity

    Science.gov (United States)

    Ghosh, Sushant G.; Singh, Dharm Veer; Maharaj, Sunil D.

    2018-05-01

Einstein-Gauss-Bonnet theory, a natural generalization of general relativity to a higher dimension, admits a static spherically symmetric black hole which was obtained by Boulware and Deser. This black hole is similar to its general relativity counterpart with a curvature singularity at r = 0. We present an exact 5D regular black hole metric, with parameter (k > 0), that interpolates between the Boulware-Deser black hole (k = 0) and the Wiltshire charged black hole (r ≫ k). Owing to the appearance of the exponential correction factor (e^{-k/r^2}), responsible for regularizing the metric, the thermodynamical quantities are modified, and it is demonstrated that the Hawking-Page phase transition is achievable. The heat capacity diverges at a critical radius r = r_C, where incidentally the temperature is maximum. Thus, we have a regular black hole with Cauchy and event horizons, and evaporation leads to a thermodynamically stable double-horizon black hole remnant with vanishing temperature. The entropy does not satisfy the usual exact horizon area result of general relativity.
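Why the exponential factor regularizes (an elementary observation in the abstract's notation, not a derivation from the paper):

```latex
\lim_{r\to\infty} e^{-k/r^{2}} = 1
\qquad\text{and}\qquad
\lim_{r\to 0}\,\frac{e^{-k/r^{2}}}{r^{n}} = 0 \quad\text{for every } n > 0,
```

so the correction leaves the large-r (Boulware-Deser/Wiltshire) behaviour intact, while any mass term dressed with e^{-k/r^2} vanishes faster than every power of r at the origin, which is what tames the r = 0 curvature singularity.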

  5. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-10-06

    In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.
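The record does not give the authors' regularizer in closed form; as a minimal sketch of the underlying idea (stabilizing an ill-conditioned least-squares problem with a random model matrix), the classical Tikhonov form is used below in place of the paper's perturbation-based construction, and the regularization weight is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 50
A = rng.standard_normal((n, m))
# Make the model matrix ill-conditioned by shrinking its small singular values.
U, s, Vt = np.linalg.svd(A)
A = U @ np.diag(s * np.linspace(1.0, 1e-4, n)) @ Vt

x_true = rng.standard_normal(m)
y = A @ x_true + 0.01 * rng.standard_normal(n)   # noisy observations

def tikhonov(A, y, lam):
    """Solve min ||Ax - y||^2 + lam ||x||^2 in closed form."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

x_plain = np.linalg.lstsq(A, y, rcond=None)[0]   # unregularized least squares
x_reg = tikhonov(A, y, lam=1e-3)

err_plain = np.linalg.norm(x_plain - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

On such matrices the unregularized solution amplifies noise along the small singular directions, which is exactly the singular-value structure the proposed perturbation is meant to repair.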

  6. Critical Behavior of the Annealed Ising Model on Random Regular Graphs

    Science.gov (United States)

    Can, Van Hao

    2017-11-01

In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors have defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs including random 2-regular graphs. Then in Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a nonstandard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, where n is the number of vertices of the random regular graph.

  7. An Exploration of EFL Teachers' Attributions

    Science.gov (United States)

    Ghonsooly, Behzad; Ghanizadeh, Afsaneh; Ghazanfari, Mohammad; Ghabanchi, Zargham

    2015-01-01

    The present study investigated English as a foreign language (EFL) teachers' attributions of success and failure. It also set out to investigate whether these attributions vary by teachers' age, teaching experience, gender and educational level. To do so, 200 EFL teachers were selected according to convenience sampling among EFL teachers teaching…

  8. Crisis Workers' Attributions for Domestic Violence.

    Science.gov (United States)

    Madden, Margaret E.

    Attributions affect coping with victimization. Battered women who blame their husbands' moods are less likely to leave than are women who blame their husbands' permanent characteristics for the violence. Abused women often have repeated contacts with crisis intervention workers and the attitudes of those workers may affect the attributions made by…

  9. The relationship between synchronization and percolation for regular networks

    Science.gov (United States)

    Li, Zhe; Ren, Tao; Xu, Yanjie; Jin, Jianyu

    2018-02-01

Synchronization and percolation are two essential phenomena in complex dynamical networks. They have been studied widely, but previously were treated as unrelated. In this paper, the relationship between synchronization and percolation is revealed for regular networks. First, we establish a bridge between synchronization and percolation by using the eigenvalues of the Laplacian matrix to describe synchronizability and the eigenvalues of the adjacency matrix to describe the percolation threshold. Then, we propose a method to find the relationship for regular networks based on their topology. In particular, if the degree distribution of the network follows a delta function (i.e., the network is regular), we show that only the eigenvalues of the adjacency matrix need to be calculated. Finally, several examples demonstrate how to apply the proposed method to discover the relationship between synchronization and percolation for regular networks.
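The eigenvalue "bridge" can be made concrete for a small regular network (a sketch; the synchronizability ratio λ_N/λ_2 and the mean-field threshold 1/λ_max are standard proxies, not necessarily the paper's exact formulas). For a d-regular graph L = dI − A, so the two spectra determine each other:

```python
import numpy as np

def ring_adjacency(n, k=1):
    """Adjacency matrix of a 2k-regular ring lattice on n nodes."""
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(1, k + 1):
            A[i, (i + j) % n] = A[i, (i - j) % n] = 1.0
    return A

n = 20
A = ring_adjacency(n, k=2)                 # 4-regular ring
L = np.diag(A.sum(axis=1)) - A             # graph Laplacian

lap_eigs = np.sort(np.linalg.eigvalsh(L))  # 0 = lambda_1 <= ... <= lambda_N
adj_eigs = np.sort(np.linalg.eigvalsh(A))

# Master-stability-style synchronizability index.
sync_ratio = lap_eigs[-1] / lap_eigs[1]
# Mean-field bond-percolation threshold estimate.
p_c = 1.0 / adj_eigs[-1]
```

Here λ_max(A) = 4 (the degree), so p_c = 0.25, and the sorted Laplacian spectrum is exactly 4 minus the reversed adjacency spectrum, which is the regular-network relationship the abstract exploits.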

  10. Immunological alterations in individuals exposed to metal(loid)s in the Panasqueira mining area, Central Portugal.

    Science.gov (United States)

    Coelho, Patrícia; García-Lestón, Julia; Costa, Solange; Costa, Carla; Silva, Susana; Fuchs, Dietmar; Geisler, Simon; Dall'Armi, Valentina; Zoffoli, Roberto; Bonassi, Stefano; Pásaro, Eduardo; Laffon, Blanca; Teixeira, João Paulo

    2014-03-15

Environmental studies performed in Panasqueira mine area (central Portugal) identified high concentrations of several metal(loid)s in environmental media, and individuals environmentally and occupationally exposed showed higher levels of As, Cr, Mg, Mn, Mo, Pb and Zn in blood, urine, hair and nails when compared to unexposed controls. To evaluate the presence of immunological alterations attributable to environmental contamination, we quantified neopterin, kynurenine, tryptophan, and nitrite concentrations in plasma, and analysed the percentage of several lymphocyte subsets, namely CD3(+), CD4(+) and CD8(+) T-cells, CD19(+) B-cells, and CD16(+)56(+) natural killer (NK) cells in a group of individuals previously tested for metal(loid) levels in different biological matrices. The environmentally exposed group had significantly lower levels of %CD8(+) and higher CD4(+)/CD8(+) ratios, whereas the occupationally exposed individuals showed significant decreases in %CD3(+) and %CD4(+), and significant increases in %CD16(+)56(+), when compared to controls. Analysed biomarkers were found to be influenced by age, particularly neopterin, kynurenine and kynurenine to tryptophan ratio (Kyn/Trp) with significantly higher levels in older individuals, and %CD3(+), %CD8(+) and %CD19(+) with significantly lower values in older individuals. Environmentally exposed males showed significantly lower values of %CD19(+) when compared to control females. The concentration of Pb in toenails was associated with the level of neopterin, kynurenine and Kyn/Trp ratio (all direct), and the concentration of Mn in blood with the level of %CD8(+), %CD19(+) (both inverse) and CD4(+)/CD8(+) ratio (direct). Overall, our results show that the metal(loid) contamination in Panasqueira mine area induced immunotoxic effects in exposed populations, possibly increasing susceptibility to diseases. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Effect of Restricting Access to Health Care on Health Expenditures among Asylum-Seekers and Refugees: A Quasi-Experimental Study in Germany, 1994-2013.

    Directory of Open Access Journals (Sweden)

    Kayvan Bozorgmehr

Access to health care for asylum-seekers and refugees (AS&R) in Germany is initially restricted before regular access is granted, allegedly leading to delayed care and increasing costs of care. We analyse the effects of (a) restricted access and (b) two major policy reforms (1997, 2007) on incident health expenditures for AS&R in 1994-2013. We used annual, nation-wide, aggregate data of the German Federal Statistics Office (1994-2013) to compare incident health expenditures among AS&R with restricted access (exposed) to AS&R with regular access (unexposed). We calculated incidence rate differences (∆IRt) and rate ratios (IRRt), as well as attributable fractions among the exposed (AFe) and the total population (AFp). The effects of between-group differences in need, and of policy reforms, on differences in per capita expenditures were assessed in (segmented) linear regression models. The exposed and unexposed groups comprised 4.16 and 1.53 million person-years. Per capita expenditures (1994-2013) were higher in the group with restricted access in absolute (∆IRt = 375.80 Euros [375.77; 375.89]) and relative terms (IRR = 1.39). The AFe was 28.07% and the AFp 22.21%. Between-group differences in mean age and in the type of accommodation were the main independent predictors of between-group expenditure differences. Need variables explained 50-75% of the variation in between-group differences over time. The 1997 policy reform significantly increased ∆IRt adjusted for secular trends and between-group differences in age (by 600.0 Euros [212.6; 986.2]) and sex (by 867.0 Euros [390.9; 1342.5]). The 2007 policy reform had no such effect. The cost of excluding AS&R from health care appears ultimately higher than granting regular access to care. Excess expenditures attributable to the restriction were substantial and could not be completely explained by differences in need. An evidence-informed discourse on access to health care for AS&R in Germany is needed; it
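The reported attributable fractions can be reproduced from the rate ratio and the person-time split (a sketch using the standard Levin-type formulas; treating the exposed share of person-years as the exposure prevalence is an assumption on our part):

```python
def attributable_fractions(irr, exposed_share):
    """Attributable fraction among the exposed (AFe) and in the total
    population (AFp), given a rate ratio and the exposed share of
    person-time (Levin-type formulas)."""
    af_e = (irr - 1.0) / irr
    af_p = exposed_share * (irr - 1.0) / (1.0 + exposed_share * (irr - 1.0))
    return af_e, af_p

# Person-years and rate ratio reported in the abstract.
py_exposed, py_unexposed = 4.16e6, 1.53e6
share = py_exposed / (py_exposed + py_unexposed)
af_e, af_p = attributable_fractions(irr=1.39, exposed_share=share)
```

With IRR = 1.39 and a 4.16/5.69 exposed share, these formulas give AFe ≈ 28.1% and AFp ≈ 22.2%, matching the abstract up to rounding.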

  12. A Pure Object-Oriented Embedding of Attribute Grammars

    NARCIS (Netherlands)

    Sloane, A.M.; Kats, L.C.L.; Visser, E.

    2010-01-01

    Attribute grammars are a powerful specification paradigm for many language processing tasks, particularly semantic analysis of programming languages. Recent attribute grammar systems use dynamic scheduling algorithms to evaluate attributes by need. In this paper, we show how to remove the need for a

  13. Annotation of Regular Polysemy

    DEFF Research Database (Denmark)

    Martinez Alonso, Hector

    Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words...... and metonymic. We have conducted an analysis in English, Danish and Spanish. Later on, we have tried to replicate the human judgments by means of unsupervised and semi-supervised sense prediction. The automatic sense-prediction systems have been unable to find empiric evidence for the underspecified sense, even...

  14. Viscous Regularization of the Euler Equations and Entropy Principles

    KAUST Repository

    Guermond, Jean-Luc

    2014-03-11

    This paper investigates a general class of viscous regularizations of the compressible Euler equations. A unique regularization is identified that is compatible with all the generalized entropies, à la [Harten et al., SIAM J. Numer. Anal., 35 (1998), pp. 2117-2127], and satisfies the minimum entropy principle. A connection with a recently proposed phenomenological model by [H. Brenner, Phys. A, 370 (2006), pp. 190-224] is made. © 2014 Society for Industrial and Applied Mathematics.
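Schematically (an illustrative generic form, not the specific regularization singled out in the paper), a viscous regularization of a system of conservation laws adds a small dissipative flux and is judged by entropy compatibility:

```latex
\partial_t u^{\varepsilon} + \nabla\cdot f(u^{\varepsilon})
  = \varepsilon\,\nabla\cdot\bigl(B(u^{\varepsilon})\,\nabla u^{\varepsilon}\bigr),
\qquad
\partial_t \eta(u) + \nabla\cdot q(u) \le 0 \quad (\varepsilon \to 0),
```

where the requirement that every generalized entropy pair $(\eta, q)$ satisfy the inequality in the vanishing-viscosity limit is what singles out the admissible choice of $B$ discussed in the abstract.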

  15. Use of regularized algebraic methods in tomographic reconstruction

    International Nuclear Information System (INIS)

    Koulibaly, P.M.; Darcourt, J.; Blanc-Ferraud, L.; Migneco, O.; Barlaud, M.

    1997-01-01

The algebraic methods are used in emission tomography to facilitate the compensation of attenuation and of Compton scattering. Using a phantom, we tested regularization (a priori introduction of information) as well as accounting for the spatial resolution variation with depth (SRVD). We compared the performances of two back-projection filtering (BPF) methods and two algebraic methods (AM) in terms of FWHM (measured with a point source), reduction of background noise (σ/m) on the homogeneous part of Jaszczak's phantom, and reconstruction speed (time unit = BPF). The BPF methods use a grade filter (maximal resolution, no noise treatment), alone or combined with a Hann low-pass filter (fc = 0.4), as well as an attenuation correction. The AM, which incorporate attenuation and scattering corrections, are, on the one hand, OS-EM (Ordered Subsets, partitioning and rearranging of the projection matrix; Expectation Maximization) without regularization or SRVD correction, and, on the other hand, OS-MAP-EM (Maximum A Posteriori), regularized and incorporating the SRVD correction. A table lists, for each method (grade, Hann, OS-EM and OS-MAP-EM), the values of FWHM, σ/m and time, respectively. The OS-MAP-EM algebraic method improves both resolution, by taking the SRVD into account in the reconstruction process, and noise, through regularization. In addition, thanks to the OS technique, the reconstruction times remain acceptable
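As an illustration of the EM-type algebraic reconstruction family discussed here, a minimal ML-EM loop on a toy system (a sketch only: no attenuation/scatter correction, no regularization, and no ordered subsets; OS-EM would simply cycle this update over subsets of the rows of A):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((30, 16))            # toy system matrix: 30 bins, 16 pixels
x_true = rng.random(16) + 0.1       # positive activity image
y = A @ x_true                      # noise-free projections for the sketch

x = np.ones(16)                     # ML-EM needs a strictly positive start
r0 = np.linalg.norm(A @ x - y)      # initial projection mismatch
sens = A.T @ np.ones(30)            # sensitivity image A^T 1
for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / sens   # multiplicative EM update
rf = np.linalg.norm(A @ x - y)      # final projection mismatch
```

The multiplicative update preserves nonnegativity and monotonically increases the Poisson likelihood; the MAP variant in the record adds a prior (regularization) term to this update.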

  16. Ensemble Kalman filter regularization using leave-one-out data cross-validation

    KAUST Repository

    Rayo Schiappacasse, Lautaro Jeró nimo; Hoteit, Ibrahim

    2012-01-01

    In this work, the classical leave-one-out cross-validation method for selecting a regularization parameter for the Tikhonov problem is implemented within the EnKF framework. Following the original concept, the regularization parameter is selected
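The leave-one-out selection idea can be sketched for a plain Tikhonov problem (not the EnKF implementation in the record) using the closed-form LOO residuals obtained from the hat matrix, so no explicit refitting per left-out point is needed:

```python
import numpy as np

def loocv_score(A, y, lam):
    """Closed-form leave-one-out CV error for Tikhonov/ridge regression:
    e_i = (y_i - yhat_i) / (1 - H_ii), with hat matrix H = A (A^T A + lam I)^-1 A^T."""
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
    resid = y - H @ y
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = rng.standard_normal(10)
y = A @ x_true + 0.5 * rng.standard_normal(40)

lams = np.logspace(-4, 2, 25)
scores = [loocv_score(A, y, lam) for lam in lams]
best_lam = lams[int(np.argmin(scores))]
```

Scanning a grid of candidate parameters and keeping the LOO minimizer mirrors the selection loop described in the abstract, with the EnKF update playing the role of the regularized solve.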

  17. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-01-01

    random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various

  18. Contribution to the monitoring of workers exposed to non-transferable uranium compounds

    International Nuclear Information System (INIS)

    Camarasa, J.; Chalabreysse, J.

    1980-01-01

After a short review of the present knowledge on uranium (metabolism, toxicity, principles of radiotoxicological monitoring), the authors' experience in the surveillance of workers exposed to natural non-transferable uranium compounds (oxides, tetrafluorides) is presented. When setting up urinary controls in a workers' population, a number of difficulties were met in collecting urine samples, obtaining samples free of exogenous contributions, and interpreting results. The working environment was also studied: three types of pollution measurements were carried out: on the atmosphere at fixed locations by measuring the radioactivity of air samples, and on work-places and workers by chemical analysis and counting of uranium. Original graphs for work-place monitoring are updated regularly. Workers' surveillance through urinary and working-condition controls is now well codified. However, further studies will be carried out on man, on working atmospheres, and on the substances handled. The surveillance will then cover working conditions from all points of view [fr]

  19. An Attributional Analysis of Reactions to Negative Emotions.

    Science.gov (United States)

    Karasawa, Kaori

    1995-01-01

    Three studies examined observers' attributions and reactions to negative emotional displays, as well as expressers' expectations about others' reactions. Analysis revealed that people attribute others' negative emotions equally to situational factors and dispositions, whereas their own emotions are attributed to the situation more than to…

  20. Verification of classified fissile material using unclassified attributes

    International Nuclear Information System (INIS)

    Nicholas, N.J.; Fearey, B.L.; Puckett, J.M.; Tape, J.W.

    1998-01-01

    This paper reports on the most recent efforts of US technical experts to explore verification by IAEA of unclassified attributes of classified excess fissile material. Two propositions are discussed: (1) that multiple unclassified attributes could be declared by the host nation and then verified (and reverified) by the IAEA in order to provide confidence in that declaration of a classified (or unclassified) inventory while protecting classified or sensitive information; and (2) that attributes could be measured, remeasured, or monitored to provide continuity of knowledge in a nonintrusive and unclassified manner. They believe attributes should relate to characteristics of excess weapons materials and should be verifiable and authenticatable with methods usable by IAEA inspectors. Further, attributes (along with the methods to measure them) must not reveal any classified information. The approach that the authors have taken is as follows: (1) assume certain attributes of classified excess material, (2) identify passive signatures, (3) determine range of applicable measurement physics, (4) develop a set of criteria to assess and select measurement technologies, (5) select existing instrumentation for proof-of-principle measurements and demonstration, and (6) develop and design information barriers to protect classified information. While the attribute verification concepts and measurements discussed in this paper appear promising, neither the attribute verification approach nor the measurement technologies have been fully developed, tested, and evaluated

  1. Decentralized formation of random regular graphs for robust multi-agent networks

    KAUST Repository

    Yazicioglu, A. Yasin; Egerstedt, Magnus; Shamma, Jeff S.

    2014-01-01

    systems. One family of robust graphs is the random regular graphs. In this paper, we present a locally applicable reconfiguration scheme to build random regular graphs through self-organization. For any connected initial graph, the proposed scheme

  2. The Effects of Regular Exercise on the Physical Fitness Levels

    Science.gov (United States)

    Kirandi, Ozlem

    2016-01-01

The purpose of the present research is to investigate the effects of regular exercise on the physical fitness levels of sedentary individuals. A total of 65 sedentary male individuals between the ages of 19 and 45, who had never exercised regularly in their lives, participated in the present research. Of these participants, 35 wanted to be…

  3. Variable precision rough set for multiple decision attribute analysis

    Institute of Scientific and Technical Information of China (English)

Lai, Kin Keung

    2008-01-01

    A variable precision rough set (VPRS) model is used to solve the multi-attribute decision analysis (MADA) problem with multiple conflicting decision attributes and multiple condition attributes. By introducing confidence measures and a β-reduct, the VPRS model can rationally solve the conflicting decision analysis problem with multiple decision attributes and multiple condition attributes. For illustration, a medical diagnosis example is utilized to show the feasibility of the VPRS model in solving the MADA...

  4. Simple Multi-Authority Attribute-Based Encryption for Short Messages

    OpenAIRE

    Viktoria I. Villanyi

    2016-01-01

A central-authority-free multi-authority attribute-based encryption scheme for short messages is presented. Several multi-authority attribute-based encryption schemes have recently been proposed. These schemes can be divided into two groups: one comprises the ciphertext-policy attribute-based encryption schemes (CP-ABE), the other the key-policy attribute-based encryption schemes (KP-ABE). In our new multi-authority attribute-based encryption scheme we combine them: the access struct...

  5. Thermodynamics of a class of regular black holes with a generalized uncertainty principle

    Science.gov (United States)

    Maluf, R. V.; Neves, Juliano C. S.

    2018-05-01

In this article, we present a study on the thermodynamics of a class of regular black holes. Such a class includes the Bardeen and Hayward regular black holes. We obtain thermodynamic quantities such as the Hawking temperature, entropy, and heat capacity for the entire class. As part of an effort to indicate some physical observable to distinguish regular black holes from singular black holes, we suggest that regular black holes are colder than singular black holes. Moreover, contrary to the Schwarzschild black hole, this class of regular black holes may be thermodynamically stable. From a generalized uncertainty principle, we also obtain the quantum-corrected thermodynamics for the studied class. Such quantum corrections provide a logarithmic term for the quantum-corrected entropy.

  6. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.
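A minimal single-task cousin of this idea, Laplacian-regularized least squares, shows how a manifold (graph) term enters the objective (a sketch under our own toy data; MRMTL itself additionally learns a subspace shared across tasks):

```python
import numpy as np

def laprls(X, y, W, gamma_a, gamma_i):
    """Laplacian-regularized least squares:
    min_w ||Xw - y||^2 + gamma_a ||w||^2 + gamma_i w^T X^T L X w,
    where L is the graph Laplacian of the affinity matrix W."""
    L = np.diag(W.sum(axis=1)) - W
    d = X.shape[1]
    M = X.T @ X + gamma_a * np.eye(d) + gamma_i * X.T @ L @ X
    return np.linalg.solve(M, X.T @ y)

# Two well-separated Gaussian clusters with +/-1 labels (illustrative data).
rng = np.random.default_rng(3)
X = np.vstack([rng.standard_normal((20, 2)) + [-3, 0],
               rng.standard_normal((20, 2)) + [3, 0]])
y = np.array([-1.0] * 20 + [1.0] * 20)

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
W = np.exp(-d2 / 4.0)                                  # RBF affinity graph
w = laprls(X, y, W, gamma_a=1e-2, gamma_i=1e-2)
acc = np.mean(np.sign(X @ w) == y)
```

The `gamma_i` term penalizes predictions that vary quickly along the data graph, which is the smoothness-along-the-manifold constraint the abstract describes.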

  7. Effects of Irregular Bridge Columns and Feasibility of Seismic Regularity

    Science.gov (United States)

    Thomas, Abey E.

    2018-05-01

Bridges with unequal column height is one of the main irregularities in bridge design particularly while negotiating steep valleys, making the bridges vulnerable to seismic action. The desirable behaviour of bridge columns towards seismic loading is that they should perform in a regular fashion, i.e. the capacity of each column should be utilized evenly. But this type of behaviour is often missing when the column heights are unequal along the length of the bridge, allowing short columns to bear the maximum lateral load. In the present study, the effects of unequal column height on the global seismic performance of bridges are studied using pushover analysis. Codes such as CalTrans (Engineering service center, earthquake engineering branch, 2013) and EC-8 (EN 1998-2: design of structures for earthquake resistance. Part 2: bridges, European Committee for Standardization, Brussels, 2005) suggest seismic regularity criteria for achieving a regular seismic performance level at all the bridge columns. The feasibility of adopting these seismic regularity criteria, along with those mentioned in the literature, will be assessed for bridges designed as per the Indian Standards in the present study.

  8. Fermion-number violation in regularizations that preserve fermion-number symmetry

    Science.gov (United States)

    Golterman, Maarten; Shamir, Yigal

    2003-01-01

    There exist both continuum and lattice regularizations of gauge theories with fermions which preserve chiral U(1) invariance (“fermion number”). Such regularizations necessarily break gauge invariance but, in a covariant gauge, one recovers gauge invariance to all orders in perturbation theory by including suitable counterterms. At the nonperturbative level, an apparent conflict then arises between the chiral U(1) symmetry of the regularized theory and the existence of ’t Hooft vertices in the renormalized theory. The only possible resolution of the paradox is that the chiral U(1) symmetry is broken spontaneously in the enlarged Hilbert space of the covariantly gauge-fixed theory. The corresponding Goldstone pole is unphysical. The theory must therefore be defined by introducing a small fermion-mass term that breaks explicitly the chiral U(1) invariance and is sent to zero after the infinite-volume limit has been taken. Using this careful definition (and a lattice regularization) for the calculation of correlation functions in the one-instanton sector, we show that the ’t Hooft vertices are recovered as expected.

  9. Attributes of God: Conceptual Foundations of a Foundational Belief.

    Science.gov (United States)

    Shtulman, Andrew; Lindeman, Marjaana

    2016-04-01

Anthropomorphism, or the attribution of human properties to nonhuman entities, is often posited as an explanation for the origin and nature of God concepts, but it remains unclear which human properties we tend to attribute to God and under what conditions. In three studies, participants decided whether two types of human properties, psychological (mind-dependent) properties and physiological (body-dependent) properties, could or could not be attributed to God. In Study 1 (n = 1,525), participants made significantly more psychological attributions than physiological attributions, and the frequency of those attributions was correlated both with participants' religiosity and with their attribution of abstract, theological properties. In Study 2 (n = 99) and Study 3 (n = 138), participants not only showed the same preference for psychological properties but were also significantly faster, more consistent, and more confident when attributing psychological properties to God than when attributing physiological properties. And when denying properties to God, they showed the reverse pattern; that is, they were slower, less consistent, and less confident when denying psychological properties than when denying physiological properties. These patterns were observed both in a predominantly Christian population (Study 2) and a predominantly Hindu population (Study 3). Overall, we argue that God is conceptualized not as a person in general but as an agent in particular, attributed a mind by default but attributed a body only upon further consideration. Copyright © 2015 Cognitive Science Society, Inc.

  10. Construction of normal-regular decisions of Bessel typed special system

    Science.gov (United States)

    Tasmambetov, Zhaksylyk N.; Talipova, Meiramgul Zh.

    2017-09-01

A special system of second-order partial differential equations is solved by means of degenerate hypergeometric functions that reduce to Bessel functions of two variables. To construct a solution of this system near regular and irregular singularities, we use the Frobenius-Latysheva method, applying the concepts of rank and antirank. We prove a basic theorem establishing the existence of four linearly independent solutions of the Bessel-type system under study. To prove the existence of normal-regular solutions, we establish necessary conditions for the existence of such solutions. The existence and convergence of a normal-regular solution are shown using the notions of rank and antirank.
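The record concerns a two-variable Frobenius-Latysheva construction; as a simpler illustration of Frobenius-type series about a regular singular point, the classical one-variable series for the Bessel function of order zero:

```python
from math import factorial

def bessel_j0(x, terms=30):
    """Frobenius power-series solution of Bessel's equation of order zero,
    J0(x) = sum_m (-1)^m (x/2)^(2m) / (m!)^2,
    built about the regular singular point x = 0."""
    return sum((-1) ** m * (x / 2.0) ** (2 * m) / factorial(m) ** 2
               for m in range(terms))
```

For moderate arguments the series converges extremely fast; bessel_j0(1.0) agrees with the tabulated value J0(1) ≈ 0.76519769 to many digits.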

  11. A Multi-Session Attribution Modification Program for Children with Aggressive Behaviour: Changes in Attributions, Emotional Reaction Estimates, and Self-Reported Aggression.

    Science.gov (United States)

    Vassilopoulos, Stephanos P; Brouzos, Andreas; Andreou, Eleni

    2015-09-01

    Research suggests that aggressive children are prone to over-attribute hostile intentions to peers. The current study investigated whether this attributional style can be altered using a Cognitive Bias Modification of Interpretations (CBM-I) procedure. A sample of 10-12-year-olds selected for displaying aggressive behaviours was trained over three sessions to endorse benign rather than hostile attributions in response to ambiguous social scenarios. Compared to a test-retest control group (n = 18), children receiving CBM-I (n = 16) were less likely to endorse hostile attributions and more likely to endorse benign attributions in response to a new set of ambiguous social situations. Furthermore, aggressive behaviour scores reduced more in the trained group than in the untrained controls. Children who received attribution training also reported less perceived anger and showed a trend to report more self-control than those in the control group. Implications of these findings are discussed.

  12. Linearly Ordered Attribute Grammar Scheduling Using SAT-Solving

    NARCIS (Netherlands)

    Bransen, Jeroen; van Binsbergen, L.Thomas; Claessen, Koen; Dijkstra, Atze

    2015-01-01

    Many computations over trees can be specified using attribute grammars. Compilers for attribute grammars need to find an evaluation order (or schedule) in order to generate efficient code. For the class of linearly ordered attribute grammars such a schedule can be found statically, but this problem

  13. 3D first-arrival traveltime tomography with modified total variation regularization

    Science.gov (United States)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
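The decoupling described above can be sketched in one dimension: a split-Bregman loop for TV-regularized denoising alternates a Tikhonov-like linear solve with a shrinkage step (an illustrative toy, not the authors' tomography code; the weights mu, lam and the boundary handling are arbitrary choices):

```python
import numpy as np

def tv_denoise_1d(f, mu=10.0, lam=1.0, iters=200):
    """Split-Bregman iteration for min_u (mu/2)||u - f||^2 + ||Du||_1
    (1D anisotropic total variation)."""
    n = len(f)
    D = np.eye(n, k=1) - np.eye(n)      # forward-difference operator
    D[-1, :] = 0.0                      # Neumann-style boundary row
    M = mu * np.eye(n) + lam * D.T @ D  # Tikhonov-like system matrix
    u, d, b = f.copy(), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        # Subproblem 1: quadratic (Tikhonov-like) solve.
        u = np.linalg.solve(M, mu * f + lam * D.T @ (d - b))
        # Subproblem 2: soft-shrinkage on the gradient variable.
        t = D @ u + b
        d = np.sign(t) * np.maximum(np.abs(t) - 1.0 / lam, 0.0)
        b = t - d                       # Bregman update
    return u

# Piecewise-constant signal with noise: TV should preserve the jump.
rng = np.random.default_rng(4)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(100)
den = tv_denoise_1d(noisy)
```

The same alternation, with the traveltime forward operator in place of the identity, is the structure of the MTV scheme in the abstract.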

  14. Annotation of regular polysemy and underspecification

    DEFF Research Database (Denmark)

    Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria

    2013-01-01

    We present the result of an annotation task on regular polysemy for a series of seman- tic classes or dot types in English, Dan- ish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...

  15. Supervised scale-regularized linear convolutionary filters

    DEFF Research Database (Denmark)

    Loog, Marco; Lauze, Francois Bernard

    2017-01-01

    also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we achieve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...

  16. Morphosemantic Attributes of Meetei Proverbs

    Directory of Open Access Journals (Sweden)

    Lourembam Surjit Singh

    2015-06-01

    This study investigates morphosemantic functions in Meetei proverbs, particularly the attribution of different meanings to the lexical items in Meetei proverbial verbs. Meetei society has used proverbs in all ages, stages of development, social changes, and cultural diversifications to mark its wisdom of social expertise. Meetei used proverbs as an important aspect of verbal discourse within the socio-cultural and ethno-civilizational contexts in which skills, knowledge, ideas, emotions, and experiences are communicated. The language used in proverbs reflects the Meetei's status of life, food habits, belief systems, philosophy, and cultural and social orientations. At the same time, various meanings are attributed in Meetei proverbs in forms that are figurative, witty, pithy, didactic, etc. The construction of these forms is grammatically insightful, creating space for a whole range of possibilities for investigating the features, functions and structure of verbal inflectional markers occurring in Meetei proverbial sentences. Keywords: proverbs, morphosemantics, features of lexical items, attributes of meanings, language

  17. Formation factor of regular porous pattern in poly-α-methylstyrene film

    International Nuclear Information System (INIS)

    Yang Ruizhuang; Xu Jiajing; Gao Cong; Ma Shuang; Chen Sufen; Luo Xuan; Fang Yu; Li Bo

    2015-01-01

    Regular poly-α-methylstyrene (PAMS) porous film with micron-sized cells was prepared by casting the solution under high-humidity conditions. In this paper, the effects of the molecular weight of PAMS, PAMS concentration, humidity, temperature, volatile solvents, and the thickness of the solution layer on the formation of a regular porous pattern in PAMS film are discussed. The results show that these factors significantly affect the pore size and the pore distribution. Capillary forces and Benard-Marangoni convection are the main driving forces moving the water droplets and producing the regular pore arrangement. (authors)

  18. Ensemble Kalman filter regularization using leave-one-out data cross-validation

    KAUST Repository

    Rayo Schiappacasse, Lautaro Jerónimo

    2012-09-19

    In this work, the classical leave-one-out cross-validation method for selecting a regularization parameter for the Tikhonov problem is implemented within the EnKF framework. Following the original concept, the regularization parameter is selected such that it minimizes the predictive error. Some ideas about the implementation, suitability and conceptual interest of the method are discussed. Finally, what will be called the data cross-validation regularized EnKF (dCVr-EnKF) is implemented in a 2D 2-phase synthetic oil reservoir experiment and the results analyzed.
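
    The classical leave-one-out principle used here — choosing the regularization parameter that minimizes the predictive error — can be sketched on a plain Tikhonov least-squares problem rather than the EnKF itself. For ridge-type problems the leave-one-out residuals are available in closed form from the hat matrix; the matrix sizes, noise level, and λ grid below are illustrative:

```python
import numpy as np

def loo_press(A, y, lam):
    """Leave-one-out predictive error (PRESS) for the Tikhonov solution
    x(lam) = argmin ||Ax - y||^2 + lam*||x||^2, using the hat-matrix
    shortcut: e_i = (y_i - yhat_i) / (1 - H_ii), H = A (A^T A + lam I)^-1 A^T."""
    n, p = A.shape
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(p), A.T)
    resid = y - H @ y
    return np.sum((resid / (1.0 - np.diag(H)))**2)

# Synthetic linear inverse problem with noisy data.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 8))
x_true = rng.standard_normal(8)
y = A @ x_true + 0.5 * rng.standard_normal(40)

# Select the regularization parameter that minimizes the predictive error.
lams = np.logspace(-4, 4, 50)
press = [loo_press(A, y, l) for l in lams]
lam_best = lams[int(np.argmin(press))]
```

    In the EnKF setting of the abstract, the same criterion is evaluated by leaving out observations and predicting them with the regularized update, but the selection rule — minimize the predictive error over a parameter grid — is the one shown here.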

  19. The persistence of the attentional bias to regularities in a changing environment.

    Science.gov (United States)

    Yu, Ru Qi; Zhao, Jiaying

    2015-10-01

    The environment often is stable, but some aspects may change over time. The challenge for the visual system is to discover and flexibly adapt to the changes. We examined how attention is shifted in the presence of changes in the underlying structure of the environment. In six experiments, observers viewed four simultaneous streams of objects while performing a visual search task. In the first half of each experiment, the stream in the structured location contained regularities, the shapes in the random location were randomized, and gray squares appeared in two neutral locations. In the second half, the stream in the structured or the random location may change. In the first half of all experiments, visual search was facilitated in the structured location, suggesting that attention was consistently biased toward regularities. In the second half, this bias persisted in the structured location when no change occurred (Experiment 1), when the regularities were removed (Experiment 2), or when new regularities embedded in the original or novel stimuli emerged in the previously random location (Experiments 3 and 6). However, visual search was numerically but no longer reliably faster in the structured location when the initial regularities were removed and new regularities were introduced in the previously random location (Experiment 4), or when novel random stimuli appeared in the random location (Experiment 5). This suggests that the attentional bias was weakened. Overall, the results demonstrate that the attentional bias to regularities was persistent but also sensitive to changes in the environment.

  20. Nutritional Status among the Children of Age Group 5-14 Years in Selected Arsenic Exposed and Non-Exposed Areas of Bangladesh.

    Science.gov (United States)

    Rezaul Karim, Mohammad; Ahmad, Sk Akhtar

    2014-12-01

    To assess and compare the nutritional status of children aged 5-14 years in arsenic-exposed and non-exposed areas. This was a cross-sectional study of 600 children aged 5-14 years from arsenic-exposed and non-exposed areas in Bangladesh. A structured questionnaire and checklist were used for data collection. The anthropometric measurements needed to estimate BMI were taken for the studied children. Dietary intakes of the study children were assessed using the 24-hour recall method. The difference in socio-economic conditions between the children of the exposed and non-exposed areas was not significant. Low body mass index was significantly (p < 0.01) more common among the children of the arsenic-exposed area (49%) than among the children of the non-exposed area (38%). Stunting (p < 0.01), wasting (p < 0.05) and underweight (p < 0.05) were significantly more frequent in the exposed group than in the non-exposed group. No significant difference in nutrient intake was found between exposed and non-exposed children, or between thin and normal children. In this study, children exposed to arsenic-contaminated water were found to suffer from poorer nutritional status.

  1. Attributing Hacks

    OpenAIRE

    Liu, Ziqi; Smola, Alexander J.; Soska, Kyle; Wang, Yu-Xiang; Zheng, Qinghua; Zhou, Jun

    2016-01-01

    In this paper we describe an algorithm for estimating the provenance of hacks on websites. That is, given properties of sites and the temporal occurrence of attacks, we are able to attribute individual attacks to joint causes and vulnerabilities, as well as estimating the evolution of these vulnerabilities over time. Specifically, we use hazard regression with a time-varying additive hazard function parameterized in a generalized linear form. The activation coefficients on each feature are co...
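
    The setup described — attack intensities built from site properties through a time-varying additive hazard — can be illustrated with a toy simulation. Everything below (the feature count, the coefficient paths, the "patch" at t = 20, the thinning sampler) is a hypothetical illustration of the model class, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(6)
n_sites, horizon = 100, 50.0
x = rng.integers(0, 2, size=(n_sites, 3)).astype(float)  # binary site properties

def b(t):
    # Hypothetical time-varying coefficients: the hazard contributed by
    # vulnerability 0 drops after a patch is released at t = 20.
    return np.array([0.05 if t < 20 else 0.005, 0.02, 0.01])

def simulate_attacks(x_i, lam_max=0.2):
    """Sample attack times for one site from the additive hazard
    lambda_i(t) = x_i . b(t) by Poisson thinning."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)       # candidate event from rate lam_max
        if t > horizon:
            return times
        if rng.random() < (x_i @ b(t)) / lam_max:  # accept with prob lambda(t)/lam_max
            times.append(t)

attacks = [simulate_attacks(xi) for xi in x]
n_early = sum(t < 20 for ts in attacks for t in ts)
n_late = sum(t >= 20 for ts in attacks for t in ts)
```

    Fitting such a model (as the abstract describes) would go in the opposite direction: given the observed attack times and site features, estimate the coefficient paths b(t).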

  2. Variational regularization of 3D data experiments with Matlab

    CERN Document Server

    Montegranario, Hebert

    2014-01-01

    Variational Regularization of 3D Data provides an introduction to variational methods for data modelling and their application in computer vision. In this book, the authors identify interpolation as an inverse problem that can be solved by Tikhonov regularization. The proposed solutions are generalizations of one-dimensional splines, applicable to n-dimensional data; the central idea is that these splines can be obtained by regularization theory using a trade-off between the fidelity of the data and smoothness properties. As a foundation, the authors present a comprehensive guide to the necessary fundamentals of functional analysis and variational calculus, as well as splines. The implementation and numerical experiments are illustrated using MATLAB®. The book also includes the necessary theoretical background for approximation methods and some details of the computer implementation of the algorithms. A working knowledge of multivariable calculus and basic vector and matrix methods should serve as an adequat...
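
    The central trade-off the book describes — fidelity to the data versus smoothness — can be shown in its simplest discrete form: a 1D Tikhonov smoother with a second-difference penalty, a basic analogue of the regularization splines discussed. The book's examples use MATLAB; this sketch uses Python with an illustrative penalty weight:

```python
import numpy as np

def smoothing_spline_1d(y, alpha):
    """Discrete Tikhonov smoother: argmin_u ||u - y||^2 + alpha*||D2 u||^2,
    where D2 is the second-difference operator (a discrete smoothing spline)."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference operator
    return np.linalg.solve(np.eye(n) + alpha * D2.T @ D2, y)

# Noisy samples of a smooth function; alpha controls the fidelity/smoothness trade-off.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(200)
u = smoothing_spline_1d(y, alpha=100.0)
```

    Larger `alpha` buys smoothness at the cost of fidelity; the n-dimensional splines in the book generalize exactly this normal-equation structure.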

  3. Structural characterization of the packings of granular regular polygons.

    Science.gov (United States)

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.
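
    The log-normal description of the Voronoi free areas can be checked numerically: if areas follow a log-normal law, then log(area) is Gaussian, and the fit reduces to the sample mean and standard deviation of the logs. The data below are synthetic stand-ins, not the simulation output:

```python
import numpy as np

# Hypothetical free-area samples drawn from a log-normal distribution.
rng = np.random.default_rng(3)
areas = rng.lognormal(mean=-0.5, sigma=0.3, size=10000)

# Log-normal fit = Gaussian fit in log space.
log_a = np.log(areas)
mu_hat, sigma_hat = log_a.mean(), log_a.std(ddof=1)

# Mean implied by the fitted log-normal, for comparison with the sample mean.
mean_fit = np.exp(mu_hat + sigma_hat**2 / 2)
```

    In practice one would also compare the fitted density against the histogram of free areas (e.g. with a quantile-quantile plot of log(area)) before accepting the log-normal description.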

  4. Mediated Ciphertext-Policy Attribute-Based Encryption and Its Application

    NARCIS (Netherlands)

    Ibraimi, L.; Petkovic, M.; Nikova, S.I.; Hartel, Pieter H.; Jonker, Willem; Youm, Heung Youl; Yung, Moti

    2009-01-01

    In Ciphertext-Policy Attribute-Based Encryption (CP-ABE), a user secret key is associated with a set of attributes, and the ciphertext is associated with an access policy over attributes. The user can decrypt the ciphertext if and only if the attribute set of his secret key satisfies the access

  5. Use of seismic attributes for sediment classification

    Directory of Open Access Journals (Sweden)

    Fabio Radomille Santana

    2015-04-01

    This study examines the relationships between seismic attributes extracted from 2D high-resolution seismic data and the seafloor sediments of the surveyed area. As seismic attributes are features highly influenced by the medium through which the seismic waves propagate, one can assume that it is possible to characterise the geological nature of the seafloor using these attributes. A survey was performed on the continental margin of the South Shetland Islands in Antarctica, where 2D high-resolution seismic data and sediment gravity-core samples were acquired simultaneously. A computational script was written to extract the seismic attributes from the data, which were statistically analysed with clustering methods such as principal components analysis, dendrograms and k-means classification. The extracted seismic attributes are the amplitude, the instantaneous phase, the instantaneous frequency, the envelope, the time derivative of the envelope, the second derivative of the envelope and the acceleration of phase. The statistical evaluation showed that geological classification of the seafloor sediments is possible by associating these attributes according to their coherence. The methodology developed here seems appropriate for the glacio-marine environment and coarse-to-medium silt sediments found in the study area and may be applied to other regions with the same geological conditions.
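
    Several of the attributes listed (envelope, instantaneous phase, instantaneous frequency) derive from the analytic signal, and the clustering step can be reproduced with a minimal k-means. The sketch below builds both for synthetic traces; the two "sediment types" and all parameters are invented for illustration:

```python
import numpy as np
from scipy.signal import hilbert

def trace_attributes(x, dt=0.001):
    """Instantaneous attributes of a trace from its analytic signal."""
    z = hilbert(x)                                   # analytic signal x + i*H[x]
    envelope = np.abs(z)
    phase = np.unwrap(np.angle(z))                   # instantaneous phase
    inst_freq = np.diff(phase) / (2 * np.pi * dt)    # instantaneous frequency (Hz)
    return envelope, inst_freq

def kmeans2(X, n_iter=50):
    """Minimal 2-means (Lloyd's algorithm) with farthest-point initialization."""
    c0 = X[0]
    c1 = X[np.argmax(((X - c0)**2).sum(1))]
    centers = np.stack([c0, c1])
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers[None])**2).sum(-1), axis=1)
        centers = np.stack([X[labels == j].mean(0) for j in range(2)])
    return labels, centers

# Two synthetic "sediment types": low- vs high-frequency decaying reflections.
t = np.arange(0, 0.256, 0.001)
traces = [np.sin(2 * np.pi * f * t) * np.exp(-5 * t) for f in [30]*20 + [80]*20]
feats = []
for x in traces:
    env, ifreq = trace_attributes(x)
    feats.append([env.mean(), ifreq.mean()])
feats = np.array(feats)
labels, _ = kmeans2(feats)
```

    A real workflow, as in the abstract, would add more attributes per trace window, standardize them, and compare the clusters against ground-truth core samples.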

  6. Assessment of Heavy Metals on Occupationally Exposed Workers from Hair Analysis

    Directory of Open Access Journals (Sweden)

    E. Damastuti

    2017-12-01

    The use of human hair as a tool for assessing changes and abnormalities in the human body has been increasing over recent decades, since hair may reflect the health status or the environmental conditions of the habitation or workplace of individuals as well as of population groups. Compared to other body tissues or fluids, hair is easy to analyse elementally, especially for reflecting long-term exposure. This research was conducted to determine the elemental content, especially of heavy metals, which bioaccumulate in human organs and affect health, in the hair of workshop workers and traffic-service officers as exposed groups, and to compare it with a control group and reference data for the assessment of occupational exposure. Hair specimens from 35 automotive workshop workers and 32 traffic-service officers were collected in Bandung, while hair specimens of the control group were collected from 43 healthy individuals. The elemental concentrations in the hair specimens were analyzed using neutron activation analysis (NAA) for mercury and chromium, and atomic absorption spectrometry (AAS) for lead and arsenic. The accuracy of the method was evaluated using the GBW 07601 human hair certified reference material (CRM) and gave good results in accordance with the certified values. Chromium, lead, and arsenic hair concentrations in the exposed groups (0.88, 10.7, and 0.051 mg/kg, respectively) were higher than in the control group (0.27, 4.52, and 0.045 mg/kg, respectively), while the mercury hair concentration of traffic-service officers was higher than in the control group but that of automotive workshop workers was lower than in the control group (1.41 mg/kg). The t-test results showed that the mercury concentration in one exposed group did not differ significantly from the control, while the other exposed groups did. The level of mercury in hair is strongly attributed not only to environmental

  7. Regularization and the potential of effective field theory in nucleon-nucleon scattering

    International Nuclear Information System (INIS)

    Phillips, D.R.

    1998-04-01

    This paper examines the role that regularization plays in the definition of the potential used in effective field theory (EFT) treatments of the nucleon-nucleon interaction. The author considers NN scattering in S-wave channels at momenta well below the pion mass. In these channels (quasi-)bound states are present at energies well below the scale m_π^2/M expected from naturalness arguments. He asks whether, in the presence of such a shallow bound state, there is a regularization scheme which leads to an EFT potential that is both useful and systematic. In general, if a low-lying bound state is present, then cutoff regularization leads to an EFT potential which is useful but not systematic, and dimensional regularization with minimal subtraction leads to one which is systematic but not useful. The recently proposed technique of dimensional regularization with power-law divergence subtraction allows the definition of an EFT potential which is both useful and systematic

  8. Development and Validation of the Poverty Attributions Survey

    Science.gov (United States)

    Bennett, Robert M.; Raiz, Lisa; Davis, Tamara S.

    2016-01-01

    This article describes the process of developing and testing the Poverty Attribution Survey (PAS), a measure of poverty attributions. The PAS is theory based and includes original items as well as items from previously tested poverty attribution instruments. The PAS was electronically administered to a sample of state-licensed professional social…

  9. 12 CFR 725.3 - Regular membership.

    Science.gov (United States)

    2010-01-01

    ... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit....5(b) of this part, and forwarding with its completed application funds equal to one-half of this... 1, 1979, is not required to forward these funds to the Facility until October 1, 1979. (3...

  10. Delayed radiation injury of gut-exposed and gut-shielded mice. II. The decrement in life span

    International Nuclear Information System (INIS)

    Spalding, J.F.; Archuleta, R.F.; Prine, J.R.

    1978-01-01

    Two mouse strains (RF/J and C57BL/6J) were exposed to x-ray doses totaling 400, 800, and 1200 rad. Total doses were given in 200-rad fractions at 7-day intervals to the whole body, gut only, or bone tissue with the gut shielded. Animals were anesthetized during exposure. Two control groups were used. A sham control group was anesthetized but not exposed to x rays, and another control group received neither anesthesia nor x-radiation. All mice were retained in a standard laboratory environment for observations on life span and histopathology at death. Life shortening was observed in all irradiated groups of strain RF/J mice and was attributed primarily to an increase in incidence and/or earlier onset of neoplasia. Life shortening was observed in the C57BL/6J whole-body exposed mice, but the effect appeared to be noncarcinogenic. Shielding of the bone or gut tissue proved to have a 100% sparing effect in strain C57 mice and none in strain RF mice. In both mouse strains, the sham control groups (anesthetized but not irradiated) showed approximately 8% life shortening below the non-anesthetized control groups and increased incidences of neoplasia of approximately 40%, suggesting that sodium pentobarbital may be as carcinogenic as x-radiation

  11. Quality Attribute Design Primitives

    National Research Council Canada - National Science Library

    Bass, Len

    2000-01-01

    This report focuses on the quality attribute aspects of mechanisms. An architectural mechanism is a 'structure whereby objects collaborate to provide some behavior that satisfies a requirement of the problem...

  12. Regularization of Hamilton-Lagrangian guiding center theories

    International Nuclear Information System (INIS)

    Correa-Restrepo, D.; Wimmel, H.K.

    1985-04-01

    The Hamilton-Lagrangian guiding-center (G.C.) theories of Littlejohn, Wimmel, and Pfirsch show a singularity for B-fields with non-vanishing parallel curl at a critical value of v_∥, which complicates applications. The singularity is related to a sudden breakdown, at a critical v_∥, of gyration in the exact particle mechanics. While the latter is a real effect, the G.C. singularity can be removed. To this end a regularization method is defined that preserves the Hamilton-Lagrangian structure and the conservation theorems. For demonstration this method is applied to the standard G.C. theory (without polarization drift). Liouville's theorem and G.C. kinetic equations are also derived in regularized form. The method could equally well be applied to the case with polarization drift and to relativistic G.C. theory. (orig.)

  13. Persistent low-grade inflammation and regular exercise

    DEFF Research Database (Denmark)

    Åström, Maj-brit; Feigh, Michael; Pedersen, Bente Klarlund

    2010-01-01

    Persistent low-grade systemic inflammation is a feature of chronic diseases such as cardiovascular disease (CVD), type 2 diabetes and dementia, and evidence exists that inflammation is a causal factor in the development of insulin resistance and atherosclerosis. Regular exercise offers protection against all of these diseases, and recent evidence suggests that the protective effect of exercise may to some extent be ascribed to an anti-inflammatory effect of regular exercise. Visceral adiposity contributes to systemic inflammation and is independently associated with the occurrence of CVD, type 2 diabetes and dementia. We suggest that the anti-inflammatory effects of exercise may be mediated via a long-term effect of exercise leading to a reduction in visceral fat mass and/or by induction of anti-inflammatory cytokines with each bout of exercise.

  14. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

    Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show this, we adapted to the visual domain an auditory paradigm developed by Sussman, Ritter, and Vaughan (1998; Predictability of stimulus deviance and the mismatch negativity, NeuroReport, 9, 4167-4170) and Sussman and Gumenyuk (2005; Organization of sequential sounds in auditory memory, NeuroReport, 16, 1519-1523), presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparison of the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the human visual system, revealed that the visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition, supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity.

  15. Regularized semiclassical limits: Linear flows with infinite Lyapunov exponents

    KAUST Repository

    Athanassoulis, Agissilaos; Katsaounis, Theodoros; Kyza, Irene

    2016-01-01

    Semiclassical asymptotics for Schrödinger equations with non-smooth potentials give rise to ill-posed formal semiclassical limits. These problems have attracted a lot of attention in the last few years, as a proxy for the treatment of eigenvalue crossings, i.e. general systems. It has recently been shown that the semiclassical limit for conical singularities is in fact well-posed, as long as the Wigner measure (WM) stays away from singular saddle points. In this work we develop a family of refined semiclassical estimates, and use them to derive regularized transport equations for saddle points with infinite Lyapunov exponents, extending the aforementioned recent results. In the process we answer a related question posed by P.L. Lions and T. Paul in 1993. If we consider more singular potentials, our rigorous estimates break down. To investigate whether conical saddle points, such as -|x|, admit a regularized transport asymptotic approximation, we employ a numerical solver based on a posteriori error control. Thus rigorous upper bounds for the asymptotic error in concrete problems are generated. In particular, specific phenomena which render invalid any regularized transport for -|x| are identified and quantified. In that sense our rigorous results are sharp. Finally, we use our findings to formulate a precise conjecture for the condition under which conical saddle points admit a regularized transport solution for the WM. © 2016 International Press.

  17. Object attributes combine additively in visual search.

    Science.gov (United States)

    Pramod, R T; Arun, S P

    2016-01-01

    We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in internal details, such as texture; (c) differences in emergent attributes, such as symmetry; and (d) differences in global properties, such as orientation or overall configuration of parts. Our results elucidate an enduring question in object vision by showing that the whole object is not a sum of its parts but a sum of its many attributes.
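
    The additive rule reported here amounts to modeling perceived dissimilarity as a weighted sum of per-attribute difference terms, which can be fit by least squares. The data below are hypothetical; the point is only the form of the model:

```python
import numpy as np

# Hypothetical per-attribute difference terms for object pairs:
# columns = (contour mismatch, texture difference, symmetry difference,
#            orientation/configuration difference).
rng = np.random.default_rng(4)
n_pairs = 200
terms = rng.random((n_pairs, 4))

# Ground-truth additive rule: dissimilarity is a weighted sum of the terms.
w_true = np.array([1.5, 0.8, 0.5, 1.0])
dissim = terms @ w_true + 0.05 * rng.standard_normal(n_pairs)

# Recover the weights by least squares; a high correlation between the fitted
# sums and the observed dissimilarities is the signature of additivity.
w_hat, *_ = np.linalg.lstsq(terms, dissim, rcond=None)
r = np.corrcoef(terms @ w_hat, dissim)[0, 1]
```

    If attributes combined non-additively (e.g. with strong interaction terms), the purely additive fit would leave systematic residuals and a noticeably lower correlation.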

  18. Special needs students in regular education: do they affect their classmates?

    NARCIS (Netherlands)

    Ruijs, N.

    2014-01-01

    The impact on regular students is a prominent concern in the inclusive education debate. Recent studies find that the presence of students with special educational needs harms the achievement of regular students. This study investigates inclusive practices in Dutch primary and secondary education,

  19. Safety significance of ATR passive safety response attributes

    International Nuclear Information System (INIS)

    Atkinson, S.A.

    1990-01-01

    The Advanced Test Reactor (ATR) at the Idaho National Engineering Laboratory was designed with some passive safety response attributes which contribute to the safety of the facility. The three passive safety attributes evaluated in this paper are: (1) in-core and in-vessel natural-convection cooling; (2) a passive heat-sink capability of the ATR primary coolant system (PCS) for transferring decay power from the uninsulated piping to the confinement; and (3) gravity feed of emergency coolant makeup. The safety significance of the ATR passive safety response attributes is that, given a reactor scram, the reactor can passively respond to most transients, providing adequate decay-power removal and significant time for operator action should the normal active heat removal systems and their backup systems both fail. The ATR Interim Level 1 Probabilistic Risk Assessment (PRA) models and results were used to evaluate the significance of the above three passive response attributes to the ATR fuel damage frequency (or probability). The results of the evaluation indicate that the first attribute is a major safety characteristic of the ATR. The second attribute has a noticeable but only minor safety significance. The third attribute has no significant influence on the ATR firewater injection system (emergency coolant system)

  20. Stochastic dynamic modeling of regular and slow earthquakes

    Science.gov (United States)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only to explain real physical properties but also to evaluate the stability of the calculations or the sensitivity of the results to the conditions. However, even if we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic model. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be treated as a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal
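
    A minimal illustration of adding stochastic perturbations to an otherwise deterministic slip evolution is a spring-slider integrated with the Euler-Maruyama scheme. The friction law, parameters, and noise amplitude below are invented for illustration and are far simpler than the BIEM model in the abstract:

```python
import numpy as np

# Euler-Maruyama integration of a spring-slider loaded at plate rate v_pl,
# with a Gaussian perturbation of the shear stress (a toy stand-in for the
# Gaussian deviations described in the abstract).
rng = np.random.default_rng(5)
dt, n_steps = 0.01, 20000
k, eta, v_pl, sigma = 1.0, 0.5, 1.0, 0.3   # stiffness, viscosity, load rate, noise

tau = 0.0
slip = np.zeros(n_steps)
for i in range(1, n_steps):
    v = max(tau / eta, 0.0)                # slip rate from a linear (viscous) friction law
    # Deterministic loading/relaxation plus the stochastic stress increment.
    tau += k * (v_pl - v) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    slip[i] = slip[i-1] + v * dt

mean_rate = slip[-1] / (n_steps * dt)      # long-run slip rate tracks the loading rate
```

    Even in this toy model the noise produces alternating quiet intervals and bursts of faster slip around the mean loading rate, which is the qualitative effect the abstract exploits at a much richer level with perturbed slip-stress kernels.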