WorldWideScience

Sample records for constrained factor analysis

  1. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    Science.gov (United States)

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
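The patented biased-ALS variant is not reproduced here, but the baseline it modifies, alternating least squares with a constraint applied to each factor update, can be sketched as follows (non-negativity used as the example constraint; all names and data are illustrative):

```python
import numpy as np

def constrained_als(D, k, iters=500, seed=0):
    """Alternating least squares for D ~ C @ S.T with non-negativity
    constraints applied by clipping each unconstrained update at zero."""
    rng = np.random.default_rng(seed)
    C = rng.random((D.shape[0], k))
    S = rng.random((D.shape[1], k))
    for _ in range(iters):
        # Solve for each factor with the other fixed, then project onto
        # the constraint set (here: non-negativity).
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0, None)
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0, None)
    return C, S

# Synthetic rank-2 spectral-like data: the constrained fit should reach
# a small residual while both factors stay non-negative.
rng = np.random.default_rng(1)
D = rng.random((30, 2)) @ rng.random((2, 8))
C, S = constrained_als(D, 2)
resid = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
print(round(resid, 4))
```

The clipping step is exactly the kind of constraint the abstract says can introduce mathematical bias when components are collinear.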

  2. Client's Constraining Factors to Construction Project Management

    African Journals Online (AJOL)

    factors as a significant system that constrains project management success of public and ... finance for the project and prompt payment for work executed; clients .... consideration of the loading patterns of these variables, the major factor is ...

  3. Factorization of Constrained Energy K-Network Reliability with Perfect Nodes

    OpenAIRE

    Burgos, Juan Manuel

    2013-01-01

This paper proves a new general K-network constrained energy reliability global factorization theorem. As in the unconstrained case, besides its theoretical mathematical importance, the theorem shows how to do parallel processing in exact network constrained energy reliability calculations in order to reduce the processing time of this NP-hard problem. Together with a new simple factorization formula for its calculation, we propose a new definition of constrained energy network reliability motiva...

  4. Factors constraining accessibility and usage of information among ...

    African Journals Online (AJOL)

    Various factors may negatively impact on information acquisition and utilisation. To improve understanding of the determinants of information acquisition and utilisation, this study investigated the factors constraining accessibility and usage of poultry management information in three rural districts of Tanzania. The findings ...

  5. Public health nutrition workforce development in seven European countries: constraining and enabling factors.

    Science.gov (United States)

    Kugelberg, Susanna; Jonsdottir, Svandis; Faxelid, Elisabeth; Jönsson, Kristina; Fox, Ann; Thorsdottir, Inga; Yngve, Agneta

    2012-11-01

Little is known about current public health nutrition workforce development in Europe. The present study aimed to understand constraining and enabling factors for workforce development in seven European countries. A qualitative study comprising semi-structured face-to-face interviews was conducted, and content analysis was used to analyse the transcribed interview data. The study was carried out in Finland, Iceland, Ireland, Slovenia, Spain, Sweden and the UK. Sixty key informants participated in the study. There are constraining and enabling factors for public health nutrition workforce development. The main constraining factors relate to the lack of a supportive policy environment, fragmented organizational structures and a workforce that is not cohesive enough to implement public health nutrition strategic initiatives. Enabling factors were identified as the presence of skilled and dedicated individuals who assume roles as leaders and change agents. There is a need to strengthen coordination between policy and the implementation of programmes, which may operate anywhere from the national to the local level. Public health organizations are advised to further define aims and objectives relevant to public health nutrition. Leaders and agents of change will play important roles in fostering intersectoral partnerships, advocating for policy change, establishing professional competencies and developing education and training programmes.

  6. Constrained principal component analysis and related techniques

    CERN Document Server

    Takane, Yoshio

    2013-01-01

In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches. The book begins with four concre
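A minimal sketch of the core CPCA idea, often called external analysis: split the data into a part explained by external information G and a residual, then apply PCA to each part separately. The function and variable names are illustrative, not taken from the book:

```python
import numpy as np

def cpca_decompose(X, G):
    """Split X into a part explained by external information G (via the
    orthogonal projector onto span(G)) and a residual; PCA can then be
    applied to each part separately, the core of CPCA's external analysis."""
    P = G @ np.linalg.pinv(G)        # hat matrix of the regression on G
    X_fit = P @ X                    # structured part: what G explains
    X_res = X - X_fit                # residual part: what G cannot explain
    return X_fit, X_res

def pca_axes(M, k):
    """Top-k principal axes of M via SVD (rows of Vt)."""
    return np.linalg.svd(M - M.mean(0), full_matrices=False)[2][:k]

rng = np.random.default_rng(0)
G = rng.standard_normal((50, 2))     # external variables (e.g. a design matrix)
X = rng.standard_normal((50, 6))
X_fit, X_res = cpca_decompose(X, G)
axes = pca_axes(X_fit, 1)            # PCA of the constrained part
# The decomposition is exact and the two parts are orthogonal.
print(np.allclose(X_fit + X_res, X))
```

The orthogonality of the two parts follows because the hat matrix is a symmetric idempotent projector.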

  7. Client's constraining factors to construction project management ...

    African Journals Online (AJOL)

This study analyzed client-related factors that constrain project management success of public and private sector construction in Nigeria. Issues that concern clients in any project cannot be underestimated, as they are the owners and the initiators of project proposals. It is assumed that success, failure or abandonment of ...

  8. Institutional and Actor-Oriented Factors Constraining Expert-Based Forest Information Exchange in Europe: A Policy Analysis from an Actor-Centred Institutionalist Approach

    Directory of Open Access Journals (Sweden)

    Tanya Baycheva-Merger

    2018-03-01

Adequate and accessible expert-based forest information is increasingly in demand for effective decisions and informed policies in the forest and forest-related sectors in Europe. Such accessibility requires a collaborative environment and constant information exchange between various actors at different levels and across sectors. However, information exchange in complex policy environments is challenging, and is often constrained by various institutional, actor-oriented, and technical factors. In forest policy research, no study has yet attempted to simultaneously account for these multiple factors influencing expert-based forest information exchange. By employing a policy analysis from an actor-centred institutionalist perspective, this paper aims to provide an overview of the most salient institutional and actor-oriented factors that are perceived as constraining forest information exchange at the national level across European countries. We employ an exploratory research approach, and utilise both qualitative and quantitative methods to analyse our data. The data was collected through a semi-structured survey targeted at forest and forest-related composite actors in 21 European countries. The results revealed that expert-based forest information exchange is constrained by a number of compound and closely interlinked institutional and actor-oriented factors, reflecting the complex interplay of institutions and actors at the national level. The most salient institutional factors that stand out include restrictive or ambiguous data protection policies, inter-organisational information arrangements, different organisational cultures, and a lack of incentives. Forest information exchange becomes even more complex when actors are confronted with actor-oriented factors such as issues of distrust, diverging preferences and perceptions, intellectual property rights, and technical capabilities. We conclude that expert-based forest information ...

  9. A retrospective content analysis of studies on factors constraining the implementation of health sector reform in Ghana.

    Science.gov (United States)

    Sakyi, E Kojo

    2008-01-01

Ghana has undertaken many public service management reforms in the past two decades. But the implementation of the reforms has been constrained by many factors. This paper undertakes a retrospective study of research works on the challenges to the implementation of reforms in the public health sector. It points out that most of the studies identified: (1) a centralised, weak and fragmented management system; (2) poor implementation strategy; (3) lack of motivation; (4) weak institutional framework; (5) lack of financial and human resources and (6) staff attitude and behaviour as the major causes of ineffective reform implementation. The analysis further revealed that quite a number of crucial factors obstructing reform implementation, particularly those internal to the health system, have either not been thoroughly studied or have been overlooked. The analysis identified lack of leadership; weak communication and consultation; lack of stakeholder participation; and corruption and unethical professional behaviour as some of the missing variables in the literature. The study, therefore, indicated that there are gaps in the literature that need to be filled through rigorous reform evaluation based on empirical research, particularly at district, sub-district and community levels. It further suggested that future research should be concerned with the effects of both systems and structures and behavioural factors on reform implementation.

  10. Multiplicative algorithms for constrained non-negative matrix factorization

    KAUST Repository

    Peng, Chengbin

    2012-12-01

Non-negative matrix factorization (NMF) provides the advantage of parts-based data representation through additive-only combinations. It has been widely adopted in areas like item recommending, text mining, data clustering, speech denoising, etc. In this paper, we provide an algorithm that allows the factorization to have linear or approximately linear constraints with respect to each factor. We prove that if the constraint function is linear, algorithms within our multiplicative framework will converge. This theory supports a large variety of equality and inequality constraints, and can facilitate application of NMF to a much larger domain. Taking the recommender system as an example, we demonstrate how a specialized weighted and constrained NMF algorithm can be developed to fit the problem exactly, and the tests confirm that our constraints improve the performance for both weighted and unweighted NMF algorithms under several different metrics. In particular, on the Movielens data with 94% of items, the Constrained NMF improves the recall rate by 3% compared to SVD50 and by 45% compared to SVD150, which were reported as the best two in the top-N metric. © 2012 IEEE.
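The paper's constrained multiplicative framework is not reproduced here, but the classic unconstrained multiplicative updates it generalizes can be sketched in a few lines (illustrative data and names):

```python
import numpy as np

def nmf_multiplicative(V, k, iters=500, seed=0, eps=1e-9):
    """Classic multiplicative-update NMF: V ~ W @ H with W, H >= 0.
    Non-negativity is preserved automatically because each update only
    multiplies by a ratio of non-negative quantities."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 0.1
    H = rng.random((k, V.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 15))   # exact non-negative rank-3 data
W, H = nmf_multiplicative(V, 3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(err, 4))
```

The constrained algorithms of the paper keep this multiplicative form while folding linear constraint terms into the update ratios.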

  11. Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.

    Science.gov (United States)

    Böing-Messing, Florian; Mulder, Joris

    2018-05-03

    In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.
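A rough sketch of the encompassing-prior idea behind inequality-constrained Bayes factors on variances: the Bayes factor of H1: var1 < var2 against the unconstrained hypothesis reduces to the posterior probability of the constraint divided by its prior probability (1/2 under a symmetric prior). The inverse-gamma posteriors below are a standard textbook choice, not the automatic priors of the paper:

```python
import numpy as np

def bf_inequality_variances(x1, x2, draws=200_000, seed=0):
    """Bayes factor of H1: var(pop1) < var(pop2) against the
    unconstrained hypothesis, via the encompassing-prior identity
    BF_1u = P(constraint | data) / P(constraint).  Posteriors are the
    standard inverse-gamma posteriors under Jeffreys-type priors."""
    rng = np.random.default_rng(seed)

    def posterior_variance_draws(x):
        n, s2 = len(x), np.var(x, ddof=1)
        a, b = (n - 1) / 2.0, (n - 1) * s2 / 2.0
        # If Y ~ Gamma(a, scale=1/b) then 1/Y ~ InverseGamma(a, b).
        return 1.0 / rng.gamma(a, 1.0 / b, size=draws)

    p_post = np.mean(posterior_variance_draws(x1) < posterior_variance_draws(x2))
    return p_post / 0.5

rng = np.random.default_rng(42)
x1 = rng.normal(0.0, 1.0, 50)   # population SD 1
x2 = rng.normal(0.0, 3.0, 50)   # population SD 3
bf = bf_inequality_variances(x1, x2)
print(round(bf, 3))   # near the maximum of 2: strong support for var1 < var2
```

Note how BF_1u is bounded by 2 here, which is exactly the parsimony issue the adjusted fractional Bayes factor in the article is designed to handle properly.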

  12. Characteristics and critical success factors for implementing problem-based learning in a human resource-constrained country.

    Science.gov (United States)

    Giva, Karen R N; Duma, Sinegugu E

    2015-08-31

Problem-based learning (PBL) was introduced in Malawi in 2002 in order to improve the nursing education system and respond to the acute nursing human resources shortage. However, its implementation has been very slow throughout the country. The objectives of the study were to explore and describe the goals that were identified by the college to facilitate the implementation of PBL, the resources of the organisation that facilitated the implementation of PBL, the factors related to sources of students that facilitated the implementation of PBL, and the influence of the external system of the organisation on facilitating the implementation of PBL, and to identify critical success factors that could guide the implementation of PBL in nursing education in Malawi. This is an ethnographic, exploratory and descriptive qualitative case study. Purposive sampling was employed to select the nursing college, participants and documents for review. Three data collection methods, including semi-structured interviews, participant observation and document reviews, were used to collect data. The four steps of thematic analysis were used to analyse data from all three sources. Four themes and related subthemes emerged from the triangulated data sources. The first three themes and their subthemes are related to the characteristics related to successful implementation of PBL in a human resource-constrained nursing college, whilst the last theme is related to critical success factors that contribute to successful implementation of PBL in a human resource-constrained country like Malawi. This article shows that implementation of PBL is possible in a human resource-constrained country if there is political commitment and support.

  13. The visualization and analysis of urban facility pois using network kernel density estimation constrained by multi-factors

    Directory of Open Access Journals (Sweden)

    Wenhao Yu

The urban facility, one of the most important service providers, is usually represented in GIS applications by sets of points using the POI (Point of Interest) model associated with certain human social activities. Knowledge about the distribution intensity and pattern of facility POIs is of great significance in spatial analysis, including urban planning, business location choosing and social recommendations. Kernel Density Estimation (KDE), an efficient spatial statistics tool for facilitating the processes above, plays an important role in spatial density evaluation, because the KDE method considers the decay impact of services and allows the enrichment of the information from a very simple input scatter plot to a smooth output density surface. However, traditional KDE is mainly based on the Euclidean distance, ignoring the fact that in an urban street network the service function of a POI is carried out over a network-constrained structure, rather than in a Euclidean continuous space. To address this issue, this study proposes a computational method for KDE on a network and adopts a new visualization method using a 3-D "wall" surface. Some real conditional factors are also taken into account in this study, such as traffic capacity, road direction and facility difference. In practice, the proposed method is applied to real POI data in Shenzhen city, China to depict the distribution characteristics of services under the impact of multiple factors.
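The key step, replacing Euclidean distance with shortest-path network distance inside the kernel, can be sketched on a toy street graph (pure-Python Dijkstra; the quartic kernel, graph and all numbers are illustrative, not from the paper):

```python
import heapq
import math

def dijkstra(graph, src):
    """Shortest-path distances from src over a weighted adjacency dict."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def network_kde(graph, pois, bandwidth):
    """Network-constrained KDE: the kernel decays with shortest-path
    (network) distance instead of Euclidean distance."""
    poi_dists = {p: dijkstra(graph, p) for p in pois}
    density = {}
    for node in graph:
        total = 0.0
        for p in pois:
            d = poi_dists[p].get(node, math.inf)
            if d <= bandwidth:                     # quartic (biweight) kernel
                total += 0.75 * (1.0 - (d / bandwidth) ** 2)
        density[node] = total
    return density

# Toy street network (edge weights = street lengths), POIs at A and B.
graph = {
    "A": [("B", 1.0), ("C", 2.0)],
    "B": [("A", 1.0), ("D", 2.0)],
    "C": [("A", 2.0), ("D", 1.0)],
    "D": [("B", 2.0), ("C", 1.0), ("E", 3.0)],
    "E": [("D", 3.0)],
}
dens = network_kde(graph, ["A", "B"], bandwidth=3.0)
print({k: round(v, 3) for k, v in dens.items()})
```

Weighting edges by traffic capacity or direction, as the paper does, would amount to adjusting the edge lengths fed to the shortest-path step.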

  14. Uniqueness conditions for constrained three-way factor decompositions with linearly dependent loadings

    NARCIS (Netherlands)

    Stegeman, Alwin; De Almeida, Andre L. F.

    2009-01-01

    In this paper, we derive uniqueness conditions for a constrained version of the parallel factor (Parafac) decomposition, also known as canonical decomposition (Candecomp). Candecomp/Parafac (CP) decomposes a three-way array into a prespecified number of outer product arrays. The constraint is that
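For reference, the unconstrained Candecomp/Parafac decomposition that the paper constrains can be computed by plain alternating least squares; a compact numpy sketch (linearly dependent loadings are not handled here):

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product of X (J x r) and Y (K x r) -> (J*K x r)."""
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

def cp_als(T, r, iters=100, seed=0):
    """Unconstrained Candecomp/Parafac by alternating least squares:
    T[i,j,k] ~ sum_f A[i,f] * B[j,f] * C[k,f] for a 3-way array T."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, r))
    B = rng.standard_normal((J, r))
    C = rng.standard_normal((K, r))
    T1 = T.reshape(I, J * K)                        # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)     # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)     # mode-3 unfolding
    for _ in range(iters):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

rng = np.random.default_rng(1)
A0, B0, C0 = [rng.standard_normal((d, 2)) for d in (4, 5, 6)]
T = np.einsum("if,jf,kf->ijk", A0, B0, C0)          # exact rank-2 tensor
A, B, C = cp_als(T, 2)
err = np.linalg.norm(T - np.einsum("if,jf,kf->ijk", A, B, C)) / np.linalg.norm(T)
print(round(err, 6))
```

The uniqueness conditions derived in the paper concern when the recovered A, B, C are essentially unique once some loading columns are constrained to be linearly dependent.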

  15. Supporting and Constraining Factors in the Development of University Teaching Experienced by Teachers

    Science.gov (United States)

    Jääskelä, Päivikki; Häkkinen, Päivi; Rasku-Puttonen, Helena

    2017-01-01

    Higher education calls for reform, but deeper knowledge about the prerequisites for teaching development and pedagogical change is missing. In this study, 51 university teachers' experiences of supportive or constraining factors in teaching development were investigated in the context of Finland's multidisciplinary network. The findings reveal…

  16. Structure-constrained sparse canonical correlation analysis with an application to microbiome data analysis.

    Science.gov (United States)

    Chen, Jun; Bushman, Frederic D; Lewis, James D; Wu, Gary D; Li, Hongzhe

    2013-04-01

    Motivated by studying the association between nutrient intake and human gut microbiome composition, we developed a method for structure-constrained sparse canonical correlation analysis (ssCCA) in a high-dimensional setting. ssCCA takes into account the phylogenetic relationships among bacteria, which provides important prior knowledge on evolutionary relationships among bacterial taxa. Our ssCCA formulation utilizes a phylogenetic structure-constrained penalty function to impose certain smoothness on the linear coefficients according to the phylogenetic relationships among the taxa. An efficient coordinate descent algorithm is developed for optimization. A human gut microbiome data set is used to illustrate this method. Both simulations and real data applications show that ssCCA performs better than the standard sparse CCA in identifying meaningful variables when there are structures in the data.
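The phylogeny-aware smoothness penalty of ssCCA is specific to the paper, but the general mechanics of sparse CCA, alternating soft-thresholded updates of the canonical weight vectors under a diagonal-covariance approximation, can be sketched as follows (the plain L1 threshold stands in for the structure-constrained penalty, and its level is on the scale of the cross-product matrix):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_cca(X, Y, lam=40.0, iters=50):
    """Sparse CCA sketch: power iteration on the cross-product matrix
    C = X.T @ Y with soft-thresholding of the canonical weight vectors
    (a diagonal-covariance approximation)."""
    C = X.T @ Y
    U, s, Vt = np.linalg.svd(C)
    u, v = U[:, 0], Vt[0]            # warm start from the leading pair
    for _ in range(iters):
        u = soft_threshold(C @ v, lam)
        u /= max(np.linalg.norm(u), 1e-12)
        v = soft_threshold(C.T @ u, lam)
        v /= max(np.linalg.norm(v), 1e-12)
    return u, v

# Two data sets sharing one latent variable in their first columns.
rng = np.random.default_rng(2)
z = rng.standard_normal(200)
X = rng.standard_normal((200, 5))
Y = rng.standard_normal((200, 4))
X[:, 0] = z + 0.1 * rng.standard_normal(200)
Y[:, 0] = z + 0.1 * rng.standard_normal(200)
X -= X.mean(0)
Y -= Y.mean(0)
u, v = sparse_cca(X, Y)
print(np.argmax(np.abs(u)), np.argmax(np.abs(v)))
```

In ssCCA the L1 step would be replaced by a penalty that also smooths coefficients of phylogenetically close taxa toward each other.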

  17. Dimensionally constrained energy confinement analysis of W7-AS data

    International Nuclear Information System (INIS)

    Dose, V.; Preuss, R.; Linden, W. von der

    1998-01-01

A recently assembled W7-AS stellarator database has been subject to dimensionally constrained confinement analysis. The analysis employs Bayesian inference. Dimensional information is taken from the Connor-Taylor (CT) similarity transformation theory, which provides six possible physical scenarios with associated dimensional conditions. Bayesian theory allows the calculation of the probability for each model, and it is found that the present W7-AS data are most probably described by the collisionless high-β case. Probabilities for all models and the associated exponents of a power law scaling function are presented. (author)

  18. Constrained physical therapist practice: an ethical case analysis of recommending discharge placement from the acute care setting.

    Science.gov (United States)

    Nalette, Ernest

    2010-06-01

Constrained practice is routinely encountered by physical therapists and may limit the physical therapist's primary moral responsibility, which is to help the patient become well again. Ethical practice under such conditions requires a certain moral character of the practitioner. The purposes of this article are: (1) to provide an ethical analysis of a typical patient case of constrained clinical practice, (2) to discuss the moral implications of constrained clinical practice, and (3) to identify key moral principles and virtues fostering ethical physical therapist practice. The case represents a common scenario of discharge planning in acute care health facilities in the northeastern United States. An applied ethics approach was used for case analysis. The decision following analysis of the dilemma was to provide the needed care to the patient as required by compassion, professional ethical standards, and organizational mission. Constrained clinical practice creates a moral dilemma for physical therapists. Being responsive to the patient's needs moves the physical therapist's practice toward the professional ideal of helping vulnerable patients become well again. Meeting the patient's needs is a professional requirement of the physical therapist as moral agent. Acting otherwise requires that an alternative position be ethically justified based on systematic analysis of a particular case. Skepticism of status quo practices is required to modify conventional individual, organizational, and societal practices toward meeting the patient's best interest.

  19. FACTORS CONSTRAINING THE PRODUCTION AND MARKETING OF PAWPAW (Carica papaya) IN EKITI STATE, SOUTHWESTERN NIGERIA

    OpenAIRE

    Agbowuro G.O

    2012-01-01

    The objective of this work is to identify and examine major factors constraining pawpaw production and marketing in Ekiti State, Southwestern Nigeria. Questionnaire schedule and personal interviews were used to collect data from ten Local Government Areas in the state. A total of 76 pawpaw farmers were randomly interviewed for this study. The study identified poor patronage in the market, poor marketing system, inadequate capital, poor price, inadequate extension services, poor transportation...

  20. Effects of a Cooperative Learning Strategy on the Effectiveness of Physical Fitness Teaching and Constraining Factors

    Directory of Open Access Journals (Sweden)

    Tsui-Er Lee

    2014-01-01

The effects of cooperative learning and traditional learning on the effectiveness and constraining factors of physical fitness teaching under various teaching conditions were studied. Sixty female students in Grades 7–8 were sampled to evaluate their learning of health and physical education (PE) according to the curriculum for Grades 1–9 in Taiwan. The data were quantitatively and qualitatively collected and analyzed. The overall physical fitness of the cooperative learning group exhibited substantial progress between the pretest and posttest, in which the differences in the sit-and-reach and bent-knee sit-up exercises achieved statistical significance. The performance of the cooperative learning group in the bent-knee sit-up and 800 m running exercises far exceeded that of the traditional learning group. Our qualitative data indicated that the number of people grouped before a cooperative learning session, effective administrative support, comprehensive teaching preparation, media reinforcement, constant feedback and introspection regarding cooperative learning strategies, and heterogeneous grouping are constraining factors for teaching PE by using cooperative learning strategies. Cooperative learning is considered an effective route for attaining physical fitness among students. PE teachers should consider providing extrinsic motivation for developing learning effectiveness.

  1. Analysis of multi cloud storage applications for resource constrained mobile devices

    Directory of Open Access Journals (Sweden)

    Rajeev Kumar Bedi

    2016-09-01

Cloud storage, which can be a surrogate for all physical hardware storage devices, is a term which reflects an enormous advancement in engineering (Hung et al., 2012). However, there are many issues that need to be handled when accessing cloud storage on resource constrained mobile devices, due to inherent limitations of mobile devices such as limited storage capacity, processing power and battery backup (Yeo et al., 2014). There are many multi cloud storage applications available, which handle issues faced by single cloud storage applications. In this paper, we provide an analysis of different multi cloud storage applications developed for resource constrained mobile devices, checking their performance on the basis of parameters such as battery consumption, CPU usage, data usage and time consumed, using a Sony Xperia ZL smartphone on a WiFi network. Lastly, conclusions and open research challenges in these multi cloud storage apps are discussed.

  2. The Smoothing Artifact of Spatially Constrained Canonical Correlation Analysis in Functional MRI

    Directory of Open Access Journals (Sweden)

    Dietmar Cordes

    2012-01-01

A wide range of studies show the capacity of multivariate statistical methods for fMRI to improve the mapping of brain activations in a noisy environment. An advanced method uses local canonical correlation analysis (CCA) to encompass a group of neighboring voxels instead of looking at the single voxel time course. The value of a suitable test statistic is used as a measure of activation. It is customary to assign the value to the center voxel; however, this is a choice of convenience and, without constraints, introduces artifacts, especially in regions of strong localized activation. To compensate for these deficiencies, different spatial constraints in CCA have been introduced to enforce dominance of the center voxel. However, even if the dominance condition for the center voxel is satisfied, constrained CCA can still lead to a smoothing artifact, often called the “bleeding artifact of CCA”, in fMRI activation patterns. In this paper a new method is introduced to measure and correct for the smoothing artifact for constrained CCA methods. It is shown that constrained CCA methods corrected for the smoothing artifact lead to more plausible activation patterns in fMRI, as shown using data from a motor task and a memory task.

  3. Constrained relationship agency as the risk factor for intimate ...

    African Journals Online (AJOL)

    We used structural equation modelling to identify and measure constrained relationship agency (CRA) as a latent variable, and then tested the hypothesis that CRA plays a significant role in the pathway between IPV and transactional sex. After controlling for CRA, receiving more material goods from a sexual partner was ...

  4. k-t PCA: temporally constrained k-t BLAST reconstruction using principal component analysis

    DEFF Research Database (Denmark)

    Pedersen, Henrik; Kozerke, Sebastian; Ringgaard, Steffen

    2009-01-01

...in applications exhibiting a broad range of temporal frequencies such as free-breathing myocardial perfusion imaging. We show that temporal basis functions calculated by subjecting the training data to principal component analysis (PCA) can be used to constrain the reconstruction such that the temporal resolution ... is improved. The presented method is called k-t PCA.
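The central constraint, a temporal basis learned by PCA from training data, can be illustrated in a few lines (synthetic time courses; this is not the full k-t PCA reconstruction pipeline):

```python
import numpy as np

def temporal_pca_basis(training, k):
    """Top-k temporal basis functions from training time courses
    (voxels x time), obtained as the leading right singular vectors;
    k-t PCA uses such a basis to constrain the reconstruction."""
    centered = training - training.mean(axis=1, keepdims=True)
    return np.linalg.svd(centered, full_matrices=False)[2][:k]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 64)
modes = np.vstack([np.sin(2 * np.pi * 3 * t),      # two underlying temporal modes
                   np.cos(2 * np.pi * 7 * t)])
training = rng.standard_normal((100, 2)) @ modes + 0.01 * rng.standard_normal((100, 64))
basis = temporal_pca_basis(training, 2)            # 2 x 64, orthonormal rows
# A new time course built from the same modes lies almost in the basis span.
x = 1.5 * modes[0] - 0.5 * modes[1]
rel_resid = np.linalg.norm(x - basis.T @ (basis @ x)) / np.linalg.norm(x)
print(round(rel_resid, 3))
```

Restricting reconstructed time courses to such a low-dimensional basis is what allows the undersampled k-t acquisition to retain temporal resolution.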

  5. Factors Constraining Local Food Crop Production in Indonesia: Experiences from Kulon Progo Regency, Yogyakarta Special Province

    Directory of Open Access Journals (Sweden)

    RADEN RIJANTA

    2013-01-01

Local food crops are believed to be important alternatives in facing the problems of the continuously growing price of food stuff worldwide. There has been a strong bias in national agricultural development policy towards the production of rice as staple food in Indonesia. Local food crops have been neglected in the agricultural development policy in the last 50 years, leading to the dependency on imported commodities and creating a vulnerability in the national food security. This paper aims at assessing the factors constraining local food production in Indonesia based on empirical experiences drawn from a research in Kulon Progo Regency, Yogyakarta Province. The government of Kulon Progo Regency has declared its commitment to the development of local food commodities as a part of its agricultural development policy, as mentioned in the long-term and medium-term development planning documents. There is also a regency head decree mandating the use of local food commodities in any official events organized by the government organisations. The research shows that there are at least six policy-related problems and nine technical factors constraining local food crops production in the regency. Some of the policy-related and structural factors hampering the production of local food crops consist of (1) long-term policy biases towards rice, (2) strong biases on rice diet in the community, (3) difficulties in linking policy to practices, (4) lack of information on availability of local food crops across the regency, (5) external threat from the readily available instant food on the local market and (6) past contra-productive policy to the production of local food crops. The technical factors constraining local food production comprise (1) inferiority of the food stuff versus the instantly prepared food, (2) difficulty in preparation and risk of contagion of some crops, lack of technology for processing, (3) continuity of supply (some crops are seasonally

  6. Constrained Null Space Component Analysis for Semiblind Source Separation Problem.

    Science.gov (United States)

    Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn

    2018-02-01

The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on sources or how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the famous ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to the approach as the c-NCA approach. We also presented the c-NCA algorithm that uses signal-dependent semidefinite operators, which is a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges with a convergence rate dependent on the decay of the sequence, obtained by applying the estimated operators on corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus, it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.

  7. Structure constrained semi-nonnegative matrix factorization for EEG-based motor imagery classification.

    Science.gov (United States)

    Lu, Na; Li, Tengfei; Pan, Jinjin; Ren, Xiaodong; Feng, Zuren; Miao, Hongyu

    2015-05-01

Electroencephalogram (EEG) provides a non-invasive approach to measure the electrical activities of brain neurons and has long been employed for the development of brain-computer interfaces (BCI). For this purpose, various patterns/features of EEG data need to be extracted and associated with specific events like cue-paced motor imagery. However, this is a challenging task since EEG data are usually non-stationary time series with a low signal-to-noise ratio. In this study, we propose a novel method, called structure constrained semi-nonnegative matrix factorization (SCS-NMF), to extract the key patterns of EEG data in the time domain by imposing the mean envelopes of event-related potentials (ERPs) as constraints on the semi-NMF procedure. The proposed method is applicable to general EEG time series, and the extracted temporal features by SCS-NMF can also be combined with other features in the frequency domain to improve the performance of motor imagery classification. Real data experiments have been performed using the SCS-NMF approach for motor imagery classification, and the results clearly suggest the superiority of the proposed method. Comparison experiments have also been conducted. The compared methods include ICA, PCA, Semi-NMF, Wavelets, EMD and CSP, which further verified the effectiveness of SCS-NMF. The SCS-NMF method could obtain better or competitive performance over state-of-the-art methods, which provides a novel solution for brain pattern analysis from the perspective of structure constraint. Copyright © 2015 Elsevier Ltd. All rights reserved.
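SCS-NMF adds ERP-envelope structure constraints that are omitted in this sketch of the underlying semi-NMF updates (Ding-style splitting into positive and negative parts; mixed-sign data, non-negative G; illustrative names and data):

```python
import numpy as np

def semi_nmf(X, k, iters=300, seed=0, eps=1e-9):
    """Semi-NMF: X ~ F @ G.T where X and F may be mixed-sign but
    G >= 0. F gets an exact least-squares update; G gets a
    multiplicative update split into positive/negative parts."""
    rng = np.random.default_rng(seed)
    G = rng.random((X.shape[1], k)) + 0.1
    pos = lambda A: (np.abs(A) + A) / 2.0          # positive part of a matrix
    neg = lambda A: (np.abs(A) - A) / 2.0          # negative part of a matrix
    for _ in range(iters):
        F = X @ G @ np.linalg.pinv(G.T @ G)        # unconstrained factor
        XtF, FtF = X.T @ F, F.T @ F
        G *= np.sqrt((pos(XtF) + G @ neg(FtF)) /
                     (neg(XtF) + G @ pos(FtF) + eps))
    return F, G

# Mixed-sign rank-3 data, e.g. band-filtered EEG-like signals.
rng = np.random.default_rng(3)
F0 = rng.standard_normal((12, 3))
G0 = rng.random((40, 3))
X = F0 @ G0.T
F, G = semi_nmf(X, 3)
err = np.linalg.norm(X - F @ G.T) / np.linalg.norm(X)
print(round(err, 4))
```

The semi-NMF relaxation matters for EEG because the signals are mixed-sign, so plain NMF is not directly applicable.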

  8. Constraining dark energy with Hubble parameter measurements: an analysis including future redshift-drift observations

    International Nuclear Information System (INIS)

    Guo, Rui-Yun; Zhang, Xin

    2016-01-01

The nature of dark energy affects the Hubble expansion rate (namely, the expansion history) H(z) by an integral over w(z). However, the usual observables are the luminosity distances or the angular diameter distances, which measure the distance-redshift relation. Actually, the property of dark energy affects the distances (and the growth factor) through a further integration over functions of H(z). Thus, direct measurements of the Hubble parameter H(z) at different redshifts are of great importance for constraining the properties of dark energy. In this paper, we show how typical dark energy models, for example, the ΛCDM, wCDM, CPL, and holographic dark energy models, can be constrained by the current direct measurements of H(z) (31 data points used in total in this paper, covering the redshift range z ∈ [0.07, 2.34]). In fact, the future redshift-drift observations (also referred to as the Sandage-Loeb test) can also directly measure H(z) at higher redshifts, covering the range z ∈ [2, 5]. We thus discuss what role the redshift-drift observations can play in constraining dark energy with the Hubble parameter measurements. We show that the constraints on dark energy can be improved greatly with the H(z) data from only a 10-year observation of redshift drift. (orig.)
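As a minimal illustration of how direct H(z) measurements constrain a dark energy model, a chi-square fit of the flat ΛCDM expansion rate H(z) = H0·sqrt(Ωm(1+z)³ + 1 − Ωm) can be sketched; the three data points below are hypothetical stand-ins for the 31 real measurements:

```python
import math

def hubble_lcdm(z, H0, Om):
    """Flat LCDM expansion rate: H(z) = H0 * sqrt(Om*(1+z)^3 + 1 - Om)."""
    return H0 * math.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

def chi2(data, H0, Om):
    """Chi-square of the model against (z, H, sigma_H) measurements."""
    return sum(((hubble_lcdm(z, H0, Om) - H) / sig) ** 2 for z, H, sig in data)

# Hypothetical H(z) points in km/s/Mpc (not the real compilation).
data = [(0.07, 69.0, 19.6), (0.9, 117.0, 23.0), (2.34, 222.0, 7.0)]

# Brute-force grid over (H0, Om) to locate the chi-square minimum.
best = min(((chi2(data, H0, Om), H0, Om)
            for H0 in range(60, 81)
            for Om in [i / 100 for i in range(20, 41)]),
           key=lambda t: t[0])
```

A real analysis would marginalize with an MCMC sampler, but the grid scan already shows how each H(z) point pulls directly on the expansion history rather than on an integrated distance.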

  9. Reflected stochastic differential equation models for constrained animal movement

    Science.gov (United States)

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumatopias jubatus) in southeast Alaska.
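A reflected stochastic process of the kind described can be sketched with an Euler-Maruyama scheme that folds any excursion past a barrier back into the domain; the drift, volatility, and interval below are illustrative choices, not the authors' fitted model:

```python
import math
import random

def reflected_em(x0, mu, sigma, lo, hi, dt, n, rng):
    """Euler-Maruyama for dX = mu dt + sigma dW, reflected into [lo, hi]."""
    x, path = x0, [x0]
    for _ in range(n):
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        # Mirror-reflect any excursion past a barrier back into the domain.
        while x < lo or x > hi:
            if x < lo:
                x = 2 * lo - x
            if x > hi:
                x = 2 * hi - x
        path.append(x)
    return path

# One-dimensional toy "movement" constrained between two shorelines at -1 and 1.
path = reflected_em(0.0, mu=0.5, sigma=1.0, lo=-1.0, hi=1.0,
                    dt=0.01, n=1000, rng=random.Random(42))
```

The reflection step plays the role of the latent unconstrained path in the paper's augmentation: the free Brownian increment is drawn first, then mapped back inside the barrier.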

  10. Assessing Heterogeneity for Factor Analysis Model with Continuous and Ordinal Outcomes

    Directory of Open Access Journals (Sweden)

    Ye-Mao Xia

    2016-01-01

Factor analysis models with continuous and ordinal responses are a useful tool for assessing relations between latent variables and mixed observed responses. These models have been successfully applied to many different fields, including the behavioral, educational, and social-psychological sciences. However, within the Bayesian analysis framework, most developments are constrained within parametric families, in which particular distributions are specified for the parameters of interest. This leads to difficulty in dealing with outliers and/or distributional deviations. In this paper, we propose a Bayesian semiparametric model for factor analysis with continuous and ordinal variables. A truncated stick-breaking prior is used to model the distributions of the intercept and/or covariance structural parameters. Bayesian posterior analysis is carried out through a simulation-based method, and a blocked Gibbs sampler is implemented to draw observations from the complicated posterior. For model selection, the logarithm of the pseudo-marginal likelihood is developed to compare the competing models. Empirical results are presented to illustrate the application of the methodology.
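The truncated stick-breaking prior mentioned above produces a finite set of mixture weights that sum to one; a minimal sketch of the construction (v_k ~ Beta(1, α), with the truncation level K a modeling choice):

```python
import random

def stick_breaking_weights(alpha, K, rng):
    """Truncated stick-breaking: v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k}(1 - v_j)."""
    weights, remaining = [], 1.0
    for _ in range(K - 1):
        v = rng.betavariate(1.0, alpha)   # fraction broken off the remaining stick
        weights.append(v * remaining)
        remaining *= (1.0 - v)
    weights.append(remaining)             # last weight absorbs the leftover mass
    return weights

w = stick_breaking_weights(alpha=2.0, K=20, rng=random.Random(0))
```

Smaller α concentrates mass on the first few components, while larger α spreads it out; the truncation at K is what makes the blocked Gibbs sampler tractable.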

  11. Constrained independent component analysis approach to nonobtrusive pulse rate measurements

    Science.gov (United States)

    Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.

    2012-07-01

Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provides accuracy equal to that of the finger probe oximeter.
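The constrained ICA step itself is not reproduced here, but the final pulse-rate readout from an extracted photoplethysmography signal can be sketched as picking the dominant spectral peak within a plausible heart-rate band; the sampling rate and test signal below are hypothetical:

```python
import math

def pulse_rate_bpm(signal, fs, lo_bpm=40.0, hi_bpm=200.0):
    """Estimate pulse rate as the dominant DFT frequency inside [lo, hi] bpm."""
    n = len(signal)
    mean = sum(signal) / n
    best_bpm, best_power = 0.0, -1.0
    for k in range(1, n // 2):
        bpm = 60.0 * k * fs / n                  # frequency of bin k in beats/min
        if not (lo_bpm <= bpm <= hi_bpm):
            continue                             # skip physiologically implausible bins
        re = sum((signal[t] - mean) * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum((signal[t] - mean) * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_bpm, best_power = bpm, power
    return best_bpm

# Synthetic 72 bpm photoplethysmography-like signal sampled at 30 frames/s.
fs, f0 = 30.0, 72.0 / 60.0
sig = [math.sin(2 * math.pi * f0 * t / fs) for t in range(300)]
```

Restricting the search band is a crude stand-in for the sorting problem the paper solves: it ensures the heart-rate component, not an artifact, is selected.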

  12. Hyperbolicity and constrained evolution in linearized gravity

    International Nuclear Information System (INIS)

    Matzner, Richard A.

    2005-01-01

Solving the 4-d Einstein equations as evolution in time requires solving equations of two types: the four elliptic initial data (constraint) equations, followed by the six second-order evolution equations. Analytically, the constraint equations remain solved under the action of the evolution, and one approach is to simply monitor them (unconstrained evolution). Since computational solution of differential equations introduces almost inevitable errors, it is clearly 'more correct' to introduce a scheme which actively maintains the constraints by solution (constrained evolution). This has shown promise in computational settings, but the analysis of the resulting mixed elliptic-hyperbolic method has not been completely carried out. We present such an analysis for one method of constrained evolution, applied to a simple vacuum system, linearized gravitational waves. We begin with a study of the hyperbolicity of the unconstrained Einstein equations. (Because the study of hyperbolicity deals only with the highest derivative order in the equations, linearization loses no essential details.) We then give an explicit analytical construction of the effect of initial data setting and constrained evolution for linearized gravitational waves. While this is clearly a toy model with regard to constrained evolution, certain interesting features are found which have relevance to the full nonlinear Einstein equations.

  13. Constrained multi-degree reduction with respect to Jacobi norms

    KAUST Repository

    Ait-Haddou, Rachid; Barton, Michael

    2015-01-01

We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L2-norm. This result generalizes many previous findings in the field of polynomial degree reduction. A solution method for the constrained multi-degree reduction with respect to the Jacobi L2-norm is presented.
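The paper's closed-form, Hahn-weighted solution is not reproduced here; as a plain numerical baseline, unconstrained L2 degree reduction of Bézier coefficients can be sketched as a discrete least-squares fit in the Bernstein basis (the endpoint-interpolation constraints of the constrained problem are omitted):

```python
from math import comb

import numpy as np

def bernstein_matrix(n, ts):
    """Rows: Bernstein basis functions B_{i,n}(t) evaluated at sample points ts."""
    return np.array([[comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)]
                     for t in ts])

def degree_reduce(coeffs, m, samples=200):
    """Discrete L2 degree reduction of a Bezier curve from degree n to m."""
    n = len(coeffs) - 1
    ts = np.linspace(0.0, 1.0, samples)
    target = bernstein_matrix(n, ts) @ np.asarray(coeffs, float)  # sampled curve
    reduced, *_ = np.linalg.lstsq(bernstein_matrix(m, ts), target, rcond=None)
    return reduced

# A degree-3 curve whose coefficients are the degree elevation of the
# quadratic [0, 1, 0] reduces back to it exactly.
r = degree_reduce([0.0, 2 / 3, 2 / 3, 0.0], m=2)
```

The closed-form weighted-least-squares result in the abstract replaces this numerical fit with an exact projection in the Jacobi inner product.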

  14. Constrained multi-degree reduction with respect to Jacobi norms

    KAUST Repository

    Ait-Haddou, Rachid

    2015-12-31

We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L2-norm. This result generalizes many previous findings in the field of polynomial degree reduction. A solution method for the constrained multi-degree reduction with respect to the Jacobi L2-norm is presented.

  15. Smallholder farmers’ perceptions of factors that constrain the competitiveness of a formal organic crop supply chain in KwaZulu Natal, South Africa

    Directory of Open Access Journals (Sweden)

    MAG Darroch

    2014-05-01

The 48 organic-certified members of the Ezemvelo Farmers' Organisation in KwaZulu-Natal were surveyed during October-November 2004 to assess which factors they perceive to constrain the competitiveness of a formal supply chain that markets their amadumbe, potatoes and sweet potatoes. They identified an uncertain climate, a tractor not being available when needed, delays in payments for crops sent to the pack-house, a lack of cash and credit to finance inputs, and more work than the family can handle as the current top five constraints. Principal Component Analysis further identified three valid institutional dimensions of perceived constraints and two valid farm-level dimensions. Potential solutions to better manage these constraints are discussed, including the need for the farmers to renegotiate the terms of their incomplete business contract with the pack-house agent.

  16. Modeling and analysis of rotating plates by using self sensing active constrained layer damping

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Zheng Chao; Wong, Pak Kin; Chong, Ian Ian [Univ. of Macau, Macau (China)

    2012-10-15

This paper proposes a new finite element model for an active constrained layer damped (CLD) rotating plate with a self-sensing technique. Constrained layer damping can effectively reduce vibration in rotating structures. Unfortunately, most existing research models rotating structures as beams, which is often not the case. It is meaningful to model the rotating part as a plate because of improvements in both accuracy and versatility. At the same time, existing research shows that active constrained layer damping provides a more effective vibration control approach than passive constrained layer damping. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate. Unlike previous ones, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Also, the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and the actuator. Then, a proportional control strategy is implemented to effectively control the displacement of the tip end of the rotating plate. Additionally, a parametric study is conducted to explore the impact of some design parameters on the structure's modal characteristics.

  17. Modeling and analysis of rotating plates by using self sensing active constrained layer damping

    International Nuclear Information System (INIS)

    Xie, Zheng Chao; Wong, Pak Kin; Chong, Ian Ian

    2012-01-01

This paper proposes a new finite element model for an active constrained layer damped (CLD) rotating plate with a self-sensing technique. Constrained layer damping can effectively reduce vibration in rotating structures. Unfortunately, most existing research models rotating structures as beams, which is often not the case. It is meaningful to model the rotating part as a plate because of improvements in both accuracy and versatility. At the same time, existing research shows that active constrained layer damping provides a more effective vibration control approach than passive constrained layer damping. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate. Unlike previous ones, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Also, the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and the actuator. Then, a proportional control strategy is implemented to effectively control the displacement of the tip end of the rotating plate. Additionally, a parametric study is conducted to explore the impact of some design parameters on the structure's modal characteristics.

  18. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single- and multi-objective optimization; penalty-function-based methodology; multi-objective-based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining popularity due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...

  19. An algorithm for mass matrix calculation of internally constrained molecular geometries

    International Nuclear Information System (INIS)

    Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz

    2008-01-01

Dynamic models for molecular systems require the determination of the corresponding mass matrix. For constrained geometries, these computations are often nontrivial and need special consideration. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solution of the different possible cases, depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The preexponential factor for this reaction is computed based on the harmonic model.

  20. An algorithm for mass matrix calculation of internally constrained molecular geometries.

    Science.gov (United States)

    Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz

    2008-01-28

Dynamic models for molecular systems require the determination of the corresponding mass matrix. For constrained geometries, these computations are often nontrivial and need special consideration. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solution of the different possible cases, depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The preexponential factor for this reaction is computed based on the harmonic model.

  1. Self-constrained inversion of potential fields

    Science.gov (United States)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

    We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.
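The depth weighting mentioned above is commonly implemented in the style of Li and Oldenburg, with a power tied to the decay rate of the field (for example, the structural index); the parameter values below are illustrative, not the paper's choices:

```python
def depth_weights(depths, z0, beta):
    """Li-Oldenburg style depth weighting w(z) = (z + z0)^(-beta/2).

    beta mimics the natural decay of the potential-field kernel with depth,
    so deeper cells are not systematically under-weighted in the inversion.
    """
    return [(z + z0) ** (-beta / 2.0) for z in depths]

# Hypothetical cell depths in meters, with beta = 3 (a magnetic-like decay).
w = depth_weights([10.0, 20.0, 40.0], z0=5.0, beta=3.0)
```

Without such a weighting, a smallest-model inversion concentrates all recovered density or magnetization at the surface; the constraints estimated from the field analysis (depth-to-the-top, structural index) feed directly into z0 and beta.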

  2. Assessing Joint Service Opportunities through a Consideration of the Motivating and Constraining Factors

    Directory of Open Access Journals (Sweden)

    Mark Borman

    2006-11-01

In a wide range of industries, services are increasingly being developed, or evolving, to support groups of organisations. Not all such joint service initiatives, though, have been successful. This paper aims to highlight potential issues that need to be addressed when investigating the introduction of a joint service by identifying the motivators and constraints. The approach outlined draws upon network externality theory to provide the motivation for a joint service, and resource-based and dependency theories to highlight the constraining factors. Three instances of joint services, in the banking, telecommunications and travel sectors, are subsequently examined. It is concluded that as well as providing externality benefits, joint service initiatives can also improve the terms of access to a service, in particular through realising economies of scale. Furthermore, it would appear that organisations will have to think carefully about the best way to create, structure and manage a joint service initiative, including whom to partner with, given their own particular circumstances, as multiple alternative approaches, with potentially differing ramifications, are available.

  3. Sustaining Lesson Study: Resources and Factors that Support and Constrain Mathematics Teachers' Ability to Continue After the Grant Ends

    Science.gov (United States)

    Druken, Bridget Kinsella

Lesson study, a teacher-led vehicle for inquiring into teacher practice through creating, enacting, and reflecting on collaboratively designed research lessons, has been shown to improve mathematics teacher practice in the United States, such as improving knowledge about mathematics, changing teacher practice, and developing communities of teachers. Though it has been described as a sustainable form of professional development, little research exists on what might support teachers in continuing to engage in lesson study after a grant ends. This qualitative, multi-case study investigates the sustainability of lesson study as mathematics teachers engage in a district scale-up lesson study professional experience after participating in a three-year California Mathematics Science Partnership (CaMSP) grant to improve algebraic instruction. To do so, I first provide a description of material (e.g. curricular materials and time), human (attending district trainings and interacting with mathematics coaches), and social (qualities like trust, shared values, common goals, and expectations developed through relationships with others) resources present in the context of two school districts as reported by participants. I then describe practices of lesson study reported to have continued. I also report on teachers' conceptions of what it means to engage in lesson study. I conclude by describing how these results suggest factors that supported and constrained teachers in continuing lesson study. To accomplish this work, I used qualitative methods of grounded theory informed by a modified sustainability framework on interview, survey, and case study data about teachers, principals, and Teachers on Special Assignment (TOSAs). Four cases were selected to show the varying levels of lesson study practices that continued past the conclusion of the grant. Analyses reveal varying levels of integration, linkage, and synergy among both formally and informally arranged groups of

  4. Cosmicflows Constrained Local UniversE Simulations

    Science.gov (United States)

    Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo

    2016-01-01

This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h-1 Mpc scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s-1, i.e. the linear-theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h-1 Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h-1 Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.
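The cosmic variance statistic used here, the mean one-sigma scatter of a cell-to-cell comparison between two fields, can be computed as a simple rms of cell-wise differences; the two toy velocity fields below are hypothetical:

```python
import math

def one_sigma_scatter(field_a, field_b):
    """One-sigma scatter of a cell-to-cell comparison of two gridded fields:
    the rms of the cell-wise differences."""
    diffs = [a - b for a, b in zip(field_a, field_b)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Toy cell values (km/s) for two velocity fields over the same three cells.
s = one_sigma_scatter([100.0, -50.0, 30.0], [110.0, -60.0, 20.0])
```

Applied to a constrained simulation versus the observation-reconstructed field, a scatter at the linear-theory level is what signals that the replica tracks the real neighbourhood rather than a random realization.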

  5. Chance constrained problems: penalty reformulation and performance of sample approximation technique

    Czech Academy of Sciences Publication Activity Database

    Branda, Martin

    2012-01-01

    Roč. 48, č. 1 (2012), s. 105-122 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional research plan: CEZ:AV0Z10750506 Keywords : chance constrained problems * penalty functions * asymptotic equivalence * sample approximation technique * investment problem Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.619, year: 2012 http://library.utia.cas.cz/separaty/2012/E/branda-chance constrained problems penalty reformulation and performance of sample approximation technique.pdf

  6. Dispersive analysis of the pion transition form factor

    Science.gov (United States)

    Hoferichter, M.; Kubis, B.; Leupold, S.; Niecknig, F.; Schneider, S. P.

    2014-11-01

    We analyze the pion transition form factor using dispersion theory. We calculate the singly-virtual form factor in the time-like region based on data for the cross section, generalizing previous studies on decays and scattering, and verify our result by comparing to data. We perform the analytic continuation to the space-like region, predicting the poorly-constrained space-like transition form factor below , and extract the slope of the form factor at vanishing momentum transfer . We derive the dispersive formalism necessary for the extension of these results to the doubly-virtual case, as required for the pion-pole contribution to hadronic light-by-light scattering in the anomalous magnetic moment of the muon.

  7. Similar goals, divergent motives. The enabling and constraining factors of Russia's capacity-based renewable energy support scheme

    International Nuclear Information System (INIS)

    Smeets, Niels

    2017-01-01

In 2009, the Russian government set its first quantitative renewable energy target at 4.5% of the total electricity produced and consumed by 2020. In 2013, the Government launched its capacity-based renewable energy support scheme (CRESS); however, the Ministry of Energy expects it will merely add 0.3% to the current 0.67% share of renewables (Ministry of Energy, 2016c). This raises the question of what factors might explain this implementation gap. On the basis of field research in Moscow, the article offers an in-depth policy analysis of the resource-geographic, financial, institutional and ecological enabling and constraining factors of Russia's CRESS between 2009 and 2015. To avoid the trap that policy intentions remain on paper, the entire policy cycle, from goal setting to implementation, has been covered. The article concludes that wind energy, which would have contributed the lion's share of new renewable energy capacity, lags behind, jeopardizing the quantitative renewable energy target. The depreciation of the rouble decreased returns on investment, and the Local Content Requirement discouraged investors given the lack of Russian wind production facilities. Contrary to resource-geographic and financial expectations, solar projects have been commissioned more accurately, benefitting from access to major business groups and existing production facilities. - Highlights: • The support scheme is focused on the oversupplied integrated electricity market. • The scheme disregards the technical and economic potential in isolated areas. • The solar industry develops at the fastest rate, wind and small hydro lag behind. • Access to business groups and production facilities condition implementation. • The devaluation of the rouble necessitated a revision of the policy design.

  8. Constraining primordial non-Gaussianity with cosmological weak lensing: shear and flexion

    International Nuclear Information System (INIS)

    Fedeli, C.; Bartelmann, M.; Moscardini, L.

    2012-01-01

We examine the cosmological constraining power of future large-scale weak lensing surveys on the model of the ESA planned mission Euclid, with particular reference to primordial non-Gaussianity. Our analysis considers several different estimators of the projected matter power spectrum, based on both shear and flexion. We review the covariance and Fisher matrix for cosmic shear and evaluate those for cosmic flexion and for the cross-correlation between the two. The bounds provided by cosmic shear alone are looser than previously estimated, mainly due to the reduced sky coverage and background number density of sources for the latest Euclid specifications. New constraints for the local bispectrum shape, marginalized over σ_8, are at the level of Δf_NL ∼ 100, with the precise value depending on the exact multipole range that is considered in the analysis. We consider three additional bispectrum shapes, for which the cosmic shear constraints range from Δf_NL ∼ 340 (equilateral shape) up to Δf_NL ∼ 500 (orthogonal shape). Also, constraints on the level of non-Gaussianity and on the amplitude of the matter power spectrum σ_8 are almost perfectly anti-correlated, except for the orthogonal bispectrum shape for which they are correlated. The competitiveness of cosmic flexion constraints against cosmic shear ones depends by and large on the galaxy intrinsic flexion noise, that is still virtually unconstrained. Adopting the very high value that has been occasionally used in the literature results in the flexion contribution being basically negligible with respect to the shear one, and for realistic configurations the former does not improve significantly the constraining power of the latter. Since the shear shot noise is white, while the flexion one decreases with decreasing scale, by considering high enough multipoles the two contributions have to become comparable. Extending the analysis up to l_max = 20,000 cosmic flexion, while being still subdominant
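The Fisher-matrix machinery underlying such forecasts can be sketched for a Gaussian likelihood as F_ij = d_i^T C^{-1} d_j, with marginalized errors read off the inverse; the derivative vectors and covariance below are toy numbers, not Euclid specifications:

```python
import numpy as np

def fisher_matrix(derivs, cov):
    """Gaussian Fisher matrix F_ij = d_i^T C^{-1} d_j for observables with
    covariance C, where d_i is the derivative of the mean w.r.t. parameter i."""
    cinv = np.linalg.inv(cov)
    return np.array([[di @ cinv @ dj for dj in derivs] for di in derivs])

def marginalized_sigma(F, i):
    """1-sigma marginalized error on parameter i: sqrt((F^{-1})_ii)."""
    return np.sqrt(np.linalg.inv(F)[i, i])

# Toy two-parameter forecast (think f_NL and sigma_8) over three power-spectrum bins.
derivs = [np.array([1.0, 0.5, 0.2]),   # d(mean)/d(param 0)
          np.array([0.3, 0.8, 0.6])]   # d(mean)/d(param 1)
F = fisher_matrix(derivs, cov=np.diag([0.1, 0.1, 0.2]))
```

The off-diagonal entries of F⁻¹ encode exactly the (anti-)correlations between f_NL and σ_8 discussed in the abstract: marginalized errors are always at least as large as the conditional ones.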

  9. Hard exclusive meson production to constrain GPDs

    Energy Technology Data Exchange (ETDEWEB)

    Wolbeek, Johannes ter; Fischer, Horst; Gorzellik, Matthias; Gross, Arne; Joerg, Philipp; Koenigsmann, Kay; Malm, Pasquale; Regali, Christopher; Schmidt, Katharina; Sirtl, Stefan; Szameitat, Tobias [Physikalisches Institut, Albert-Ludwigs-Universitaet Freiburg, Freiburg im Breisgau (Germany); Collaboration: COMPASS Collaboration

    2014-07-01

The concept of Generalized Parton Distributions (GPDs) combines the two-dimensional spatial information, given by form factors, with the longitudinal momentum information from the PDFs. Thus, GPDs provide a three-dimensional 'tomography' of the nucleon. Furthermore, according to Ji's sum rule, the GPDs H and E enable access to the total angular momenta of quarks, antiquarks and gluons. While H can be approached using the electroproduction cross section, hard exclusive meson production off a transversely polarized target can help to constrain the GPD E. At the COMPASS experiment at CERN, two periods of data taking were performed in 2007 and 2010, using a longitudinally polarized 160 GeV/c muon beam and a transversely polarized NH₃ target. This talk introduces the data analysis of the process μ + p → μ' + p' + V, and recent results are presented.

  10. Improved helicopter aeromechanical stability analysis using segmented constrained layer damping and hybrid optimization

    Science.gov (United States)

    Liu, Qiang; Chattopadhyay, Aditi

    2000-06-01

Aeromechanical stability plays a critical role in helicopter design, and lead-lag damping is crucial to this design. In this paper, the use of segmented constrained layer (SCL) damping treatment and composite tailoring is investigated for improved rotor aeromechanical stability using a formal optimization technique. The principal load-carrying member in the rotor blade is represented by a composite box beam, of arbitrary thickness, with surface-bonded SCLs. A comprehensive theory is used to model the smart box beam. A ground resonance analysis model and an air resonance analysis model are implemented in the rotor blade built around the composite box beam with SCLs. The Pitt-Peters dynamic inflow model is used in the air resonance analysis under hover conditions. A hybrid optimization technique is used to investigate the optimum design of the composite box beam with surface-bonded SCLs for improved damping characteristics. Parameters such as the stacking sequence of the composite laminates and the placement of SCLs are used as design variables. Detailed numerical studies are presented for the aeromechanical stability analysis. It is shown that the optimum blade design yields a significant increase in rotor lead-lag regressive modal damping compared to the initial system.

  11. A Risk-Constrained Multi-Stage Decision Making Approach to the Architectural Analysis of Mars Missions

    Science.gov (United States)

    Kuwata, Yoshiaki; Pavone, Marco; Balaram, J. (Bob)

    2012-01-01

This paper presents a novel risk-constrained multi-stage decision making approach to the architectural analysis of planetary rover missions. In particular, focusing on a 2018 Mars rover concept, which was considered as part of a potential Mars Sample Return campaign, we model the entry, descent, and landing (EDL) phase and the rover traverse phase as four sequential decision-making stages. The problem is to find a sequence of divert and driving maneuvers so that the rover drive is minimized and the probability of a mission failure (e.g., due to a failed landing) is below a user-specified bound. By solving this problem for several different values of the model parameters (e.g., divert authority), this approach enables rigorous, accurate and systematic trade-offs for the EDL system vs. the mobility system and, more generally, cross-domain trade-offs for the different phases of a space mission. The overall optimization problem can be seen as a chance-constrained dynamic programming problem, with the additional complexity that 1) in some stages the disturbances do not have any probabilistic characterization, and 2) the state space is extremely large (i.e., hundreds of millions of states for trade-offs with high-resolution Martian maps). For this purpose, we solve the problem by performing an unconventional combination of average and minimax cost analysis and by leveraging highly efficient computational tools from the image processing community. Preliminary trade-off results are presented.
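On a toy scale, the chance-constrained trade-off described, minimizing drive cost subject to a bound on total mission-failure probability, can be sketched by brute-force enumeration over per-stage options; the costs and failure probabilities below are hypothetical, and independence across stages is an assumption of the sketch:

```python
from itertools import product

def best_plan(stages, p_max):
    """Brute-force chance-constrained planning: choose one (cost, p_fail)
    option per stage, minimizing total cost subject to the overall
    failure probability staying at or below p_max (independent stages)."""
    best = None
    for plan in product(*stages):
        cost = sum(c for c, _ in plan)
        p_succ = 1.0
        for _, p in plan:
            p_succ *= (1.0 - p)          # success probabilities multiply
        if 1.0 - p_succ <= p_max and (best is None or cost < best[0]):
            best = (cost, plan)
    return best

# Hypothetical (cost, failure-probability) options: a landing-divert stage
# followed by a driving-route stage.
stages = [[(0.0, 0.05), (2.0, 0.01)],
          [(5.0, 0.02), (9.0, 0.005)]]
cost, plan = best_plan(stages, p_max=0.03)
```

Dynamic programming replaces this enumeration when the state space is large, but the feasibility test is the same: the risk bound couples the stages, so a cheap landing option can force an expensive, safer drive later.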

  12. Constrained evolution in numerical relativity

    Science.gov (United States)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  13. Analysis of the Spatial Variation of Network-Constrained Phenomena Represented by a Link Attribute Using a Hierarchical Bayesian Model

    Directory of Open Access Journals (Sweden)

    Zhensheng Wang

    2017-02-01

    Full Text Available The spatial variation of geographical phenomena is a classical problem in spatial data analysis and can provide insight into underlying processes. Traditional exploratory methods mostly depend on the planar distance assumption, but many spatial phenomena are constrained to a subset of Euclidean space. In this study, we apply a method based on a hierarchical Bayesian model to analyse the spatial variation of network-constrained phenomena represented by a link attribute, in conjunction with two experiments based on a simplified hypothetical network and a complex road network in Shenzhen that includes 4212 urban facility points of interest (POIs) for leisure activities. Then, methods named local indicators of network-constrained clusters (LINCS) are applied to explore local spatial patterns in the given network space. The proposed method is designed for phenomena that are represented by attribute values of network links and is capable of removing part of the random variability resulting from small-sample estimation. The effects of spatial dependence and the base distribution are also considered in the proposed method, which could be applied in the fields of urban planning and safety research.

  14. Factors that influence m-health implementations in resource constrained areas in the developing world

    CSIR Research Space (South Africa)

    Ouma, S

    2011-11-01

    Full Text Available the primary healthcare levels in order to improve the delivery of services within various communities. They further provide the issues that m-health service providers should take into account when providing m-health solutions to the resource constrained...

  15. The impact of initialization procedures on unsupervised unmixing of hyperspectral imagery using the constrained positive matrix factorization

    Science.gov (United States)

    Masalmah, Yahya M.; Vélez-Reyes, Miguel

    2007-04-01

    The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing of HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme: a good initialization can improve convergence speed and determine whether a global minimum is found and whether spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
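    The role of initialization can be seen with a minimal stand-in for the paper's cPMF: plain multiplicative-update nonnegative matrix factorization (Lee and Seung), comparing a random start against seeding the endmembers with the largest-norm pixels. The function and initializer names are ours, and the update rule is the generic NMF one, not the paper's Gauss-Seidel or penalty scheme.

```python
import numpy as np

# Y (bands x pixels) is factorised into endmembers E and abundances A,
# both nonnegative, via standard multiplicative updates.
def nmf(Y, k, E0, A0, iters=200, eps=1e-9):
    E, A = E0.copy(), A0.copy()
    for _ in range(iters):
        A *= (E.T @ Y) / (E.T @ E @ A + eps)
        E *= (Y @ A.T) / (E @ A @ A.T + eps)
    return E, A

def init_random(Y, k, rng):
    return rng.random((Y.shape[0], k)), rng.random((k, Y.shape[1]))

def init_longest_norm(Y, k, rng):
    # seed endmembers with the k pixels of largest norm
    idx = np.argsort(np.linalg.norm(Y, axis=0))[-k:]
    return Y[:, idx].copy(), rng.random((k, Y.shape[1]))

rng = np.random.default_rng(0)
E_true = rng.random((50, 3)); A_true = rng.random((3, 400))
Y = E_true @ A_true                      # noiseless simulated scene
for init in (init_random, init_longest_norm):
    E, A = nmf(Y, 3, *init(Y, 3, rng))
    print(init.__name__, np.linalg.norm(Y - E @ A) / np.linalg.norm(Y))
```

Running both initializations on the same data makes the comparison the abstract describes concrete: identical update rules, different starting points, potentially different fits and endmembers.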

  16. Generation and Analysis of Constrained Random Sampling Patterns

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2016-01-01

    Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates the signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper, we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose an algorithm that generates random sampling patterns dedicated to event-driven ADCs better than existing sampling pattern generators. Finally, implementation issues of random sampling patterns are discussed.
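    A minimal sketch of what "constrained" means here (our construction, not the paper's generator): draw a random pattern of n sampling points over T time slots subject to an ADC-style minimum spacing of t_min slots between consecutive samples, using a stars-and-bars mapping from an unconstrained draw.

```python
import random

def constrained_pattern(n, T, t_min, rng=random):
    # Draw n sorted points from a reduced grid, then shift each by its
    # index so that consecutive samples end up >= t_min slots apart.
    reduced = T - (n - 1) * (t_min - 1)
    if reduced < n:
        raise ValueError("constraint infeasible for these parameters")
    base = sorted(rng.sample(range(reduced), n))
    return [c + i * (t_min - 1) for i, c in enumerate(base)]

pattern = constrained_pattern(n=16, T=256, t_min=8, rng=random.Random(1))
print(pattern)
```

The mapping keeps the draw uniform over all feasible patterns, which is one simple baseline a pattern-generator evaluation could compare against.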

  17. Constrained mathematics evaluation in probabilistic logic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Arlin Cooper, J

    1998-06-01

    A challenging problem in mathematically processing uncertain operands is that constraints inherent in the problem definition can require computations that are difficult to implement. Examples of possible constraints are that the sum of the probabilities of partitioned possible outcomes must be one, and repeated appearances of the same variable must all have the identical value. The latter, called the 'repeated variable problem', will be addressed in this paper in order to show how interval-based probabilistic evaluation of Boolean logic expressions, such as those describing the outcomes of fault trees and event trees, can be facilitated in a way that can be readily implemented in software. We will illustrate techniques that can be used to transform complex constrained problems into trivial problems in most tree logic expressions, and into tractable problems in most other cases.
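    The repeated variable problem can be shown with a toy expression of our own choosing, f(p) = p(1 - p): naive interval arithmetic lets the two occurrences of p vary independently and overstates the uncertainty, while a constrained evaluation that ties them to a single value recovers the exact range.

```python
# Naive interval arithmetic: [lo, hi] * [1 - hi, 1 - lo], with the two
# occurrences of p treated as independent quantities.
def naive_interval(lo, hi):
    products = [a * b for a in (lo, hi) for b in (1 - hi, 1 - lo)]
    return min(products), max(products)

# Constrained evaluation: a single p, so scan f(p) = p * (1 - p) over the
# interval directly (f peaks at p = 0.5, so include it when inside).
def constrained_interval(lo, hi):
    pts = [lo, hi] + ([0.5] if lo < 0.5 < hi else [])
    vals = [p * (1 - p) for p in pts]
    return min(vals), max(vals)

print(naive_interval(0.2, 0.4))        # (0.12, 0.32) -- too wide
print(constrained_interval(0.2, 0.4))  # (0.16, 0.24) -- exact
```

This is the kind of transformation the abstract alludes to: recognizing the repeated variable turns an inflated interval into the true one.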

  18. Constraining primordial non-Gaussianity with cosmological weak lensing: shear and flexion

    Energy Technology Data Exchange (ETDEWEB)

    Fedeli, C. [Department of Astronomy, University of Florida, 211 Bryant Space Science Center, Gainesville, FL 32611-2055 (United States); Bartelmann, M. [Zentrum für Astronomie, Universität Heidelberg, Albert-Überle-Straße 2, 69120 Heidelberg (Germany); Moscardini, L., E-mail: cosimo.fedeli@astro.ufl.edu, E-mail: bartelmann@uni-heidelberg.de, E-mail: lauro.moscardini@unibo.it [Dipartimento di Astronomia, Università di Bologna, Via Ranzani 1, 40127 Bologna (Italy)

    2012-10-01

    We examine the cosmological constraining power of future large-scale weak lensing surveys on the model of the planned ESA mission Euclid, with particular reference to primordial non-Gaussianity. Our analysis considers several different estimators of the projected matter power spectrum, based on both shear and flexion. We review the covariance and Fisher matrix for cosmic shear and evaluate those for cosmic flexion and for the cross-correlation between the two. The bounds provided by cosmic shear alone are looser than previously estimated, mainly due to the reduced sky coverage and background number density of sources in the latest Euclid specifications. New constraints for the local bispectrum shape, marginalized over σ{sub 8}, are at the level of Δf{sub NL} ∼ 100, with the precise value depending on the exact multipole range considered in the analysis. We consider three additional bispectrum shapes, for which the cosmic shear constraints range from Δf{sub NL} ∼ 340 (equilateral shape) up to Δf{sub NL} ∼ 500 (orthogonal shape). Also, constraints on the level of non-Gaussianity and on the amplitude of the matter power spectrum σ{sub 8} are almost perfectly anti-correlated, except for the orthogonal bispectrum shape, for which they are correlated. The competitiveness of cosmic flexion constraints against cosmic shear ones depends largely on the galaxy intrinsic flexion noise, which is still virtually unconstrained. Adopting the very high value that has been occasionally used in the literature results in the flexion contribution being basically negligible with respect to the shear one, and for realistic configurations the former does not significantly improve the constraining power of the latter. Since the shear shot noise is white, while the flexion one decreases with decreasing scale, by considering high enough multipoles the two contributions have to become comparable. Extending the analysis up to l{sub max} = 20,000 cosmic flexion, while

  19. Comparison of phase-constrained parallel MRI approaches: Analogies and differences.

    Science.gov (United States)

    Blaimer, Martin; Heim, Marius; Neumann, Daniel; Jakob, Peter M; Kannengiesser, Stephan; Breuer, Felix A

    2016-03-01

    Phase-constrained parallel MRI approaches have the potential for significantly improving the image quality of accelerated MRI scans. The purpose of this study was to investigate the properties of two different phase-constrained parallel MRI formulations, namely the standard phase-constrained approach and the virtual conjugate coil (VCC) concept utilizing conjugate k-space symmetry. Both formulations were combined with image-domain algorithms (SENSE) and a mathematical analysis was performed. Furthermore, the VCC concept was combined with k-space algorithms (GRAPPA and ESPIRiT) for image reconstruction. In vivo experiments were conducted to illustrate analogies and differences between the individual methods. Furthermore, a simple method of improving the signal-to-noise ratio by modifying the sampling scheme was implemented. For SENSE, the VCC concept was mathematically equivalent to the standard phase-constrained formulation and therefore yielded identical results. In conjunction with k-space algorithms, the VCC concept provided more robust results when only a limited amount of calibration data were available. Additionally, VCC-GRAPPA reconstructed images provided spatial phase information with full resolution. Although both phase-constrained parallel MRI formulations are very similar conceptually, there exist important differences between image-domain and k-space domain reconstructions regarding the calibration robustness and the availability of high-resolution phase information. © 2015 Wiley Periodicals, Inc.
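    The conjugate k-space symmetry behind the VCC concept can be demonstrated in one dimension: the virtual coil's k-space is the complex conjugate of the measured k-space reflected through the origin, which corresponds to conjugating the coil image. The index bookkeeping below follows NumPy's FFT ordering and is our illustration, not the paper's reconstruction code.

```python
import numpy as np

# A complex-valued "coil image" with nontrivial phase, and its k-space.
rng = np.random.default_rng(0)
image = rng.random(64) * np.exp(1j * rng.uniform(-np.pi, np.pi, 64))
kspace = np.fft.fft(image)

# Virtual conjugate coil: reflect k -> -k (mod N), then conjugate.
# With NumPy's frequency ordering, the reflection is reverse-and-roll.
vcc_kspace = np.conj(np.roll(kspace[::-1], 1))
vcc_image = np.fft.ifft(vcc_kspace)

print(np.allclose(vcc_image, np.conj(image)))
```

Because the virtual coil sees the conjugated image phase, stacking it alongside the measured coils effectively doubles the phase information available to the parallel-imaging reconstruction.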

  20. Estimation of physiological parameters using knowledge-based factor analysis of dynamic nuclear medicine image sequences

    International Nuclear Information System (INIS)

    Yap, J.T.; Chen, C.T.; Cooper, M.

    1995-01-01

    The authors have previously developed a knowledge-based method of factor analysis to analyze dynamic nuclear medicine image sequences. In this paper, the authors analyze dynamic PET cerebral glucose metabolism and neuroreceptor binding studies. These methods have shown the ability to reduce the dimensionality of the data, enhance the image quality of the sequence, and generate meaningful functional images and their corresponding physiological time functions. The new information produced by the factor analysis has now been used to improve the estimation of various physiological parameters. A principal component analysis (PCA) is first performed to identify statistically significant temporal variations and remove the uncorrelated variations (noise) due to Poisson counting statistics. The statistically significant principal components are then used to reconstruct a noise-reduced image sequence as well as to provide an initial solution for the factor analysis. Prior knowledge, such as compartmental models or the requirements of positivity and simple structure, can be used to constrain the analysis. These constraints are used to rotate the factors to the most physically and physiologically realistic solution. The final result is a small number of time functions (factors) representing the underlying physiological processes and their associated weighting images representing the spatial localization of these functions. Estimation of physiological parameters can then be performed using the noise-reduced image sequence generated from the statistically significant PCs and/or the final factor images and time functions. These results are compared to the parameter estimates obtained using standard methods and the original raw image sequences. Graphical analysis was performed at the pixel level to generate comparable parametric images of the slope and intercept (influx constant and distribution volume).
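    The PCA noise-reduction step described above can be sketched as follows. This is a simplification of the paper's method: we threshold components by explained variance rather than by a formal significance test, and the two-factor phantom is made up for illustration.

```python
import numpy as np

# seq: pixels x time frames. Keep the top principal components and
# reconstruct a noise-reduced sequence from them.
def pca_denoise(seq, var_kept=0.95):
    mean = seq.mean(axis=0)
    X = seq - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(frac, var_kept)) + 1   # components retained
    return (U[:, :k] * s[:k]) @ Vt[:k] + mean, k

# Phantom: two underlying time functions ("factors") mixed per pixel,
# plus Poisson-like noise approximated as Gaussian.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 30)
curves = np.array([np.exp(-3 * t), 1 - np.exp(-3 * t)])
weights = rng.random((500, 2))
seq = weights @ curves + 0.01 * rng.standard_normal((500, 30))
denoised, k = pca_denoise(seq)
print(k, np.abs(denoised - weights @ curves).max())
```

The retained components play the same role as the "statistically significant PCs" above: they carry the temporal structure, while the discarded components carry mostly counting noise.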

  1. Foundations of factor analysis

    CERN Document Server

    Mulaik, Stanley A

    2009-01-01

    Introduction; Factor Analysis and Structural Theories; Brief History of Factor Analysis as a Linear Model; Example of Factor Analysis. Mathematical Foundations for Factor Analysis: Introduction; Scalar Algebra; Vectors; Matrix Algebra; Determinants; Treatment of Variables as Vectors; Maxima and Minima of Functions. Composite Variables and Linear Transformations: Introduction; Composite Variables; Unweighted Composite Variables; Differentially Weighted Composites; Matrix Equations; Multi

  2. Exploring Constrained Creative Communication

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk

    2017-01-01

    Creative collaboration via online tools offers a less ‘media rich’ exchange of information between participants than face-to-face collaboration. The participants’ freedom to communicate is restricted in means of communication, and rectified in terms of possibilities offered in the interface. How do these constraints influence the creative process and the outcome? In order to isolate the communication problem from the interface and technology problem, we examine the creative communication on an open-ended task in a highly constrained setting, a design game. Via an experiment, the relation between communicative constraints and participants’ perception of dialogue and creativity is examined. Four batches of students preparing to form semester project groups were conducted and documented. Students were asked to create an unspecified object without any exchange of communication except...

  3. KINETIC CONSEQUENCES OF CONSTRAINING RUNNING BEHAVIOR

    Directory of Open Access Journals (Sweden)

    John A. Mercer

    2005-06-01

    Full Text Available It is known that impact forces increase with running velocity as well as when stride length increases. Since stride length naturally changes with changes in submaximal running velocity, it was not clear which factor, running velocity or stride length, played a critical role in determining impact characteristics. The aim of the study was to investigate whether or not stride length influences the relationship between running velocity and impact characteristics. Eight volunteers (mass = 72.4 ± 8.9 kg; height = 1.7 ± 0.1 m; age = 25 ± 3.4 years) completed two running conditions: preferred stride length (PSL) and stride length constrained at 2.5 m (SL2.5). During each condition, participants ran at a variety of speeds with the intent that the range of speeds would be similar between conditions. During PSL, participants were given no instructions regarding stride length. During SL2.5, participants were required to strike targets placed on the floor that resulted in a stride length of 2.5 m. Ground reaction forces were recorded (1080 Hz) as well as leg and head accelerations (uni-axial accelerometers). Impact force and impact attenuation (calculated as the ratio of head and leg impact accelerations) were recorded for each running trial. Scatter plots were generated plotting each parameter against running velocity. Lines of best fit were calculated, with the slopes recorded for analysis. The slopes were compared between conditions using paired t-tests. Data from two subjects were dropped from analysis since their velocity ranges were not similar between conditions, resulting in the analysis of six subjects. The slope of the impact force vs. velocity relationship was different between conditions (PSL: 0.178 ± 0.16 BW/m·s-1; SL2.5: -0.003 ± 0.14 BW/m·s-1; p < 0.05). The slope of the impact attenuation vs. velocity relationship was different between conditions (PSL: 5.12 ± 2.88 %/m·s-1; SL2.5: 1.39 ± 1.51 %/m·s-1; p < 0.05). Stride length was an important factor

  4. Regression and kriging analysis for grid power factor estimation

    Directory of Open Access Journals (Sweden)

    Rajesh Guntaka

    2014-12-01

    Full Text Available The measurement of power factor (PF) in electrical utility grids is a mainstay of load balancing and is also a critical element of transmission and distribution efficiency. The measurement of PF dates back to the earliest periods of electrical power distribution to public grids. In the wide-area distribution grid, measurement of current waveforms is trivial and may be accomplished at any point in the grid using a current tap transformer. However, voltage measurement requires a reference to ground and so is more problematic, and measurements are normally constrained to points that have ready and easy access to a ground source. We present two mathematical analysis methods, based on kriging and linear least squares estimation (LLSE, i.e. regression), to derive the PF at nodes with unknown voltages that are within a perimeter of sample nodes with a ground reference across a selected power grid. Our results indicate an average error of 1.884%, which is within acceptable tolerances for PF measurements used in load balancing tasks.
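    The LLSE (regression) half of such an approach can be sketched as a first-order surface fit: estimate PF(x, y) = a + bx + cy from the grounded sample nodes, then evaluate at nodes whose voltage (and hence PF) is unmeasured. The node coordinates and PF values below are made up for illustration, and the paper's actual estimator may differ.

```python
import numpy as np

def llse_pf(xy_known, pf_known, xy_query):
    # Least-squares fit of PF = a + b*x + c*y over the grounded nodes.
    A = np.column_stack([np.ones(len(xy_known)), xy_known])
    coef, *_ = np.linalg.lstsq(A, pf_known, rcond=None)
    Aq = np.column_stack([np.ones(len(xy_query)), xy_query])
    return Aq @ coef

# Hypothetical grounded sample nodes (x, y) and their measured PF.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pf = np.array([0.95, 0.93, 0.97, 0.95])
print(llse_pf(xy, pf, np.array([[0.5, 0.5]])))
```

Kriging would replace the global plane with a spatially weighted interpolant driven by a fitted variogram, generally improving on the regression when PF varies non-linearly across the grid.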

  5. Robust and Efficient Constrained DFT Molecular Dynamics Approach for Biochemical Modeling

    Czech Academy of Sciences Publication Activity Database

    Řezáč, Jan; Levy, B.; Demachy, I.; de la Lande, A.

    2012-01-01

    Roč. 8, č. 2 (2012), s. 418-427 ISSN 1549-9618 Institutional research plan: CEZ:AV0Z40550506 Keywords : constrained density functional theory * electron transfer * density fitting Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 5.389, year: 2012

  6. Groundwater availability as constrained by hydrogeology and environmental flows.

    Science.gov (United States)

    Watson, Katelyn A; Mayer, Alex S; Reeves, Howard W

    2014-01-01

    Groundwater pumping from aquifers in hydraulic connection with nearby streams has the potential to cause adverse impacts by decreasing flows to levels below those necessary to maintain aquatic ecosystems. The recent passage of the Great Lakes-St. Lawrence River Basin Water Resources Compact has brought attention to this issue in the Great Lakes region. In particular, the legislation requires the Great Lakes states to enact measures for limiting water withdrawals that can cause adverse ecosystem impacts. This study explores how both hydrogeologic and environmental flow limitations may constrain groundwater availability in the Great Lakes Basin. A methodology for calculating maximum allowable pumping rates is presented. Groundwater availability across the basin may be constrained by a combination of hydrogeologic yield and environmental flow limitations varying over both local and regional scales. The results are sensitive to factors such as pumping time, regional and local hydrogeology, streambed conductance, and streamflow depletion limits. Understanding how these restrictions constrain groundwater usage and which hydrogeologic characteristics and spatial variables have the most influence on potential streamflow depletions has important water resources policy and management implications. © 2013, National Ground Water Association.
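    One classical building block for a maximum-allowable-pumping methodology (our illustration with made-up parameter values, not necessarily the workflow used in the study) is the Glover-Balmer analytical solution, which gives the fraction of the pumping rate captured from a nearby stream after a given pumping time; the streamflow-depletion limit then bounds the pumping rate directly.

```python
import math

def depletion_fraction(d, S, T, t):
    """Glover-Balmer stream depletion fraction.
    d: well-to-stream distance [m], S: storativity [-],
    T: transmissivity [m^2/d], t: pumping time [d]."""
    return math.erfc(math.sqrt(d * d * S / (4.0 * T * t)))

def max_pumping_rate(q_depletion_limit, d, S, T, t):
    # Largest pumping rate whose induced depletion stays within the limit.
    return q_depletion_limit / depletion_fraction(d, S, T, t)

frac = depletion_fraction(d=500.0, S=0.05, T=300.0, t=365.0)
print(frac, max_pumping_rate(1000.0, 500.0, 0.05, 300.0, 365.0))
```

The sensitivity the abstract notes falls out of the formula: depletion grows with pumping time and transmissivity and shrinks with distance and storativity, so the binding constraint (hydrogeologic yield vs. environmental flow) varies from site to site.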

  7. Constraining groundwater flow model with geochemistry in the FUA and Cabril sites. Use in the ENRESA 2000 PA exercise

    International Nuclear Information System (INIS)

    Samper, J.; Carrera, J.; Bajos, C.; Astudillo, J.; Santiago, J.L.

    1999-01-01

    Hydrogeochemical activities have been a key factor in the verification and constraining of the groundwater flow models developed for the safety assessment of the FUA uranium mill tailings restoration and the Cabril L/ILW disposal facility. The lessons learned at both sites will be applied to the groundwater transport modelling in the current PA exercises (ENRESA 2000). The groundwater flow model of the Cabril site, which represents a low-permeability fractured medium, was developed using the TRANSIN code series developed by UPC-ENRESA. The hydrogeochemical data obtained from systematic yearly sampling and analysis campaigns were successfully applied to distinguish between local and regional flow and between young and old groundwater. The salinity content, mainly the chloride anion content, was the most critical hydrogeochemical datum for constraining the groundwater flow model. (author)

  8. Near-surface compressional and shear wave speeds constrained by body-wave polarization analysis

    Science.gov (United States)

    Park, Sunyoung; Ishii, Miaki

    2018-06-01

    A new technique to constrain near-surface seismic structure that relates body-wave polarization direction to the wave speed immediately beneath a seismic station is presented. The P-wave polarization direction is only sensitive to shear wave speed but not to compressional wave speed, while the S-wave polarization direction is sensitive to both wave speeds. The technique is applied to data from the High-Sensitivity Seismograph Network in Japan, and the results show that the wave speed estimates obtained from polarization analysis are compatible with those from borehole measurements. The lateral variations in wave speeds correlate with geological and physical features such as topography and volcanoes. The technique requires minimal computational resources and can be used on any number of three-component teleseismic recordings, opening opportunities for non-invasive and inexpensive study of the shallowest (~100 m) crustal structures.

  9. ANALYSIS OF THE EXTERNAL FACTORS OF INFLUENCE ON INNOVATION ACTIVITY OF AN INDUSTRIAL ENTERPRISE

    Directory of Open Access Journals (Sweden)

    I. A. Salikov

    2014-01-01

    Full Text Available Summary. For successful functioning and development, an enterprise needs to influence the parameters and objects of its environment as deeply and dynamically as possible, primarily by increasing its innovation activity. The innovative activity of enterprises is influenced by many factors, which can be classified into factors of direct influence (the microenvironment) and factors of indirect influence (the macroenvironment). Factors of direct impact influence the pace and scale of development of the enterprise and its effectiveness, because the whole spectrum of these factors acts as a limiter. Macro factors create the general conditions of the enterprise's existence in the external environment. To analyse these factors, an SNW-analysis approach was used. As a result of the analysis, micro- and macro-level factors were classified as stimulating, restraining, or dissuasive, and the degree of influence of these factors on the innovative activity of the enterprise was studied. The rating of factors hindering the development of innovation activity of industrial enterprises in Russia was reviewed; the factors that hinder the development of innovative activity were identified, and directions for overcoming them were justified. It should be noted that the distinction between enabling and constraining factors is rather thin and conditional: factors that initially restrain innovation can, at a certain point, be transformed into a stimulus for its development. Accounting for these factors, creating the necessary conditions, and introducing innovations in various aspects of the functioning of industrial enterprises will allow them to secure competitive benefits and sustainable development in a rapidly changing external environment.

  10. Nonlinear Chance Constrained Problems: Optimality Conditions, Regularization and Solvers

    Czech Academy of Sciences Publication Activity Database

    Adam, Lukáš; Branda, Martin

    2016-01-01

    Roč. 170, č. 2 (2016), s. 419-436 ISSN 0022-3239 R&D Projects: GA ČR GA15-00735S Institutional support: RVO:67985556 Keywords : Chance constrained programming * Optimality conditions * Regularization * Algorithms * Free MATLAB codes Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.289, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/adam-0460909.pdf

  11. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    Science.gov (United States)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic, computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigation and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift-free navigation is achieved with respect to the environment.

  12. Butterfly Encryption Scheme for Resource-Constrained Wireless Networks

    Directory of Open Access Journals (Sweden)

    Raghav V. Sampangi

    2015-09-01

    Full Text Available Resource-constrained wireless networks are emerging networks such as Radio Frequency Identification (RFID) and Wireless Body Area Networks (WBAN) that might have restrictions on the available resources and the computations that can be performed. These emerging technologies are increasing in popularity, particularly in defence, anti-counterfeiting, logistics and medical applications, and in consumer applications with the growing popularity of the Internet of Things. With communication over wireless channels, it is essential to focus attention on securing data. In this paper, we present an encryption scheme called the Butterfly encryption scheme. We first discuss a seed update mechanism for pseudorandom number generators (PRNG), and employ this technique to generate keys and authentication parameters for resource-constrained wireless networks. Our scheme is lightweight, in that it requires fewer resources when implemented, and offers high security through increased unpredictability, owing to continuously changing parameters. Our work focuses on accomplishing high security through simplicity and reuse. We evaluate our encryption scheme using simulation, key similarity assessment, key sequence randomness assessment, protocol analysis and security analysis.
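    The general idea of a continuously refreshed seed can be illustrated with a generic hash-chain sketch. This is NOT the paper's Butterfly construction: the function, the seed-update rule, and the seed value below are all our assumptions, chosen only to show how each derived key can differ from the last at low computational cost.

```python
import hashlib

def next_key(seed: bytes, counter: int) -> tuple[bytes, bytes]:
    # Derive a session key from the current seed and a counter, then
    # update the seed so future keys are unlinkable to past ones.
    key = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
    new_seed = hashlib.sha256(key + seed).digest()  # seed update step
    return key, new_seed

seed = b"shared-secret-seed"   # hypothetical pre-shared value
keys = []
for i in range(3):
    key, seed = next_key(seed, i)
    keys.append(key)
print(len(set(keys)))
```

Both endpoints holding the initial seed can run the same chain, so keys and authentication parameters stay synchronized without transmitting them, which is what makes this style of scheme attractive for RFID- and WBAN-class devices.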

  13. Butterfly Encryption Scheme for Resource-Constrained Wireless Networks.

    Science.gov (United States)

    Sampangi, Raghav V; Sampalli, Srinivas

    2015-09-15

    Resource-constrained wireless networks are emerging networks such as Radio Frequency Identification (RFID) and Wireless Body Area Networks (WBAN) that might have restrictions on the available resources and the computations that can be performed. These emerging technologies are increasing in popularity, particularly in defence, anti-counterfeiting, logistics and medical applications, and in consumer applications with the growing popularity of the Internet of Things. With communication over wireless channels, it is essential to focus attention on securing data. In this paper, we present an encryption scheme called the Butterfly encryption scheme. We first discuss a seed update mechanism for pseudorandom number generators (PRNG), and employ this technique to generate keys and authentication parameters for resource-constrained wireless networks. Our scheme is lightweight, in that it requires fewer resources when implemented, and offers high security through increased unpredictability, owing to continuously changing parameters. Our work focuses on accomplishing high security through simplicity and reuse. We evaluate our encryption scheme using simulation, key similarity assessment, key sequence randomness assessment, protocol analysis and security analysis.

  14. Choosing health, constrained choices.

    Science.gov (United States)

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena, for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, the notion of constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.

  15. Constraining neutrinoless double beta decay

    International Nuclear Information System (INIS)

    Dorame, L.; Meloni, D.; Morisi, S.; Peinado, E.; Valle, J.W.F.

    2012-01-01

    A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.

  16. Thermodynamic analysis of the tetragonal to monoclinic transformation in a constrained zirconia microcrystal

    International Nuclear Information System (INIS)

    Garvie, R.C.

    1985-01-01

    A thermodynamic analysis was made of a simple model comprising a transforming t-ZrO2 microcrystal of size d constrained in a matrix subjected to a hydrostatic tensile stress field. The field generates a critical size range such that a t-particle transforms if d_cl < d < d_cu. The lower limit d_cl exists because at this point the maximum energy (supplied by the applied stress) which can be taken up by the crystal is insufficient to drive the transformation. The upper limit d_cu is a consequence of the microcrystal being so large that it transforms spontaneously when the material is cooled to room temperature. Using the thermodynamic (Griffith) approach and assuming that transformation toughening is due to the dilatational strain energy, this mechanism accounted for about one-third of the total observed effective surface energy in a peak-aged Ca-PSZ alloy. (author)
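    The size dependence described above follows from a standard free-energy balance for a constrained transforming particle. The sketch below uses our own notation (not necessarily the paper's) and shows why critical sizes appear at all.

```latex
% Free-energy change for a constrained t -> m transformation of a particle
% of volume V and surface area A (symbols are our assumptions):
%   |\Delta g_c|  : chemical free-energy release per unit volume
%   \Delta u_{se} : residual strain energy per unit volume
%   \Delta\gamma  : change in surface/interface energy per unit area
\Delta G_{t \to m} = -\,\lvert\Delta g_c\rvert\,V + \Delta u_{se}\,V + \Delta\gamma\,A
% Since V \propto d^3 and A \propto d^2, setting \Delta G_{t \to m} = 0
% yields a critical size d_c \propto \Delta\gamma / (\lvert\Delta g_c\rvert - \Delta u_{se}).
```

An applied tensile stress supplies extra energy that shifts this balance for small crystals, producing the lower limit d_cl, while sufficiently large crystals satisfy the balance spontaneously on cooling, producing the upper limit d_cu.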

  17. Modes of failure of Osteonics constrained tripolar implants: a retrospective analysis of forty-three failed implants.

    Science.gov (United States)

    Guyen, Olivier; Lewallen, David G; Cabanela, Miguel E

    2008-07-01

    The Osteonics constrained tripolar implant has been one of the most commonly used options to manage recurrent instability after total hip arthroplasty. Mechanical failures were expected and have been reported. The purpose of this retrospective review was to identify the observed modes of failure of this device. Forty-three failed Osteonics constrained tripolar implants were revised at our institution between September 1997 and April 2005. All revisions related to the constrained acetabular component only were considered as failures. All of the devices had been inserted for recurrent or intraoperative instability during revision procedures. Seven different methods of implantation were used. Operative reports and radiographs were reviewed to identify the modes of failure. The average time to failure of the forty-three implants was 28.4 months. A total of five modes of failure were observed: failure at the bone-implant interface (type I), which occurred in eleven hips; failure at the mechanisms holding the constrained liner to the metal shell (type II), in six hips; failure of the retaining mechanism of the bipolar component (type III), in ten hips; dislocation of the prosthetic head at the inner bearing of the bipolar component (type IV), in three hips; and infection (type V), in twelve hips. The mode of failure remained unknown in one hip that had been revised at another institution. The Osteonics constrained tripolar total hip arthroplasty implant is a complex device involving many parts. We showed that failure of this device can occur at most of its interfaces. It would therefore appear logical to limit its application to salvage situations.

  18. A Globally Convergent Matrix-Free Method for Constrained Equations and Its Linear Convergence Rate

    Directory of Open Access Journals (Sweden)

    Min Sun

    2014-01-01

    A matrix-free method for constrained equations is proposed, which combines the well-known PRP (Polak-Ribière-Polyak) conjugate gradient method with the famous hyperplane projection method. The new method is not only derivative-free but also completely matrix-free, and consequently it can be applied to solve large-scale constrained equations. We obtain global convergence of the new method without any differentiability requirement on the constrained equations. Compared with the existing gradient methods for solving such problems, the new method possesses a linear convergence rate under standard conditions, and a relaxation factor γ is attached to the update step to accelerate convergence. Preliminary numerical results show that it is promising in practice.
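    The hyperplane-projection scheme described in this abstract can be sketched as follows. This is a simplified illustration, not the paper's algorithm: a plain steepest-descent direction stands in for the PRP direction, the relaxation factor γ is omitted, and the test problem is illustrative.

    ```python
    import numpy as np

    def solve_constrained_eq(F, x0, proj, tol=1e-8, max_iter=500,
                             sigma=1e-4, rho=0.5):
        """Hyperplane-projection method for monotone constrained equations
        F(x) = 0, x in C, where proj() is the projection onto C."""
        x = proj(np.asarray(x0, dtype=float))
        for _ in range(max_iter):
            Fx = F(x)
            if np.linalg.norm(Fx) <= tol:
                return x
            d = -Fx                               # search direction
            t = 1.0                               # backtracking line search
            while t > 1e-12:
                z = x + t * d
                if -F(z) @ d >= sigma * t * (d @ d):
                    break
                t *= rho
            Fz = F(z)
            denom = Fz @ Fz
            if denom == 0.0:                      # landed on a root
                return proj(z)
            # project x onto the hyperplane through z with normal F(z),
            # then back onto the feasible set C
            x = proj(x - ((Fz @ (x - z)) / denom) * Fz)
        return x

    # illustrative test problem: F(x) = x - (1, 2), C = nonnegative orthant
    F = lambda x: x - np.array([1.0, 2.0])
    proj = lambda v: np.maximum(v, 0.0)
    root = solve_constrained_eq(F, np.zeros(2), proj)
    ```

    The derivative-free character shows in the line search, which only evaluates F, never a Jacobian.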

  19. Oxygen Consumption Constrains Food Intake in Fish Fed Diets Varying in Essential Amino Acid Composition

    NARCIS (Netherlands)

    Subramanian, S.; Geurden, I.; Figueiredo-Silva, A.C.; Nusantoro, S.; Kaushik, S.J.; Verreth, J.A.J.; Schrama, J.W.

    2013-01-01

    Reduced food intake when confronted with diets deficient in essential amino acids is a common response of fish and other animals, but the underlying physiological factors are poorly understood. We hypothesize that oxygen consumption of fish is a possible physiological factor constraining

  20. Constraining the mass of the Local Group

    Science.gov (United States)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan

    2017-03-01

    The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter that cannot be directly observed. To this end, the posterior distributions of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the Λ cold dark matter model that is used to set up the simulations, and an LG model that encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted on to the Cosmicflows-2 database of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity of M31. It is found that (a) different v_tan choices affect the peak mass values up to a factor of 2, and change mass ratios of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions compared with the random ones; (c) LG mass estimates are found to be smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range of (0.6-0.8) × 10^12 M⊙; whereas (e) M_M31 is found to vary between (1.0-2.0) × 10^12 M⊙, with a strong dependence on the v_tan values used.

  1. Attractiveness of foreign investments in Albania: a focused analysis of factors, constrains and policy assessment

    Directory of Open Access Journals (Sweden)

    Blerta Dragusha (Spahija)

    2013-01-01

    Determining the factors that attract FDI, and furthermore identifying the main characteristics of the host country's economy, are essential to understanding the reason for FDI inflows into a country or region. From an empirical perspective, various studies give different results. More specifically, this paper focuses on determining the factors for and against FDI in Albania.

  2. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.

  3. Performance Analysis of Constrained Loosely Coupled GPS/INS Integration Solutions

    Directory of Open Access Journals (Sweden)

    Fabio Dovis

    2012-11-01

    The paper investigates approaches for loosely coupled GPS/INS integration. Error performance is calculated using a reference trajectory. A performance improvement can be obtained by exploiting additional map information (for example, a road boundary). A constrained solution has been developed and its performance compared with an unconstrained one. The case of GPS outages is also investigated, showing how a Kalman filter that operates on the last received GPS position and velocity measurements provides a performance benefit. Results are obtained by means of simulation studies and real data.

  4. Clustering Using Boosted Constrained k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Masayuki Okabe

    2018-03-01

    This article proposes a constrained clustering algorithm with performance competitive with, and computation time lower than, state-of-the-art methods; it consists of a constrained k-means algorithm enhanced by the boosting principle. Constrained k-means clustering, which uses constraints as background knowledge, is easy to implement and quick but has insufficient performance compared with metric learning-based methods. Since it simply adds a function to the data assignment process of the k-means algorithm to check for constraint violations, it often exploits only a small number of constraints. Metric learning-based methods, which exploit constraints to create a new metric for data similarity, have shown promising results, although the methods proposed so far are often slow depending on the amount of data or the number of feature dimensions. We present a method that exploits the advantages of both the constrained k-means and metric learning approaches. It incorporates a mechanism for accepting constraint priorities and a metric learning framework based on the boosting principle into a constrained k-means algorithm. In the framework, a metric is learned in the form of a kernel matrix that integrates weak cluster hypotheses produced by the constrained k-means algorithm, which works as a weak learner under the boosting principle. Experimental results for 12 data sets from 3 data sources demonstrated that our method is competitive with state-of-the-art constrained clustering methods for most data sets while taking much less computation time. The evaluation also demonstrated the effectiveness of controlling the constraint priorities by using the boosting principle and that our constrained k-means algorithm functions correctly as a weak learner of boosting.
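    The assignment-step constraint check the abstract describes (reject any cluster that would violate background knowledge) can be sketched as a plain COP-KMeans-style loop. This is a minimal illustration of constrained k-means only, not the boosted method of the article; constraint pairs are assumed to be given in both (i, j) orders.

    ```python
    import numpy as np

    def cop_kmeans(X, k, must_link, cannot_link, n_iter=100, seed=0):
        """COP-KMeans-style constrained k-means: the assignment step tries
        clusters from nearest to farthest and skips any that would violate
        a constraint. Pairs are assumed given in both (i, j) orders."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)].astype(float)
        labels = np.full(len(X), -1)
        for _ in range(n_iter):
            new_labels = np.full(len(X), -1)
            for i, x in enumerate(X):
                for c in np.argsort(((centers - x) ** 2).sum(axis=1)):
                    ml_ok = all(new_labels[j] in (-1, c)
                                for a, j in must_link if a == i)
                    cl_ok = all(new_labels[j] != c
                                for a, j in cannot_link if a == i)
                    if ml_ok and cl_ok:
                        new_labels[i] = c
                        break
                # if no cluster is feasible, the point stays unassigned (-1)
            if np.array_equal(new_labels, labels):
                break
            labels = new_labels
            for c in range(k):
                if np.any(labels == c):
                    centers[c] = X[labels == c].mean(axis=0)
        return labels, centers

    # toy data: two tight pairs, with a cannot-link splitting the first pair
    X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
    labels, centers = cop_kmeans(X, 2, must_link=[],
                                 cannot_link=[(0, 1), (1, 0)])
    ```

    The cannot-link constraint forces the two nearby points 0 and 1 into different clusters even though plain k-means would merge them.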

  5. Investigation of Factors That May Constrain Participation in Sportive and Non-Sportive Recreational Activities Among University Students

    Directory of Open Access Journals (Sweden)

    Nurullah Emir Ekinci

    2014-10-01

    The purpose of this study was to analyze which recreational sport or non-sport (such as cultural/art) activities university students prefer in their leisure time, and the underlying reasons that constrain participation in these activities with regard to different variables. A randomly chosen sample of 339 students from the Faculty of Arts and the Faculty of Sciences and Engineering at Dumlupınar University volunteered for the study. The "Leisure Constraint Scale" was used as the data collection tool. During the evaluation of the data, in addition to descriptive statistical methods such as percentage (%) and frequency (f), the independent-samples t-test and one-way ANOVA were used. It was found that 19.2% of participants choose recreational sport activities in their leisure time. In addition, significant differences emerged between participants' gender and leisure constraints in the "lack of information", "lack of friends" and "time" sub-dimensions; between age and leisure constraints in the "time" sub-dimension; and between average monthly income levels and leisure constraints in the "individual psychology" and "facilities/services" sub-dimensions (p < 0.05). No significant differences were found according to the activities chosen in leisure time.

  6. Markov chain Monte Carlo analysis to constrain dark matter properties with directional detection

    International Nuclear Information System (INIS)

    Billard, J.; Mayet, F.; Santos, D.

    2011-01-01

    Directional detection is a promising dark matter search strategy. Indeed, weakly interacting massive particle (WIMP)-induced recoils would present a direction dependence toward the Cygnus constellation, while background-induced recoils exhibit an isotropic distribution in the Galactic rest frame. Taking advantage of these characteristic features, and even in the presence of a sizeable background, it has recently been shown that data from forthcoming directional detectors could lead either to a competitive exclusion or to a conclusive discovery, depending on the value of the WIMP-nucleon cross section. However, it is possible to further exploit these upcoming data by using the strong dependence of the WIMP signal on the WIMP mass and the local WIMP velocity distribution. Using a Markov chain Monte Carlo analysis of recoil events, we show for the first time the possibility to constrain the unknown WIMP parameters, both from particle physics (mass and cross section) and the Galactic halo (velocity dispersion along the three axes), leading to an identification of non-baryonic dark matter.
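    The kind of parameter inference described here can be illustrated with a generic random-walk Metropolis sampler on synthetic data. The target distribution, step size, and priors below are illustrative stand-ins, not the WIMP likelihood of the paper.

    ```python
    import numpy as np

    def metropolis(log_post, x0, n_steps=5000, step=0.1, seed=0):
        """Random-walk Metropolis sampler over an unnormalized log posterior."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        lp = log_post(x)
        chain = np.empty((n_steps, x.size))
        for s in range(n_steps):
            prop = x + step * rng.standard_normal(x.shape)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                x, lp = prop, lp_prop
            chain[s] = x
        return chain

    # synthetic "events": draws from N(2.0, 0.5^2); infer (mu, log_sigma)
    rng = np.random.default_rng(1)
    data = 2.0 + 0.5 * rng.standard_normal(400)

    def log_post(theta):                  # flat priors on (mu, log_sigma)
        mu, log_s = theta
        s = np.exp(log_s)
        return -len(data) * log_s - ((data - mu) ** 2).sum() / (2.0 * s * s)

    chain = metropolis(log_post, np.array([0.0, 0.0]))
    mu_est = chain[1000:, 0].mean()       # posterior mean after burn-in
    ```

    The posterior samples after burn-in concentrate near the true parameters, which is the sense in which an MCMC analysis "constrains" them.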

  7. Developing a Coding Scheme to Analyse Creativity in Highly-constrained Design Activities

    DEFF Research Database (Denmark)

    Dekoninck, Elies; Yue, Huang; Howard, Thomas J.

    2010-01-01

    This work is part of a larger project which aims to investigate the nature of creativity and the effectiveness of creativity tools in highly-constrained design tasks. This paper presents the research where a coding scheme was developed and tested with a designer-researcher who conducted two rounds of design and analysis on a highly constrained design task. This paper shows how design changes can be coded using a scheme based on creative ‘modes of change’. The coding scheme can show the way a designer moves around the design space, and particularly the strategies that are used by a creative designer. … A larger study with more designers working on different types of highly-constrained design task is needed, in order to draw conclusions on the modes of change and their relationship to creativity.

  8. A generalized fuzzy credibility-constrained linear fractional programming approach for optimal irrigation water allocation under uncertainty

    Science.gov (United States)

    Zhang, Chenglong; Guo, Ping

    2017-10-01

    Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived from integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. It can therefore solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor for system efficiency compared with the credibility level. These results can be effective in supporting reasonable irrigation water resources management and agricultural production.

  9. CAMPways: constrained alignment framework for the comparative analysis of a pair of metabolic pathways.

    Science.gov (United States)

    Abaka, Gamze; Bıyıkoğlu, Türker; Erten, Cesim

    2013-07-01

    Given a pair of metabolic pathways, an alignment of the pathways corresponds to a mapping between similar substructures of the pair. Successful alignments may provide useful applications in phylogenetic tree reconstruction, drug design and overall may enhance our understanding of cellular metabolism. We consider the problem of providing one-to-many alignments of reactions in a pair of metabolic pathways. We first provide a constrained alignment framework applicable to the problem. We show that the constrained alignment problem even in a primitive setting is computationally intractable, which justifies efforts for designing efficient heuristics. We present our Constrained Alignment of Metabolic Pathways (CAMPways) algorithm designed for this purpose. Through extensive experiments involving a large pathway database, we demonstrate that when compared with a state-of-the-art alternative, the CAMPways algorithm provides better alignment results on metabolic networks as far as measures based on same-pathway inclusion and biochemical significance are concerned. The execution speed of our algorithm constitutes yet another important improvement over alternative algorithms. Open source codes, executable binary, useful scripts, all the experimental data and the results are freely available as part of the Supplementary Material at http://code.google.com/p/campways/. Supplementary data are available at Bioinformatics online.

  10. Chance Constrained Input Relaxation to Congestion in Stochastic DEA. An Application to Iranian Hospitals.

    Science.gov (United States)

    Kheirollahi, Hooshang; Matin, Behzad Karami; Mahboubi, Mohammad; Alavijeh, Mehdi Mirzaei

    2015-01-01

    This article developed an approach to modeling congestion, based on a relaxed combination of inputs, in stochastic data envelopment analysis (SDEA) with chance-constrained programming approaches. Classic data envelopment analysis models with deterministic data have been used by many authors to identify congestion and estimate its levels; however, data envelopment analysis with stochastic data has rarely been used to identify congestion. This article used chance-constrained programming approaches to replace stochastic models with "deterministic equivalents". This substitution leads to non-linear problems that must be solved. Finally, the proposed method based on a relaxed combination of inputs was used to identify congestion inputs in six Iranian hospitals with one input and two outputs over the period 2009 to 2012.
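    The "deterministic equivalent" substitution mentioned in this abstract has a standard closed form when the uncertain coefficients are Gaussian: a chance constraint P(ξᵀx ≤ b) ≥ α becomes μᵀx + z_α·sqrt(xᵀΣx) ≤ b. A small sketch of that substitution follows; the numbers are illustrative, not the hospital data of the study.

    ```python
    import numpy as np
    from math import erf, sqrt

    def norm_ppf(p, lo=-10.0, hi=10.0):
        """Standard normal quantile by bisection on the CDF (stdlib only)."""
        cdf = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if cdf(mid) < p else (lo, mid)
        return 0.5 * (lo + hi)

    def chance_constraint_holds(x, mu, cov, b, alpha):
        """Deterministic equivalent of P(xi @ x <= b) >= alpha for
        Gaussian xi ~ N(mu, cov): mu@x + z_alpha * sqrt(x' cov x) <= b."""
        return mu @ x + norm_ppf(alpha) * np.sqrt(x @ cov @ x) <= b

    # illustrative numbers (not the hospital data of the study)
    mu = np.array([1.0, 1.0])
    cov = 0.04 * np.eye(2)
    x = np.array([1.0, 1.0])
    ok_loose = chance_constraint_holds(x, mu, cov, b=2.5, alpha=0.95)
    ok_tight = chance_constraint_holds(x, mu, cov, b=2.3, alpha=0.95)
    ```

    Replacing every stochastic constraint this way turns the chance-constrained program into an ordinary (generally non-linear) deterministic one, which is the step the article's approach relies on.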

  11. Maximum foreseeable accident analysis for a sodium leak on the BN-800 primary circuit and the most constraining accident development scenario

    International Nuclear Information System (INIS)

    Ivanenko, V.N.; Zybin, V.A.

    1988-01-01

    In this paper, the different possible developments of the BN-800 maximum credible accident in case of a loss and fire of primary sodium are examined, and the most constraining scenario is presented. During the scenario analysis, the accidental release of radioactive materials into the environment was studied. These releases are below the authorized values. [fr]

  12. Determining the Number of Factors in P-Technique Factor Analysis

    Science.gov (United States)

    Lo, Lawrence L.; Molenaar, Peter C. M.; Rovine, Michael

    2017-01-01

    Determining the number of factors is a critical first step in exploratory factor analysis. Although various criteria and methods for determining the number of factors have been evaluated in the usual between-subjects R-technique factor analysis, there is still the question of how these methods perform in within-subjects P-technique factor analysis. A…

  13. A first-order multigrid method for bound-constrained convex optimization

    Czech Academy of Sciences Publication Activity Database

    Kočvara, Michal; Mohammed, S.

    2016-01-01

    Roč. 31, č. 3 (2016), s. 622-644 ISSN 1055-6788 R&D Projects: GA ČR(CZ) GAP201/12/0671 Grant - others:European Commission - EC(XE) 313781 Institutional support: RVO:67985556 Keywords : bound-constrained optimization * multigrid methods * linear complementarity problems Subject RIV: BA - General Mathematics Impact factor: 1.023, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/kocvara-0460326.pdf

  14. A Beginner’s Guide to Factor Analysis: Focusing on Exploratory Factor Analysis

    Directory of Open Access Journals (Sweden)

    An Gie Yong

    2013-10-01

    The following paper discusses exploratory factor analysis and gives an overview of the statistical technique and how it is used in various research designs and applications. A basic outline of how the technique works, its criteria, and its main assumptions is given, along with a discussion of when it should be used. Mathematical theories are explored to enlighten students on how exploratory factor analysis works, an example of how to run an exploratory factor analysis in SPSS is given, and finally a section on how to write up the results is provided. This will allow readers to develop a better understanding of when to employ factor analysis and how to interpret the tables and graphs in the output.
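    A minimal version of the extraction step such a guide covers can be sketched with the principal component method: eigendecompose the correlation matrix and scale the leading eigenvectors by the square root of their eigenvalues. This illustrative sketch omits rotation and the other options an SPSS workflow would offer.

    ```python
    import numpy as np

    def efa_loadings(X, n_factors):
        """Factor extraction by the principal component method: take the
        eigendecomposition of the correlation matrix and scale the leading
        eigenvectors by sqrt(eigenvalue). Rotation is omitted."""
        R = np.corrcoef(X, rowvar=False)
        vals, vecs = np.linalg.eigh(R)
        order = np.argsort(vals)[::-1][:n_factors]
        loadings = vecs[:, order] * np.sqrt(vals[order])
        communalities = (loadings ** 2).sum(axis=1)
        return loadings, communalities

    # synthetic data: three variables share one factor, the fourth is noise
    rng = np.random.default_rng(0)
    f = rng.standard_normal(500)
    X = np.column_stack([f + 0.3 * rng.standard_normal(500) for _ in range(3)]
                        + [rng.standard_normal(500)])
    L, h2 = efa_loadings(X, 1)
    ```

    The three factor-driven variables receive large loadings (and communalities near 1), while the pure-noise variable loads near zero; reading off such a loadings table is the interpretation step the paper walks through.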

  15. Lightweight cryptography for constrained devices

    DEFF Research Database (Denmark)

    Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco

    2014-01-01

    Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource-constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags, where cost and energy constraints drastically limit the solution complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices.

  16. Modeling constrained sintering of bi-layered tubular structures

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye; Kothanda Ramachandran, Dhavanesan; Ni, De Wei

    2015-01-01

    Constrained sintering of tubular bi-layered structures is being used in the development of various technologies. Densification mismatch between the layers making up the tubular bi-layer can generate stresses, which may create processing defects. An analytical model is presented to describe the densification … and thermo-mechanical analysis. Results from the analytical model are found to agree well with finite element simulations as well as measurements from sintering experiments.

  17. Portuguese pellets market: Analysis of the production and utilization constraints

    International Nuclear Information System (INIS)

    Monteiro, Eliseu; Mantha, Vishveshwar; Rouboa, Abel

    2012-01-01

    In contrast to the situation in Portugal, the wood pellets market is booming in Europe. In this work, possible reasons for this market behavior are examined in terms of the key indicators of biomass availability, costs and the legal framework. Two major constraints are found in the Portuguese pellets market: the first is the lack of internal consumption, the market being based on exports. The second is the shortage of raw material, mainly due to competition with biomass power plants. Therefore, the combination of biomass power plants with pellet production plants seems to be the best option for pellet production in the current Portuguese scenario. The main constraint for the pellets market has been convincing small-scale customers that pellets are a good alternative fuel, mainly due to the investment needed and the strong competition with natural gas. Despite some benefits in the acquisition of new equipment for renewable energy, they are insufficient to cover the huge discrepancy in the investment in pellet heating. However, pellets are already economically interesting for large-scale utilization. In order to cover a large number of households, additional public support is needed to cover the supplementary costs of pellet heating systems. - Highlights: ► There is a lack of internal consumption, the pellets market being based on exports. ► The shortage of raw material is mainly due to the biomass power plants. ► Combining pellet plants with biomass power plants seems to be a wise solution. ► The tax benefits for renewable energy equipment are not enough to cover the higher investment. ► Pellets are already economically interesting for large-scale utilization in the Portuguese scenario.

  18. An easy guide to factor analysis

    CERN Document Server

    Kline, Paul

    2014-01-01

    Factor analysis is a statistical technique widely used in psychology and the social sciences. With the advent of powerful computers, factor analysis and other multivariate methods are now available to many more people. An Easy Guide to Factor Analysis presents and explains factor analysis as clearly and simply as possible. The author, Paul Kline, carefully defines all statistical terms and demonstrates step-by-step how to work out a simple example of principal components analysis and rotation. He further explains other methods of factor analysis, including confirmatory and path analysis, a

  19. Constrained optimization of test intervals using a steady-state genetic algorithm

    International Nuclear Information System (INIS)

    Martorell, S.; Carlos, S.; Sanchez, A.; Serradell, V.

    2000-01-01

    There is a growing interest from both the regulatory authorities and the nuclear industry in stimulating the use of Probabilistic Risk Analysis (PRA) for risk-informed applications at Nuclear Power Plants (NPPs). Nowadays, special attention is being paid to analyzing plant-specific changes to Test Intervals (TIs) within the Technical Specifications (TSs) of NPPs, and there seems to be a consensus on the need to make these requirements more risk-effective and less costly. Resource versus risk-control effectiveness principles formally enter into optimization problems. This paper presents an approach for using PRA models in conducting the constrained optimization of TIs based on a steady-state genetic algorithm (SSGA), where the cost or the burden is to be minimized while the risk or performance is constrained to be at a given level, or vice versa. The paper begins with the problem formulation, where the objective function and constraints that apply in the constrained optimization of TIs based on risk and cost models at the system level are derived. Next, the foundation of the optimizer is given, which is derived by customizing an SSGA in order to allow optimizing TIs under constraints. A case study is also performed using this approach, which shows the benefits of adopting both PRA models and genetic algorithms, in particular for the constrained optimization of TIs; a great benefit is also expected from using this approach to solve other engineering optimization problems. However, care must be taken in using genetic algorithms in constrained optimization problems, as is concluded in this paper.
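    The steady-state scheme described here (breed one child per step, replace the worst member, keep the constraint via a penalty) can be sketched on a toy constrained problem. The operators and penalty below are illustrative choices, not the paper's TI cost/risk models.

    ```python
    import numpy as np

    def ssga(f, g, bounds, pop_size=40, steps=3000, seed=0, penalty=1e3):
        """Steady-state GA: each step breeds one child from two tournament
        parents and replaces the worst member if the child is better.
        The constraint g(x) <= 0 enters through a static penalty."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        fit = lambda x: f(x) + penalty * max(0.0, g(x)) ** 2
        pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
        scores = np.array([fit(x) for x in pop])
        for _ in range(steps):
            # binary tournament selection of two parents
            i1, i2 = (min(rng.integers(pop_size, size=2),
                          key=lambda i: scores[i]) for _ in range(2))
            a = rng.random()
            child = a * pop[i1] + (1.0 - a) * pop[i2]     # blend crossover
            child += 0.1 * rng.standard_normal(len(lo))   # Gaussian mutation
            child = np.clip(child, lo, hi)
            s = fit(child)
            worst = np.argmax(scores)
            if s < scores[worst]:                         # steady-state replacement
                pop[worst], scores[worst] = child, s
        best = np.argmin(scores)
        return pop[best], scores[best]

    # toy problem: minimize distance to (2, 2) subject to x1 + x2 <= 3
    f = lambda x: ((x - 2.0) ** 2).sum()
    g = lambda x: x.sum() - 3.0
    x_best, s_best = ssga(f, g, (np.zeros(2), 5.0 * np.ones(2)))
    ```

    As in the paper's setting, the optimum sits on the constraint boundary (here x = (1.5, 1.5), objective 0.5), so the penalty term is what keeps the search from drifting into the infeasible region.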

  20. Analysis of neutron and x-ray reflectivity data by constrained least-squares methods

    DEFF Research Database (Denmark)

    Pedersen, J.S.; Hamley, I.W.

    1994-01-01

    … The coefficients in the series are determined by constrained nonlinear least-squares methods, in which the smoothest solution that agrees with the data is chosen. In the second approach the profile is expressed as a series of sine and cosine terms. A smoothness constraint is used which reduces the coefficients
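    The smoothness-constrained least-squares idea in this abstract can be illustrated with a second-difference penalty: minimize ‖Ax − y‖² + λ‖Dx‖² and solve the resulting normal equations. This is a generic sketch, not the paper's reflectivity parametrization.

    ```python
    import numpy as np

    def smooth_lsq(A, y, lam):
        """Least squares with a smoothness constraint on the coefficients:
        minimize ||A x - y||^2 + lam * ||D x||^2, where D takes second
        differences, via the normal equations."""
        n = A.shape[1]
        D = np.diff(np.eye(n), n=2, axis=0)   # rows are [1, -2, 1] stencils
        return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)

    # denoising demo: with A = I the fit is a smoothed version of y
    t = np.linspace(0.0, np.pi, 50)
    truth = np.sin(t)
    rng = np.random.default_rng(0)
    y = truth + 0.3 * rng.standard_normal(50)
    x = smooth_lsq(np.eye(50), y, lam=10.0)
    ```

    Larger λ selects smoother solutions at the cost of fit quality, which is exactly the trade-off behind choosing "the smoothest solution that agrees with the data".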

  1. Minimal constrained supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dall' Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)

    2017-01-10

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  2. Minimal constrained supergravity

    International Nuclear Information System (INIS)

    Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  3. Constrained optimization via simulation models for new product innovation

    Science.gov (United States)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete-event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models, as there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  4. Feature constrained compressed sensing CT image reconstruction from incomplete data via robust principal component analysis of the database

    International Nuclear Information System (INIS)

    Wu, Dufan; Li, Liang; Zhang, Li

    2013-01-01

    In computed tomography (CT), incomplete data problems such as limited-angle projections often cause artifacts in the reconstruction results. Additional prior knowledge of the image has shown potential for better results, such as the prior image constrained compressed sensing algorithm. While a prior full scan of the same patient is not always available, massive well-reconstructed images of different patients can easily be obtained from clinical multi-slice helical CTs. In this paper, a feature constrained compressed sensing (FCCS) image reconstruction algorithm was proposed to improve the image quality by using prior knowledge extracted from the clinical database. The database consists of instances which are similar to the target image but not necessarily the same. Robust principal component analysis is employed to retrieve features of the training images to sparsify the target image. The features form a low-dimensional linear space and a constraint on the distance between the image and the space is used. A bi-criterion convex program which combines the feature constraint and the total variation constraint is proposed for the reconstruction procedure and a flexible method is adopted for a good solution. Numerical simulations on both phantom and real clinical patient images were performed to validate our algorithm. Promising results are shown for limited-angle problems. (paper)

  5. Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems.

    Science.gov (United States)

    Krohling, Renato A; Coelho, Leandro dos Santos

    2006-12-01

    In this correspondence, an approach based on coevolutionary particle swarm optimization to solve constrained optimization problems formulated as min-max problems is presented. In standard or canonical particle swarm optimization (PSO), a uniform probability distribution is used to generate random numbers for the accelerating coefficients of the local and global terms. We propose a Gaussian probability distribution to generate the accelerating coefficients of PSO. Two populations of PSO using Gaussian distribution are used on the optimization algorithm that is tested on a suite of well-known benchmark constrained optimization problems. Results have been compared with the canonical PSO (constriction factor) and with a coevolutionary genetic algorithm. Simulation results show the suitability of the proposed algorithm in terms of effectiveness and robustness.
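    The Gaussian-coefficient variant described in the abstract can be sketched as follows: the accelerating coefficients c1, c2 are drawn from |N(0, 1)| instead of a uniform distribution, and a static penalty handles the constraint. This is a simplified single-population illustration, not the coevolutionary min-max algorithm of the paper.

    ```python
    import numpy as np

    def gaussian_pso(f, g, dim, n=30, iters=300, seed=0, penalty=1e3):
        """PSO with accelerating coefficients drawn from |N(0, 1)| rather
        than a uniform distribution; the constraint g(x) <= 0 is handled
        by a static penalty."""
        rng = np.random.default_rng(seed)
        fit = lambda x: f(x) + penalty * max(0.0, g(x)) ** 2
        X = rng.uniform(-5.0, 5.0, (n, dim))
        V = np.zeros((n, dim))
        P = X.copy()                                   # personal bests
        pscore = np.array([fit(x) for x in X])
        gbest = P[np.argmin(pscore)].copy()
        for _ in range(iters):
            c1 = np.abs(rng.standard_normal((n, 1)))   # Gaussian coefficients
            c2 = np.abs(rng.standard_normal((n, 1)))
            V = 0.7 * V + c1 * (P - X) + c2 * (gbest - X)
            X = X + V
            for i in range(n):
                s = fit(X[i])
                if s < pscore[i]:
                    P[i], pscore[i] = X[i].copy(), s
            gbest = P[np.argmin(pscore)].copy()
        return gbest, fit(gbest)

    # toy problem: minimize x1^2 + x2^2 subject to x1 + x2 >= 1
    f = lambda x: (x ** 2).sum()
    g = lambda x: 1.0 - x.sum()
    best, val = gaussian_pso(f, g, dim=2)
    ```

    Heavier tails in the Gaussian draws occasionally produce large acceleration steps, which is the extra exploration the authors attribute to this choice; the personal-best memory keeps the returned solution monotonically improving regardless.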

  6. Affine Lie algebraic origin of constrained KP hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Gomes, J.F.; Zimerman, A.H.

    1994-07-01

    An affine sl(n+1) algebraic construction of the basic constrained KP hierarchy is presented. This hierarchy is analyzed using two approaches, namely a linear matrix eigenvalue problem on a Hermitian symmetric space and the constrained KP Lax formulation, and we show that these approaches are equivalent. The model is recognized to be the generalized non-linear Schroedinger (GNLS) hierarchy, and it is used as a building block for a new class of constrained KP hierarchies. These constrained KP hierarchies are connected via similarity-Backlund transformations and interpolate between the GNLS and multi-boson KP-Toda hierarchies. The construction uncovers the origin of the Toda lattice structure behind the latter hierarchy. (author). 23 refs

  7. Comparison of preconditioned Krylov subspace iteration methods for PDE-constrained optimization problems

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Farouq, S.; Neytcheva, M.

    2017-01-01

    Roč. 74, č. 1 (2017), s. 19-37 ISSN 1017-1398 Institutional support: RVO:68145535 Keywords : PDE-constrained optimization problems * finite elements * iterative solution methods * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.241, year: 2016 https://link.springer.com/article/10.1007%2Fs11075-016-0136-5

  8. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    Full Text Available We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called “de Sitter” supergravities because we consider constraints that directly eliminate the auxiliary fields of the gravity multiplet.

  9. Cascading Constrained 2-D Arrays using Periodic Merging Arrays

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Laursen, Torben Vaarby

    2003-01-01

    We consider a method for designing 2-D constrained codes by cascading finite width arrays using predefined finite width periodic merging arrays. This provides a constructive lower bound on the capacity of the 2-D constrained code. Examples include symmetric RLL and density constrained codes...

  10. How market environment may constrain global franchising in emerging markets

    OpenAIRE

    Baena Graciá, Verónica

    2011-01-01

    Although emerging markets are some of the fastest growing economies in the world and represent countries that are experiencing a substantial economic transformation, little is known about the factors influencing country selection for expansion in those markets. In an attempt to enhance the knowledge that managers and scholars have on franchising expansion, the present study examines how market conditions may constrain international diffusion of franchising in emerging markets. They are: i) ge...

  11. Iris recognition in less constrained environments: a video-based approach

    OpenAIRE

    Mahadeo, Nitin Kumar

    2017-01-01

    This dissertation focuses on iris biometrics. Although the iris is the most accurate biometric, its adoption has been relatively slow. Conventional iris recognition systems utilize still eye images captured in ideal environments and require highly constrained subject presentation. A drop in recognition performance is observed when these constraints are removed as the quality of the data acquired is affected by heterogeneous factors. For iris recognition to be widely adopted, it can therefore ...

  12. Data-constrained reionization and its effects on cosmological parameters

    International Nuclear Information System (INIS)

    Pandolfi, S.; Ferrara, A.; Choudhury, T. Roy; Mitra, S.; Melchiorri, A.

    2011-01-01

    We perform an analysis of the recent WMAP7 data considering physically motivated and viable reionization scenarios, with the aim of assessing their effects on cosmological parameter determinations. The main novelties are: (i) the combination of cosmic microwave background data with astrophysical results from quasar absorption-line experiments; (ii) the joint variation of both the cosmological and the astrophysical parameters [the latter governing the evolution of the free electron fraction x_e(z)]. Including a realistic, data-constrained reionization history in the analysis induces appreciable changes in the cosmological parameter values deduced through a standard WMAP7 analysis. Particularly noteworthy are the variations in Ω_b h² = 0.02258 (+0.00057/−0.00056) [WMAP7 (Sudden)] vs Ω_b h² = 0.02183 ± 0.00054 [WMAP7+ASTRO (CF)] and the new constraints for the scalar spectral index, for which WMAP7+ASTRO (CF) excludes the Harrison-Zel'dovich value n_s = 1 at >3σ. Finally, the electron-scattering optical depth is considerably decreased with respect to the standard WMAP7 value, i.e. τ_e = 0.080 ± 0.012. We conclude that including astrophysical data sets, which allow the reionization history to be robustly constrained, in the extraction procedure of cosmological parameters leads to relatively important differences in the final determination of their values.

  14. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response to the stress field, as well as the FE calculation of the stress field from the microstructural evolution, is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number...

  15. Mature Basin Development Portfolio Management in a Resource Constrained Environment

    International Nuclear Information System (INIS)

    Mandhane, J. M.; Udo, S. D.

    2002-01-01

    The Nigerian petroleum industry constantly faces resource constraints stemming from capital and operating budgets, availability of skilled manpower, capacity of existing surface facilities, size of well assets, and the amount of soft and hard information, among others. Constrained capital forces the industry to rank subsurface resources and potential before preparing development scenarios. Limited skilled manpower restricts the scope of integrated reservoir studies. The level of available information forces technical staff and management to find a low-risk development alternative in a limited time. The volume of oil, natural gas, water, or a combination of them may be constrained by the design limits of an existing facility or by an external OPEC quota; managing all of this requires strong portfolio management skills. The first part of the paper statistically analyses the development portfolio of a mature basin for (a) subsurface resource volumes, (b) developed and undeveloped volumes, (c) sweating of wells, and (d) facility assets. The analysis presented conclusively demonstrates that the 80/20 rule is active in the statistical sample; that is, 80% of the effect comes from 20% of the cause. The second part of the paper deals with how the 80/20 rule can be applied to manage a portfolio under a given set of constraints. Three application examples are discussed, and feedback on their implementation, resulting in focused resource management with handsome rewards, is documented. The statistical analysis and application examples from a mature basin form a way forward for development portfolio management in a resource-constrained environment.

  16. CMT: a constrained multi-level thresholding approach for ChIP-Seq data analysis.

    Directory of Open Access Journals (Sweden)

    Iman Rezaeian

    Full Text Available Genome-wide profiling of DNA-binding proteins using ChIP-Seq has emerged as an alternative to ChIP-chip methods. ChIP-Seq technology offers many advantages over ChIP-chip arrays, including but not limited to less noise, higher resolution, and more coverage. Several algorithms have been developed to take advantage of these abilities and find enriched regions by analyzing ChIP-Seq data. However, the complexity of analyzing various patterns of ChIP-Seq signals still calls for the development of new algorithms. Most current algorithms use various heuristics to detect regions accurately; however, it remains difficult to accurately determine individual peaks corresponding to each binding event. We developed Constrained Multi-level Thresholding (CMT), an algorithm used to detect enriched regions in ChIP-Seq data. CMT employs a constraint-based module that can target regions within a specific range. We show that CMT has higher accuracy in detecting enriched regions (peaks) by objectively assessing its performance relative to other previously proposed peak finders. This is shown by testing three algorithms on the well-known FoxA1 data set, four transcription factors (with a total of six antibodies) for Drosophila melanogaster, and the H3K4ac antibody dataset.

  17. Factor analysis

    CERN Document Server

    Gorsuch, Richard L

    2013-01-01

    Comprehensive and comprehensible, this classic covers the basic and advanced topics essential for using factor analysis as a scientific tool in psychology, education, sociology, and related areas. Emphasizing the usefulness of the techniques, it presents sufficient mathematical background for understanding and sufficient discussion of applications for effective use. This includes not only theory but also the empirical evaluations of the importance of mathematical distinctions for applied scientific analysis.

  18. Selection of magnetorheological brake types via optimal design considering maximum torque and constrained volume

    International Nuclear Information System (INIS)

    Nguyen, Q H; Choi, S B

    2012-01-01

    This research focuses on optimal design of different types of magnetorheological brakes (MRBs), from which an optimal selection of MRB types is identified. In the optimization, common types of MRB such as disc-type, drum-type, hybrid-types, and T-shaped type are considered. The optimization problem is to find the optimal value of significant geometric dimensions of the MRB that can produce a maximum braking torque. The MRB is constrained in a cylindrical volume of a specific radius and length. After a brief description of the configuration of MRB types, the braking torques of the MRBs are derived based on the Herschel–Bulkley model of the MR fluid. The optimal design of MRBs constrained in a specific cylindrical volume is then analysed. The objective of the optimization is to maximize the braking torque while the torque ratio (the ratio of maximum braking torque and the zero-field friction torque) is constrained to be greater than a certain value. A finite element analysis integrated with an optimization tool is employed to obtain optimal solutions of the MRBs. Optimal solutions of MRBs constrained in different volumes are obtained based on the proposed optimization procedure. From the results, discussions on the optimal selection of MRB types depending on constrained volumes are given. (paper)

  19. On the origin of constrained superfields

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, G. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy); Dudas, E. [Centre de Physique Théorique, École Polytechnique, CNRS, Université Paris-Saclay,F-91128 Palaiseau (France); Farakos, F. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-05-06

    In this work we analyze constrained superfields in supersymmetry and supergravity. We propose a constraint that, in combination with the constrained goldstino multiplet, consistently removes any selected component from a generic superfield. We also describe its origin, providing the operators whose equations of motion lead to the decoupling of such components. We illustrate our proposal by means of various examples and show how known constraints can be reproduced by our method.

  20. Measurement model and calibration experiment of over-constrained parallel six-dimensional force sensor based on stiffness characteristics analysis

    International Nuclear Information System (INIS)

    Niu, Zhi; Zhao, Yanzhi; Zhao, Tieshi; Cao, Yachao; Liu, Menghua

    2017-01-01

    An over-constrained, parallel six-dimensional force sensor has various advantages, including its ability to bear heavy loads and provide redundant force measurement information. These advantages render the sensor valuable in important applications in the field of aerospace (space docking tests, etc.). The stiffness of each component in the over-constrained structure has a considerable influence on the internal force distribution of the structure. Thus, the measurement model changes when the measurement branches of the sensor are under tensile or compressive force. This study establishes a general measurement model for an over-constrained parallel six-dimensional force sensor considering the different branch tension and compression stiffness values. Numerical calculations and analyses are performed using practical examples. Based on the parallel mechanism, an over-constrained, orthogonal structure is proposed for a six-dimensional force sensor. Hence, a prototype is designed and developed, and a calibration experiment is conducted. The measurement accuracy of the sensor is improved based on the measurement model under different branch tension and compression stiffness values. Moreover, the largest class I error is reduced from 5.81% to 2.23% full scale (FS), and the largest class II error is reduced from 3.425% to 1.871% FS. (paper)

  1. Exploratory Bi-factor Analysis: The Oblique Case

    OpenAIRE

    Jennrich, Robert L.; Bentler, Peter M.

    2011-01-01

    Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford (1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler (2011) introduced an exploratory form of bi-factor analysis that does not require one to provide an explicit bi-factor structure a priori. They use exploratory factor analysis and a bi-factor rotation criterion designed to produce a rotated loading mat...

  2. Towards weakly constrained double field theory

    Directory of Open Access Journals (Sweden)

    Kanghoon Lee

    2016-08-01

    Full Text Available We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using the strong constraint in double field theory. We show that the X-ray (Radon) transform on a torus is well suited for describing weakly constrained double fields, and any weakly constrained field can be represented as a sum of strongly constrained fields. Using the inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transformation and a gauge invariant action without using the strong constraint. We then discuss the relation of our result to closed string field theory. Our construction suggests that there exists an effective field theory description for the massless sector of closed string field theory on a torus in an associative truncation.

  3. Operator approach to solutions of the constrained BKP hierarchy

    International Nuclear Information System (INIS)

    Shen, Hsin-Fu; Lee, Niann-Chern; Tu, Ming-Hsien

    2011-01-01

    The operator formalism for the vector k-constrained BKP hierarchy is presented. We solve the Hirota bilinear equations of the vector k-constrained BKP hierarchy via the method of neutral free fermions. In particular, by choosing a suitable group element of O(∞), we construct rational and soliton solutions of the vector k-constrained BKP hierarchy.

  4. Challenges in constraining anthropogenic aerosol effects on cloud radiative forcing using present-day spatiotemporal variability.

    Science.gov (United States)

    Ghan, Steven; Wang, Minghuai; Zhang, Shipeng; Ferrachat, Sylvaine; Gettelman, Andrew; Griesfeller, Jan; Kipling, Zak; Lohmann, Ulrike; Morrison, Hugh; Neubauer, David; Partridge, Daniel G; Stier, Philip; Takemura, Toshihiko; Wang, Hailong; Zhang, Kai

    2016-05-24

    A large number of processes are involved in the chain from emissions of aerosol precursor gases and primary particles to impacts on cloud radiative forcing. Those processes are manifest in a number of relationships that can be expressed as factors dlnX/dlnY driving aerosol effects on cloud radiative forcing. These factors include the relationships between cloud condensation nuclei (CCN) concentration and emissions, droplet number and CCN concentration, cloud fraction and droplet number, cloud optical depth and droplet number, and cloud radiative forcing and cloud optical depth. The relationship between cloud optical depth and droplet number can be further decomposed into the sum of two terms involving the relationship of droplet effective radius and cloud liquid water path with droplet number. These relationships can be constrained using observations of recent spatial and temporal variability of these quantities. However, we are most interested in the radiative forcing since the preindustrial era. Because few relevant measurements are available from that era, relationships from recent variability have been assumed to be applicable to the preindustrial to present-day change. Our analysis of Aerosol Comparisons between Observations and Models (AeroCom) model simulations suggests that estimates of relationships from recent variability are poor constraints on relationships from anthropogenic change for some terms, with even the sign of some relationships differing in many regions. Proxies connecting recent spatial/temporal variability to anthropogenic change, or sustained measurements in regions where emissions have changed, are needed to constrain estimates of anthropogenic aerosol impacts on cloud radiative forcing.
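
    The relationship factors d ln X/d ln Y discussed above are, in practice, slopes in log-log space estimated from observed variability. A minimal sketch of that estimator (the function name and data are hypothetical, not from the paper):

```python
import math

def log_log_slope(x, y):
    """Least-squares slope of ln(x) against ln(y): an estimate of the
    relationship factor d ln X / d ln Y from observed variability."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    num = sum((b - my) * (a - mx) for a, b in zip(lx, ly))
    den = sum((b - my) ** 2 for b in ly)
    return num / den

# synthetic check: if X is proportional to Y**0.7, the recovered factor is 0.7
y = [1.0, 2.0, 3.0, 5.0, 8.0]
x = [2.0 * v ** 0.7 for v in y]
slope = log_log_slope(x, y)
```

    The paper's caution applies directly to this estimator: a slope fit to recent spatial/temporal variability need not equal the slope of the preindustrial-to-present-day change.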

  5. How will greenhouse gas emissions from motor vehicles be constrained in China around 2030?

    International Nuclear Information System (INIS)

    Zheng, Bo; Zhang, Qiang; Borken-Kleefeld, Jens; Huo, Hong; Guan, Dabo; Klimont, Zbigniew; Peters, Glen P.; He, Kebin

    2015-01-01

    Highlights: • We build a projection model to predict vehicular GHG emissions on a provincial basis. • Fuel efficiency gains cannot constrain vehicle GHGs in major southern provinces. • We propose an integrated policy set through sensitivity analysis of policy options. • The policy set will peak GHG emissions of 90% of provinces and of China as a whole by 2030. - Abstract: Increasing emissions from road transportation endanger China’s objective to reduce national greenhouse gas (GHG) emissions. The unconstrained growth of vehicle GHG emissions is mainly caused by the insufficient improvement of energy efficiency (kilometers traveled per unit of energy use) under current policies, which cannot offset the explosive growth of vehicle activity in China, especially in the major southern provinces. More stringent policies are required to reduce GHG emissions in these provinces and thereby help to constrain national total emissions. In this work, we make a provincial-level projection of vehicle growth, energy demand, and GHG emissions to evaluate vehicle GHG emission trends under various policy options in China and to determine how to constrain national emissions. Through sensitivity analysis of individual policies, we propose an integrated policy set to ensure that national vehicle GHG emissions peak around 2030. The integrated policy involves decreasing the use of urban light-duty vehicles by 25%, improving fuel economy by 25% by 2035 compared with 2020, and promoting electric vehicles and biofuels. These stringent new policies would allow China to constrain GHG emissions from the road transport sector around 2030. This work provides a perspective for understanding vehicle GHG emission growth patterns in China’s provinces and proposes a strong policy combination to constrain national GHG emissions, which can support the achievement of peak GHG emissions by 2030 as promised by the Chinese government.

  6. Cross-training workers in dual resource constrained systems with heterogeneous processing times

    NARCIS (Netherlands)

    Bokhorst, J. A. C.; Gaalman, G. J. C.

    2009-01-01

    In this paper, we explore the effect of cross-training workers in Dual Resource Constrained (DRC) systems with machines having different mean processing times. By means of queuing and simulation analysis, we show that the detrimental effects of pooling (cross-training) previously found in single

  7. Factors affecting construction performance: exploratory factor analysis

    Science.gov (United States)

    Soewin, E.; Chinda, T.

    2018-04-01

    The present work attempts to develop a multidimensional performance evaluation framework for a construction company by considering all relevant measures of performance. Based on previous studies, this study hypothesizes nine key factors, with a total of 57 associated items. The hypothesized factors, with their associated items, are then used to develop a questionnaire survey to gather data. Exploratory factor analysis (EFA) applied to the collected data gave rise to 10 factors, with 57 items, affecting construction performance. The findings further reveal ten key performance factors (KPIs), namely: 1) Time, 2) Cost, 3) Quality, 4) Safety & Health, 5) Internal Stakeholder, 6) External Stakeholder, 7) Client Satisfaction, 8) Financial Performance, 9) Environment, and 10) Information, Technology & Innovation. The analysis helps to develop a multi-dimensional performance evaluation framework for effective measurement of construction performance. The 10 key performance factors can be broadly categorized into economic, social, environmental, and technology aspects. It is important to understand a multi-dimensional performance evaluation framework that includes all key factors affecting the construction performance of a company, so that management can effectively plan and implement a performance development plan that matches the mission and vision of the company.
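
    As a hedged illustration of the EFA step, the sketch below extracts factors by eigendecomposition of the item correlation matrix and retains those with eigenvalue > 1 (the Kaiser criterion). The data, retention rule, and absence of rotation are simplifying assumptions; the study's own analysis may well differ (e.g. it may use rotation and other retention criteria):

```python
import numpy as np

def efa_loadings(data):
    """Eigendecompose the item correlation matrix and keep factors
    with eigenvalue > 1 (Kaiser criterion); return sorted eigenvalues
    and the unrotated loading matrix."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]           # largest eigenvalue first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
    return eigvals, loadings

# hypothetical survey data: 6 items driven by two latent constructs
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(2, 200))
data = np.column_stack([f1, f1, f1, f2, f2, f2]) + 0.3 * rng.normal(size=(200, 6))
eigvals, loadings = efa_loadings(data)
```

    On this synthetic data the criterion retains exactly two factors, matching the two constructs that generated the items; each column of `loadings` shows which items load on which factor.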

  8. Volatility-constrained multifractal detrended cross-correlation analysis: Cross-correlation among Mainland China, US, and Hong Kong stock markets

    Science.gov (United States)

    Cao, Guangxi; Zhang, Minjia; Li, Qingchen

    2017-04-01

    This study focuses on multifractal detrended cross-correlation analysis of the different volatility intervals of Mainland China, US, and Hong Kong stock markets. A volatility-constrained multifractal detrended cross-correlation analysis (VC-MF-DCCA) method is proposed to study the volatility conductivity of Mainland China, US, and Hong Kong stock markets. Empirical results indicate that fluctuation may be related to important activities in real markets. The Hang Seng Index (HSI) stock market is more influential than the Shanghai Composite Index (SCI) stock market. Furthermore, the SCI stock market is more influential than the Dow Jones Industrial Average stock market. The conductivity between the HSI and SCI stock markets is the strongest. HSI was the most influential market in the large fluctuation interval of 1991 to 2014. The autoregressive fractionally integrated moving average method is used to verify the validity of VC-MF-DCCA. Results show that VC-MF-DCCA is effective.

  9. Constraining cosmic scatter in the Galactic halo through a differential analysis of metal-poor stars

    Science.gov (United States)

    Reggiani, Henrique; Meléndez, Jorge; Kobayashi, Chiaki; Karakas, Amanda; Placco, Vinicius

    2017-12-01

    Context. The chemical abundances of metal-poor halo stars are important to understanding key aspects of Galactic formation and evolution. Aims: We aim to constrain Galactic chemical evolution with precise chemical abundances of metal-poor stars (-2.8 ≤ [Fe/H] ≤ -1.5). Methods: Using high resolution and high S/N UVES spectra of 23 stars and employing the differential analysis technique, we estimated stellar parameters and obtained precise LTE chemical abundances. Results: We present the abundances of Li, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Co, Ni, Zn, Sr, Y, Zr, and Ba. The differential technique allowed us to obtain an unprecedentedly low level of scatter in our analysis, with standard deviations as low as 0.05 dex, and mean errors as low as 0.05 dex for [X/Fe]. Conclusions: By expanding our metallicity range with precise abundances from other works, we were able to precisely constrain Galactic chemical evolution models in a wide metallicity range (-3.6 ≤ [Fe/H] ≤ -0.4). The agreements and discrepancies found are key for further improvement of both models and observations. We also show that the LTE analysis of Cr II is a much more reliable source of abundance for chromium, as Cr I has important NLTE effects. These effects can be clearly seen when we compare the observed abundances of Cr I and Cr II with GCE models. While Cr I has a clear disagreement between model and observations, Cr II is very well modeled. We confirm tight increasing trends of Co and Zn toward lower metallicities, and a tight flat evolution of Ni relative to Fe. Our results strongly suggest inhomogeneous enrichment from hypernovae. Our precise stellar parameters result in a low star-to-star scatter (0.04 dex) in the Li abundances of our sample, with a mean value about 0.4 dex lower than the prediction from standard Big Bang nucleosynthesis; we also study the relation between lithium depletion and stellar mass, but it is difficult to assess a correlation due to the limited mass range. We

  10. A factor analysis to detect factors influencing building national brand

    Directory of Open Access Journals (Sweden)

    Naser Azad

    Full Text Available Developing a national brand is one of the most important issues in brand development. In this study, we use factor analysis to detect the most important factors in building a national brand. The proposed study uses factor analysis to extract the most influential factors, and the sample has been drawn from two major automakers in Iran, Iran Khodro and Saipa. The questionnaire was designed on a Likert scale and distributed among 235 experts. Cronbach's alpha is calculated as 84%, which is well above the minimum desirable limit of 0.70. The implementation of factor analysis provides six factors, including “cultural image of customers”, “exciting characteristics”, “competitive pricing strategies”, “perception image” and “previous perceptions”.
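
    The Cronbach's alpha used above as a reliability check has a simple closed form: alpha = k/(k-1) · (1 − Σ item variances / variance of total scores). A minimal sketch (the data are illustrative, not the study's):

```python
def cronbach_alpha(rows):
    """rows: per-respondent item scores (rows = respondents,
    columns = questionnaire items); uses population variances throughout."""
    k = len(rows[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([r[j] for r in rows]) for j in range(k)]
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1.0 - sum(item_vars) / total_var)

# perfectly consistent items (every item equals the respondent's score) give alpha = 1
rows = [[s, s, s] for s in (1, 2, 3, 4, 5)]
alpha = cronbach_alpha(rows)
```

    The study's 0.84 comfortably exceeds the commonly cited 0.70 threshold for acceptable internal consistency.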

  11. Constraining the magnitude of the largest event in a foreshock-main shock-aftershock sequence

    Science.gov (United States)

    Shcherbakov, Robert; Zhuang, Jiancang; Ogata, Yosihiko

    2018-01-01

    Extreme value statistics and Bayesian methods are used to constrain the magnitudes of the largest expected earthquakes in a sequence governed by the parametric time-dependent occurrence rate and frequency-magnitude statistics. The Bayesian predictive distribution for the magnitude of the largest event in a sequence is derived. Two types of sequences are considered, that is, the classical aftershock sequences generated by large main shocks and the aftershocks generated by large foreshocks preceding a main shock. For the former sequences, the early aftershocks during a training time interval are used to constrain the magnitude of the future extreme event during the forecasting time interval. For the latter sequences, the earthquakes preceding the main shock are used to constrain the magnitudes of the subsequent extreme events including the main shock. The analysis is applied retrospectively to past prominent earthquake sequences.
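
    As a simplified, non-Bayesian version of the idea: if event magnitudes above a completeness magnitude m_c follow the Gutenberg-Richter law and the event count in the forecasting window is Poisson with mean n, the largest magnitude has the closed-form distribution P(M_max ≤ m) = exp(−n · 10^(−b(m−m_c))). The parameters below are illustrative; the paper's Bayesian predictive distribution additionally integrates over the uncertainty in the rate and frequency-magnitude parameters:

```python
import math

def prob_max_exceeds(m, n_expected, b=1.0, m_c=2.0):
    """P(largest magnitude in the window exceeds m), assuming a Poisson
    number of events (mean n_expected) above completeness m_c with
    Gutenberg-Richter magnitudes: per-event P(M > m) = 10**(-b*(m - m_c))."""
    return 1.0 - math.exp(-n_expected * 10.0 ** (-b * (m - m_c)))

# illustrative: 100 expected events above M2.0 with b = 1
p_m5 = prob_max_exceeds(5.0, 100.0)
```

    Constraining the magnitude of the largest expected event then amounts to reading off the magnitude at which this exceedance probability drops below a chosen risk level.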

  12. Should we still believe in constrained supersymmetry?

    International Nuclear Information System (INIS)

    Balazs, Csaba; Buckley, Andy; Carter, Daniel; Farmer, Benjamin; White, Martin

    2013-01-01

    We calculate partial Bayes factors to quantify how the feasibility of the constrained minimal supersymmetric standard model (CMSSM) has changed in the light of a series of observations. This is done in the Bayesian spirit where probability reflects a degree of belief in a proposition and Bayes' theorem tells us how to update it after acquiring new information. Our experimental baseline is the approximate knowledge that was available before LEP, and our comparison model is the Standard Model with a simple dark matter candidate. To quantify the amount by which experiments have altered our relative belief in the CMSSM since the baseline data we compute the partial Bayes factors that arise from learning in sequence the LEP Higgs constraints, the XENON100 dark matter constraints, the 2011 LHC supersymmetry search results, and the early 2012 LHC Higgs search results. We find that LEP and the LHC strongly shatter our trust in the CMSSM (with M_0 and M_1/2 below 2 TeV), reducing its posterior odds by approximately two orders of magnitude. This reduction is largely due to substantial Occam factors induced by the LEP and LHC Higgs searches. (orig.)

  13. Factor analysis of multivariate data

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A.; Mahadevan, R.

    A brief introduction to factor analysis is presented. A FORTRAN program which can perform Q-mode and R-mode factor analysis and the singular value decomposition of a given data matrix is presented in Appendix B. This computer program uses...

  14. Factor analysis and scintigraphy

    International Nuclear Information System (INIS)

    Di Paola, R.; Penel, C.; Bazin, J.P.; Berche, C.

    1976-01-01

    The goal of factor analysis is usually to achieve reduction of a large set of data, extracting essential features without a prior hypothesis. With the development of computerized systems, the use of larger samples, the possibility of sequential data acquisition, and the increase in dynamic studies, the problem of data compression is now encountered routinely. Results obtained for the compression of scintigraphic images are presented first. The possibilities offered by factor analysis for scan processing are then discussed. Finally, the use of this analysis for multidimensional studies, and especially dynamic studies, is considered for compression and processing [fr]

  15. Mantle viscosity structure constrained by joint inversions of seismic velocities and density

    Science.gov (United States)

    Rudolph, M. L.; Moulik, P.; Lekic, V.

    2017-12-01

    The viscosity structure of Earth's deep mantle affects the thermal evolution of Earth, the ascent of mantle upwellings, sinking of subducted oceanic lithosphere, and the mixing of compositional heterogeneities in the mantle. Modeling the long-wavelength dynamic geoid allows us to constrain the radial viscosity profile of the mantle. Typically, in inversions for the mantle viscosity structure, wavespeed variations are mapped into density variations using a constant- or depth-dependent scaling factor. Here, we use a newly developed joint model of anisotropic Vs, Vp, density and transition zone topographies to generate a suite of solutions for the mantle viscosity structure directly from the seismologically constrained density structure. The density structure used to drive our forward models includes contributions from both thermal and compositional variations, including important contributions from compositionally dense material in the Large Low Velocity Provinces at the base of the mantle. These compositional variations have been neglected in the forward models used in most previous inversions and have the potential to significantly affect large-scale flow and thus the inferred viscosity structure. We use a transdimensional, hierarchical, Bayesian approach to solve the inverse problem, and our solutions for viscosity structure include an increase in viscosity below the base of the transition zone, in the shallow lower mantle. Using geoid dynamic response functions and an analysis of the correlation between the observed geoid and mantle structure, we demonstrate the underlying reason for this inference. Finally, we present a new family of solutions in which the data uncertainty is accounted for using covariance matrices associated with the mantle structure models.

  16. Null Space Integration Method for Constrained Multibody Systems with No Constraint Violation

    International Nuclear Information System (INIS)

    Terze, Zdravko; Lefeber, Dirk; Muftic, Osman

    2001-01-01

A method for integrating the equations of motion of constrained multibody systems with no constraint violation is presented. A mathematical model, shaped as a differential-algebraic system of index 1, is transformed into a system of ordinary differential equations using the null-space projection method. The equations of motion are set in a non-minimal form. During integration, violations of the constraints are corrected by solving the constraint equations at the position and velocity levels, utilizing the metric of the system's configuration space and a projective criterion for the coordinate partitioning method. The method is applied to the dynamic simulation of a 3D constrained biomechanical system. The simulation results are evaluated by comparing them to the values of characteristic parameters obtained by kinematic analysis of the analyzed motion based on measured kinematics data.
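The position- and velocity-level constraint correction described above can be illustrated on a toy holonomic system. The sketch below, with assumed names and a planar-pendulum constraint g(q) = |q|² - L² = 0, projects a drifted state back onto the constraint manifold; it illustrates the idea only, not the paper's null-space formulation.

```python
import numpy as np

L = 1.0  # pendulum length: holonomic constraint g(q) = q.q - L**2 = 0

def project_position(q):
    """Pull a drifted position back onto the constraint manifold |q| = L."""
    return q * (L / np.linalg.norm(q))

def project_velocity(q, v):
    """Remove the normal component so that G(q) v = 0, with G = dg/dq = 2 q^T."""
    n = q / np.linalg.norm(q)
    return v - np.dot(v, n) * n

q = project_position(np.array([1.1, 0.1]))     # position after integration drift
v = project_velocity(q, np.array([0.3, 1.0]))  # velocity after integration drift
print(np.linalg.norm(q), np.dot(q, v))         # back on the manifold, tangent velocity
```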

  17. Continuation of Sets of Constrained Orbit Segments

    DEFF Research Database (Denmark)

    Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki

Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem.

  18. Constraining reconnection region conditions using imaging and spectroscopic analysis of a coronal jet

    Science.gov (United States)

    Brannon, Sean; Kankelborg, Charles

    2017-08-01

Coronal jets typically appear as thin, collimated structures in EUV and X-ray wavelengths, and are understood to be initiated by magnetic reconnection in the lower corona or upper chromosphere. Plasma that is heated and accelerated upward into coronal jets may therefore carry indirect information on conditions in the reconnection region and current sheet located at the jet base. On 2017 October 14, the Interface Region Imaging Spectrograph (IRIS) and Solar Dynamics Observatory Atmospheric Imaging Assembly (SDO/AIA) observed a series of jet eruptions originating from NOAA AR 12599. The jet structure has a length-to-width ratio that exceeds 50, and remains remarkably straight throughout its evolution. Several times during the observation, bright blobs of plasma are seen to erupt upward, ascending and subsequently descending along the structure. These blobs are cotemporal with footpoint and arcade brightenings, which we believe indicates multiple episodes of reconnection at the structure base. Through imaging and spectroscopic analysis of jet and footpoint plasma we determine a number of properties, including the line-of-sight inclination, the temperature and density structure, and lift-off velocities and accelerations of the jet eruptions. We use these properties to constrain the geometry of the jet structure and conditions in the reconnection region.

  19. Constrained consequence

    CSIR Research Space (South Africa)

    Britz, K

    2011-09-01

Full Text Available their basic properties and relationship. In Section 3 we present a modal instance of these constructions which also illustrates with an example how to reason abductively with constrained entailment in a causal or action oriented context. In Section 4 we... of models with the former approach, whereas in Section 3.3 we give an example illustrating ways in which C can be defined with both. Here we employ the following versions of local consequence: Definition 3.4. Given a model M = ⟨W, R, V⟩ and formulas...

  20. Constraining processes of landscape change with combined in situ cosmogenic 14C-10Be analysis

    Science.gov (United States)

    Hippe, Kristina

    2017-10-01

Reconstructing Quaternary landscape evolution today frequently builds upon cosmogenic-nuclide surface exposure dating. However, the study of complex surface exposure chronologies on the 10^2-10^4 year timescale remains challenging with the commonly used long-lived radionuclides (10Be, 26Al, 36Cl). In glacial settings, key points are the inheritance of nuclides accumulated in a rock surface during a previous exposure episode and (partial) shielding of a rock surface after the main deglaciation event, e.g. during phases of glacier readvance. Combining the short-lived in situ cosmogenic 14C isotope with 10Be dating provides a valuable approach to resolve and quantify complex exposure histories and burial episodes within Lateglacial and Holocene timescales. The first studies applying the in situ 14C-10Be pair have demonstrated the great benefit of in situ 14C analysis for unravelling complex glacier chronologies in various glacial environments worldwide. Moreover, emerging research on in situ 14C in sedimentary systems highlights the capacity of combined in situ 14C-10Be analysis to quantify sediment transfer times in fluvial catchments or to constrain changes in surface erosion rates. Nevertheless, further methodological advances are needed to obtain truly routine and widely available in situ 14C analysis. Future development in analytical techniques has to focus on improving analytical reproducibility, reducing the background level and determining more accurate muonic production rates. These improvements should allow extending the field of applications for combined in situ 14C-10Be analysis in Earth surface sciences and open up a number of promising applications for dating young sedimentary deposits and quantifying recent changes in surface erosion dynamics.

  1. Free and constrained symplectic integrators for numerical general relativity

    International Nuclear Information System (INIS)

    Richter, Ronny; Lubich, Christian

    2008-01-01

    We consider symplectic time integrators in numerical general relativity and discuss both free and constrained evolution schemes. For free evolution of ADM-like equations we propose the use of the Stoermer-Verlet method, a standard symplectic integrator which here is explicit in the computationally expensive curvature terms. For the constrained evolution we give a formulation of the evolution equations that enforces the momentum constraints in a holonomically constrained Hamiltonian system and turns the Hamilton constraint function from a weak to a strong invariant of the system. This formulation permits the use of the constraint-preserving symplectic RATTLE integrator, a constrained version of the Stoermer-Verlet method. The behavior of the methods is illustrated on two effectively (1+1)-dimensional versions of Einstein's equations, which allow us to investigate a perturbed Minkowski problem and the Schwarzschild spacetime. We compare symplectic and non-symplectic integrators for free evolution, showing very different numerical behavior for nearly-conserved quantities in the perturbed Minkowski problem. Further we compare free and constrained evolution, demonstrating in our examples that enforcing the momentum constraints can turn an unstable free evolution into a stable constrained evolution. This is demonstrated in the stabilization of a perturbed Minkowski problem with Dirac gauge, and in the suppression of the propagation of boundary instabilities into the interior of the domain in Schwarzschild spacetime
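As an illustration of the Störmer-Verlet scheme discussed above, a minimal sketch for a separable Hamiltonian H = p²/2 + V(q) (here a harmonic oscillator, chosen for simplicity rather than taken from the paper) shows the hallmark near-conservation of energy over long integrations:

```python
def stormer_verlet(q, p, force, dt, steps):
    """Symplectic Stoermer-Verlet (leapfrog) steps for H = p**2/2 + V(q)."""
    for _ in range(steps):
        p_half = p + 0.5 * dt * force(q)  # half kick
        q = q + dt * p_half               # full drift
        p = p_half + 0.5 * dt * force(q)  # half kick
    return q, p

force = lambda q: -q  # harmonic oscillator, V(q) = q**2/2
q0, p0 = 1.0, 0.0
q, p = stormer_verlet(q0, p0, force, dt=0.01, steps=10_000)
H0 = 0.5 * p0**2 + 0.5 * q0**2
H = 0.5 * p**2 + 0.5 * q**2
print(abs(H - H0))  # energy error stays bounded instead of drifting
```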

  2. Constraining dark sector perturbations I: cosmic shear and CMB lensing

    International Nuclear Information System (INIS)

    Battye, Richard A.; Moss, Adam; Pearson, Jonathan A.

    2015-01-01

    We present current and future constraints on equations of state for dark sector perturbations. The equations of state considered are those corresponding to a generalized scalar field model and time-diffeomorphism invariant L(g) theories that are equivalent to models of a relativistic elastic medium and also Lorentz violating massive gravity. We develop a theoretical understanding of the observable impact of these models. In order to constrain these models we use CMB temperature data from Planck, BAO measurements, CMB lensing data from Planck and the South Pole Telescope, and weak galaxy lensing data from CFHTLenS. We find non-trivial exclusions on the range of parameters, although the data remains compatible with w=−1. We gauge how future experiments will help to constrain the parameters. This is done via a likelihood analysis for CMB experiments such as CoRE and PRISM, and tomographic galaxy weak lensing surveys, focussing in on the potential discriminatory power of Euclid on mildly non-linear scales

  3. Constraining dark sector perturbations I: cosmic shear and CMB lensing

    Science.gov (United States)

    Battye, Richard A.; Moss, Adam; Pearson, Jonathan A.

    2015-04-01

    We present current and future constraints on equations of state for dark sector perturbations. The equations of state considered are those corresponding to a generalized scalar field model and time-diffeomorphism invariant Script L(g) theories that are equivalent to models of a relativistic elastic medium and also Lorentz violating massive gravity. We develop a theoretical understanding of the observable impact of these models. In order to constrain these models we use CMB temperature data from Planck, BAO measurements, CMB lensing data from Planck and the South Pole Telescope, and weak galaxy lensing data from CFHTLenS. We find non-trivial exclusions on the range of parameters, although the data remains compatible with w=-1. We gauge how future experiments will help to constrain the parameters. This is done via a likelihood analysis for CMB experiments such as CoRE and PRISM, and tomographic galaxy weak lensing surveys, focussing in on the potential discriminatory power of Euclid on mildly non-linear scales.

  4. Dark matter scenarios in a constrained model with Dirac gauginos

    CERN Document Server

    Goodsell, Mark D.; Müller, Tobias; Porod, Werner; Staub, Florian

    2015-01-01

We perform the first analysis of dark matter scenarios in a constrained model with Dirac gauginos. The model under investigation is the Constrained Minimal Dirac Gaugino Supersymmetric Standard Model (CMDGSSM), where the Majorana mass terms of the gauginos vanish. However, $R$-symmetry is broken in the Higgs sector by an explicit and/or effective $B_\mu$-term. This causes a mass splitting between Dirac states in the fermion sector, and the neutralinos, which provide the dark matter candidate, become pseudo-Dirac states. We discuss two scenarios: the universal case with all scalar masses unified at the GUT scale, and the case with non-universal Higgs soft-terms. We identify different regions in the parameter space which fulfil all constraints from the dark matter abundance, the limits from SUSY and direct dark matter searches, and the Higgs mass. Most of these points can be tested with the next generation of direct dark matter detection experiments.

  5. I/O-Efficient Construction of Constrained Delaunay Triangulations

    DEFF Research Database (Denmark)

    Agarwal, Pankaj Kumar; Arge, Lars; Yi, Ke

    2005-01-01

In this paper, we designed and implemented an I/O-efficient algorithm for constructing constrained Delaunay triangulations. If the number of constraining segments is smaller than the memory size, our algorithm runs in expected O((N/B) log_{M/B}(N/B)) I/Os for triangulating N points in the plane, where...

  6. Constrained Vapor Bubble Experiment

    Science.gov (United States)

    Gokhale, Shripad; Plawsky, Joel; Wayner, Peter C., Jr.; Zheng, Ling; Wang, Ying-Xi

    2002-11-01

    Microgravity experiments on the Constrained Vapor Bubble Heat Exchanger, CVB, are being developed for the International Space Station. In particular, we present results of a precursory experimental and theoretical study of the vertical Constrained Vapor Bubble in the Earth's environment. A novel non-isothermal experimental setup was designed and built to study the transport processes in an ethanol/quartz vertical CVB system. Temperature profiles were measured using an in situ PC (personal computer)-based LabView data acquisition system via thermocouples. Film thickness profiles were measured using interferometry. A theoretical model was developed to predict the curvature profile of the stable film in the evaporator. The concept of the total amount of evaporation, which can be obtained directly by integrating the experimental temperature profile, was introduced. Experimentally measured curvature profiles are in good agreement with modeling results. For microgravity conditions, an analytical expression, which reveals an inherent relation between temperature and curvature profiles, was derived.

  7. Multiplicative algorithms for constrained non-negative matrix factorization

    KAUST Repository

    Peng, Chengbin; Wong, Kachun; Rockwood, Alyn; Zhang, Xiangliang; Jiang, Jinling; Keyes, David E.

    2012-01-01

Non-negative matrix factorization (NMF) provides the advantage of parts-based data representation through additive-only combinations. It has been widely adopted in areas like item recommendation, text mining, data clustering, speech denoising, etc.
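For context, the classic multiplicative updates of Lee and Seung are the prototype on which constrained multiplicative algorithms build; a minimal sketch of the unconstrained iteration (illustrative only, not the paper's constrained algorithm):

```python
import numpy as np

def nmf(X, k, iters=200, eps=1e-9, seed=0):
    """Approximate X >= 0 as W @ H with W, H >= 0 via multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        # element-wise ratios keep both factors non-negative by construction
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

X = np.random.default_rng(1).random((6, 5))  # synthetic non-negative data
W, H = nmf(X, 2)
print(np.linalg.norm(X - W @ H))  # rank-2 reconstruction error
```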

  8. Constrained noninformative priors

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-10-01

    The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but with diffusion reflecting great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given

  9. Order-Constrained Solutions in K-Means Clustering: Even Better than Being Globally Optimal

    Science.gov (United States)

    Steinley, Douglas; Hubert, Lawrence

    2008-01-01

    This paper proposes an order-constrained K-means cluster analysis strategy, and implements that strategy through an auxiliary quadratic assignment optimization heuristic that identifies an initial object order. A subsequent dynamic programming recursion is applied to optimally subdivide the object set subject to the order constraint. We show that…
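The dynamic-programming recursion for optimally subdividing an ordered object set can be sketched as follows. This minimal version (my own illustration, not the paper's implementation) handles 1-D values with a brute-force within-cluster SSE and omits the quadratic-assignment ordering heuristic:

```python
def sse(x):
    """Within-cluster sum of squared deviations from the mean."""
    mu = sum(x) / len(x)
    return sum((v - mu) ** 2 for v in x)

def order_constrained_kmeans(x, K):
    """Optimally split ordered values x into K contiguous clusters (min total SSE)."""
    n = len(x)
    INF = float("inf")
    cost = [[INF] * (K + 1) for _ in range(n + 1)]
    cut = [[0] * (K + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for k in range(1, min(K, i) + 1):
            for j in range(k - 1, i):  # last cluster is x[j:i]
                c = cost[j][k - 1] + sse(x[j:i])
                if c < cost[i][k]:
                    cost[i][k], cut[i][k] = c, j
    # backtrack to recover the cluster boundaries
    bounds, i = [], n
    for k in range(K, 0, -1):
        bounds.append((cut[i][k], i))
        i = cut[i][k]
    return cost[n][K], bounds[::-1]

total, clusters = order_constrained_kmeans([1.0, 1.1, 5.0, 5.1, 9.0], 3)
print(total, clusters)  # minimal total SSE and the three contiguous clusters
```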

  10. Constraining mass-diameter relations from hydrometeor images and cloud radar reflectivities in tropical continental and oceanic convective anvils

    Science.gov (United States)

    Fontaine, E.; Schwarzenboeck, A.; Delanoë, J.; Wobrock, W.; Leroy, D.; Dupuy, R.; Gourbeyre, C.; Protat, A.

    2014-10-01

In this study the density of ice hydrometeors in tropical clouds is derived from a combined analysis of particle images from 2-D-array probes and associated reflectivities measured with a Doppler cloud radar on the same research aircraft. Usually, the mass-diameter m(D) relationship is formulated as a power law with two unknown coefficients (pre-factor, exponent) that need to be constrained from complementary information on hydrometeors, where absolute ice density measurement methods do not apply. Here, at first an extended theoretical study of numerous hydrometeor shapes simulated in 3-D and arbitrarily projected onto a 2-D plane allowed us to constrain the exponent β of the m(D) relationship from the exponent σ of the surface-diameter S(D) relationship, which is likewise written as a power law. Since S(D) can always be determined for real data from 2-D optical array probes or other particle imagers, the evolution of the m(D) exponent can be calculated. After that, the pre-factor α of m(D) is constrained from theoretical simulations of the radar reflectivities matching the measured reflectivities along the aircraft trajectory. The study was performed as part of the Megha-Tropiques satellite project, where two types of mesoscale convective systems (MCS) were investigated: (i) above the African continent and (ii) above the Indian Ocean. For the two data sets, two parameterizations are derived to calculate the vertical variability of the m(D) coefficients α and β as a function of temperature. The m(D) relationships originally calculated (with the T-matrix method) and subsequently parameterized in this study are compared to other methods (from the literature) of calculating m(D) in tropical convection. The significant benefit of using variable m(D) relations instead of a single m(D) relationship is demonstrated from the impact of all these m(D) relations on Z-CWC (Condensed Water Content) and Z-CWC-T fitted parameterizations.
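As a minimal illustration of fitting such power-law coefficients, a log-log least-squares fit recovers the pre-factor and exponent of a synthetic S(D) relationship (the data and coefficient values below are made up, not taken from the study):

```python
import numpy as np

def fit_power_law(D, S):
    """Least-squares fit of log S = log(gamma) + sigma * log(D)."""
    sigma, log_gamma = np.polyfit(np.log(D), np.log(S), 1)
    return np.exp(log_gamma), sigma

D = np.array([0.1, 0.2, 0.5, 1.0, 2.0])  # particle diameters (illustrative units)
S = 0.4 * D ** 1.8                       # synthetic exact power law
gamma, sigma = fit_power_law(D, S)
print(gamma, sigma)                      # recovers ~0.4 and ~1.8
```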

  11. Exploratory Bi-Factor Analysis: The Oblique Case

    Science.gov (United States)

    Jennrich, Robert I.; Bentler, Peter M.

    2012-01-01

Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford ("Psychometrika" 2:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler ("Psychometrika" 76:537-549, 2011) introduced an exploratory form of bi-factor…

  12. Government control or low carbon lifestyle? – Analysis and application of a novel selective-constrained energy-saving and emission-reduction dynamic evolution system

    International Nuclear Information System (INIS)

    Fang, Guochang; Tian, Lixin; Fu, Min; Sun, Mei

    2014-01-01

This paper explores a novel selective-constrained energy-saving and emission-reduction (ESER) dynamic evolution system, analyzing the impact of the cost of conserved energy (CCE), government control, low carbon lifestyle and investment in new ESER technology on energy intensity and economic growth. Based on an artificial neural network, the quantitative coefficients of the actual system are identified. Taking the real situation in China as an example, an empirical study is undertaken by adjusting the parameters of the actual system. The dynamic evolution behavior of energy intensity and economic growth in reality is observed, with the results in perfect agreement with the actual situation. The research shows that the introduction of CCE into the ESER system has a certain restrictive effect on energy intensity in the earlier period. However, with the further development of the actual system, carbon emissions could be better controlled and energy intensity would decline. In the long run, the impacts of CCE on economic growth are positive. Government control and low carbon lifestyle play a decisive role in controlling the ESER system and reducing energy intensity. But the influence of government control on economic growth should be considered at the same time, and the controlling effect of low carbon lifestyle on energy intensity should be strengthened gradually, while the investment in new ESER technology can be neglected. Two different cases of ESER are proposed after a comprehensive analysis. The relations between variables and constraint conditions in the ESER system are harmonized remarkably. A better solution to carry out ESER is put forward at last, with numerical simulations being carried out to demonstrate the results. - Highlights: • Use of nonlinear dynamical method to model the selective-constrained ESER system. • Monotonic evolution curves of energy intensity and economic growth are obtained. • Detailed analysis of the game between government control and low

  13. Constrained least squares regularization in PET

    International Nuclear Information System (INIS)

    Choudhury, K.R.; O'Sullivan, F.O.

    1996-01-01

    Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood, at a fraction of the computational effort
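For orientation, a generic non-negativity-constrained least squares solve can be sketched with projected gradient descent, the kind of constraint used to suppress negative background artifacts. Note this is an iterative illustration of the idea, not the paper's rapid non-iterative technique:

```python
import numpy as np

def nnls_projected_gradient(A, b, iters=5000):
    """Minimize ||A x - b||^2 subject to x >= 0 by projected gradient descent."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A.T @ A, 2)  # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        x -= step * (A.T @ (A @ x - b))      # gradient step on the least-squares cost
        np.maximum(x, 0.0, out=x)            # project back onto the feasible set x >= 0
    return x

A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
b = np.array([1.0, 1.0, -1.0])
x = nnls_projected_gradient(A, b)
print(x)  # the second coordinate is pinned to the constraint boundary at 0
```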

  14. Constraining parameters of white-dwarf binaries using gravitational-wave and electromagnetic observations

    International Nuclear Information System (INIS)

    Shah, Sweta; Nelemans, Gijs

    2014-01-01

The space-based gravitational wave (GW) detector, evolved Laser Interferometer Space Antenna (eLISA) is expected to observe millions of compact Galactic binaries that populate our Milky Way. GW measurements obtained from the eLISA detector are in many cases complementary to possible electromagnetic (EM) data. In our previous papers, we have shown that the EM data can significantly enhance our knowledge of the astrophysically relevant GW parameters of Galactic binaries, such as the amplitude and inclination. This is possible due to the presence of some strong correlations between GW parameters that are measurable by both EM and GW observations, for example, the inclination and sky position. In this paper, we quantify the constraints in the physical parameters of the white-dwarf binaries, i.e., the individual masses, chirp mass, and the distance to the source that can be obtained by combining the full set of EM measurements such as the inclination, radial velocities, distances, and/or individual masses with the GW measurements. We find the following 2σ fractional uncertainties in the parameters of interest. The EM observations of distance constrain the chirp mass to ∼15%-25%, whereas EM data of a single-lined spectroscopic binary constrain the secondary mass and the distance with factors of two to ∼40%. The single-line spectroscopic data complemented with distance constrains the secondary mass to ∼25%-30%. Finally, EM data on double-lined spectroscopic binary constrain the distance to ∼30%. All of these constraints depend on the inclination and the signal strength of the binary systems. We also find that the EM information on distance and/or the radial velocity are the most useful in improving the estimate of the secondary mass, inclination, and/or distance.
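The chirp mass that the GW signal constrains is the standard combination of the two component masses; a small sketch with assumed white-dwarf masses (illustrative values, not the paper's):

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5), same units as the inputs."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# A typical double white-dwarf pair (assumed masses, in solar masses):
mc = chirp_mass(0.6, 0.3)
print(round(mc, 3))  # 0.365
```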

  15. Active constrained layer damping of geometrically nonlinear vibrations of functionally graded plates using piezoelectric fiber-reinforced composites

    International Nuclear Information System (INIS)

    Panda, Satyajit; Ray, M C

    2008-01-01

    In this paper, a geometrically nonlinear dynamic analysis has been presented for functionally graded (FG) plates integrated with a patch of active constrained layer damping (ACLD) treatment and subjected to a temperature field. The constraining layer of the ACLD treatment is considered to be made of the piezoelectric fiber-reinforced composite (PFRC) material. The temperature field is assumed to be spatially uniform over the substrate plate surfaces and varied through the thickness of the host FG plates. The temperature-dependent material properties of the FG substrate plates are assumed to be graded in the thickness direction of the plates according to a power-law distribution while the Poisson's ratio is assumed to be a constant over the domain of the plate. The constrained viscoelastic layer of the ACLD treatment is modeled using the Golla–Hughes–McTavish (GHM) method. Based on the first-order shear deformation theory, a three-dimensional finite element model has been developed to model the open-loop and closed-loop nonlinear dynamics of the overall FG substrate plates under the thermal environment. The analysis suggests the potential use of the ACLD treatment with its constraining layer made of the PFRC material for active control of geometrically nonlinear vibrations of FG plates in the absence or the presence of the temperature gradient across the thickness of the plates. It is found that the ACLD treatment is more effective in controlling the geometrically nonlinear vibrations of FG plates than in controlling their linear vibrations. The analysis also reveals that the ACLD patch is more effective for controlling the nonlinear vibrations of FG plates when it is attached to the softest surface of the FG plates than when it is bonded to the stiffest surface of the plates. The effect of piezoelectric fiber orientation in the active constraining PFRC layer on the damping characteristics of the overall FG plates is also discussed

  16. Active constrained layer damping of geometrically nonlinear vibrations of functionally graded plates using piezoelectric fiber-reinforced composites

    Science.gov (United States)

    Panda, Satyajit; Ray, M. C.

    2008-04-01

    In this paper, a geometrically nonlinear dynamic analysis has been presented for functionally graded (FG) plates integrated with a patch of active constrained layer damping (ACLD) treatment and subjected to a temperature field. The constraining layer of the ACLD treatment is considered to be made of the piezoelectric fiber-reinforced composite (PFRC) material. The temperature field is assumed to be spatially uniform over the substrate plate surfaces and varied through the thickness of the host FG plates. The temperature-dependent material properties of the FG substrate plates are assumed to be graded in the thickness direction of the plates according to a power-law distribution while the Poisson's ratio is assumed to be a constant over the domain of the plate. The constrained viscoelastic layer of the ACLD treatment is modeled using the Golla-Hughes-McTavish (GHM) method. Based on the first-order shear deformation theory, a three-dimensional finite element model has been developed to model the open-loop and closed-loop nonlinear dynamics of the overall FG substrate plates under the thermal environment. The analysis suggests the potential use of the ACLD treatment with its constraining layer made of the PFRC material for active control of geometrically nonlinear vibrations of FG plates in the absence or the presence of the temperature gradient across the thickness of the plates. It is found that the ACLD treatment is more effective in controlling the geometrically nonlinear vibrations of FG plates than in controlling their linear vibrations. The analysis also reveals that the ACLD patch is more effective for controlling the nonlinear vibrations of FG plates when it is attached to the softest surface of the FG plates than when it is bonded to the stiffest surface of the plates. The effect of piezoelectric fiber orientation in the active constraining PFRC layer on the damping characteristics of the overall FG plates is also discussed.

  17. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    Science.gov (United States)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

To address issues in the area of design customization, this paper presents the specification and application of constrained surface deformation, and reports an experimental performance comparison of three prevalent, effective similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation has become a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design via measuring the difference between the deformed new design and the initial sample model, and indicating whether the difference is within the limitation. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape histogram based method, the skeleton based method, and the U-system moment based method. We analyze their basic functions and implementation methodologies in detail, and conduct a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as an industrial example for the experiments. The results show that the shape histogram based method gains optimal performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, which emphasizes the role of constraints during the assessment. Limited initial experimental results demonstrate that our algorithm outperforms the other three algorithms. A clear direction for future development is also drawn at the end of the paper.
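As a minimal stand-in for the shape-histogram idea discussed above, normalized histograms of a sampled surface property can be compared with an L1 distance; the feature values below are synthetic and the function is my own illustration, not the paper's descriptor:

```python
import numpy as np

def histogram_similarity(a, b, bins=8, value_range=(0.0, 1.0)):
    """1.0 for identical normalized histograms, approaching 0.0 for disjoint ones."""
    ha, _ = np.histogram(a, bins=bins, range=value_range)
    hb, _ = np.histogram(b, bins=bins, range=value_range)
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return 1.0 - 0.5 * float(np.abs(ha - hb).sum())  # half the L1 distance, inverted

a = np.linspace(0.0, 1.0, 100)         # sampled feature values of the sample model
b = np.linspace(0.0, 1.0, 100) ** 1.1  # the same surface after a mild deformation
print(histogram_similarity(a, a), histogram_similarity(a, b))
```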

  18. A Time-constrained Network Voronoi Construction and Accessibility Analysis in Location-based Service Technology

    Science.gov (United States)

    Yu, W.; Ai, T.

    2014-11-01

Accessibility analysis usually requires special models of spatial location analysis based on geometric constructions such as the Voronoi diagram (abbreviated to VD). There are many achievements in classic Voronoi model research; however, they suffer from the following limitations for location-based services (LBS) applications. (1) It is difficult to objectively reflect the actual service areas of facilities by using traditional planar VDs, because human activities in LBS are usually constrained to the network portion of the planar space. (2) Although some researchers have adopted network distance to construct VDs, their approaches are used in a static environment, where unrealistic measures of shortest-path distance based on assumptions about constant travel speeds through the network were often used. (3) Due to the computational complexity of calculating the shortest-path distance, previous research tends to be very time consuming, especially for large datasets and if multiple runs are required. To solve the above problems, a novel algorithm is developed in this paper. We apply a network-based quadrat system and 1-D sequential expansion to find the corresponding subnetwork for each focus. The idea is inspired by the natural phenomenon that water flow extends along certain linear channels until it meets others or arrives at the end of its route. In order to accommodate changes in traffic conditions, the length of a network quadrat is set according to the traffic condition of the corresponding street. The method has the advantage over Dijkstra's algorithm in that the heavy time cost is avoided and replaced with a linear-time operation.
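The underlying assignment of network nodes to their nearest facility, which a network Voronoi diagram formalizes, can be sketched with a multi-source Dijkstra expansion. The graph, weights, and facility names below are illustrative; the paper's quadrat-based expansion is a different (linear-time) strategy for the same assignment:

```python
import heapq

def network_voronoi(graph, facilities):
    """graph: {node: [(neighbor, travel_time), ...]}; returns {node: nearest facility}."""
    heap = [(0.0, f, f) for f in facilities]  # all facilities start at distance 0
    heapq.heapify(heap)
    dist, owner = {}, {}
    while heap:
        d, node, fac = heapq.heappop(heap)
        if node in dist:
            continue  # already claimed by a closer facility
        dist[node], owner[node] = d, fac
        for nbr, w in graph.get(node, []):
            if nbr not in dist:
                heapq.heappush(heap, (d + w, nbr, fac))
    return owner

graph = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("a", 1.0), ("c", 1.0), ("d", 5.0)],
    "c": [("a", 4.0), ("b", 1.0), ("d", 1.0)],
    "d": [("b", 5.0), ("c", 1.0)],
}
regions = network_voronoi(graph, ["a", "d"])
print(regions)  # b falls in facility a's region, c in facility d's region
```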

  19. Lexical mediation of phonotactic frequency effects on spoken word recognition: A Granger causality analysis of MRI-constrained MEG/EEG data.

    Science.gov (United States)

    Gow, David W; Olson, Bruna B

    2015-07-01

    Phonotactic frequency effects play a crucial role in a number of debates over language processing and representation. It is unclear, however, whether these effects reflect prelexical sensitivity to phonotactic frequency or lexical "gang effects" in speech perception. In this paper, we use Granger causality analysis of MRI-constrained MEG/EEG data to understand how phonotactic frequency influences neural processing dynamics during auditory lexical decision. Effective connectivity analysis showed weaker feedforward influence from brain regions involved in acoustic-phonetic processing (superior temporal gyrus) to lexical areas (supramarginal gyrus) for high phonotactic frequency words, but stronger top-down lexical influence for the same items. Low entropy nonwords (nonwords judged to closely resemble real words) showed a similar pattern of interactions between brain regions involved in lexical and acoustic-phonetic processing. These results contradict the predictions of a feedforward model of phonotactic frequency facilitation, but support the predictions of a lexically mediated account.
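    The core Granger computation can be sketched with ordinary least squares: x "Granger-causes" y if adding x's past to an autoregressive model of y reduces the prediction error. This toy sketch uses synthetic signals, not MEG/EEG source estimates, and a plain log variance ratio rather than the authors' connectivity pipeline:

```python
import numpy as np

def lags(s, order):
    """Columns of lagged values s[t-1], ..., s[t-order]."""
    n = len(s)
    return np.column_stack([s[order - k : n - k] for k in range(1, order + 1)])

def granger_gain(x, y, order=2):
    """log(var_restricted / var_full): how much x's past improves the
    prediction of y beyond y's own past. Larger => stronger x -> y."""
    Y = y[order:]
    ones = np.ones((len(Y), 1))
    Xr = np.hstack([ones, lags(y, order)])                  # y's past only
    Xf = np.hstack([ones, lags(y, order), lags(x, order)])  # plus x's past
    var_r = np.var(Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0])
    var_f = np.var(Y - Xf @ np.linalg.lstsq(Xf, Y, rcond=None)[0])
    return float(np.log(var_r / var_f))

# Synthetic feedforward system: y is driven by x's past, not vice versa.
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = np.empty(2000)
y[0] = 0.0
for t in range(1, 2000):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

g_xy = granger_gain(x, y)   # feedforward influence (large)
g_yx = granger_gain(y, x)   # reverse influence (near zero)
```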

  20. Trends in PDE constrained optimization

    CERN Document Server

    Benner, Peter; Engell, Sebastian; Griewank, Andreas; Harbrecht, Helmut; Hinze, Michael; Rannacher, Rolf; Ulbrich, Stefan

    2014-01-01

    Optimization problems subject to constraints governed by partial differential equations (PDEs) are among the most challenging problems in the context of industrial, economic and medical applications. Almost the entire range of problems in this field of research was studied and further explored as part of the Deutsche Forschungsgemeinschaft (DFG) priority program 1253 on “Optimization with Partial Differential Equations” from 2006 to 2013. The investigations were motivated by the fascinating potential applications and challenging mathematical problems that arise in the field of PDE constrained optimization. New analytic and algorithmic paradigms have been developed, implemented and validated in the context of real-world applications. In this special volume, contributions from more than fifteen German universities combine the results of this interdisciplinary program with a focus on applied mathematics. The book is divided into five sections on “Constrained Optimization, Identification and Control”...

  1. Constrained superfields in supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, Gianguido; Farakos, Fotis [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-02-16

    We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.

  2. Evaluation of HOx sources and cycling using measurement-constrained model calculations in a 2-methyl-3-butene-2-ol (MBO) and monoterpene (MT) dominated ecosystem

    Directory of Open Access Journals (Sweden)

    S. B. Henry

    2013-02-01

    Full Text Available We present a detailed analysis of OH observations from the BEACHON (Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen)-ROCS (Rocky Mountain Organic Carbon Study) 2010 field campaign at the Manitou Forest Observatory (MFO), which is a 2-methyl-3-butene-2-ol (MBO) and monoterpene (MT) dominated forest environment. A comprehensive suite of measurements was used to constrain primary production of OH via ozone photolysis, OH recycling from HO2, and OH chemical loss rates, in order to estimate the steady-state concentration of OH. In addition, the University of Washington Chemical Model (UWCM) was used to evaluate the performance of a near-explicit chemical mechanism. The diurnal cycle in OH from the steady-state calculations is in good agreement with measurement. A comparison between the photolytic production rates and the recycling rates from the HO2 + NO reaction shows that recycling rates are ~20 times faster than the photolytic OH production rates from ozone. Thus, we find that direct measurement of the recycling rates and the OH loss rates can provide accurate predictions of OH concentrations. More importantly, we also conclude that a conventional OH recycling pathway (HO2 + NO) can explain the observed OH levels in this non-isoprene environment. This is in contrast to observations in isoprene-dominated regions, where investigators have observed significant underestimation of OH and have speculated that unknown sources of OH are responsible. The highly-constrained UWCM calculation under-predicts observed HO2 by as much as a factor of 8. As HO2 maintains oxidation capacity by recycling to OH, UWCM underestimates observed OH by as much as a factor of 4. When the UWCM calculation is constrained by measured HO2, model calculated OH is in better agreement with the observed OH levels. Conversely, constraining the model to observed OH only slightly reduces the model-measurement HO2 discrepancy, implying unknown HO2
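    The steady-state estimate balances OH production (primary photolysis plus HO2 + NO recycling) against first-order loss. A toy numeric sketch with invented magnitudes, keeping only the ~20x recycling-to-photolysis ratio reported in the record:

```python
# Hypothetical rate values; only the ~20x production ratio is from the study.
p_photolysis = 1.0e6                 # OH from O3 photolysis, molec cm^-3 s^-1 (assumed)
p_recycling = 20.0 * p_photolysis    # HO2 + NO recycling, ~20x faster (per the record)
loss_freq = 10.0                     # total first-order OH loss frequency, s^-1 (assumed)

# Steady state: total production = loss_freq * [OH]
oh_ss = (p_photolysis + p_recycling) / loss_freq
```

With these placeholder numbers the recycling term dominates the budget, which is the qualitative point of the analysis.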

  3. Cross-constrained problems for nonlinear Schrodinger equation with harmonic potential

    Directory of Open Access Journals (Sweden)

    Runzhang Xu

    2012-11-01

    Full Text Available This article studies a nonlinear Schrödinger equation with harmonic potential by constructing different cross-constrained problems. By comparing the different cross-constrained problems, we derive different sharp criteria and different invariant manifolds that separate the global solutions and blowup solutions. Moreover, we conclude that some manifolds are empty due to the essence of the cross-constrained problems. Besides, we compare the three cross-constrained problems and the three depths of the potential wells. In this way, we explain the gaps in [J. Shu and J. Zhang, Nonlinear Schrödinger equation with harmonic potential, Journal of Mathematical Physics, 47, 063503 (2006)], which was pointed out in [R. Xu and Y. Liu, Remarks on nonlinear Schrödinger equation with harmonic potential, Journal of Mathematical Physics, 49, 043512 (2008)].
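    For context, the equation behind these cross-constrained problems is the nonlinear Schrödinger equation with an isotropic harmonic potential, which (up to the sign and normalization conventions of the cited papers, so this transcription is indicative only) is typically written as

```latex
i\,\partial_t \varphi = -\Delta \varphi + |x|^{2}\varphi - |\varphi|^{p-1}\varphi,
\qquad \varphi(0,\cdot) = \varphi_0 \in H^{1}(\mathbb{R}^{n}),
```

with the cross-constrained variational problems and invariant manifolds built from its conserved mass and energy functionals.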

  4. Energy Security Analysis: The case of constrained oil supply for Ireland

    International Nuclear Information System (INIS)

    Glynn, James; Chiodi, Alessandro; Gargiulo, Maurizio; Deane, J.P.; Bazilian, Morgan; Gallachóir, Brian Ó

    2014-01-01

    Ireland imports 88% of its energy requirements. Oil makes up 59% of total final energy consumption (TFC). Import dependency, low fuel diversity and volatile prices leave Ireland vulnerable in terms of energy security. This work models energy security scenarios for Ireland using long term macroeconomic forecasts to 2050, with oil production and price scenarios from the International Monetary Fund, within the Irish TIMES energy systems model. The analysis focuses on developing a least cost optimum energy system for Ireland under scenarios of constrained oil supply (0.8% annual import growth, and –2% annual import decline) and subsequent sustained long term price shocks to oil and gas imports. The results point to gas becoming the dominant fuel source for Ireland, at 54% of total final energy consumption in 2020, supplanting oil from reference projections of 57% to 10.8% of TFC. In 2012, the cost of net oil imports stood at €3.6 billion (2.26% of GDP). The modelled high oil and gas price scenarios show an additional annual cost, relative to the reference, of between €2.9bn and €7.5bn by 2020 (1.9–4.9% of GDP) to develop a least cost energy system. Investment and ramifications for energy security are discussed. - Highlights: • We investigate energy security within a techno-economic model of Ireland to 2050. • We impose scenario constraints of volume and price derived from IMF forecasting. • Continued high oil prices lead to natural gas supplanting oil at 54% of TFC by 2020. • Declining oil production induces additional energy system costs of 7.9% of GDP by 2020. • High oil and gas prices are likely to strain existing Irish gas import infrastructure

  5. In vitro transcription of a torsionally constrained template

    DEFF Research Database (Denmark)

    Bentin, Thomas; Nielsen, Peter E

    2002-01-01

    RNA polymerase (RNAP) and the DNA template must rotate relative to each other during transcription elongation. In the cell, however, the components of the transcription apparatus may be subject to rotary constraints. For instance, the DNA is divided into topological domains that are delineated...... of torsionally constrained DNA by free RNAP. We asked whether or not a newly synthesized RNA chain would limit transcription elongation. For this purpose we developed a method to immobilize covalently closed circular DNA to streptavidin-coated beads via a peptide nucleic acid (PNA)-biotin conjugate in principle...... constrained. We conclude that transcription of a natural bacterial gene may proceed with high efficiency despite the fact that newly synthesized RNA is entangled around the template in the narrow confines of torsionally constrained supercoiled DNA....

  6. Terrestrial Sagnac delay constraining modified gravity models

    Science.gov (United States)

    Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.

    2018-04-01

    Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of accretion disk around constant Ricci curvature Kerr-f(R0) stellar sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap when the beams re-unite. We obtain the exact time gap called Sagnac delay in both cases and expand it to show how the flat space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitudes of the corrections are of the order of residual uncertainties in the delay measurement, we derive the allowed intervals for Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong field accretion disk phenomenon.
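    For orientation, the flat-space baseline that the Ricci-curvature, mass and spin terms correct is the standard Sagnac delay. For counter-propagating beams on a circular path of radius R (enclosed area A = πR²) on a platform rotating at angular velocity Ω, it reads (standard textbook result, quoted here for context rather than taken from the record):

```latex
\Delta t_{\mathrm{flat}} = \frac{4 A \Omega}{c^{2}} = \frac{4\pi R^{2}\Omega}{c^{2}}
```

The paper's expansion then adds corrections in the constant Ricci curvature R₀ and in the mass and spin of the gravitating source.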

  7. What Enables and Constrains the Inclusion of the Social Determinants of Health Inequities in Government Policy Agendas? A Narrative Review

    Directory of Open Access Journals (Sweden)

    Phillip Baker

    2018-02-01

    Full Text Available Background Despite decades of evidence gathering and calls for action, few countries have systematically attenuated health inequities (HI) through action on the social determinants of health (SDH). This is at least partly because doing so presents a significant political and policy challenge. This paper explores this challenge through a review of the empirical literature, asking: what factors have enabled and constrained the inclusion of the social determinants of health inequities (SDHI) in government policy agendas? Methods A narrative review method was adopted involving three steps: first, drawing upon political science theories on agenda-setting, an integrated theoretical framework was developed to guide the review; second, a systematic search of scholarly databases for relevant literature; and third, qualitative analysis of the data and thematic synthesis of the results. Studies were included if they were empirical, met specified quality criteria, and identified factors that enabled or constrained the inclusion of the SDHI in government policy agendas. Results A total of 48 studies were included in the final synthesis, with studies spanning a number of country-contexts and jurisdictional settings, and employing a diversity of theoretical frameworks. Influential factors included the ways in which the SDHI were framed in public, media and political discourse; emerging data and evidence describing health inequalities; limited supporting evidence and misalignment of proposed solutions with existing policy and institutional arrangements; institutionalised norms and ideologies (ie, belief systems) that are antithetical to a SDH approach including neoliberalism, the medicalisation of health and racism; civil society mobilization; leadership; and changes in government. Conclusion A complex set of interrelated, context-dependent and dynamic factors influence the inclusion or neglect of the SDHI in government policy agendas.
It is better to think about

  8. What Enables and Constrains the Inclusion of the Social Determinants of Health Inequities in Government Policy Agendas? A Narrative Review

    Science.gov (United States)

    Baker, Phillip; Friel, Sharon; Kay, Adrian; Baum, Fran; Strazdins, Lyndall; Mackean, Tamara

    2018-01-01

    Background: Despite decades of evidence gathering and calls for action, few countries have systematically attenuated health inequities (HI) through action on the social determinants of health (SDH). This is at least partly because doing so presents a significant political and policy challenge. This paper explores this challenge through a review of the empirical literature, asking: what factors have enabled and constrained the inclusion of the social determinants of health inequities (SDHI) in government policy agendas? Methods: A narrative review method was adopted involving three steps: first, drawing upon political science theories on agenda-setting, an integrated theoretical framework was developed to guide the review; second, a systematic search of scholarly databases for relevant literature; and third, qualitative analysis of the data and thematic synthesis of the results. Studies were included if they were empirical, met specified quality criteria, and identified factors that enabled or constrained the inclusion of the SDHI in government policy agendas. Results: A total of 48 studies were included in the final synthesis, with studies spanning a number of country-contexts and jurisdictional settings, and employing a diversity of theoretical frameworks. Influential factors included the ways in which the SDHI were framed in public, media and political discourse; emerging data and evidence describing health inequalities; limited supporting evidence and misalignment of proposed solutions with existing policy and institutional arrangements; institutionalised norms and ideologies (ie, belief systems) that are antithetical to a SDH approach including neoliberalism, the medicalisation of health and racism; civil society mobilization; leadership; and changes in government. Conclusion: A complex set of interrelated, context-dependent and dynamic factors influence the inclusion or neglect of the SDHI in government policy agendas. 
It is better to think about these factors

  9. Characterization of constrained aged NiTi strips for use in artificial muscle actuators

    International Nuclear Information System (INIS)

    Hassanzadeh Nemati, N.; Sadrnezhaad, S. K.

    2011-01-01

    The remarkable bending/straightening behavior of two-way shape memory alloys supports their use in the design and manufacturing of new medical appliances. Constrained ageing under a bending load can induce the two-way shape memory effect. Scanning electron microscopy, electrical resistivity measurement and differential scanning calorimetry were employed to determine the property changes caused by constrained aging of flat strips. Results show that flat-annealing prior to aging shifts the NiTi transformation temperatures to higher values. The superelastic behavior of the as-received/flat-annealed/aged samples, whose transition temperatures are more adequate for biological tissue replacement, is studied by three-point flexural tests. Results show that curing changes the transition points of the NiTi strips. These changes affect the shape memory behavior of the NiTi strips embedded within the biocompatible flexible composite segments.

  10. Topology Optimization of Constrained Layer Damping on Plates Using Method of Moving Asymptote (MMA) Approach

    Directory of Open Access Journals (Sweden)

    Zheng Ling

    2011-01-01

    Full Text Available Damping treatments have been extensively used as a powerful means to damp out structural resonant vibrations. Usually, damping materials fully cover the surface of the plate. The drawbacks of this conventional treatment are obvious: added mass and excess material consumption. It is therefore not always economical or effective from an optimal design point of view. In this paper, a topology optimization approach is presented to maximize the modal damping ratio of a plate with constrained layer damping treatment. The governing equation of motion of the plate is derived on the basis of an energy approach. A finite element model describing the dynamic performance of the plate is developed and used along with an optimization algorithm to determine the optimal topology of the constrained layer damping layout on the plate. The damping of the viscoelastic layer is modeled by the complex modulus formula. Considering the vibration and energy dissipation modes of the plate with constrained layer damping treatment, the damping material density and volume fraction are taken as the design variable and constraint, respectively, while the modal damping ratio of the plate is assigned as the objective function. The sensitivity of the modal damping ratio to the design variable is derived, and the Method of Moving Asymptotes (MMA) is adopted to search for the optimized topology of the constrained layer damping layout on the plate. Numerical examples demonstrate the effectiveness of the proposed topology optimization approach. The results show that vibration energy dissipation of the plates can be enhanced by the optimal constrained layer damping layout.
This optimal technology can be further extended to vibration attenuation of sandwich cylindrical shells which constitute the major building block of many critical structures such as cabins of aircrafts, hulls of submarines and bodies of rockets and missiles as an

  11. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    Science.gov (United States)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since onomatopoeia characters have a nearly constant stroke width in many cases. An image is segmented with a constrained Delaunay triangulation, and connected component grouping is performed based on the generated triangles. The stroke width of each connected component is then calculated from the altitudes of those triangles. The experimental results proved the effectiveness of the proposed method.
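    The stroke-width estimate from triangle altitudes can be sketched as follows: for the thin, elongated triangles a constrained Delaunay triangulation produces inside a character stroke, the altitude onto the longest edge approximates the local stroke width (illustrative geometry only, not the authors' exact procedure):

```python
import math

def stroke_width_estimate(tri):
    """Altitude of a triangle onto its longest edge (= 2 * area / base).
    For sliver triangles spanning a stroke, this approximates the
    local stroke width."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0
    longest = max(
        math.dist(a, b)
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[0], tri[2]))
    )
    return 2.0 * area / longest

# A sliver triangle spanning a 2-pixel-wide stroke:
width = stroke_width_estimate(((0.0, 0.0), (10.0, 0.0), (5.0, 2.0)))
```

Components whose triangles share a near-constant width under this measure would then be grouped as candidate onomatopoeia strokes.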

  12. Constraining the location of rapid gamma-ray flares in the flat spectrum radio quasar 3C 273 [Constraining the location of rapid gamma-ray flares in the FSRQ 3C 273

    International Nuclear Information System (INIS)

    Rani, B.; Lott, B.; Krichbaum, T. P.; Fuhrmann, L.; Zensus, J. A.

    2013-01-01

    Here, we present a γ-ray photon flux and spectral variability study of the flat-spectrum radio quasar 3C 273 over a period of rapid flaring activity between September 2009 and April 2010. Five major flares were observed in the source during this period. The most rapid flare has a flux doubling time of 1.1 hr. The rapid γ-ray flares allow us to constrain the location and size of the γ-ray emission region in the source. The γγ-opacity constrains the Doppler factor to δ_γ ≥ 10 for the highest energy (15 GeV) photon observed by the Fermi-Large Area Telescope (LAT). Causality arguments constrain the size of the emission region to 1.6 × 10^15 cm. The γ-ray spectra measured over this period show clear deviations from a simple power law, with a break in the 1–2 GeV energy range. We discuss possible explanations for the origin of the γ-ray spectral breaks. Our study suggests that the γ-ray emission region in 3C 273 is located within the broad line region (< 1.6 pc). As a result, the spectral behavior and temporal characteristics of the individual flares indicate the presence of multiple shock scenarios at the base of the jet.
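    The causality size limit follows from the standard relation R ≲ c·t_var·δ/(1+z). Plugging in the quoted doubling time and Doppler limit (the redshift of 3C 273 is assumed here, not stated in the record) reproduces the quoted order of magnitude:

```python
# Causality bound on the emission-region size from the fastest flare.
c = 2.998e10             # speed of light, cm/s
t_double = 1.1 * 3600.0  # fastest flux-doubling time, s (from the record)
delta = 10.0             # Doppler factor lower limit from gamma-gamma opacity
z = 0.158                # redshift of 3C 273 (assumed, not in the record)

r_max = c * t_double * delta / (1.0 + z)   # ~1e15 cm
```

The result is of order 10^15 cm, consistent with the quoted 1.6 × 10^15 cm (the exact prefactor depends on the authors' inputs).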

  13. Attenuation of artifacts in EEG signals measured inside an MRI scanner using constrained independent component analysis

    International Nuclear Information System (INIS)

    Rasheed, Tahir; Lee, Young-Koo; Lee, Soo Yeol; Kim, Tae-Seong

    2009-01-01

    Integration of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) will allow analysis of brain activities at superior temporal and spatial resolution. However, simultaneous acquisition of EEG and fMRI is hindered by the enhancement of artifacts in EEG, the most prominent of which are ballistocardiogram (BCG) and electro-oculogram (EOG) artifacts. The situation gets even worse when evoked potentials are measured inside the MRI, because their responses are minute in comparison to the spontaneous brain responses. In this study, we propose a new method of attenuating these artifacts from spontaneous and evoked EEG data acquired inside an MRI scanner using constrained independent component analysis, with a priori information about the artifacts as constraints. With the proposed techniques of reference function generation for the BCG and EOG artifacts as constraints, our new approach performs significantly better than the averaged artifact subtraction (AAS) method. The proposed method could be an alternative to the conventional ICA method for artifact attenuation, with some advantages. As a performance measure, we achieved much improved normalized power spectrum ratios (INPS) for continuous EEG, and improved correlation coefficient (cc) values for visual evoked EEG against visual evoked potentials recorded outside the MRI, compared to those obtained with the AAS method. The results show that our new approach is more effective than the conventional methods, almost fully automatic, and requires no extra ECG signal measurements.
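    The role of a priori artifact information can be sketched, in a strongly simplified form, as plain ICA followed by reference-guided component selection: separate the channels into independent components, discard the component most correlated with an artifact reference, and reconstruct. This stands in for (and is much cruder than) true constrained ICA; all signals below are synthetic:

```python
import numpy as np

def whiten(X):
    """Remove channel means and project to unit covariance."""
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
    K = E @ np.diag(d ** -0.5) @ E.T
    return K @ Xc, K

def fastica(X, n_iter=300, seed=0):
    """Bare-bones symmetric FastICA with a tanh contrast."""
    Z, K = whiten(X)
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(Z.shape[0], Z.shape[0]))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W1 = G @ Z.T / Z.shape[1] - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W1)   # symmetric decorrelation
        W = U @ Vt
    return W @ Z, W @ K                # sources, unmixing matrix

# Synthetic 2-channel "EEG": an alpha-like rhythm plus a BCG-like artifact.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 5000)
brain = np.sin(2 * np.pi * 10.0 * t)      # 10 Hz oscillation
bcg = 2.0 * ((1.2 * t) % 1.0) - 1.0       # sawtooth artifact, 1.2 Hz
A = np.array([[1.0, 0.8], [0.6, 1.0]])    # channel mixing
X = A @ np.vstack([brain, bcg])

S, unmix = fastica(X)

# The a priori "constraint": a noisy reference of the artifact selects
# which independent component to discard.
reference = bcg + 0.3 * rng.normal(size=bcg.shape)
artifact = int(np.argmax([abs(np.corrcoef(s, reference)[0, 1]) for s in S]))
mix = np.linalg.pinv(unmix)
keep = [i for i in range(S.shape[0]) if i != artifact]
X_clean = mix[:, keep] @ S[keep]

def corr(a, b):
    return abs(np.corrcoef(a, b)[0, 1])
```

After cleaning, the artifact's footprint in the EEG channel drops sharply while the brain signal survives.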

  14. Constraining walking and custodial technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; Sannino, Francesco

    2008-01-01

    We show how to constrain the physical spectrum of walking technicolor models via precision measurements and modified Weinberg sum rules. We also study models possessing a custodial symmetry for the S parameter at the effective Lagrangian level-custodial technicolor-and argue that these models...

  15. 21 CFR 888.3300 - Hip joint metal constrained cemented or uncemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Hip joint metal constrained cemented or uncemented... HUMAN SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3300 Hip joint metal constrained cemented or uncemented prosthesis. (a) Identification. A hip joint metal constrained...

  16. Coding for Two Dimensional Constrained Fields

    DEFF Research Database (Denmark)

    Laursen, Torben Vaarbye

    2006-01-01

    a first order model to model higher order constraints by the use of an alphabet extension. We present an iterative method that, based on a set of conditional probabilities, can help in choosing the large number of parameters of the model in order to obtain a stationary model. Explicit results are given...... for the No Isolated Bits constraint. Finally we present a variation of the encoding scheme of bit-stuffing that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme, thus obtaining lower bounds on the entropy of the fields considered. These lower...... bounds are very tight for the Run-Length limited fields. Explicit bounds are given for the diamond constrained field as well....

  17. Q-deformed systems and constrained dynamics

    International Nuclear Information System (INIS)

    Shabanov, S.V.

    1993-01-01

    It is shown that quantum theories of the q-deformed harmonic oscillator and one-dimensional free q-particle (a free particle on the 'quantum' line) can be obtained by the canonical quantization of classical Hamiltonian systems with commutative phase-space variables and a non-trivial symplectic structure. In the framework of this approach, classical dynamics of a particle on the q-line coincides with the one of a free particle with friction. It is argued that q-deformed systems can be treated as ordinary mechanical systems with the second-class constraints. In particular, second-class constrained systems corresponding to the q-oscillator and q-particle are given. A possibility of formulating q-deformed systems via gauge theories (first-class constrained systems) is briefly discussed. (orig.)

  18. The Infinitesimal Jackknife with Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  19. 21 CFR 888.3110 - Ankle joint metal/polymer semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ankle joint metal/polymer semi-constrained... Ankle joint metal/polymer semi-constrained cemented prosthesis. (a) Identification. An ankle joint metal/polymer semi-constrained cemented prosthesis is a device intended to be implanted to replace an ankle...

  20. Left ventricular wall motion abnormalities evaluated by factor analysis as compared with Fourier analysis

    International Nuclear Information System (INIS)

    Hirota, Kazuyoshi; Ikuno, Yoshiyasu; Nishikimi, Toshio

    1986-01-01

    Factor analysis was applied to multigated cardiac pool scintigraphy to evaluate its ability to detect left ventricular wall motion abnormalities in 35 patients with old myocardial infarction (MI) and in 12 control cases with normal left ventriculography. All cases were also evaluated by conventional Fourier analysis. In most cases with normal left ventriculography, the ventricular and atrial factors were extracted by factor analysis. In cases with MI, a third factor was obtained in the left ventricle corresponding to the wall motion abnormality. Each case was scored according to the coincidence between the findings of ventriculography and those of factor analysis or Fourier analysis. Scores were recorded for three items: the existence, location, and degree of asynergy. In cases of MI, the detection rate of asynergy was 94% by factor analysis and 83% by Fourier analysis, and the agreement with respect to location was 71% and 66%, respectively. Factor analysis had higher scores than Fourier analysis, but the difference was not significant. The interobserver error of factor analysis was less than that of Fourier analysis. Factor analysis can display locations and dynamic motion curves of asynergy, and it is regarded as a useful method for detecting and evaluating left ventricular wall motion abnormalities. (author)

  1. Exploratory factor analysis in Rehabilitation Psychology: a content analysis.

    Science.gov (United States)

    Roberson, Richard B; Elliott, Timothy R; Chang, Jessica E; Hill, Jessica N

    2014-11-01

    Our objective was to examine the use and quality of exploratory factor analysis (EFA) in articles published in Rehabilitation Psychology. Trained raters examined 66 separate exploratory factor analyses in 47 articles published between 1999 and April 2014. The raters recorded the aim of the EFAs, the distributional statistics, sample size, factor retention method(s), extraction and rotation method(s), and whether the pattern coefficients, structure coefficients, and the matrix of association were reported. The primary use of the EFAs was scale development, but the most widely used extraction and rotation method was principal component analysis with varimax rotation. When determining how many factors to retain, multiple methods (e.g., scree plot, parallel analysis) were used most often. Many articles did not report enough information to allow for the duplication of their results. EFA relies on authors' choices (e.g., factor retention rules, extraction and rotation methods), and few articles adhered to all of the best practices. The current findings are compared to other empirical investigations into the use of EFA in published research. Recommendations for improving EFA reporting practices in rehabilitation psychology research are provided.
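    One of the retention methods the survey mentions, parallel analysis, is easy to sketch: retain factors whose observed correlation-matrix eigenvalues exceed those obtained from random data of the same shape. A minimal numpy version (an illustration of the rule, not the raters' protocol):

```python
import numpy as np

def parallel_analysis(data, n_draws=100, quantile=0.95, seed=0):
    """Horn's parallel analysis: number of correlation-matrix
    eigenvalues exceeding the chosen quantile of eigenvalues from
    random normal data of the same n x p shape."""
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rng = np.random.default_rng(seed)
    rand = np.empty((n_draws, p))
    for i in range(n_draws):
        sim = rng.normal(size=(n, p))
        rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    thresh = np.quantile(rand, quantile, axis=0)
    return int(np.sum(obs > thresh))

# Synthetic data: two latent factors loading on six observed variables.
rng = np.random.default_rng(42)
f = rng.normal(size=(500, 2))
loadings = np.array([[0.9, 0.0], [0.8, 0.0], [0.7, 0.0],
                     [0.0, 0.9], [0.0, 0.8], [0.0, 0.7]])
x = f @ loadings.T + 0.4 * rng.normal(size=(500, 6))

n_factors = parallel_analysis(x)
```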

  2. Constrained Local UniversE Simulations: a Local Group factory

    Science.gov (United States)

    Carlesi, Edoardo; Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I.; Pilipenko, Sergey V.; Knebe, Alexander; Courtois, Hélène; Tully, R. Brent; Steinmetz, Matthias

    2016-05-01

    Near-field cosmology is practised by studying the Local Group (LG) and its neighbourhood. This paper describes a framework for simulating the `near field' on the computer. Assuming the Λ cold dark matter (ΛCDM) model as a prior and applying the Bayesian tools of the Wiener filter and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the ΛCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of haloes must obey specific isolation, mass and separation criteria. At the second level, the orbital angular momentum and energy are constrained, and on the third one the phase of the orbit is constrained. Out of the 300 constrained simulations, 146 LGs obey the first set of criteria, 51 the second and 6 the third. The robustness of our LG `factory' enables the construction of a large ensemble of simulated LGs. Suitable candidates for high-resolution hydrodynamical simulations of the LG can be drawn from this ensemble, which can be used to perform comprehensive studies of the formation of the LG.
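    The base-level pair selection can be sketched as a simple filter over halo-pair properties; all thresholds below are hypothetical placeholders for the paper's isolation, mass and separation criteria, not its actual values:

```python
# Masses in units of 1e12 Msun, distances in Mpc (illustrative units).
def is_lg_candidate(m1, m2, separation, d_nearest_massive):
    """Base-level Local Group pair test with placeholder thresholds."""
    mass_ok = 0.5 < m1 + m2 < 5.0          # combined-mass window
    sep_ok = 0.3 < separation < 1.5        # MW-M31-like separation
    isolated = d_nearest_massive > 2.0     # no comparable third halo nearby
    return mass_ok and sep_ok and isolated

pairs = [
    (1.0, 1.5, 0.8, 3.0),   # LG-like and isolated
    (1.0, 1.5, 0.8, 1.0),   # fails isolation
    (4.0, 3.0, 0.8, 3.0),   # combined mass too large
]
candidates = [p for p in pairs if is_lg_candidate(*p)]
```

The second and third criteria levels (orbital angular momentum/energy, orbital phase) would add further filters of the same shape.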

  3. Lithuanian Population Aging Factors Analysis

    Directory of Open Access Journals (Sweden)

    Agnė Garlauskaitė

    2015-05-01

    Full Text Available The aim of this article is to identify the factors that determine the aging of Lithuania's population and to assess their influence. The analysis consists of two main parts: the first describes population aging and its characteristics in theoretical terms; the second is dedicated to assessing the trends and demographic factors that influence population aging and to analysing the determinants of the aging of Lithuania's population. The article concludes that the decline in the birth rate and the excess of emigrants over immigrants have the greatest impact on population aging, so considerable attention should be paid to the management of these demographic processes.

  4. The Perception of Dynamic and Static Facial Expressions of Happiness and Disgust Investigated by ERPs and fMRI Constrained Source Analysis

    Science.gov (United States)

    Trautmann-Lengsfeld, Sina Alexa; Domínguez-Borràs, Judith; Escera, Carles; Herrmann, Manfred; Fehr, Thorsten

    2013-01-01

    A recent functional magnetic resonance imaging (fMRI) study by our group demonstrated that dynamic emotional faces are more accurately recognized and evoke more widespread patterns of hemodynamic brain responses than static emotional faces. Based on this experimental design, the present study aimed at investigating the spatio-temporal processing of static and dynamic emotional facial expressions in 19 healthy women by means of multi-channel electroencephalography (EEG), event-related potentials (ERP) and fMRI-constrained regional source analyses. ERP analysis showed an increased amplitude of the LPP (late posterior positivity) over centro-parietal regions for static facial expressions of disgust compared to neutral faces. In addition, the LPP was more widespread and temporally prolonged for dynamic compared to static faces of disgust and happiness. fMRI-constrained source analysis on static emotional face stimuli indicated spatio-temporal modulation of predominantly posterior regional brain activation, related to the visual processing stream, in the fusiform gyrus for both emotional valences when compared to the neutral condition. The spatio-temporal processing of dynamic stimuli yielded enhanced source activity for emotional compared to neutral conditions in temporal (e.g., fusiform gyrus) and frontal regions (e.g., ventromedial prefrontal cortex, medial and inferior frontal cortex) in early and again in later time windows. The present data support the view that dynamic facial displays convey more information, reflected in activity across complex neural networks, in particular because their changing features potentially trigger sustained activation related to a continuing evaluation of those faces. A combined fMRI and EEG approach thus provides advanced insight into the spatio-temporal characteristics of emotional face processing by revealing additional neural generators not identifiable by an fMRI approach alone. PMID:23818974

  5. Structural and mechanical behaviour of severe plastically deformed high purity aluminium sheets processed by constrained groove pressing technique

    International Nuclear Information System (INIS)

    Satheesh Kumar, S.S.; Raghu, T.

    2014-01-01

    Highlights: • High purity aluminium sheets constrained groove pressed up to a plastic strain of 5.8. • Microstructural evolution studied by TEM and X-ray diffraction profile analysis. • Ultrafine grained structure with grain size ∼900 nm achieved in sheets. • Yield strength increased by 5.3 times and tensile strength doubled after the first pass. • Enhanced deformation homogeneity seen with increased accumulated plastic strain. - Abstract: High purity aluminium sheets (∼99.9%) are successfully subjected to intense plastic straining by the constrained groove pressing method for up to 5 passes, thereby imparting an effective plastic strain of 5.8. Transmission electron microscopy studies of the constrained groove pressed sheets reveal significant grain refinement, and the average grain size obtained after five passes is estimated to be ∼0.9 μm. In addition, the microstructural evolution of the constrained groove pressed sheets is characterized by X-ray diffraction peak profile analysis employing the Williamson–Hall method, and the results fairly concur with the electron microscopy findings. The evolution of tensile behaviour with increased straining indicates a substantial improvement in yield strength, by ∼5.3 times from 17 MPa to 90 MPa during the first pass, consistent with the observed grain refinement. A marginal increase in strength is noticed during the second pass, followed by a minor drop in subsequent passes attributed to the predominance of dislocation recovery. Quantitative assessment of the degree of deformation homogeneity using microhardness profiles reveals relatively better strain homogeneity at higher numbers of passes.
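
    The Williamson–Hall method cited above separates crystallite-size and microstrain contributions to X-ray peak broadening by fitting β cosθ = Kλ/D + 4ε sinθ. A minimal sketch of that fit on synthetic peak data (assumed shape factor K = 0.9 and Cu-Kα wavelength; illustrative Bragg angles, not the authors' measurements):

```python
import numpy as np

# Williamson-Hall: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta)
# Synthetic peak widths generated from assumed D = 900 nm, eps = 1e-3.
K, lam = 0.9, 0.154                      # shape factor, Cu-K-alpha in nm (assumptions)
D_true, eps_true = 900.0, 1e-3
theta = np.deg2rad([19.0, 22.5, 32.5, 39.0, 41.0])   # Bragg angles (illustrative)
beta = K * lam / (D_true * np.cos(theta)) + 4 * eps_true * np.tan(theta)

# Linear fit of y = beta*cos(theta) against x = 4*sin(theta):
# slope -> microstrain, intercept -> K*lambda/D
x, y = 4 * np.sin(theta), beta * np.cos(theta)
slope, intercept = np.polyfit(x, y, 1)
D_est, eps_est = K * lam / intercept, slope
print(f"crystallite size ~ {D_est:.0f} nm, microstrain ~ {eps_est:.1e}")
```

    Because the synthetic data are exactly linear in this transformed space, the fit recovers the assumed size and strain; with real diffraction data the scatter of points about the fitted line indicates how well the model holds.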

  6. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Andrea Trucco

    2015-06-01

    Full Text Available For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed. In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches.
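
    For context, the directivity and white noise gain discussed above can be evaluated for an equally spaced line array of isotropic sensors using the textbook expressions D = |wᵀv(θ₀)|² / (wᵀAw) with A_mn = sinc(kd(m−n)); a minimal sketch with uniform real weights (not the paper's optimized oversteered design):

```python
import numpy as np

def directivity_and_wng(w, d_over_lambda, steer_cos):
    """Directivity and white noise gain of an equally spaced line array of
    isotropic sensors with real weights w (standard textbook formulation)."""
    n = len(w)
    kd = 2 * np.pi * d_over_lambda
    m = np.arange(n)
    # Steering vector toward the direction with cos(theta0) = steer_cos
    v = np.exp(1j * kd * m * steer_cos)
    # Isotropic-noise covariance: A_mn = sin(kd(m-n)) / (kd(m-n))
    diff = kd * (m[:, None] - m[None, :])
    A = np.sinc(diff / np.pi)            # np.sinc(x) = sin(pi x)/(pi x)
    num = abs(w @ v) ** 2
    return num / (w @ A @ w), num / (w @ w)

w = np.ones(8)
D, wng = directivity_and_wng(w, 0.5, 0.0)   # half-wavelength spacing, broadside
print(D, wng)
```

    At exactly half-wavelength spacing the isotropic-noise covariance reduces to the identity, so a uniformly weighted broadside array attains D = WNG = N, the classic reference case against which constrained end-fire designs are compared.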

  7. Factor Economic Analysis at Forestry Enterprises

    Directory of Open Access Journals (Sweden)

    M.Yu. Chik

    2018-03-01

    Full Text Available The article establishes the importance of economic analysis based on a review of the scientific work of domestic and foreign scholars. The influence of individual factors on the change in the cost of harvested timber products is calculated by cost item, and their influence on costs per 1 UAH of sold products is determined using the full cost of sold products. Variable and fixed costs and their distribution are taken into account, since they affect the calculated impact of factors on cost changes per 1 UAH of sold products. The paper summarizes the overall results of these calculations. Based on the conducted factor analysis, a list of reserves for reducing the cost of production at forestry enterprises is proposed, and the main sources of reserves for reducing the prime cost of forest products at forestry enterprises are investigated.

  8. Volume-constrained optimization of magnetorheological and electrorheological valves and dampers

    Science.gov (United States)

    Rosenfeld, Nicholas C.; Wereley, Norman M.

    2004-12-01

    This paper presents a case study of magnetorheological (MR) and electrorheological (ER) valve design within a constrained cylindrical volume. The primary purpose of this study is to establish general design guidelines for volume-constrained MR valves. Additionally, this study compares the performance of volume-constrained MR valves against similarly constrained ER valves. Starting from basic design guidelines for an MR valve, a method for constructing candidate volume-constrained valve geometries is presented. A magnetic FEM program is then used to evaluate the magnetic properties of the candidate valves. An optimized MR valve is chosen by evaluating non-dimensional parameters describing the candidate valves' damping performance. A derivation of the non-dimensional damping coefficient for valves with both active and passive volumes is presented to allow comparison of valves with differing proportions of active and passive volumes. The performance of the optimized MR valve is then compared to that of a geometrically similar ER valve using both analytical and numerical techniques. An analytical equation relating the damping performance of geometrically similar MR and ER valves as a function of fluid yield stresses and relative active fluid volume is derived, and numerical calculations are provided to compute each valve's damping performance and to validate the analytical results.

  9. Block-triangular preconditioners for PDE-constrained optimization

    KAUST Repository

    Rees, Tyrone; Stoll, Martin

    2010-11-26

    In this paper we investigate the possibility of using a block-triangular preconditioner for saddle point problems arising in PDE-constrained optimization. In particular, we focus on a conjugate gradient-type method introduced by Bramble and Pasciak that uses self-adjointness of the preconditioned system in a non-standard inner product. We show when the Chebyshev semi-iteration is used as a preconditioner for the relevant matrix blocks involving the finite element mass matrix that the main drawback of the Bramble-Pasciak method-the appropriate scaling of the preconditioners-is easily overcome. We present an eigenvalue analysis for the block-triangular preconditioners that gives convergence bounds in the non-standard inner product and illustrates their competitiveness on a number of computed examples. Copyright © 2010 John Wiley & Sons, Ltd.

  11. CA-Markov Analysis of Constrained Coastal Urban Growth Modeling: Hua Hin Seaside City, Thailand

    Directory of Open Access Journals (Sweden)

    Rajendra Shrestha

    2013-04-01

    Full Text Available Thailand, a developing country in Southeast Asia, is experiencing rapid development, particularly urban growth as a response to the expansion of the tourism industry. Hua Hin city provides an excellent example of an area where urbanization has flourished due to tourism. This study focuses on how the dynamic horizontal urban expansion of the seaside city of Hua Hin is constrained by the coast, making sustainability an issue for this popular tourist destination as it manages and plans for its local inhabitants, its visitors, and its sites. The study examines the association of land use type and land use change by integrating Geo-Information technology, a statistical model, and CA-Markov analysis for sustainable land use planning. The study identifies that land use types changed between 1999 and 2008 as a result of increased mobility; this trend, in turn, is closely tied to horizontal urban expansion. The sequence of land use change has run from forest to agriculture, from agriculture to grassland, and then to bare land and built-up areas. Coastal urban growth has, for a decade, been expanding horizontally from the downtown center along the beach to the western area around the golf course, the southern area along the beach, the southwest grassland area, and then the northern area near the airport.
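
    The Markov component of a CA-Markov analysis projects land use shares forward with a transition probability matrix estimated between two dates. A minimal sketch with hypothetical transition probabilities and class shares (illustrative values, not the study's 1999–2008 estimates):

```python
import numpy as np

# Land-use classes and row-stochastic transition matrix P[i, j]:
# probability that class i becomes class j over one step (hypothetical).
classes = ["forest", "agriculture", "grassland", "built-up"]
P = np.array([
    [0.85, 0.10, 0.03, 0.02],
    [0.00, 0.80, 0.12, 0.08],
    [0.00, 0.05, 0.70, 0.25],
    [0.00, 0.00, 0.00, 1.00],   # built-up treated as absorbing (assumption)
])
assert np.allclose(P.sum(axis=1), 1.0)   # rows must be probability vectors

state = np.array([0.40, 0.35, 0.15, 0.10])   # current area shares (hypothetical)
for _ in range(3):                            # project three periods ahead
    state = state @ P
print(dict(zip(classes, state.round(3))))
```

    In a full CA-Markov model the cellular-automaton step then allocates these projected shares spatially, using suitability maps and neighbourhood rules; the matrix projection alone only gives the aggregate trajectory.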

  12. Stall Recovery Guidance Algorithms Based on Constrained Control Approaches

    Science.gov (United States)

    Stepanyan, Vahram; Krishnakumar, Kalmanje; Kaneshige, John; Acosta, Diana

    2016-01-01

    Aircraft loss of control, in particular approach to stall or fully developed stall, is a major factor contributing to aircraft safety risks, which emphasizes the need to develop algorithms capable of assisting pilots to identify the problem and providing guidance to recover the aircraft. In this paper we present several stall recovery guidance algorithms, which are implemented in the background without interfering with the flight control system or altering the pilot's actions. They use input- and state-constrained control methods to generate guidance signals, which are provided to the pilot in the form of visual cues. It is the pilot's decision to follow these signals. The algorithms are validated in a pilot-in-the-loop medium fidelity simulation experiment.

  13. A discretized algorithm for the solution of a constrained, continuous ...

    African Journals Online (AJOL)

    A discretized algorithm for the solution of a constrained, continuous quadratic control problem. ... The results obtained show that the Discretized constrained algorithm (DCA) is much more accurate and more efficient than some of these techniques, particularly the FSA. Journal of the Nigerian Association of Mathematical ...

  14. Constraining the shape of the CMB: A peak-by-peak analysis

    International Nuclear Information System (INIS)

    Oedman, Carolina J.; Hobson, Michael P.; Lasenby, Anthony N.; Melchiorri, Alessandro

    2003-01-01

    The recent measurements of the power spectrum of cosmic microwave background anisotropies are consistent with the simplest inflationary scenario and big bang nucleosynthesis constraints. However, these results rely on the assumption of a class of models based on primordial adiabatic perturbations, cold dark matter and a cosmological constant. In this paper we investigate the need for deviations from the Λ-CDM scenario by first characterizing the spectrum using a phenomenological function in a 15-dimensional parameter space. Using a Monte Carlo Markov chain approach to Bayesian inference and a low curvature model template, we then check for the presence of new physics and/or systematics in the CMB data. We find an almost perfect consistency between the phenomenological fits and the standard Λ-CDM models. The curvature of the secondary peaks is weakly constrained by the present data, but they are well located. The improved spectral resolution expected from future satellite experiments will be needed for a definitive test of the scenario
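
    The Monte Carlo Markov chain machinery used above can be illustrated in miniature with a random-walk Metropolis sampler for a single parameter and a toy Gaussian posterior (a generic sketch, not the paper's 15-dimensional phenomenological fit):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    # Toy log-posterior: Gaussian with mean 2.0 and sigma 0.5 (assumption)
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

theta, samples = 0.0, []
for _ in range(20000):
    prop = theta + rng.normal(scale=0.4)            # random-walk proposal
    # Metropolis rule: always accept uphill moves, sometimes downhill ones
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

burned = np.array(samples[5000:])                   # discard burn-in
print(burned.mean(), burned.std())
```

    The retained samples approximate the posterior, so their mean and spread estimate the parameter and its uncertainty; real CMB analyses do the same over many correlated parameters, which is why peak curvatures can remain weakly constrained while peak locations are pinned down.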

  15. Constrained approximation of effective generators for multiscale stochastic reaction networks and application to conditioned path sampling

    Energy Technology Data Exchange (ETDEWEB)

    Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk

    2016-10-15

    Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area for research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the “fast” and “slow” variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables, without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can then be applied iteratively. This results in breaking the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi steady-state assumption, the effective dynamics that are approximated are highly accurate, and in the case of systems with only monomolecular reactions, are exact. We will demonstrate this with some numerics, and also use the effective generators to sample paths of the slow variables which are conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
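
    The key step described above, finding the null space of the constrained system's generator, amounts to computing the stationary distribution of the fast subsystem and averaging the slow propensities against it. A toy sketch for a two-state fast switch (illustrative rates, not taken from the paper):

```python
import numpy as np

# Fast subsystem: a switch between states 0 and 1 with rates a (0->1), b (1->0).
a, b = 3.0, 1.0
Q = np.array([[-a,  a],
              [ b, -b]])                 # CTMC generator, rows sum to zero

# Stationary distribution pi solves pi Q = 0, i.e. Q^T pi^T = 0:
# take the eigenvector of Q^T with eigenvalue (numerically) zero.
w, V = np.linalg.eig(Q.T)
pi = np.real(V[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()                       # normalize (also fixes the sign)

# Effective rate of a slow reaction firing with propensity k0 in state 0
# and k1 in state 1 (illustrative values): average over the fast states.
k0, k1 = 0.2, 5.0
k_eff = pi @ np.array([k0, k1])
print(pi, k_eff)
```

    For this two-state case the null vector is known in closed form, pi = (b, a)/(a+b), so the numerics can be checked directly; the constrained approach applies the same idea to much larger fast subsystems, iterating when the constrained subsystem is itself multiscale.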

  16. Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems

    Directory of Open Access Journals (Sweden)

    Hailong Wang

    2018-01-01

    Full Text Available The backtracking search optimization algorithm (BSA is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity while its local exploitation capability is relatively poor. This affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA to overcome the deficiency of BSA. In the BSAISA, the amplitude control factor (F is modified based on the Metropolis criterion in simulated annealing. The redesigned F could be adaptively decreased as the number of iterations increases and it does not introduce extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms when solving thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrated that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed.
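
    The self-adaptive ε-constrained method mentioned above ranks candidate solutions by objective value when their constraint violation is within a tolerance ε, and by violation otherwise. A minimal, hedged sketch of that comparison rule (a generic Takahama–Sakai-style form, not the exact BSAISA update of F or ε):

```python
def violation(g_values):
    """Total constraint violation for inequality constraints g_i(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def eps_better(f1, v1, f2, v2, eps):
    """True if solution 1 (objective f1, violation v1) is preferred over
    solution 2 under the epsilon-constrained comparison rule."""
    if v1 <= eps and v2 <= eps:   # both 'feasible enough': compare objectives
        return f1 < f2
    if v1 == v2:                  # tie on violation: compare objectives
        return f1 < f2
    return v1 < v2                # otherwise: smaller violation wins

# A feasible point with a better objective is preferred...
print(eps_better(1.0, 0.0, 2.0, 0.0, eps=1e-4))    # True
# ...but a strongly infeasible point loses regardless of its objective.
print(eps_better(-10.0, 5.0, 2.0, 0.0, eps=1e-4))  # False
```

    Making ε shrink as the search progresses lets early iterations explore slightly infeasible regions while forcing strict feasibility near convergence, which is what "self-adaptive" refers to in the abstract.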

  17. X-ray Constrained Extremely Localized Molecular Orbitals: Theory and Critical Assessment of the New Technique.

    Science.gov (United States)

    Genoni, Alessandro

    2013-07-09

    Following the X-ray constrained wave function approach proposed by Jayatilaka, we have devised a new technique that makes it possible to extract molecular orbitals strictly localized on small molecular fragments from sets of experimental X-ray structure factor amplitudes. Since the novel strategy yields electron distributions that have quantum mechanical features and that can be easily interpreted in terms of traditional chemical concepts, the method can also be considered a useful new tool for the determination and analysis of charge densities from high-resolution X-ray experiments. In this paper, we describe in detail the theory of the new technique, which, in comparison to our preliminary work, has been improved both by treating the effects of isotropic secondary extinction and by introducing a new protocol to halt the fitting procedure against the experimental X-ray scattering data. The performance of the novel strategy has been studied both as a function of basis-set flexibility and as a function of the quality of the crystallographic data considered. The tests performed on four different systems (α-glycine, l-cysteine, (aminomethyl)phosphonic acid and N-(trifluoromethyl)formamide) have shown that the achievement of good statistical agreement with the experimental measurements mainly depends on the quality of the crystal structures (i.e., geometric positions and thermal parameters) used in the X-ray constrained calculations. Finally, given the reliable transferability of the obtained Extremely Localized Molecular Orbitals (ELMOs), we envisage exploiting the novel approach to construct new ELMO databases suited to the development of linear-scaling methods for the refinement of macromolecular crystal structures.

  18. First course in factor analysis

    CERN Document Server

    Comrey, Andrew L

    2013-01-01

    The goal of this book is to foster a basic understanding of factor analytic techniques so that readers can use them in their own research and critically evaluate their use by other researchers. Both the underlying theory and correct application are emphasized. The theory is presented through the mathematical basis of the most common factor analytic models and several methods used in factor analysis. On the application side, considerable attention is given to the extraction problem, the rotation problem, and the interpretation of factor analytic results. Hence, readers are given a background of
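
    As a minimal illustration of the extraction step such a course covers, principal-axis-style loadings can be obtained from the eigendecomposition of a correlation matrix. A hedged numpy sketch on synthetic data (two latent factors assumed; not tied to the book's examples):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data: 6 observed variables driven by 2 latent factors plus noise.
n = 500
latent = rng.standard_normal((n, 2))
L_true = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.2],
                   [0.1, 0.8], [0.0, 0.9], [0.2, 0.7]])
X = latent @ L_true.T + 0.3 * rng.standard_normal((n, 6))

R = np.corrcoef(X, rowvar=False)         # 6x6 correlation matrix
vals, vecs = np.linalg.eigh(R)           # eigenvalues in ascending order
order = np.argsort(vals)[::-1]
k = 2                                    # number of factors retained
loadings = vecs[:, order[:k]] * np.sqrt(vals[order[:k]])
communalities = (loadings ** 2).sum(axis=1)
print(communalities.round(2))            # shared variance per variable
```

    The rotation and interpretation problems the book emphasizes start from exactly such a loading matrix: an orthogonal rotation (e.g. varimax) redistributes the loadings without changing the communalities.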

  19. Constraining local 3-D models of the saturated-zone, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Barr, G.E.; Shannon, S.A.

    1994-01-01

    A qualitative three-dimensional analysis of the saturated zone flow system was performed for a 8 km x 8 km region including the potential Yucca Mountain repository site. Certain recognized geologic features of unknown hydraulic properties were introduced to assess the general response of the flow field to these features. Two of these features, the Solitario Canyon fault and the proposed fault in Drill Hole Wash, appear to constrain flow and allow calibration

  20. Ring-constrained Join

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Karras, Panagiotis; Mamoulis, Nikos

    2008-01-01

    We introduce a novel spatial join operator, the ring-constrained join (RCJ). Given two sets P and Q of spatial points, the result of RCJ consists of pairs (p, q) (where p ∈ P, q ∈ Q) satisfying an intuitive geometric constraint: the smallest circle enclosing p and q contains no other points in P, Q. This new operation has important applications in decision support, e.g., placing recycling stations at fair locations between restaurants and residential complexes. Clearly, RCJ is defined based on a geometric constraint but not on distances between points. Thus, our operation is fundamentally different...
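
    The geometric predicate behind the ring-constrained join is easy to state in code: the smallest circle enclosing p and q has the segment pq as its diameter, so a pair qualifies when no other point lies strictly inside that circle. A brute-force sketch (illustrative only, not the paper's index-based algorithm):

```python
import numpy as np

def rcj_pairs(P, Q):
    """Brute-force ring-constrained join: keep (p, q) if the circle with
    diameter pq contains no other point of P or Q. O(|P||Q|(|P|+|Q|))."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    others = np.vstack([P, Q])
    result = []
    for i, p in enumerate(P):
        for j, q in enumerate(Q):
            c = (p + q) / 2.0                         # circle center
            r = np.linalg.norm(p - q) / 2.0           # circle radius
            d = np.linalg.norm(others - c, axis=1)
            # Exclude p and q themselves; every other point must be outside.
            mask = ~(np.all(others == p, axis=1) | np.all(others == q, axis=1))
            if np.all(d[mask] >= r - 1e-12):
                result.append((i, j))
    return result

P = [(0.0, 0.0), (4.0, 0.0)]
Q = [(1.0, 0.0), (9.0, 9.0)]
print(rcj_pairs(P, Q))
```

    Note that the test is geometric containment, not nearest-neighbour distance: a far-away pair such as (4, 0) and (9, 9) can still qualify as long as its enclosing circle is empty, which is exactly why RCJ differs from distance-based joins.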

  1. Does skull morphology constrain bone ornamentation? A morphometric analysis in the Crocodylia.

    Science.gov (United States)

    Clarac, F; Souter, T; Cubo, J; de Buffrénil, V; Brochu, C; Cornette, R

    2016-08-01

    Previous quantitative assessments of crocodylian dermal bone ornamentation (which consists of pits and ridges) have shown that bone sculpture results in a gain in area that differs between anatomical regions: it tends to be higher on the skull table than on the snout. Therefore, a comparative phylogenetic analysis of 17 adult crocodylian specimens representative of the morphological diversity of the 24 extant species was performed, in order to test whether the gain in area due to ornamentation depends on skull morphology, i.e. shape and size. Quantitative assessment of skull size and shape through geometric morphometrics, and of skull ornamentation through surface analyses, produced a dataset that was analyzed using phylogenetic least-squares regression. The analyses reveal that none of the variables quantifying ornamentation, be they on the snout or the skull table, is correlated with the size of the specimens. Conversely, there is more disparity in the relationships between skull conformations (longirostrine vs. brevirostrine) and ornamentation. Indeed, both parameters GApit (i.e. pit depth and shape) and OArelat (i.e. relative area of the pit set) are negatively correlated with snout elongation, whereas none of the values quantifying ornamentation on the skull table is correlated with skull conformation. It can be concluded that bone sculpture on the snout is influenced by different developmental constraints than that on the skull table and is sensitive to differences in the local growth 'context' (allometric processes) prevailing in distinct skull parts. Whatever the functional role of bone ornamentation on the skull, if any, it seems to be restricted to some anatomical regions, at least in the longirostrine forms, which tend to lose ornamentation on the snout. © 2016 Anatomical Society.

  2. CP properties of symmetry-constrained two-Higgs-doublet models

    CERN Document Server

    Ferreira, P M; Nachtmann, O; Silva, Joao P

    2010-01-01

    The two-Higgs-doublet model can be constrained by imposing Higgs-family symmetries and/or generalized CP symmetries. It is known that there are only six independent classes of such symmetry-constrained models. We study the CP properties of all cases in the bilinear formalism. An exact symmetry implies CP conservation. We show that soft breaking of the symmetry can lead to spontaneous CP violation (CPV) in three of the classes.

  3. Modelling and Vibration Control of Beams with Partially Debonded Active Constrained Layer Damping Patch

    Science.gov (United States)

    SUN, D.; TONG, L.

    2002-05-01

    A detailed model for beams with partially debonded active constrained layer damping (ACLD) treatment is presented. In this model, the transverse displacement of the constraining layer is considered to be non-identical to that of the host structure. In the perfect bonding region, the viscoelastic core is modelled to carry both peel and shear stresses, while in the debonding area, it is assumed that no peel or shear stresses are transferred between the host beam and the constraining layer. The adhesive layer between the piezoelectric sensor and the host beam is also considered in this model. In active control, positive position feedback control is employed to control the first mode of the beam. Based on this model, the incompatibility of the transverse displacements of the active constraining layer and the host beam is investigated. The passive and active damping behaviours of the ACLD patch with different thicknesses, locations and lengths are examined. Moreover, the effects of debonding of the damping layer on both passive and active control are examined via a simulation example. The results show that the incompatibility of the transverse displacements is remarkable in the regions near the ends of the ACLD patch, especially for the higher-order vibration modes. It is found that a thinner damping layer may lead to larger shear strain and consequently results in larger passive and active damping. In addition to the thickness of the damping layer, its length and location are also key factors in the hybrid control. The numerical results reveal that edge debonding can lead to a reduction of both passive and active damping, and that the hybrid damping may be more sensitive to debonding of the damping layer than the passive damping.

  4. Mathematical Modeling of Constrained Hamiltonian Systems

    NARCIS (Netherlands)

    Schaft, A.J. van der; Maschke, B.M.

    1995-01-01

    Network modelling of unconstrained energy conserving physical systems leads to an intrinsic generalized Hamiltonian formulation of the dynamics. Constrained energy conserving physical systems are directly modelled as implicit Hamiltonian systems with regard to a generalized Dirac structure on the

  5. Screw Theory Based Singularity Analysis of Lower-Mobility Parallel Robots considering the Motion/Force Transmissibility and Constrainability

    Directory of Open Access Journals (Sweden)

    Xiang Chen

    2015-01-01

    Full Text Available Singularity is an inherent characteristic of parallel robots and is also a typical mathematical problem in engineering applications. In general, to identify a singular configuration, the singular solution in mathematics should be derived. This work introduces an alternative approach to the singularity identification of lower-mobility parallel robots considering the motion/force transmissibility and constrainability. The theory of screws is used as the mathematical tool to define the transmission and constraint indices of parallel robots. The singularity is hereby classified into four types concerning both input and output members of a parallel robot, that is, input transmission singularity, output transmission singularity, input constraint singularity, and output constraint singularity. Furthermore, we take several typical parallel robots as examples to illustrate the process of singularity analysis. In particular, the input and output constraint singularities, which are first proposed in this work, are depicted in detail. The results demonstrate that the method can not only identify all possible singular configurations but also explain their physical meanings. Therefore, the proposed approach is proved to be comprehensible and effective in solving singularity problems in parallel mechanisms.

  6. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    Full Text Available With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT’s IGRB, as long as their number density is not strongly peaked at recent times.

  7. Constrained bidirectional propagation and stroke segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Mori, S; Gillespie, W; Suen, C Y

    1983-03-01

    A new method for decomposing a complex figure into its constituent strokes is described. This method, based on constrained bidirectional propagation, is suitable for parallel processing. Examples of its application to the segmentation of Chinese characters are presented. 9 references.

  8. Multiple factor analysis by example using R

    CERN Document Server

    Pagès, Jérôme

    2014-01-01

    Multiple factor analysis (MFA) enables users to analyze tables of individuals and variables in which the variables are structured into quantitative, qualitative, or mixed groups. Written by the co-developer of this methodology, Multiple Factor Analysis by Example Using R brings together the theoretical and methodological aspects of MFA. It also includes examples of applications and details of how to implement MFA using an R package (FactoMineR).The first two chapters cover the basic factorial analysis methods of principal component analysis (PCA) and multiple correspondence analysis (MCA). The

  9. What Enables and Constrains the Inclusion of the Social Determinants of Health Inequities in Government Policy Agendas? A Narrative Review.

    Science.gov (United States)

    Baker, Phillip; Friel, Sharon; Kay, Adrian; Baum, Fran; Strazdins, Lyndall; Mackean, Tamara

    2017-11-11

    Despite decades of evidence gathering and calls for action, few countries have systematically attenuated health inequities (HI) through action on the social determinants of health (SDH). This is at least partly because doing so presents a significant political and policy challenge. This paper explores this challenge through a review of the empirical literature, asking: what factors have enabled and constrained the inclusion of the social determinants of health inequities (SDHI) in government policy agendas? A narrative review method was adopted involving three steps: first, drawing upon political science theories on agenda-setting, an integrated theoretical framework was developed to guide the review; second, a systematic search of scholarly databases for relevant literature; and third, qualitative analysis of the data and thematic synthesis of the results. Studies were included if they were empirical, met specified quality criteria, and identified factors that enabled or constrained the inclusion of the SDHI in government policy agendas. A total of 48 studies were included in the final synthesis, with studies spanning a number of country-contexts and jurisdictional settings, and employing a diversity of theoretical frameworks. Influential factors included the ways in which the SDHI were framed in public, media and political discourse; emerging data and evidence describing health inequalities; limited supporting evidence and misalignment of proposed solutions with existing policy and institutional arrangements; institutionalised norms and ideologies (ie, belief systems) that are antithetical to a SDH approach including neoliberalism, the medicalisation of health and racism; civil society mobilization; leadership; and changes in government. A complex set of interrelated, context-dependent and dynamic factors influence the inclusion or neglect of the SDHI in government policy agendas. It is better to think about these factors as increasing (or decreasing) the

  10. Active constrained layer damping treatments for shell structures: a deep-shell theory, some intuitive results, and an energy analysis

    Science.gov (United States)

    Shen, I. Y.

    1997-02-01

    This paper studies vibration control of a shell structure through use of an active constrained layer (ACL) damping treatment. A deep-shell theory that assumes arbitrary Lamé parameters is first developed. Application of Hamilton's principle leads to the governing Love equations, the charge equation of electrostatics, and the associated boundary conditions. The Love equations and boundary conditions imply that the control action of the ACL for shell treatments consists of two components: free-end boundary actuation and membrane actuation. The free-end boundary actuation is identical to that of beam and plate ACL treatments, while the membrane actuation is unique to shell treatments as a result of the curvatures of the shells. In particular, the membrane actuation may reinforce or counteract the boundary actuation, depending on the location of the ACL treatment. Finally, an energy analysis is developed to determine the proper control law that guarantees the stability of ACL shell treatments. Moreover, the energy analysis results in a simple rule predicting whether or not the membrane actuation reinforces the boundary actuation.

  11. Micro-economic analysis of the physical constrained markets: game theory application to competitive electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Bompard, E.; Ma, Y.C. [Politecnico di Torino, Dept. of Electrical Engineering, Torino (Italy); Ragazzi, E. [CERIS, Institute for Economic Research on Firms and Growth, CNR, National Research Council, Moncalieri, TO (Italy)

    2006-03-15

    Competition has been introduced in the electricity markets with the goal of reducing prices and improving efficiency. The basic idea behind this choice is that, in competitive markets, a greater quantity of the good is exchanged at a lower price, leading to higher market efficiency. Electricity markets differ from markets for other commodities mainly due to the physical constraints related to the network structure, which may impact the market performance. The network structure of the system on which the economic transactions need to be undertaken poses strict physical and operational constraints. Strategic interactions among producers that game the market with the objective of maximizing their producer surplus must be taken into account when modeling competitive electricity markets. The physical constraints, specific to the electricity markets, provide additional opportunities for gaming to the market players. Game theory provides a tool to model such a context. This paper discusses the application of game theory to physically constrained electricity markets with the goal of providing tools for assessing the market performance and pinpointing the critical network constraints that may impact the market efficiency. The basic models of game theory specifically designed to represent the electricity markets are presented. The IEEE 30-bus test system of the constrained electricity market is discussed to show the network impacts on the market performance in the presence of strategic bidding behavior of the producers. (authors)

  13. Reconsidering Cluster Bias in Multilevel Data: A Monte Carlo Comparison of Free and Constrained Baseline Approaches.

    Science.gov (United States)

    Guenole, Nigel

    2018-01-01

    The test for item-level cluster bias examines the improvement in model fit that results from freeing an item's between-level residual variance from a baseline model with equal within- and between-level factor loadings and between-level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, but the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach, where the unrestricted model includes only the restrictions needed for model identification, should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, compared to the free baseline approach, the constrained baseline approach led to similar true positive (power) rates but much higher false positive (Type I error) rates. The free baseline approach should be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of whether the free or constrained baseline approach was used. Neither the free nor the constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, as well as the R and short Python scripts used to execute this simulation study, are uploaded to an open access repository.
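
    The log-likelihood difference (likelihood ratio) test underlying both baseline approaches can be illustrated with a minimal one-parameter sketch. The model below (a Gaussian mean with known unit variance) and all function names are our own illustration, not the study's Mplus models:

```python
import math
import random

def loglik_normal(data, mu):
    """Gaussian log-likelihood with known unit variance."""
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (x - mu) ** 2 for x in data)

def lr_test_mean_zero(data):
    """H0 (constrained): mu = 0 vs H1 (free): mu unrestricted.

    Twice the log-likelihood difference is asymptotically chi-square with
    1 degree of freedom, whose tail probability is erfc(sqrt(stat / 2)).
    """
    mu_hat = sum(data) / len(data)  # MLE under the free model
    stat = 2 * (loglik_normal(data, mu_hat) - loglik_normal(data, 0.0))
    p_value = math.erfc(math.sqrt(max(stat, 0.0) / 2))
    return stat, p_value

random.seed(0)
sample = [random.gauss(1.0, 1.0) for _ in range(100)]  # true mean 1, so H0 is false
stat, p = lr_test_mean_zero(sample)
```

    The test statistic is only chi-square distributed when the less restricted model is correctly specified, which is exactly the assumption the study shows is violated by the constrained baseline under non-invariance.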

  14. Wronskian type solutions for the vector k-constrained KP hierarchy

    International Nuclear Information System (INIS)

    Zhang Youjin.

    1995-07-01

    Motivated by a relation of the 1-constrained Kadomtsev-Petviashvili (KP) hierarchy with the 2-component KP hierarchy, the tau-functions of the vector k-constrained KP hierarchy are constructed by using an analogue of the Baker-Akhiezer (m + 1)-point function. These tau-functions are expressed in terms of Wronskian-type determinants. (author). 20 refs

  15. Analysis of mineral phases in coal utilizing factor analysis

    International Nuclear Information System (INIS)

    Roscoe, B.A.; Hopke, P.K.

    1982-01-01

    The mineral phase inclusions of coal are discussed. The contributions of these to a coal sample are determined utilizing several techniques. Neutron activation analysis in conjunction with coal washability studies has produced some information on the general trends of elemental variation in the mineral phases. These results have been enhanced by the use of various statistical techniques. Target transformation factor analysis (TTFA) is specifically discussed and shown to be able to produce elemental profiles of the mineral phases in coal. A data set consisting of physically fractionated coal samples was generated. These samples were analyzed by neutron activation analysis and their elemental concentrations were then examined using TTFA. Information concerning the mineral phases in coal can thus be acquired from factor analysis even with limited data. Additional data may permit the resolution of additional mineral phases as well as refinement of those already identified.

  16. A constrained supersymmetric left-right model

    Energy Technology Data Exchange (ETDEWEB)

    Hirsch, Martin [AHEP Group, Instituto de Física Corpuscular - C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, Apartado 22085, E-46071 València (Spain); Krauss, Manuel E. [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Opferkuch, Toby [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Porod, Werner [Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Staub, Florian [Theory Division, CERN,1211 Geneva 23 (Switzerland)

    2016-03-02

    We present a supersymmetric left-right model which predicts gauge coupling unification close to the string scale and extra vector bosons at the TeV scale. The subtleties in constructing a model which is in agreement with the measured quark masses and mixing for such a low left-right breaking scale are discussed. It is shown that in the constrained version of this model radiative breaking of the gauge symmetries is possible and a SM-like Higgs is obtained. Additional CP-even scalars of a similar mass or even much lighter are possible. The expected mass hierarchies for the supersymmetric states differ clearly from those of the constrained MSSM. In particular, the lightest down-type squark, which is a mixture of the sbottom and extra vector-like states, is always lighter than the stop. We also comment on the model’s capability to explain current anomalies observed at the LHC.

  17. A Constrained Algorithm Based NMFα for Image Representation

    Directory of Open Access Journals (Sweden)

    Chenxue Yang

    2014-01-01

    Full Text Available Nonnegative matrix factorization (NMF) is a useful tool in learning a basic representation of image data. However, its performance and applicability in real scenarios are limited because of the lack of image information. In this paper, we propose a constrained matrix decomposition algorithm for image representation which contains parameters associated with the characteristics of image data sets. In particular, we impose label information as additional hard constraints on the α-divergence-NMF unsupervised learning algorithm. The resulting algorithm is derived by using Karush-Kuhn-Tucker (KKT) conditions as well as the projected gradient, and its monotonic local convergence is proved by using auxiliary functions. In addition, we provide a method to select the parameters of our semisupervised matrix decomposition algorithm in the experiment. Compared with the state-of-the-art approaches, our method with the parameters has the best classification accuracy on three image data sets.
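
    The unconstrained baseline that such label constraints modify can be sketched in a few lines. Below is plain NMF with the classical Lee-Seung multiplicative updates for the Frobenius objective; the paper itself works with the α-divergence and adds hard label constraints, so this simplified variant and its names are our own illustration:

```python
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9, seed=0):
    """Factor V ≈ W @ H with W, H >= 0 (Lee-Seung multiplicative updates)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        # Multiplicative updates: ratios of nonnegative terms, so
        # W and H stay elementwise nonnegative throughout.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
V = rng.random((12, 8))            # toy "image" data: 12 samples, 8 features
W, H = nmf(V, rank=4)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    A semisupervised variant along the paper's lines would additionally pin parts of the factors according to known class labels, restricting which basic representations the updates may reach.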

  18. Comparison of preconditioned Krylov subspace iteration methods for PDE-constrained optimization problems - Poisson and convection-diffusion control

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Farouq, S.; Neytcheva, M.

    2016-01-01

    Roč. 73, č. 3 (2016), s. 631-633 ISSN 1017-1398 R&D Projects: GA MŠk ED1.1.00/02.0070 Institutional support: RVO:68145535 Keywords : PDE-constrained optimization problems * finite elements * iterative solution methods Subject RIV: BA - General Mathematics Impact factor: 1.241, year: 2016 http://link.springer.com/article/10.1007%2Fs11075-016-0111-1

  19. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    Science.gov (United States)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is found to be valid and to have good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.

  20. A multi-objective improved teaching-learning based optimization algorithm for unconstrained and constrained optimization problems

    Directory of Open Access Journals (Sweden)

    R. Venkata Rao

    2014-01-01

    Full Text Available The present work proposes a multi-objective improved teaching-learning based optimization (MO-ITLBO) algorithm for unconstrained and constrained multi-objective function optimization. The MO-ITLBO algorithm is an improved version of the basic teaching-learning based optimization (TLBO) algorithm adapted for multi-objective problems. The basic TLBO algorithm is improved to enhance its exploration and exploitation capacities by introducing the concepts of a number of teachers, an adaptive teaching factor, tutorial training and self-motivated learning. The MO-ITLBO algorithm uses a grid-based approach to adaptively assess the non-dominated solutions (i.e., the Pareto front) maintained in an external archive. The performance of the MO-ITLBO algorithm is assessed by implementing it on the unconstrained and constrained test problems proposed for the Congress on Evolutionary Computation 2009 (CEC 2009) competition. The performance assessment is done by using the inverted generational distance (IGD) measure. The IGD measures obtained by using the MO-ITLBO algorithm are compared with the IGD measures of the other state-of-the-art algorithms available in the literature. Finally, lexicographic ordering is used to assess the overall performance of the competitive algorithms. Results show that the proposed MO-ITLBO algorithm obtained the 1st rank in the optimization of unconstrained test functions and the 3rd rank in the optimization of constrained test functions.
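
    The inverted generational distance used for the assessment is simple to state: the average distance from each point of the true (reference) Pareto front to the nearest point of the obtained approximation set, so lower is better. A minimal sketch, with a toy bi-objective front of our own invention:

```python
import math

def igd(reference_front, approx_set):
    """Mean Euclidean distance from each reference point to its
    nearest neighbor in the approximation set (lower is better)."""
    return sum(min(math.dist(r, a) for a in approx_set)
               for r in reference_front) / len(reference_front)

# Toy bi-objective reference front: the line f2 = 1 - f1.
ref = [(i / 10, 1 - i / 10) for i in range(11)]
close = [(i / 10, 1 - i / 10 + 0.01) for i in range(11)]  # near the front
sparse = [(0.0, 1.5), (1.0, 0.5)]                         # far and sparse
```

    Because every reference point must find a nearby approximation point, IGD penalizes both poor convergence and poor spread, which is why it serves as a single-number score in comparisons like CEC 2009.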

  1. Followee recommendation in microblog using matrix factorization model with structural regularization.

    Science.gov (United States)

    Yu, Yan; Qiu, Robin G

    2014-01-01

    Microblogs, which provide a new communication and information-sharing platform, have been growing exponentially since they emerged just a few years ago. To microblog users, recommending followees who can serve as high-quality information sources is a competitive service. To address this problem, in this paper we propose a matrix factorization model with structural regularization to improve the accuracy of followee recommendation in microblogs. More specifically, we adapt the matrix factorization model used in traditional item recommender systems to followee recommendation in microblogs and use structural regularization to exploit the structure information of the social network to constrain the matrix factorization model. Experimental analysis on a real-world dataset shows that our proposed model is promising.
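
    A hedged sketch of the general idea, not the authors' exact model or objective: factorize the user-item matrix by stochastic gradient descent, and add a structural-regularization step that pulls each user's latent vector toward the average of the users they follow. The function names, toy data and specific regularizer below are our own illustration:

```python
import numpy as np

def mf_structural(R, links, rank=4, lr=0.01, lam=0.05, beta=0.1,
                  iters=500, seed=0):
    """SGD matrix factorization R ≈ U @ V.T with a social regularizer.

    R: user x item matrix, 0 marks a missing entry.
    links: dict mapping a user to the list of users they follow.
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    observed = [(u, i) for u in range(n_users)
                for i in range(n_items) if R[u, i] > 0]
    for _ in range(iters):
        for u, i in observed:                   # plain MF gradient steps
            err = R[u, i] - U[u] @ V[i]
            U[u] += lr * (err * V[i] - lam * U[u])
            V[i] += lr * (err * U[u] - lam * V[i])
        for u, followees in links.items():      # structural regularization:
            if followees:                       # pull U[u] toward followees' mean
                U[u] -= lr * beta * (U[u] - U[followees].mean(axis=0))
    return U, V

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)
links = {0: [1], 1: [0], 2: [3], 3: [2, 4], 4: [3]}
U, V = mf_structural(R, links)
```

    The extra step encodes the assumption that a user's taste resembles that of the users they follow, which is the kind of structure information the paper exploits to constrain the factorization.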

  2. Modeling Dynamic Contrast-Enhanced MRI Data with a Constrained Local AIF

    DEFF Research Database (Denmark)

    Duan, Chong; Kallehauge, Jesper F.; Pérez-Torres, Carlos J

    2018-01-01

    PURPOSE: This study aims to develop a constrained local arterial input function (cL-AIF) to improve quantitative analysis of dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) data by accounting for the contrast-agent bolus amplitude error in the voxel-specific AIF. PROCEDURES....... RESULTS: When the data model included the cL-AIF, tracer kinetic parameters were correctly estimated from in silico data under contrast-to-noise conditions typical of clinical DCE-MRI experiments. Considering the clinical cervical cancer data, Bayesian model selection was performed for all tumor voxels...

  3. New Exact Penalty Functions for Nonlinear Constrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Bingzhuang Liu

    2014-01-01

    Full Text Available For two kinds of nonlinear constrained optimization problems, we propose two simple penalty functions, respectively, by augmenting the dimension of the primal problem with a variable that controls the weight of the penalty terms. Both of the penalty functions enjoy improved smoothness. Under mild conditions, it can be proved that our penalty functions are both exact in the sense that local minimizers of the associated penalty problem are precisely the local minimizers of the original constrained problem.
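
    The defining property of an exact penalty, which the paper's smoother construction aims to retain, is easy to demonstrate with the classical non-smooth l1 penalty: once the penalty weight exceeds a finite threshold, the unconstrained minimizer coincides exactly with the constrained one, rather than approaching it only as the weight grows. A small sketch; the problem and names are our own illustration, not the paper's functions:

```python
def f(x):
    return (x - 2.0) ** 2        # objective: unconstrained minimum at x = 2

def g(x):
    return x - 1.0               # constraint g(x) <= 0, i.e. x <= 1

def l1_penalized(x, c):
    """Classical non-smooth exact penalty: f(x) + c * max(0, g(x))."""
    return f(x) + c * max(0.0, g(x))

def argmin_grid(func, lo=-3.0, hi=3.0, steps=60001):
    """Brute-force minimizer on a fine grid (adequate for a 1-D sketch)."""
    xs = [lo + (hi - lo) * k / (steps - 1) for k in range(steps)]
    return min(xs, key=func)

x_large_c = argmin_grid(lambda x: l1_penalized(x, c=5.0))  # c above threshold
x_small_c = argmin_grid(lambda x: l1_penalized(x, c=1.0))  # c below threshold
```

    Here the threshold is c > |f'(1)| = 2: with c = 5 the penalized minimizer sits exactly at the constrained solution x* = 1, while with c = 1 it drifts to x = 1.5 and violates the constraint. The price of exactness is the kink in max(0, ·), which is the smoothness issue the paper's augmented-variable construction addresses.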

  4. Constrain the SED Type of Unidentified Fermi Objects

    Directory of Open Access Journals (Sweden)

    An-Li Tsai

    2013-09-01

    Full Text Available 2FGL J1823.8+4312 and 2FGL J1304.1-2415 are two unidentified Fermi objects which are associated with clusters of galaxies. In order to examine the possibility of clusters of galaxies being gamma-ray emitters, we searched for counterparts of these two unidentified Fermi objects in other wavebands. However, for both sources we find other candidates that are more likely to be the counterpart of the unidentified Fermi object. We compare their light curves and SEDs in order to identify their source types. However, data at millimeter and sub-millimeter wavebands, which are important for constraining the SED at the synchrotron peak, are lacking. Therefore, we proposed SMA observations of these two sources. We have obtained the data and are carrying out further analysis.

  5. Bounds on the Capacity of Weakly constrained two-dimensional Codes

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2002-01-01

    Upper and lower bounds are presented for the capacity of weakly constrained two-dimensional codes. As an example, the maximum entropy is calculated for two simple models of 2-D codes constraining the probability of neighboring 1s. For given models of the coded data, upper and lower bounds on the capacity for 2-D channel models based on occurrences of neighboring 1s are considered.
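
    For comparison with the 2-D case treated above, the capacity of a hard 1-D constraint has a closed form: it is the base-2 logarithm of the largest (Perron) eigenvalue of the constraint's transfer matrix. The sketch below, our own illustration, computes it for the classical no-two-adjacent-1s runlength constraint, giving log2 of the golden ratio ≈ 0.694 bits per symbol; weak and 2-D constraints admit no such simple formula, hence the need for bounds:

```python
import math

# Transfer matrix of the 1-D "no two adjacent 1s" constraint.
# State = previous bit; entry [s][b] = 1 if bit b may follow state s.
T = [[1, 1],   # after a 0, both 0 and 1 are allowed
     [1, 0]]   # after a 1, only 0 is allowed

def perron_eigenvalue(M, iters=200):
    """Power iteration for the largest eigenvalue of a nonnegative matrix."""
    v = [1.0] * len(M)
    lam = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]
        lam = max(w)               # converges to the Perron eigenvalue
        v = [x / lam for x in w]   # renormalize to keep iterates bounded
    return lam

capacity = math.log2(perron_eigenvalue(T))  # bits per symbol
```

    The eigenvalue here is the golden ratio (1 + √5)/2, so the capacity equals log2(1.618...) ≈ 0.6942.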

  6. Physics constrained nonlinear regression models for time series

    International Nuclear Information System (INIS)

    Majda, Andrew J; Harlim, John

    2013-01-01

    A central issue in contemporary science is the development of data driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models are introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models as well as filtering algorithms for their implementation are developed here. Data driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation through Markov chain Monte Carlo algorithms of low frequency behaviour in complex physical data. (paper)

  7. Bidirectional Dynamic Diversity Evolutionary Algorithm for Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Weishang Gao

    2013-01-01

    Full Text Available Evolutionary algorithms (EAs) have been shown to be effective for complex constrained optimization problems. However, inflexible exploration-exploitation and improper penalties in EAs with penalty functions can lead to losing the global optimum nearby or on the constrained boundary. Determining an appropriate penalty coefficient is also difficult in most studies. In this paper, we propose a bidirectional dynamic diversity evolutionary algorithm (Bi-DDEA) with multiagents guiding exploration-exploitation through local extrema to the global optimum in suitable steps. In Bi-DDEA, potential advantage is detected by three kinds of agents. The scale and the density of agents change dynamically according to the emergence of potential optimal areas, which plays an important role in flexible exploration-exploitation. Meanwhile, a novel double optimum estimation strategy with objective fitness and penalty fitness is suggested to compute, respectively, the dominance trend of agents in the feasible region and the forbidden region. This bidirectional evolution with multiagents can not only effectively avoid the problem of determining the penalty coefficient but also quickly converge to the global optimum nearby or on the constrained boundary. By examining the rapidity and veracity of Bi-DDEA across benchmark functions, the proposed method is shown to be effective.

  8. Risk-constrained self-scheduling of a fuel and emission constrained power producer using rolling window procedure

    International Nuclear Information System (INIS)

    Kazempour, S. Jalal; Moghaddam, Mohsen Parsa

    2011-01-01

    This work addresses a relevant methodology for self-scheduling of a price-taker fuel and emission constrained power producer in day-ahead correlated energy, spinning reserve and fuel markets to achieve a trade-off between the expected profit and the risk versus different risk levels based on Markowitz's seminal work in the area of portfolio selection. Here, a set of uncertainties including price forecasting errors and available fuel uncertainty are considered. The latter uncertainty arises because of uncertainties in being called for reserve deployment in the spinning reserve market and availability of power plant. To tackle the price forecasting errors, variances of energy, spinning reserve and fuel prices along with their covariances which are due to markets correlation are taken into account using relevant historical data. In order to tackle available fuel uncertainty, a framework for self-scheduling referred to as rolling window is proposed. This risk-constrained self-scheduling framework is therefore formulated and solved as a mixed-integer non-linear programming problem. Furthermore, numerical results for a case study are discussed. (author)

  9. Analysis of major risk factors affecting those working in the agrarian sector (based on a sociological survey).

    Science.gov (United States)

    Krekoten, Olena M; Dereziuk, Anatolii V; Ihnaschuk, Olena V; Holovchanska, Svitlana E

    Issues related to labour potential, its state and problems have consistently been a focus of attention for the International Labour Organisation (ILO). Its respective analysis shows that labour potential problems remain unresolved in many countries of the world. According to the World Health Organisation (WHO), adverse working conditions are among the major factors of occupational disease development in Europe and the reason for disabilities of the economically active population during 2.5% of their lifetime. The aim of the present study is to identify and analyse major risk factors which have a bearing on people working in agriculture in the course of exercising their occupation, taking account of the forms of ownership of agricultural enterprises. A cross-sectional study was carried out involving a sociological survey of 412 respondents working in agriculture, who made up the primary group and the control group. The study revealed 21 risk factors, 9 of which were work-related. A modified elementary cybernetic model of studying impact efficiency was developed with the view of carrying out a structural analysis of the sample group and choosing relevant methodological approaches. It has been established that harmful factors related to the working environment and one's lifestyle are decisive in the agrarian sector, particularly for workers of privately owned businesses. For one out of three respondents, harmful working conditions manifested themselves as industrial noise (31.7±3.4), vibration (29.0±2.1), and trunk bending and constrained working posture (36.6±3.4). The vast majority of agricultural workers (91.6±2.5) admitted they could not afford proper rest during their annual leave; male respondents abused alcohol (70.6±3.0) and smoking (41.4±2.0 per 100 workers). The research established the structure of risk factors, which is sequentially represented by the following groups: behavioral (smoking, drinking of alcohol, rest during annual leave, physical culture), working

  10. Constraining spatial variations of the fine-structure constant in symmetron models

    Directory of Open Access Journals (Sweden)

    A.M.M. Pinho

    2017-06-01

    Full Text Available We introduce a methodology to test models with spatial variations of the fine-structure constant α, based on the calculation of the angular power spectrum of these measurements. This methodology enables comparisons of observations and theoretical models through their predictions on the statistics of the α variation. Here we apply it to the case of symmetron models. We find no indications of deviations from the standard behavior, with current data providing an upper limit to the strength of the symmetron coupling to gravity (log β² < −0.9) when this is the only free parameter, and not able to constrain the model when the symmetry breaking scale factor aSSB is also free to vary.

  11. The constrained control of force and position in multi-joint movements.

    Science.gov (United States)

    van Ingen Schenau, G J; Boots, P J; de Groot, G; Snackers, R J; van Woensel, W W

    1992-01-01

    In many arm or leg movements the hand or foot has to exert an external force on the environment. Based on an inverse dynamical analysis of cycling, it is shown that the distribution of net moments in the joints needed to control the direction of the external force is often opposite to the direction of joint displacements associated with this task. Kinetic and kinematic data were obtained from five experienced cyclists during ergometer cycling by means of film analysis and pedal force measurement. An inverse dynamic analysis, based on a linked segments model, yielded net joint moments, joint powers and muscle shortening velocities of eight leg muscles. Activation patterns of the muscles were obtained by means of surface electromyography. The results show that the transfer of rotations in hip, knee and ankle joints into the translation of the pedal is constrained by conflicting requirements. This occurs between the joint moments necessary to contribute to joint power and the moments necessary to establish a direction of the force on the pedal which allows this force to do work on the pedal. Co-activation of mono-articular agonists and their bi-articular antagonists appears to provide a unique solution for these conflicting requirements: bi-articular muscles appear to be able to control the desired direction of the external force on the pedal by adjusting the relative distribution of net moments over the joints, while mono-articular muscles appear to be primarily activated when they are in the position to shorten and thus to contribute to positive work. Examples are given to illustrate the universal nature of this constrained control of force (external) and position (joint). Based on this study and published data, it is suggested that different processes may underlie the organization of the control of mono- and bi-articular muscles.

  12. Constrained systems described by Nambu mechanics

    International Nuclear Information System (INIS)

    Lassig, C.C.; Joshi, G.C.

    1996-01-01

    Using the framework of Nambu's generalised mechanics, we obtain a new description of constrained Hamiltonian dynamics, involving the introduction of another degree of freedom in phase space, and the necessity of defining the action integral on a world sheet. We also discuss the problem of quantizing Nambu mechanics. (authors). 5 refs

  13. Neuroevolutionary Constrained Optimization for Content Creation

    DEFF Research Database (Denmark)

    Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian

    2011-01-01

    and thruster types and topologies) independently of game physics and steering strategies. According to the proposed framework, the designer picks a set of requirements for the spaceship that a constrained optimizer attempts to satisfy. The constraint satisfaction approach followed is based on neuroevolution...... and survival tasks and are also visually appealing....

  14. Grouping puts figure-ground assignment in context by constraining propagation of edge assignment.

    Science.gov (United States)

    Brooks, Joseph L; Driver, Jon

    2010-05-01

    Figure-ground organization involves the assignment of edges to a figural shape on one or the other side of each dividing edge. Established visual cues for edge assignment primarily concern relatively local rather than contextual factors. In the present article, we show that an assignment for a locally unbiased edge can be affected by an assignment of a remote contextual edge that has its own locally biased assignment. We find that such propagation of edge assignment from the biased remote context occurs only when the biased and unbiased edges are grouped. This new principle, whereby grouping constrains the propagation of figural edge assignment, emerges from both subjective reports and an objective short-term edge-matching task. It generalizes from moving displays involving grouping by common fate and collinearity, to static displays with grouping by similarity of edge-contrast polarity, or apparent occlusion. Our results identify a new contextual influence on edge assignment. They also identify a new mechanistic relation between grouping and figure-ground processes, whereby grouping between remote elements can constrain the propagation of edge assignment between those elements. Supplemental materials for this article may be downloaded from http://app.psychonomic-journals.org/content/supplemental.

  15. Preparation and biological evaluation of conformationally constrained BACE1 inhibitors.

    Science.gov (United States)

    Winneroski, Leonard L; Schiffler, Matthew A; Erickson, Jon A; May, Patrick C; Monk, Scott A; Timm, David E; Audia, James E; Beck, James P; Boggs, Leonard N; Borders, Anthony R; Boyer, Robert D; Brier, Richard A; Hudziak, Kevin J; Klimkowski, Valentine J; Garcia Losada, Pablo; Mathes, Brian M; Stout, Stephanie L; Watson, Brian M; Mergott, Dustin J

    2015-07-01

    The BACE1 enzyme is a key target for Alzheimer's disease. During our BACE1 research efforts, fragment screening revealed that bicyclic thiazine 3 had low millimolar activity against BACE1. Analysis of the co-crystal structure of 3 suggested that potency could be increased through extension toward the S3 pocket and through conformational constraint of the thiazine core. Pursuit of S3-binding groups produced low micromolar inhibitor 6, which informed the S3-design for constrained analogs 7 and 8, themselves prepared via independent, multi-step synthetic routes. Biological characterization of BACE inhibitors 6-8 is described. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio when the total number of parameters is h and 2 of these are constrained and correlated.

  17. On the convergence of the dynamic series solution of a constrained ...

    African Journals Online (AJOL)

    The one dimensional problem of analysing the dynamic behaviour of an elevated water tower with elastic deflection–control device and subjected to a dynamic load was examined in [2]. The constrained elastic system was modeled as a column carrying a concentrated mass at its top and elastically constrained at a point ...

  18. Exploring the Metabolic and Perceptual Correlates of Self-Selected Walking Speed under Constrained and Un-Constrained Conditions

    Directory of Open Access Journals (Sweden)

    David T Godsiff, Shelly Coe, Charlotte Elsworth-Edelsten, Johnny Collett, Ken Howells, Martyn Morris, Helen Dawes

    2018-03-01

    Full Text Available Mechanisms underpinning self-selected walking speed (SSWS) are poorly understood. The present study investigated the extent to which SSWS is related to metabolism, energy cost, and/or perceptual parameters during both normal and artificially constrained walking. Fourteen participants with no pathology affecting gait were tested under standard conditions. Subjects walked on a motorized treadmill at speeds derived from their SSWS as a continuous protocol. RPE scores (CR10) and expired air to calculate energy cost (J.kg-1.m-1) and carbohydrate (CHO) oxidation rate (J.kg-1.min-1) were collected during minutes 3-4 at each speed. Eight individuals were re-tested under the same conditions within one week with a hip and knee-brace to immobilize their right leg. Deflection in RPE scores (CR10) and CHO oxidation rate (J.kg-1.min-1) was not related to SSWS (five and three people had deflections in the defined range of SSWS in constrained and unconstrained conditions, respectively; p > 0.05). Constrained walking elicited a higher energy cost (J.kg-1.m-1) and slower SSWS (p < 0.05). SSWS did not occur at a minimum energy cost (J.kg-1.m-1) in either condition; however, the size of the minimum energy cost to SSWS disparity was the same (Froude {Fr} = 0.09) in both conditions (p = 0.36). Perceptions of exertion can modify walking patterns, and therefore SSWS and metabolism/energy cost are not directly related. Strategies which minimize perceived exertion may enable faster walking in people with altered gait, as our findings indicate they should self-optimize to the same extent under different conditions.

  19. Color constrains depth in da Vinci stereopsis for camouflage but not occlusion.

    Science.gov (United States)

    Wardle, Susan G; Gillam, Barbara J

    2013-12-01

    Monocular regions that occur with binocular viewing of natural scenes can produce a strong perception of depth--"da Vinci stereopsis." They occur either when part of the background is occluded in one eye, or when a nearer object is camouflaged against a background surface in one eye's view. There has been some controversy over whether da Vinci depth is constrained by geometric or ecological factors. Here we show that the color of the monocular region constrains the depth perceived from camouflage, but not occlusion, as predicted by ecological considerations. Quantitative depth was found in both cases, but for camouflage only when the color of the monocular region matched the binocular background. Unlike previous reports, depth failed even when nonmatching colors satisfied conditions for perceptual transparency. We show that placing a colored line at the boundary between the binocular and monocular regions is sufficient to eliminate depth from camouflage. When both the background and the monocular region contained vertical contours that could be fused, some observers appeared to use fusion, and others da Vinci constraints, supporting the existence of a separate da Vinci mechanism. The results show that da Vinci stereopsis incorporates color constraints and is more complex than previously assumed.

  20. Priority classes and weighted constrained equal awards rules for the claims problem

    DEFF Research Database (Denmark)

    Szwagrzak, Karol

    2015-01-01

    . They are priority-augmented versions of the standard weighted constrained equal awards rules, also known as weighted gains methods (Moulin, 2000): individuals are sorted into priority classes; the resource is distributed among the individuals in the first priority class using a weighted constrained equal awards...... rule; if some of the resource is left over, then it is distributed among the individuals in the second priority class, again using a weighted constrained equal awards rule; the distribution carries on in this way until the resource is exhausted. Our characterization extends to a generalized version...
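    The class-by-class distribution described in this record can be sketched directly. The following is a minimal illustration, not the authors' code: the function names are hypothetical and the bisection tolerance is an implementation choice, not part of the rule.

    ```python
    def weighted_cea(claims, weights, amount):
        """Weighted constrained equal awards: claimant i receives
        min(claim_i, lam * weight_i), with lam chosen by bisection so
        the awards sum to `amount` (assumed <= sum of claims)."""
        lo, hi = 0.0, max(c / w for c, w in zip(claims, weights))
        for _ in range(100):
            lam = (lo + hi) / 2.0
            if sum(min(c, lam * w) for c, w in zip(claims, weights)) < amount:
                lo = lam
            else:
                hi = lam
        return [min(c, lam * w) for c, w in zip(claims, weights)]

    def priority_weighted_cea(classes, amount):
        """classes: priority-ordered list of [(claim, weight), ...] lists.
        Fully compensate each class in turn; the first class that cannot
        be fully compensated splits the remainder by weighted CEA."""
        awards = []
        for cls in classes:
            claims = [c for c, _ in cls]
            demand = sum(claims)
            if amount >= demand:
                awards.append(claims)      # class fully compensated
                amount -= demand
            else:
                awards.append(weighted_cea(claims, [w for _, w in cls], amount))
                amount = 0.0
        return awards
    ```

    For example, with first-class claims of 10 and 10 and a second-class claim of 10, an endowment of 25 fully pays the first class and leaves 5 to the second, exactly as the rule prescribes.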

  1. Node Discovery and Interpretation in Unstructured Resource-Constrained Environments

    DEFF Research Database (Denmark)

    Gechev, Miroslav; Kasabova, Slavyana; Mihovska, Albena D.

    2014-01-01

    for the discovery, linking and interpretation of nodes in unstructured and resource-constrained network environments and their interrelated and collective use for the delivery of smart services. The model is based on a basic mathematical approach, which describes and predicts the success of human interactions...... in the context of long-term relationships and identifies several key variables in the context of communications in resource-constrained environments. The general theoretical model is described and several algorithms are proposed as part of the node discovery, identification, and linking processes in relation...

  2. Logistic regression analysis of risk factors for postoperative recurrence of spinal tumors and analysis of prognostic factors.

    Science.gov (United States)

    Zhang, Shanyong; Yang, Lili; Peng, Chuangang; Wu, Minfei

    2018-02-01

    The aim of the present study was to investigate the risk factors for postoperative recurrence of spinal tumors by logistic regression analysis and analysis of prognostic factors. In total, 77 male and 48 female patients with spinal tumor were selected in our hospital from January, 2010 to December, 2015 and divided into the benign (n=76) and malignant (n=49) groups. All the patients underwent microsurgical resection of spinal tumors and were reviewed regularly 3 months after operation. The McCormick grading system was used to evaluate the postoperative spinal cord function. Data were subjected to statistical analysis. Of the 125 cases, 63 cases showed improvement after operation, 50 cases were stable, and deterioration was found in 12 cases. The improvement rate of patients with cervical spine tumor, which reached 56.3%, was the highest. Fifty-two cases of sensory disturbance, 34 cases of pain, 30 cases of inability to exercise, 26 cases of ataxia, and 12 cases of sphincter disorders were found after operation. Seventy-two cases (57.6%) underwent total resection, 18 cases (14.4%) received subtotal resection, 23 cases (18.4%) received partial resection, and 12 cases (9.6%) were only treated with biopsy/decompression. Postoperative recurrence was found in 57 cases (45.6%). The mean recurrence time of patients in the malignant group was 27.49±6.09 months, and the mean recurrence time of patients in the benign group was 40.62±4.34 months; the difference was significant (P<0.05). Regression analysis of total resection-related factors showed that total resection should be the preferred treatment for patients with benign tumors, thoracic and lumbosacral tumors, and lower McCormick grade, as well as patients without syringomyelia and intramedullary tumors. Logistic regression analysis of recurrence-related factors revealed that the recurrence rate was relatively higher in patients with malignant, cervical, thoracic and lumbosacral, intramedullary tumors, and higher Mc

  3. Small-kernel constrained-least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-10-01

    Constrained least-squares image restoration, first proposed by Hunt twenty years ago, is a linear image restoration technique in which the restoration filter is derived by maximizing the smoothness of the restored image while satisfying a fidelity constraint related to how well the restored image matches the actual data. The traditional derivation and implementation of the constrained least-squares restoration filter is based on an incomplete discrete/discrete system model which does not account for the effects of spatial sampling and image reconstruction. For many imaging systems, these effects are significant and should not be ignored. In a recent paper Park demonstrated that a derivation of the Wiener filter based on the incomplete discrete/discrete model can be extended to a more comprehensive end-to-end, continuous/discrete/continuous model. In a similar way, in this paper, we show that a derivation of the constrained least-squares filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model and, by so doing, an improved restoration filter is derived. Building on previous work by Reichenbach and Park for the Wiener filter, we also show that this improved constrained least-squares restoration filter can be efficiently implemented as a small-kernel convolution in the spatial domain.
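    The discrete/discrete form of the filter discussed above has a compact frequency-domain statement: R = H*/(|H|² + γ|C|²), where H is the blur transfer function and C a discrete Laplacian enforcing smoothness. A minimal NumPy sketch of that classical form follows (the γ value, PSF, and function name are illustrative assumptions, not the paper's continuous/discrete/continuous filter):

    ```python
    import numpy as np

    def cls_restore(blurred, psf, gamma=0.01):
        """Constrained least-squares restoration, discrete/discrete model:
        apply R = conj(H) / (|H|^2 + gamma*|C|^2) in the frequency domain,
        where C is the periodic discrete Laplacian (smoothness constraint)."""
        shape = blurred.shape
        H = np.fft.fft2(psf, s=shape)
        lap = np.zeros(shape)                       # 2D Laplacian kernel,
        lap[0, 0] = 4.0                             # wrapped for periodicity
        lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
        C = np.fft.fft2(lap)
        R = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(C) ** 2)
        return np.real(np.fft.ifft2(R * np.fft.fft2(blurred)))
    ```

    With γ = 0 this reduces to the inverse filter; increasing γ trades fidelity for smoothness, which is exactly the tension the fidelity constraint in the abstract controls.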

  4. A penalty method for PDE-constrained optimization in inverse problems

    International Nuclear Information System (INIS)

    Leeuwen, T van; Herrmann, F J

    2016-01-01

    Many inverse and parameter estimation problems can be written as PDE-constrained optimization problems. The goal is to infer the parameters, typically coefficients of the PDE, from partial measurements of the solutions of the PDE for several right-hand sides. Such PDE-constrained problems can be solved by finding a stationary point of the Lagrangian, which entails simultaneously updating the parameters and the (adjoint) state variables. For large-scale problems, such an all-at-once approach is not feasible as it requires storing all the state variables. In this case one usually resorts to a reduced approach where the constraints are explicitly eliminated (at each iteration) by solving the PDEs. These two approaches, and variations thereof, are the main workhorses for solving PDE-constrained optimization problems arising from inverse problems. In this paper, we present an alternative method that aims to combine the advantages of both approaches. Our method is based on a quadratic penalty formulation of the constrained optimization problem. By eliminating the state variable, we develop an efficient algorithm that has roughly the same computational complexity as the conventional reduced approach while exploiting a larger search space. Numerical results show that this method indeed reduces some of the nonlinearity of the problem and is less sensitive to the initial iterate. (paper)
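    On a toy equality-constrained quadratic (a two-variable stand-in for the PDE constraint, chosen purely for illustration), the quadratic penalty formulation the paper builds on looks like this:

    ```python
    import numpy as np

    # Toy analogue of the PDE-constrained problem: min ||x||^2  s.t.  A x = b.
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])

    def quadratic_penalty_path(rhos=(1.0, 10.0, 100.0, 1000.0)):
        """For each penalty weight rho, minimize ||x||^2 + (rho/2)||Ax - b||^2.
        The minimizer solves the linear system (2I + rho A^T A) x = rho A^T b;
        as rho grows, x approaches the constrained solution (0.5, 0.5)."""
        return [np.linalg.solve(2.0 * np.eye(2) + rho * (A.T @ A), rho * A.T @ b)
                for rho in rhos]
    ```

    The constraint is only satisfied in the limit of large ρ, but each subproblem is unconstrained and smooth, which is the trade the paper exploits at scale by eliminating the state variable from the penalized objective.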

  5. EXPLORATORY FACTOR ANALYSIS (EFA) IN CONSUMER BEHAVIOR AND MARKETING RESEARCH

    Directory of Open Access Journals (Sweden)

    Marcos Pascual Soler

    2012-06-01

    Full Text Available Exploratory Factor Analysis (EFA) is one of the most widely used statistical procedures in social research. The main objective of this work is to describe the most common practices used by researchers in the consumer behavior and marketing area. Through a literature review methodology, the practices of EFA in five consumer behavior and marketing journals (2000-2010) were analyzed. Then, the choices made by the researchers concerning factor model, retention criteria, rotation, factor interpretation and other issues relevant to factor analysis were analyzed. The results suggest that researchers routinely conduct analyses using questionable methods. Suggestions for improving the use of factor analysis and the reporting of results are presented, and a checklist (Exploratory Factor Analysis Checklist, EFAC) is provided to help editors, reviewers, and authors improve the reporting of exploratory factor analysis.

  6. Physics Metacognition Inventory Part II: Confirmatory factor analysis and Rasch analysis

    Science.gov (United States)

    Taasoobshirazi, Gita; Bailey, MarLynn; Farley, John

    2015-11-01

    The Physics Metacognition Inventory was developed to measure physics students' metacognition for problem solving. In one of our earlier studies, an exploratory factor analysis provided evidence of preliminary construct validity, revealing six components of students' metacognition when solving physics problems including knowledge of cognition, planning, monitoring, evaluation, debugging, and information management. The college students' scores on the inventory were found to be reliable and related to students' physics motivation and physics grade. However, the results of the exploratory factor analysis indicated that the questionnaire could be revised to improve its construct validity. The goal of this study was to revise the questionnaire and establish its construct validity through a confirmatory factor analysis. In addition, a Rasch analysis was applied to the data to better understand the psychometric properties of the inventory and to further evaluate the construct validity. Results indicated that the final, revised inventory is a valid, reliable, and efficient tool for assessing student metacognition for physics problem solving.

  7. Communication Schemes with Constrained Reordering of Resources

    DEFF Research Database (Denmark)

    Popovski, Petar; Utkovski, Zoran; Trillingsgaard, Kasper Fløe

    2013-01-01

    This paper introduces a communication model inspired by two practical scenarios. The first scenario is related to the concept of protocol coding, where information is encoded in the actions taken by an existing communication protocol. We investigate strategies for protocol coding via combinatorial...... reordering of the labelled user resources (packets, channels) in an existing, primary system. However, the degrees of freedom of the reordering are constrained by the operation of the primary system. The second scenario is related to communication systems with energy harvesting, where the transmitted signals...... are constrained by the energy that is available through the harvesting process. We have introduced a communication model that covers both scenarios and elicits their key feature, namely the constraints of the primary system or the harvesting process. We have shown how to compute the capacity of the channels...

  8. Closed-Loop Control of Constrained Flapping Wing Micro Air Vehicles

    Science.gov (United States)

    2014-03-27

    Closed-Loop Control of Constrained Flapping Wing Micro Air Vehicles. Dissertation, Garrison J. Lindholm, Captain, USAF (AFIT-ENY-DS-14-M-02). Predicts forces and moments for the class of flapping wing fliers that makes up most insects and hummingbirds. Large bird and butterfly "clap- and...

  9. Constraining new physics models with isotope shift spectroscopy

    Science.gov (United States)

    Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias

    2017-07-01

    Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation such as models with B -L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.

  10. Constraining the noncommutative spectral action via astrophysical observations.

    Science.gov (United States)

    Nelson, William; Ochoa, Joseph; Sakellariadou, Mairi

    2010-09-03

    The noncommutative spectral action extends our familiar notion of commutative spaces, using the data encoded in a spectral triple on an almost commutative space. Varying a rather simple action, one can derive all of the standard model of particle physics in this setting, in addition to a modified version of Einstein-Hilbert gravity. In this Letter we use observations of pulsar timings, assuming that no deviation from general relativity has been observed, to constrain the gravitational sector of this theory. While the bounds on the coupling constants remain rather weak, they are comparable to existing bounds on deviations from general relativity in other settings and are likely to be further constrained by future observations.

  11. On Tree-Constrained Matchings and Generalizations

    NARCIS (Netherlands)

    S. Canzar (Stefan); K. Elbassioni; G.W. Klau (Gunnar); J. Mestre

    2011-01-01

    We consider the following Tree-Constrained Bipartite Matching problem: Given two rooted trees $T_1=(V_1,E_1)$, $T_2=(V_2,E_2)$ and a weight function $w: V_1\times V_2 \mapsto \mathbb{R}_+$, find a maximum weight matching $\mathcal{M}$ between nodes of the two trees, such that

  12. Pole shifting with constrained output feedback

    International Nuclear Information System (INIS)

    Hamel, D.; Mensah, S.; Boisvert, J.

    1984-03-01

    The concept of pole placement plays an important role in linear, multi-variable control theory. It has received much attention since its introduction, and several pole shifting algorithms are now available. This work presents a new method which allows practical engineering constraints, such as gain limitation and controller structure, to be introduced directly into the pole shifting design strategy. This is achieved by formulating the pole placement problem as a constrained optimization problem. Explicit constraints (controller structure and gain limits) are defined to identify an admissible region for the feedback gain matrix. The desired pole configuration is translated into an appropriate cost function which must be minimized in closed loop. The resulting constrained optimization problem can thus be solved with optimization algorithms. The method has been implemented as an algorithmic interactive module in a computer-aided control system design package, MVPACK. The application of the method is illustrated by designing controllers for an aircraft and an evaporator. The results illustrate the importance of controller structure on the overall performance of a control system.
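    The idea of minimizing a pole-configuration cost over an admissible gain region can be illustrated on a double integrator (an example of my choosing, not from the paper; a coarse grid search over the gain box stands in for the paper's optimization algorithm):

    ```python
    import numpy as np

    # Double integrator x' = Ax + Bu with state feedback u = -Kx and |K_i| <= 5.
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    target = np.sort_complex(np.array([-1.0 + 1.0j, -1.0 - 1.0j]))

    def pole_cost(k1, k2):
        """Sum of squared distances between closed-loop and desired poles."""
        K = np.array([[k1, k2]])
        eig = np.sort_complex(np.linalg.eigvals(A - B @ K))
        return float(np.sum(np.abs(eig - target) ** 2))

    # Explicit gain limits define the admissible region; search within it.
    grid = np.linspace(-5.0, 5.0, 101)
    best_cost, best_K = min((pole_cost(k1, k2), (k1, k2))
                            for k1 in grid for k2 in grid)
    ```

    For this system the closed-loop polynomial is s² + k2·s + k1, so the desired poles -1 ± j correspond to K = [2, 2], which lies inside the gain box and is recovered by the search.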

  13. Classification analysis of organization factors related to system safety

    International Nuclear Information System (INIS)

    Liu Huizhen; Zhang Li; Zhang Yuling; Guan Shihua

    2009-01-01

    This paper analyzes the different types of organization factors which influence system safety. Organization factors can be divided into interior and exterior organization factors. The latter include political, economical, technical, legal, socio-cultural and geographical factors, and the relationships among different interest groups. The former include organization culture, communication, decision making, training, process, supervision and management, and organization structure. This paper focuses on the description of these organization factors. Classification analysis of the organization factors is the preliminary step toward quantitative analysis. (authors)

  14. Time Series Factor Analysis with an Application to Measuring Money

    NARCIS (Netherlands)

    Gilbert, Paul D.; Meijer, Erik

    2005-01-01

    Time series factor analysis (TSFA) and its associated statistical theory is developed. Unlike dynamic factor analysis (DFA), TSFA obviates the need for explicitly modeling the process dynamics of the underlying phenomena. It also differs from standard factor analysis (FA) in important respects: the

  15. Profitability analysis of KINGLONG nearly 5 years

    Science.gov (United States)

    Zhang, Mei; Wen, Jinghua

    2017-08-01

    Profitability analysis plays an important role in measuring business performance and forecasting prospects. In this paper, King Long Motor is taken as a research instance. On the basis of the basic theory of financial management, and using a combination of theory and data analysis methods together with indicators that measure profitability, we carry out a specific analysis of King Long Motor company's profitability to identify the factors constraining it and the motivation to improve it, and on that basis make recommendations to improve the profitability of the Kinglong car company so that it can develop better and faster in the future.

  16. Wavelet library for constrained devices

    Science.gov (United States)

    Ehlers, Johan Hendrik; Jassim, Sabah A.

    2007-04-01

    The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a certain fast wavelet transform (FWT) implementation and several wavelet filters suitable for constrained devices. Such constraints are typically found on mobile (cell) phones or personal digital assistants (PDA). These constraints can be a combination of limited memory, slow floating point operations (compared to integer operations, most often as a result of no hardware support) and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capturing sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing/analysis tasks on mobile phones and PDAs. We will demonstrate that HeatWave is suitable for realtime applications with fine control and range to suit transform demands. We shall present experimental results to substantiate these claims. Finally, since this library is intended to be of real practical use, we considered several well known differences among common embedded operating system platforms, such as a lack of common routines or functions, stack limitations, etc. This makes HeatWave suitable for a range of applications and research projects.
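    As an illustration of the kind of arithmetic that suits such devices, here is a generic one-level integer Haar (S-transform) lifting step using only additions, subtractions and shifts; this is a textbook construction, not code from the HeatWave library:

    ```python
    def int_haar_forward(x):
        """One level of an integer Haar transform via lifting: shift-based
        averages (approximations) and differences (details), no floats."""
        s = [(a + b) >> 1 for a, b in zip(x[::2], x[1::2])]   # approximations
        d = [a - b for a, b in zip(x[::2], x[1::2])]          # details
        return s, d

    def int_haar_inverse(s, d):
        """Exact integer reconstruction of the lifting step above: the floor
        discarded by (a + b) >> 1 is recovered from the detail's parity."""
        x = []
        for si, di in zip(s, d):
            b = si - (di >> 1)   # >> on negatives floors, matching forward
            a = di + b
            x.extend([a, b])
        return x
    ```

    Because the rounding in the forward step is undone exactly, the transform is losslessly invertible in pure integer arithmetic, which is why lifting schemes are popular on FPU-less hardware.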

  17. A comparison study on detection of key geochemical variables and factors through three different types of factor analysis

    Science.gov (United States)

    Hoseinzade, Zohre; Mokhtari, Ahmad Reza

    2017-10-01

    Large numbers of variables have been measured to explain different phenomena. Factor analysis has widely been used in order to reduce the dimension of datasets. Additionally, the technique has been employed to highlight underlying factors hidden in a complex system. As geochemical studies benefit from multivariate assays, application of this method is widespread in geochemistry. However, the conventional protocols for implementing factor analysis have some drawbacks in spite of their advantages. In the present study, a geochemical dataset including 804 soil samples, collected from a mining area in central Iran in a search for MVT-type Pb-Zn deposits, was considered to outline geochemical analysis through various fractal methods. Routine factor analysis, sequential factor analysis, and staged factor analysis were applied to the dataset, after opening the data with the additive log-ratio (alr) transformation, to extract the mineralization factor in the dataset. A comparison between these methods indicated that sequential factor analysis most clearly revealed the MVT paragenesis elements in surface samples, with nearly 50% of the variation in F1. In addition, staged factor analysis gave acceptable results while being easy to apply. It could detect mineralization-related elements while assigning them larger factor loadings, resulting in a clearer expression of mineralization.
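    A schematic of the staged idea follows. This is a simplified sketch of my own: principal-axis loadings from the correlation matrix stand in for a full factor extraction, and the 0.5 loading threshold is an assumed cut-off, not a value from the paper.

    ```python
    import numpy as np

    def factor_loadings(X, n_factors):
        """Principal-component style loadings from the correlation matrix
        (a simple stand-in for a full factor analysis extraction)."""
        R = np.corrcoef(X, rowvar=False)
        vals, vecs = np.linalg.eigh(R)
        idx = np.argsort(vals)[::-1][:n_factors]
        return vecs[:, idx] * np.sqrt(vals[idx])

    def staged_factor_analysis(X, n_factors, threshold=0.5, max_stages=10):
        """Schematic staged FA: repeatedly drop variables whose largest
        absolute loading falls below `threshold`, then re-extract factors
        on the remaining variables."""
        keep = np.arange(X.shape[1])
        for _ in range(max_stages):
            L = factor_loadings(X[:, keep], n_factors)
            ok = np.abs(L).max(axis=1) >= threshold
            if ok.all():
                break
            keep = keep[ok]
        return keep, L
    ```

    On synthetic data with two correlated variable blocks plus a pure-noise variable, the staging drops the noise variable and re-extracts clean factors from the rest, which mirrors the "better pronunciation" of the mineralization factor described above.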

  18. Neutron Powder Diffraction and Constrained Refinement

    DEFF Research Database (Denmark)

    Pawley, G. S.; Mackenzie, Gordon A.; Dietrich, O. W.

    1977-01-01

    The first use of a new program, EDINP, is reported. This program allows the constrained refinement of molecules in a crystal structure with neutron diffraction powder data. The structures of p-C6F4Br2 and p-C6F4I2 are determined by packing considerations and then refined with EDINP. Refinement is...

  19. Constraining omega and bias from the Stromlo-APM survey

    International Nuclear Information System (INIS)

    Loveday, J.

    1995-05-01

    Galaxy redshift surveys provide a distorted picture of the universe due to the non-Hubble component of galaxy motions. By measuring such distortions in the linear regime one can constrain the quantity β = Ω^0.6/b, where Ω is the cosmological density parameter and b is the (linear) bias factor for optically-selected galaxies. In this paper we estimate β from the Stromlo-APM redshift survey by comparing the amplitude of the direction-averaged redshift space correlation function to the real space correlation function. We find a 95% confidence upper limit of β = 0.75, with a 'best estimate' of β ∼ 0.48. A bias parameter b ∼ 2 is thus required if Ω ≡ 1. However, higher-order correlations measured from the APM galaxy survey indicate a low value for the bias parameter b ∼ 1, requiring that Ω ≲ 0.6.
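    The arithmetic behind the quoted bias values follows directly from the definition β = Ω^0.6/b; a trivial check with the paper's numbers (the function name is mine):

    ```python
    def bias_required(beta, omega):
        """Bias factor b implied by beta = Omega**0.6 / b."""
        return omega ** 0.6 / beta

    # 'Best estimate' beta ~ 0.48 with Omega = 1 implies b ~ 2, as stated.
    b_best = bias_required(0.48, 1.0)
    ```

    With Ω = 1 this gives b ≈ 2.1, consistent with the b ∼ 2 quoted in the abstract; conversely, forcing b ∼ 1 pushes Ω down.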

  20. A real-time Java tool chain for resource constrained platforms

    DEFF Research Database (Denmark)

    Korsholm, Stephan Erbs; Søndergaard, Hans; Ravn, Anders P.

    2013-01-01

    The Java programming language was originally developed for embedded systems, but the resource requirements of previous and current Java implementations - especially memory consumption - tend to exclude them from being used on a significant class of resource constrained embedded platforms. The con...... by integrating: (1) a lean virtual machine (HVM) without any external dependencies on POSIX-like libraries or other OS functionalities, (2) a hardware abstraction layer, implemented almost entirely in Java through the use of hardware objects, first level interrupt handlers, and native variables, and (3.... An evaluation of the presented solution shows that the miniCDj benchmark gets reduced to a size where it can run on resource constrained platforms....

  1. How well do different tracers constrain the firn diffusivity profile?

    Directory of Open Access Journals (Sweden)

    C. M. Trudinger

    2013-02-01

    Full Text Available Firn air transport models are used to interpret measurements of the composition of air in firn and bubbles trapped in ice in order to reconstruct past atmospheric composition. The diffusivity profile in the firn is usually calibrated by comparing modelled and measured concentrations for tracers with known atmospheric history. However, in most cases this is an under-determined inverse problem, often with multiple solutions giving an adequate fit to the data (this is known as equifinality). Here we describe a method to estimate the firn diffusivity profile that allows multiple solutions to be identified, in order to quantify the uncertainty in diffusivity due to equifinality. We then look at how well different combinations of tracers constrain the firn diffusivity profile. Tracers with rapid atmospheric variations like CH3CCl3, HFCs and 14CO2 are most useful for constraining molecular diffusivity, while δ15N2 is useful for constraining parameters related to convective mixing near the surface. When errors in the observations are small and Gaussian, three carefully selected tracers are able to constrain the molecular diffusivity profile well with minimal equifinality. However, with realistic data errors or additional processes to constrain, there is benefit to including as many tracers as possible to reduce the uncertainties. We calculate CO2 age distributions and their spectral widths with uncertainties for five firn sites (NEEM, DE08-2, DSSW20K, South Pole 1995 and South Pole 2001) with quite different characteristics and tracers available for calibration. We recommend moving away from the use of a firn model with one calibrated parameter set to infer atmospheric histories, and instead suggest using multiple parameter sets, preferably with multiple representations of uncertain processes, to assist in quantification of the uncertainties.

  2. Dynamics of Saxothuringian subduction channel/wedge constrained by phase equilibria modelling and micro-fabric analysis

    Czech Academy of Sciences Publication Activity Database

    Collett, S.; Štípská, P.; Kusbach, Vladimír; Schulmann, K.; Marciniak, G.

    2017-01-01

    Roč. 35, č. 3 (2017), s. 253-280 ISSN 0263-4929 Institutional support: RVO:67985530 Keywords : eclogite * Bohemian Massif * thermodynamic modelling * micro-fabric analysis * subduction and exhumation dynamics Subject RIV: DB - Geology ; Mineralogy OBOR OECD: Geology Impact factor: 3.594, year: 2016

  3. 21 CFR 888.3350 - Hip joint metal/polymer semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Hip joint metal/polymer semi-constrained cemented... HUMAN SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3350 Hip joint metal/polymer semi-constrained cemented prosthesis. (a) Identification. A hip joint metal/polymer semi...

  4. 21 CFR 888.3120 - Ankle joint metal/polymer non-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ankle joint metal/polymer non-constrained cemented... HUMAN SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3120 Ankle joint metal/polymer non-constrained cemented prosthesis. (a) Identification. An ankle joint metal/polymer non...

  5. Value, Cost, and Sharing: Open Issues in Constrained Clustering

    Science.gov (United States)

    Wagstaff, Kiri L.

    2006-01-01

    Clustering is an important tool for data mining, since it can identify major patterns or trends without any supervision (labeled data). Over the past five years, semi-supervised (constrained) clustering methods have become very popular. These methods began with incorporating pairwise constraints and have developed into more general methods that can learn appropriate distance metrics. However, several important open questions have arisen about which constraints are most useful, how they can be actively acquired, and when and how they should be propagated to neighboring points. This position paper describes these open questions and suggests future directions for constrained clustering research.
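    The pairwise-constraint mechanics the abstract refers to can be sketched as the feasibility check used in COP-KMeans-style algorithms (the function names and the tiny example are illustrative, not from the paper):

    ```python
    import numpy as np

    def violates(i, cluster, assign, must_link, cannot_link):
        """True if putting point i into `cluster` breaks a pairwise
        constraint against an already-assigned point."""
        for a, b in must_link:
            other = b if a == i else a if b == i else None
            if other is not None and assign.get(other) not in (None, cluster):
                return True
        for a, b in cannot_link:
            other = b if a == i else a if b == i else None
            if other is not None and assign.get(other) == cluster:
                return True
        return False

    def constrained_assign(X, centers, must_link, cannot_link):
        """Assign each point to its nearest feasible centre; None marks a
        point with no feasible cluster (the COP-KMeans failure case)."""
        assign = {}
        for i, x in enumerate(X):
            order = np.argsort([np.linalg.norm(x - c) for c in centers])
            assign[i] = next((k for k in order
                              if not violates(i, k, assign, must_link, cannot_link)),
                             None)
        return assign
    ```

    The open questions raised in the record sit exactly here: which pairs are worth constraining, and whether a constraint on one pair should also be propagated to nearby points rather than checked only directly.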

  6. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
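To make the key modelling step concrete: a chance constraint with a log-normal random right-hand side has a deterministic equivalent obtained from the log-normal quantile. The sketch below is a hypothetical two-crop toy problem, not the paper's model; all crop names, coefficients, and distribution parameters are invented.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import linprog

# Chance constraint  Pr(a@x <= water_supply) >= alpha  with
# water_supply ~ LogNormal(mu, sigma) has the deterministic equivalent
#   a@x <= exp(mu + sigma * Phi^{-1}(1 - alpha)),
# i.e. the (1 - alpha)-quantile of the log-normal supply.

mu, sigma = np.log(100.0), 0.25          # assumed supply distribution (hypothetical)
alpha = 0.95                             # required reliability level

b_det = np.exp(mu + sigma * norm.ppf(1 - alpha))   # deterministic right-hand side

c = np.array([-30.0, -20.0])             # negated profits per unit of two crops
a = np.array([[4.0, 2.0]])               # water use per unit of each crop
res = linprog(c, A_ub=a, b_ub=[b_det], bounds=[(0, None)] * 2)
```

Raising `alpha` shrinks `b_det` (a stricter reliability level leaves less usable water), which is exactly the economy-versus-reliability trade-off the abstract describes.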

  7. Identification of different geologic units using fuzzy constrained resistivity tomography

    Science.gov (United States)

    Singh, Anand; Sharma, S. P.

    2018-01-01

Different geophysical inversion strategies are utilized as components of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means clustering to improve the 2D resistivity image and the geologic separation within the iterative minimization of the inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning each model cell to the cluster with the highest membership value, using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane wave electromagnetic data sets and one electrical resistivity field dataset. The presented approach improves on the conventional inversion approach in differentiating between geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the augmentation of uranium mineralization in the Beldih open cast mine as a case study. We also compared geologic units identified by fuzzy constrained resistivity tomography with geologic units interpreted from the borehole information.
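The clustering step named in the abstract, fuzzy c-means, can be sketched in a few lines. The coupling to the resistivity inversion itself is omitted here, and the resistivity values are synthetic.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=100, seed=0):
    """Cluster 1-D samples x into c fuzzy clusters; returns centers, memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(n_iter):
        w = u ** m                                # fuzzified memberships
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))             # standard FCM membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# Two synthetic "geologic units": low- and high-resistivity model cells.
x = np.concatenate([np.full(20, 50.0), np.full(20, 500.0)])
centers, u = fuzzy_c_means(x)
labels = u.argmax(axis=1)   # hard assignment = highest membership, as in the abstract
```

The `argmax` over memberships is the "assign each cell to its most probable cluster" step the abstract describes.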

  8. Capacity Constrained Routing Algorithms for Evacuation Route Planning

    National Research Council Canada - National Science Library

    Lu, Qingsong; George, Betsy; Shekhar, Shashi

    2006-01-01

    .... In this paper, we propose a new approach, namely a capacity constrained routing planner which models capacity as a time series and generalizes shortest path algorithms to incorporate capacity constraints...

  9. Using BMDP and SPSS for a Q factor analysis.

    Science.gov (United States)

    Tanner, B A; Koning, S M

    1980-12-01

    While Euclidean distances and Q factor analysis may sometimes be preferred to correlation coefficients and cluster analysis for developing a typology, commercially available software does not always facilitate their use. Commands are provided for using BMDP and SPSS in a Q factor analysis with Euclidean distances.
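For readers without BMDP or SPSS, the Q-technique idea (factoring persons via Euclidean distances rather than variables via correlations) can be sketched with double-centering, as in classical MDS. The data below are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(6, 10))           # 6 respondents x 10 items (made up)

# Person-by-person squared Euclidean distance matrix
d2 = ((data[:, None, :] - data[None, :, :]) ** 2).sum(-1)

# Double-center to a similarity (Gram) matrix: B = -0.5 * J @ d2 @ J
n = d2.shape[0]
j = np.eye(n) - np.ones((n, n)) / n
b = -0.5 * j @ d2 @ j

# "Q factors" = leading eigenvectors of the person similarity matrix,
# scaled by the square roots of their eigenvalues (eigh sorts ascending)
vals, vecs = np.linalg.eigh(b)
q_loadings = vecs[:, ::-1][:, :2] * np.sqrt(np.maximum(vals[::-1][:2], 0))
```

Each row of `q_loadings` describes a respondent, so similar rows identify respondents of the same type, which is the goal of a Q-typology.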

  10. 21 CFR 888.3510 - Knee joint femorotibial metal/polymer constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint femorotibial metal/polymer constrained... Knee joint femorotibial metal/polymer constrained cemented prosthesis. (a) Identification. A knee joint... of a knee joint. The device limits translation or rotation in one or more planes and has components...

  11. 21 CFR 888.3100 - Ankle joint metal/composite semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ankle joint metal/composite semi-constrained... Ankle joint metal/composite semi-constrained cemented prosthesis. (a) Identification. An ankle joint... ankle joint. The device limits translation and rotation: in one or more planes via the geometry of its...

  12. Exploring Technostress: Results of a Large Sample Factor Analysis

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2016-06-01

    Full Text Available With reference to the results of a large sample factor analysis, the article aims to propose the frame examining technostress in a population. The survey and principal component analysis of the sample consisting of 1013 individuals who use ICT in their everyday work was implemented in the research. 13 factors combine 68 questions and explain 59.13 per cent of the answers dispersion. Based on the factor analysis, questionnaire was reframed and prepared to reasonably analyze the respondents’ answers, revealing technostress causes and consequences as well as technostress prevalence in the population in a statistically validated pattern. A key elements of technostress based on factor analysis can serve for the construction of technostress measurement scales in further research.

  13. The effect of feature-based attention on flanker interference processing: An fMRI-constrained source analysis.

    Science.gov (United States)

    Siemann, Julia; Herrmann, Manfred; Galashan, Daniela

    2018-01-25

    The present study examined whether feature-based cueing affects early or late stages of flanker conflict processing using EEG and fMRI. Feature cues either directed participants' attention to the upcoming colour of the target or were neutral. Validity-specific modulations during interference processing were investigated using the N200 event-related potential (ERP) component and BOLD signal differences. Additionally, both data sets were integrated using an fMRI-constrained source analysis. Finally, the results were compared with a previous study in which spatial instead of feature-based cueing was applied to an otherwise identical flanker task. Feature-based and spatial attention recruited a common fronto-parietal network during conflict processing. Irrespective of attention type (feature-based; spatial), this network responded to focussed attention (valid cueing) as well as context updating (invalid cueing), hinting at domain-general mechanisms. However, spatially and non-spatially directed attention also demonstrated domain-specific activation patterns for conflict processing that were observable in distinct EEG and fMRI data patterns as well as in the respective source analyses. Conflict-specific activity in visual brain regions was comparable between both attention types. We assume that the distinction between spatially and non-spatially directed attention types primarily applies to temporal differences (domain-specific dynamics) between signals originating in the same brain regions (domain-general localization).

  14. Updating QR factorization procedure for solution of linear least squares problem with equality constraints.

    Science.gov (United States)

    Zeb, Salman; Yousaf, Muhammad

    2017-01-01

    In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
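A standard (non-updating) way to reduce an equality-constrained least squares problem to an unconstrained one via a QR factorization is the null-space method sketched below; the paper's updating procedure itself is not reproduced, and the test problem is random.

```python
import numpy as np

def lse_nullspace(A, b, C, d):
    """Solve min ||A x - b|| s.t. C x = d via a QR factorization of C^T."""
    p, n = C.shape
    Q, R = np.linalg.qr(C.T, mode="complete")    # C^T = Q R, Q is n x n
    Q1, Q2 = Q[:, :p], Q[:, p:]                  # range and null space of C
    y1 = np.linalg.solve(R[:p, :].T, d)          # R1^T y1 = d, so C (Q1 y1) = d
    # Unconstrained least squares in the null-space coordinates
    y2, *_ = np.linalg.lstsq(A @ Q2, b - A @ Q1 @ y1, rcond=None)
    return Q1 @ y1 + Q2 @ y2

rng = np.random.default_rng(0)
A, b = rng.normal(size=(8, 4)), rng.normal(size=8)
C, d = rng.normal(size=(1, 4)), np.array([1.0])
x = lse_nullspace(A, b, C, d)
```

Any feasible point can be written `Q1 @ y1 + Q2 @ y2`, so the constraint is satisfied exactly by construction and only the free coordinates `y2` are fit to the data.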

  15. Chance constrained uncertain classification via robust optimization

    NARCIS (Netherlands)

    Ben-Tal, A.; Bhadra, S.; Bhattacharayya, C.; Saketha Nat, J.

    2011-01-01

    This paper studies the problem of constructing robust classifiers when the training is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately such a CCP turns out

  16. 21 CFR 888.3358 - Hip joint metal/polymer/metal semi-constrained porous-coated uncemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Hip joint metal/polymer/metal semi-constrained... Devices § 888.3358 Hip joint metal/polymer/metal semi-constrained porous-coated uncemented prosthesis. (a) Identification. A hip joint metal/polymer/metal semi-constrained porous-coated uncemented prosthesis is a device...

  17. Exploring Technostress: Results of a Large Sample Factor Analysis

    OpenAIRE

    Jonušauskas, Steponas; Raišienė, Agota Giedrė

    2016-01-01

    With reference to the results of a large sample factor analysis, the article aims to propose the frame examining technostress in a population. The survey and principal component analysis of the sample consisting of 1013 individuals who use ICT in their everyday work was implemented in the research. 13 factors combine 68 questions and explain 59.13 per cent of the answers dispersion. Based on the factor analysis, questionnaire was reframed and prepared to reasonably analyze the respondents’ an...

  18. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    Science.gov (United States)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal with not only nonlinearities in the objective function, but also uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints and fuzziness in the right-hand side constraints. Moreover, this model improves upon the conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions in the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. Besides, it can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide more reliable scientific basis for supporting irrigation water management in arid areas.

  19. Constraining the Mechanism of D" Anisotropy: Diversity of Observation Types Required

    Science.gov (United States)

    Creasy, N.; Pisconti, A.; Long, M. D.; Thomas, C.

    2017-12-01

A variety of different mechanisms have been proposed as explanations for seismic anisotropy at the base of the mantle, including crystallographic preferred orientation of various minerals (bridgmanite, post-perovskite, and ferropericlase) and shape preferred orientation of elastically distinct materials such as partial melt. Investigations of the mechanism for D" anisotropy are usually ambiguous, as seismic observations rarely (if ever) uniquely constrain a mechanism. Observations of shear wave splitting and polarities of SdS and PdP reflections off the D" discontinuity are among our best tools for probing D" anisotropy; however, typical data sets cannot constrain a unique scenario suggested by the mineral physics literature. In this work, we determine what types of body wave observations are required to uniquely constrain a mechanism for D" anisotropy. We test multiple possible models based on both single-crystal and poly-phase elastic tensors provided by mineral physics studies. We predict shear wave splitting parameters for SKS, SKKS, and ScS phases and reflection polarities off the D" interface for a range of possible propagation directions. We run a series of tests that create synthetic data sets by random selection over multiple iterations, controlling the total number of measurements, the azimuthal distribution, and the type of phases. We treat each randomly drawn synthetic dataset with the same methodology as in Ford et al. (2015) to determine the possible mechanism(s), carrying out a grid search over all possible elastic tensors and orientations to determine which are consistent with the synthetic data. We find it is difficult to uniquely constrain the starting model with a realistic number of seismic anisotropy measurements with only one measurement technique or phase type. However, having a mix of SKS, SKKS, and ScS measurements, or a mix of shear wave splitting and reflection polarity measurements, dramatically increases the probability of uniquely

  20. How CMB and large-scale structure constrain chameleon interacting dark energy

    International Nuclear Information System (INIS)

    Boriero, Daniel; Das, Subinoy; Wong, Yvonne Y.Y.

    2015-01-01

We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H₀ tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H₀ value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys

  1. How CMB and large-scale structure constrain chameleon interacting dark energy

    Energy Technology Data Exchange (ETDEWEB)

    Boriero, Daniel [Fakultät für Physik, Universität Bielefeld, Universitätstr. 25, Bielefeld (Germany); Das, Subinoy [Indian Institute of Astrophisics, Bangalore, 560034 (India); Wong, Yvonne Y.Y., E-mail: boriero@physik.uni-bielefeld.de, E-mail: subinoy@iiap.res.in, E-mail: yvonne.y.wong@unsw.edu.au [School of Physics, The University of New South Wales, Sydney NSW 2052 (Australia)

    2015-07-01

We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H₀ tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H₀ value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.

  2. A real-time Java tool chain for resource constrained platforms

    DEFF Research Database (Denmark)

    Korsholm, Stephan E.; Søndergaard, Hans; Ravn, Anders Peter

    2014-01-01

The Java programming language was originally developed for embedded systems, but the resource requirements of previous and current Java implementations – especially memory consumption – tend to exclude them from being used on a significant class of resource constrained embedded platforms. ... by integrating the following: (1) a lean virtual machine without any external dependencies on POSIX-like libraries or other OS functionalities; (2) a hardware abstraction layer, implemented almost entirely in Java through the use of hardware objects, first level interrupt handlers, and native variables; and (3) ... An evaluation of the presented solution shows that the miniCDj benchmark gets reduced to a size where it can run on resource constrained platforms.

  3. 21 CFR 888.3340 - Hip joint metal/composite semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Hip joint metal/composite semi-constrained... Hip joint metal/composite semi-constrained cemented prosthesis. (a) Identification. A hip joint metal... hip joint. The device limits translation and rotation in one or more planes via the geometry of its...

  4. Early failure mechanisms of constrained tripolar acetabular sockets used in revision total hip arthroplasty.

    Science.gov (United States)

    Cooke, Christopher C; Hozack, William; Lavernia, Carlos; Sharkey, Peter; Shastri, Shani; Rothman, Richard H

    2003-10-01

    Fifty-eight patients received an Osteonics constrained acetabular implant for recurrent instability (46), girdlestone reimplant (8), correction of leg lengthening (3), and periprosthetic fracture (1). The constrained liner was inserted into a cementless shell (49), cemented into a pre-existing cementless shell (6), cemented into a cage (2), and cemented directly into the acetabular bone (1). Eight patients (13.8%) required reoperation for failure of the constrained implant. Type I failure (bone-prosthesis interface) occurred in 3 cases. Two cementless shells became loose, and in 1 patient, the constrained liner was cemented into an acetabular cage, which then failed by pivoting laterally about the superior fixation screws. Type II failure (liner locking mechanism) occurred in 2 cases. Type III failure (femoral head locking mechanism) occurred in 3 patients. Seven of the 8 failures occurred in patients with recurrent instability. Constrained liners are an effective method for treatment during revision total hip arthroplasty but should be used in select cases only.

  5. Inhibition of human thymidine phosphorylase by conformationally constrained pyrimidine nucleoside phosphonic acids and their "open-structure" isosteres

    Czech Academy of Sciences Publication Activity Database

    Kóšiová, Ivana; Šimák, Ondřej; Panova, Natalya; Buděšínský, Miloš; Petrová, Magdalena; Rejman, Dominik; Liboska, Radek; Páv, Ondřej; Rosenberg, Ivan

    2014-01-01

    Roč. 74, Mar 3 (2014), s. 145-168 ISSN 0223-5234 R&D Projects: GA ČR GA203/09/0820; GA ČR GA202/09/0193; GA ČR GA13-24880S; GA ČR GA13-26526S Institutional support: RVO:61388963 Keywords : phosphonate * conformationally constrained nucleotide analog * human thymidine phosphorylase * PBMC * bi-substrate-like inhibitor * Michael addition Subject RIV: CC - Organic Chemistry Impact factor: 3.447, year: 2014

  6. Factor analysis improves the selection of prescribing indicators

    DEFF Research Database (Denmark)

    Rasmussen, Hanne Marie Skyggedal; Søndergaard, Jens; Sokolowski, Ineta

    2006-01-01

    OBJECTIVE: To test a method for improving the selection of indicators of general practitioners' prescribing. METHODS: We conducted a prescription database study including all 180 general practices in the County of Funen, Denmark, approximately 472,000 inhabitants. Principal factor analysis was us...... appropriate and inappropriate prescribing, as revealed by the correlation of the indicators in the first factor. CONCLUSION: Correlation and factor analysis is a feasible method that assists the selection of indicators and gives better insight into prescribing patterns....

  7. Constrained Fisher Scoring for a Mixture of Factor Analyzers

    Science.gov (United States)

    2016-09-01

where ω ∈ [0, 4π). Each observation of the spiral is corrupted by additive white Gaussian noise with unit variance. This model was used in previous works ... from different aspects and then learn a joint statistical model for the object manifold. We employ a mixture of factor analyzers model and derive a

  8. Resource-Constrained Low-Complexity Video Coding for Wireless Transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann

Constrained resources like memory, power, bandwidth and delay requirements in many mobile systems pose limitations for video applications. Standard approaches for video compression and transmission do not always satisfy system requirements. In this thesis we have shown that it is possible to modify ... of video quality. We proposed a new metric for objective quality assessment that considers frame rate. As many applications deal with wireless video transmission, we performed an analysis of compression and transmission systems with a focus on power-distortion trade-off. We proposed an approach for rate-distortion-complexity optimization of the upcoming video compression standard HEVC. We also provided a new method allowing decrease of power consumption on mobile devices in 3G networks. Finally, we proposed low-delay and low-power approaches for video transmission over wireless personal area networks, including ...

  9. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    Science.gov (United States)

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
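The represent-then-compare-residuals idea underlying SRC can be sketched as follows. Real SRC uses an l1-regularized code over all classes jointly (and LCJDSRC adds locality and joint dynamic sparsity); this dependency-free toy uses a per-class least-squares code instead, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, per_class = 20, 5
# Three synthetic classes: each is a dictionary of training columns that
# cluster around a random class mean (stand-ins for face sub-images).
classes = [rng.normal(size=(dim, 1)) + 0.1 * rng.normal(size=(dim, per_class))
           for _ in range(3)]

def classify(y, dicts):
    """Code y against each class dictionary; smallest residual wins."""
    residuals = []
    for D in dicts:
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        residuals.append(np.linalg.norm(y - D @ coef))
    return int(np.argmin(residuals))

test = classes[1][:, 0] + 0.05 * rng.normal(size=dim)  # noisy sample from class 1
pred = classify(test, classes)
```

In the local-matching setting of the paper, this classification would be run per sub-image and the per-patch results aggregated, with the joint sparse code tying the patches of one face together.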

  10. Constrained Versions of DEDICOM for Use in Unsupervised Part-Of-Speech Tagging

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel; Peter A. Chew

    2016-05-01

This report describes extensions of DEDICOM (DEcomposition into DIrectional COMponents) data models [3] that incorporate bound and linear constraints. The main purpose of these extensions is to investigate the use of improved data models for unsupervised part-of-speech tagging, as described by Chew et al. [2]. In that work, a single-domain, two-way DEDICOM model was computed on a matrix of bigram frequencies of tokens in a corpus and used to identify parts of speech as an unsupervised approach to that problem. An open problem identified in that work was the computation of a DEDICOM model that more closely resembled the matrices used in a Hidden Markov Model (HMM), specifically through post-processing of the DEDICOM factor matrices. The work reported here consists of the description of several models that aim to provide a direct solution to that problem and a way to fit those models. The approach taken here is to incorporate the model requirements as bound and linear constraints into the DEDICOM model directly and solve the data fitting problem as a constrained optimization problem. This is in contrast to the typical approaches in the literature, where the DEDICOM model is fit using unconstrained optimization approaches, and model requirements are satisfied as a post-processing step.
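The two-way DEDICOM model form X ≈ A R Aᵀ can be illustrated with a toy alternating fit: an exact least-squares update for the asymmetric core R and small normalized gradient steps for the loadings A. This is only a sketch of the model, not the constrained fitting procedure of the report, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 3
A_true = rng.normal(size=(n, k))
R_true = rng.normal(size=(k, k))            # asymmetric "directional" core
X = A_true @ R_true @ A_true.T              # noiseless two-way DEDICOM data

A = rng.normal(size=(n, k))                 # random initial loadings
for _ in range(300):
    # Given A, the LS-optimal core is R = A^+ X (A^+)^T
    R = np.linalg.pinv(A) @ X @ np.linalg.pinv(A).T
    E = X - A @ R @ A.T                                  # residual
    G = E @ A @ R.T + E.T @ A @ R                        # descent direction for A
    A += 0.01 * G / (np.linalg.norm(G) + 1e-12)          # small normalized step
R = np.linalg.pinv(A) @ X @ np.linalg.pinv(A).T          # final core refit
```

The constrained variants in the report would additionally project R (and A) onto the bound and linear constraint sets at each step rather than leaving the updates unconstrained as here.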

  11. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    Directory of Open Access Journals (Sweden)

    Jianzhong Wang

Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.

  12. Human factors analysis of incident/accident report

    International Nuclear Information System (INIS)

    Kuroda, Isao

    1992-01-01

Human factors analysis of accidents and incidents involves difficulties that are not only technical but also psychosocial in nature. This report introduces experiments with the 'Variation diagram method', which can be extended to operational and managerial factors. (author)

  13. Text mining factor analysis (TFA) in green tea patent data

    Science.gov (United States)

    Rahmawati, Sela; Suprijadi, Jadi; Zulhanif

    2017-03-01

Factor analysis has become one of the most widely used multivariate statistical procedures in applied research endeavors across a multitude of domains. There are two main types of analysis based on factor analysis: Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA). Both EFA and CFA aim to observe relationships among a group of indicators and a latent variable, but they differ fundamentally in the a priori restrictions placed on the factor model. This method is applied to patent data in the green tea technology sector to determine the development of green tea technology worldwide. Patent analysis is useful for identifying future technological trends in a specific field of technology. The patent database was obtained from the European Patent Organization (EPO). In this paper, the CFA model is applied to nominal data obtained from a presence-absence matrix; the CFA for nominal data is based on the tetrachoric correlation matrix. Meanwhile, the EFA model is applied to titles from the dominant technology sector.

  14. Quasicanonical structure of optimal control in constrained discrete systems

    Science.gov (United States)

    Sieniutycz, S.

    2003-06-01

    This paper considers discrete processes governed by difference rather than differential equations for the state transformation. The basic question asked is if and when Hamiltonian canonical structures are possible in optimal discrete systems. Considering constrained discrete control, general optimization algorithms are derived that constitute suitable theoretical and computational tools when evaluating extremum properties of constrained physical models. The mathematical basis of the general theory is the Bellman method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage criterion which allows a variation of the terminal state that is otherwise fixed in the Bellman's method. Two relatively unknown, powerful optimization algorithms are obtained: an unconventional discrete formalism of optimization based on a Hamiltonian for multistage systems with unconstrained intervals of holdup time, and the time interval constrained extension of the formalism. These results are general; namely, one arrives at: the discrete canonical Hamilton equations, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory along with all basic results of variational calculus. Vast spectrum of applications of the theory is briefly discussed.
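The Bellman dynamic-programming backbone the paper builds on can be illustrated with a tiny discrete-time, discrete-state backward recursion; the states, controls, and costs below are invented for illustration only.

```python
import numpy as np

n_stages, states = 4, np.arange(5)          # state x in {0..4}
controls = np.array([-1, 0, 1])             # move down, stay, move up

def step(x, u):
    return int(np.clip(x + u, 0, 4))        # state transformation (difference eq.)

def cost(x, u):
    return (x - 2) ** 2 + 0.1 * u * u       # stage cost: stay near x = 2, cheap moves

V = np.zeros(len(states))                   # terminal value function = 0
policy = []
for _ in range(n_stages):                   # Bellman backward recursion
    Q = np.array([[cost(x, u) + V[step(x, u)] for u in controls]
                  for x in states])
    policy.append(controls[Q.argmin(axis=1)])
    V = Q.min(axis=1)
policy = policy[::-1]                       # reorder stages to forward time
```

The CB stage criterion mentioned in the abstract generalizes exactly this recursion by letting the terminal state vary instead of being fixed.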

  15. A Dynamic Programming Approach to Constrained Portfolios

    DEFF Research Database (Denmark)

    Kraft, Holger; Steffensen, Mogens

    2013-01-01

    This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, which is in contrast to the existing literature that applies...

  16. q-Deformed KP Hierarchy and q-Deformed Constrained KP Hierarchy

    OpenAIRE

    He, Jingsong; Li, Yinghua; Cheng, Yi

    2006-01-01

    Using the determinant representation of the gauge transformation operator, we have shown that the general form of the $\tau$-function of the $q$-KP hierarchy is a $q$-deformed generalized Wronskian, which includes the $q$-deformed Wronskian as a special case. On the basis of these results, we study the $q$-deformed constrained KP ($q$-cKP) hierarchy, i.e. $l$-constraints of the $q$-KP hierarchy. Similar to the ordinary constrained KP (cKP) hierarchy, a large class of solutions of the $q$-cKP hierarchy can be represent...

  17. Nominal Performance Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M.A. Wasiolek

    2005-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed descriptions of the model input parameters, their development, and the relationship between the parameters and specific features, events, and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis'' (Figure 1-1). The objectives of this analysis are to develop BDCFs for the

  18. Constrained dynamics of universally coupled massive spin 2-spin 0 gravities

    International Nuclear Information System (INIS)

    Pitts, J Brian

    2006-01-01

    The 2-parameter family of massive variants of Einstein's gravity (on a Minkowski background) found by Ogievetsky and Polubarinov by excluding lower spins can also be derived using universal coupling. A Dirac-Bergmann constrained dynamics analysis seems not to have been presented for these theories, the Freund-Maheshwari-Schonberg special case, or any other massive gravity beyond the linear level treated by Marzban, Whiting and van Dam. Here the Dirac-Bergmann apparatus is applied to these theories. A few remarks are made on the question of positive energy. Being bimetric, massive gravities have a causality puzzle, but it appears soluble by the introduction and judicious use of gauge freedom.

  19. Analysis of technological, institutional and socioeconomic factors ...

    African Journals Online (AJOL)

    Analysis of technological, institutional and socioeconomic factors that influences poor reading culture among secondary school students in Nigeria. ... Proliferation and availability of smart phones, chatting culture and social media were identified as technological factors influencing poor reading culture among secondary ...

  20. Exploratory Analysis of the Factors Affecting Consumer Choice in E-Commerce: Conjoint Analysis

    Directory of Open Access Journals (Sweden)

    Elena Mazurova

    2017-05-01

    Full Text Available According to previous studies of online consumer behaviour, three factors are the most influential on purchasing behaviour: brand, colour and position of the product on the screen. However, the simultaneous influence of these three factors on the consumer decision-making process has not been investigated previously. In this particular work we aim to execute a comprehensive study of the influence of these three factors. In order to answer our main research questions, we conducted an experiment with 96 different combinations of the three attributes, and using statistical analyses such as conjoint analysis, t-test analysis and Kendall analysis, we identified that the most influential factor in the online consumer decision-making process is brand; the second most important attribute is colour, which was estimated to be half as important as brand; and the least important attribute is the position on the screen. Additionally, we identified the main differences regarding consumers' stated and revealed preferences for these three attributes.

  1. Constraining the dark energy models with H (z ) data: An approach independent of H0

    Science.gov (United States)

    Anagnostopoulos, Fotios K.; Basilakos, Spyros

    2018-03-01

    We study the performance of the latest H(z) data in constraining the cosmological parameters of different cosmological models, including the Chevallier-Polarski-Linder w0w1 parametrization. First, we introduce a statistical procedure in which the chi-square estimator is not affected by the value of the Hubble constant. As a result, we find that the H(z) data do not rule out the possibility of either nonflat models or dynamical dark energy cosmological models. However, we verify that the time-varying equation-of-state parameter w(z) is not constrained by the current expansion data. Combining the H(z) and the Type Ia supernova data, we find that the H(z)/SNIa overall statistical analysis provides a substantial improvement of the cosmological constraints with respect to those of the H(z) analysis alone. Moreover, the w0-w1 parameter space provided by the H(z)/SNIa joint analysis is in very good agreement with that of Planck 2015, which confirms that the present analysis with the H(z) and Type Ia supernova (SNIa) probes correctly reveals the expansion of the Universe as found by the Planck team. Finally, we generate sets of Monte Carlo realizations in order to quantify the ability of the H(z) data to provide strong constraints on the dark energy model parameters. The Monte Carlo approach shows significant improvement of the constraints when increasing the sample to 100 H(z) measurements. Such a goal can be achieved in the future, especially in the light of the next generation of surveys.
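The "chi-square estimator not affected by the Hubble constant" idea has a standard implementation: treat H0 as a multiplicative nuisance parameter and minimize it out analytically. Below is a minimal numpy sketch under that assumption; the flat ΛCDM form, mock data, and error bars are invented for illustration, and this is not necessarily the authors' exact procedure:

```python
import numpy as np

def E_flat_lcdm(z, omega_m):
    """Dimensionless expansion rate E(z) = H(z)/H0 for flat LCDM."""
    return np.sqrt(omega_m * (1 + z) ** 3 + 1 - omega_m)

def chi2_h0_free(z, h_obs, sigma, omega_m):
    """Chi-square with the multiplicative H0 minimized analytically:
    chi2(Om) = C - B^2 / A, independent of any assumed H0 value."""
    e = E_flat_lcdm(z, omega_m)
    A = np.sum(e ** 2 / sigma ** 2)
    B = np.sum(e * h_obs / sigma ** 2)
    C = np.sum(h_obs ** 2 / sigma ** 2)
    return C - B ** 2 / A

# toy mock data generated from Om = 0.3, H0 = 70 (illustration only)
rng = np.random.default_rng(1)
z = np.linspace(0.1, 2.0, 30)
sigma = np.full_like(z, 5.0)
h_obs = 70.0 * E_flat_lcdm(z, 0.3) + rng.normal(0.0, 5.0, z.size)

grid = np.linspace(0.1, 0.6, 501)
chi2 = np.array([chi2_h0_free(z, h_obs, sigma, om) for om in grid])
best = grid[np.argmin(chi2)]
print(f"best-fit Omega_m = {best:.3f}")
```

Rescaling all H(z) measurements by a common factor (i.e., changing H0) rescales this chi-square uniformly, so the location of its minimum is unchanged.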

  2. A factor analysis to find critical success factors in retail brand

    Directory of Open Access Journals (Sweden)

    Naser Azad

    2013-03-01

    Full Text Available The present exploratory study aims to find critical components of retail brand among some retail stores. The study seeks to build a brand name at the retail level and looks to find the important factors affecting it. Customer behavior is largely influenced when the first retail customer experience is formed. These factors have direct impacts on customer experience and satisfaction in the retail industry. The proposed study performs an empirical investigation on two well-known retail stores located in the city of Tehran, Iran. Using a sample of 265 regular customers, the study uses factor analysis and extracts four main factors, including related brand, product benefits, customer welfare strategy and corporate profits, from the existing 31 factors in the literature.

  3. Sharp spatially constrained inversion

    DEFF Research Database (Denmark)

    Vignoli, Giulio; Fiandaca, Gianluca; Christiansen, Anders Vest

    2013-01-01

    We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted...... by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes...... inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user....

  4. Explaining evolution via constrained persistent perfect phylogeny

    Science.gov (United States)

    2014-01-01

    Background The perfect phylogeny is an often used model in phylogenetics since it provides an efficient basic procedure for representing the evolution of genomic binary characters in several frameworks, such as, for example, haplotype inference. The model, which is conceptually the simplest, is based on the infinite sites assumption, that is, no character can mutate more than once in the whole tree. A main open problem regarding the model is finding generalizations that retain the computational tractability of the original model but are more flexible in modeling biological data when the infinite sites assumption is violated because of e.g. back mutations. A special case of back mutations that has been considered in the study of the evolution of protein domains (where a domain is acquired and then lost) is persistency, that is, a character is allowed to return to the ancestral state. In this model characters can be gained and lost at most once. In this paper we consider the computational problem of explaining binary data by the Persistent Perfect Phylogeny model (referred to as PPP) and for this purpose we investigate the problem of reconstructing an evolution where some constraints are imposed on the paths of the tree. Results We define a natural generalization of the PPP problem obtained by requiring that for some pairs (character, species), neither the species nor any of its ancestors can have the character. In other words, some characters cannot be persistent for some species. This new problem is called Constrained PPP (CPPP). Based on a graph formulation of the CPPP problem, we are able to provide a polynomial time solution for the CPPP problem for matrices whose conflict graph has no edges. Using this result, we develop a parameterized algorithm for solving the CPPP problem where the parameter is the number of characters. Conclusions A preliminary experimental analysis shows that the constrained persistent perfect phylogeny model allows to

  5. Composite Differential Evolution with Modified Oracle Penalty Method for Constrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Minggang Dong

    2014-01-01

    Full Text Available Motivated by recent advancements in differential evolution and constraint-handling methods, this paper presents a novel modified oracle penalty function-based composite differential evolution (MOCoDE) for constrained optimization problems (COPs). More specifically, the original oracle penalty function approach is modified so as to satisfy the optimization criterion of COPs; then the modified oracle penalty function is incorporated into composite DE. Furthermore, in order to solve more complex COPs with discrete, integer, or binary variables, a discrete variable handling technique is introduced into MOCoDE to solve complex COPs with mixed variables. The method is assessed on eleven constrained optimization benchmark functions and seven well-studied engineering problems from real life. Experimental results demonstrate that MOCoDE achieves competitive performance with respect to some other state-of-the-art approaches in constrained optimization evolutionary algorithms. Moreover, the strengths of the proposed method include its few parameters and ease of implementation, rendering it applicable to real-life problems. Therefore, MOCoDE can be an efficient alternative for solving constrained optimization problems.
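Penalty-based constrained differential evolution can be sketched compactly. The example below substitutes a static quadratic penalty for the paper's modified oracle penalty (the oracle method adapts a single parameter during the run; this stand-in uses a fixed weight) and a plain DE/rand/1/bin loop for composite DE; the toy objective and constraint are invented:

```python
import numpy as np

def objective(x):
    return x[0] ** 2 + x[1] ** 2

def violation(x):
    # require g(x) = 1 - x0 - x1 <= 0; the positive part measures violation
    return max(0.0, 1.0 - x[0] - x[1])

def penalized(x, weight=1e6):
    # static quadratic penalty -- a simple stand-in for the oracle penalty
    return objective(x) + weight * violation(x) ** 2

def diff_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9, gens=300, seed=5):
    """Plain DE/rand/1/bin loop, numpy only (not the composite DE of the paper)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, (pop_size, len(lo)))
    fit = np.array([f(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = np.where(rng.random(len(lo)) < CR, a + F * (b - c), pop[i])
            trial = np.clip(trial, lo, hi)
            ft = f(trial)
            if ft <= fit[i]:        # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = np.argmin(fit)
    return pop[best], fit[best]

x_best, f_best = diff_evolution(penalized, np.array([[-2.0, 2.0], [-2.0, 2.0]]))
print(x_best, f_best)  # optimum near (0.5, 0.5), objective near 0.5
```

The constrained minimum of the toy problem sits on the boundary x0 + x1 = 1, which is exactly where the penalty pushes the population.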

  6. Time-constrained project scheduling with adjacent resources

    NARCIS (Netherlands)

    Hurink, Johann L.; Kok, A.L.; Paulus, J.J.; Schutten, Johannes M.J.

    We develop a decomposition method for the Time-Constrained Project Scheduling Problem (TCPSP) with adjacent resources. For adjacent resources the resource units are ordered and the units assigned to a job have to be adjacent. On top of that, adjacent resources are not required by single jobs, but by

  7. Time-constrained project scheduling with adjacent resources

    NARCIS (Netherlands)

    Hurink, Johann L.; Kok, A.L.; Paulus, J.J.; Schutten, Johannes M.J.

    2008-01-01

    We develop a decomposition method for the Time-Constrained Project Scheduling Problem (TCPSP) with Adjacent Resources. For adjacent resources the resource units are ordered and the units assigned to a job have to be adjacent. On top of that, adjacent resources are not required by single jobs, but by

  8. Integrating job scheduling and constrained network routing

    DEFF Research Database (Denmark)

    Gamst, Mette

    2010-01-01

    This paper examines the NP-hard problem of scheduling jobs on resources such that the overall profit of executed jobs is maximized. Job demand must be sent through a constrained network to the resource before execution can begin. The problem has application in grid computing, where a number...

  9. Constraining supergravity models from gluino production

    International Nuclear Information System (INIS)

    Barbieri, R.; Gamberini, G.; Giudice, G.F.; Ridolfi, G.

    1988-01-01

    The branching ratios for gluino decays g̃ → qq̄χ and g̃ → gχ into a stable undetected neutralino are computed as functions of the relevant parameters of the underlying supergravity theory. A simple way of constraining supergravity models from gluino production emerges. The effectiveness of hadronic versus e⁺e⁻ colliders in the search for supersymmetry can be directly compared. (orig.)

  10. Constrained State Estimation for Individual Localization in Wireless Body Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xiaoxue Feng

    2014-11-01

    Full Text Available Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health-care, security, sports and entertainment. Accurate localization is a fundamental problem in realizing effective location-aware applications. In this paper the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. Prior knowledge about the geometry among the on-body nodes is incorporated into the traditional filtering system as an additional constraint. The analytical expression of the state estimate under a linear constraint, which exploits this additional information, is derived. Furthermore, for nonlinear constraints, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint into the linear case. Comparisons between the first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for the higher-order nonlinearity presented in this paper outperforms the first-order solution, and the constrained IMM-EKF obtains superior estimates compared with the IMM-EKF without the constraint. Another Brownian-motion individual localization example also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which achieves better filtering performance than NILS without the constraint.
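For the linear-constraint case described here, a standard closed-form step is to project the unconstrained filter estimate onto the constraint surface. Below is a minimal numpy sketch of that estimate-projection step; the two-node geometry and all numbers are invented, and this is the textbook projection formula rather than necessarily the exact expression derived in the paper:

```python
import numpy as np

def project_estimate(x, P, D, d):
    """Project an unconstrained estimate x (covariance P) onto the
    linear constraint D x = d (the estimate-projection method)."""
    S = D @ P @ D.T
    K = P @ D.T @ np.linalg.inv(S)
    x_c = x - K @ (D @ x - d)      # constrained estimate
    P_c = P - K @ D @ P            # constrained covariance
    return x_c, P_c

# toy example: two on-body nodes whose coordinates must stay a fixed
# distance apart along one axis (a linearized geometric constraint)
x = np.array([0.0, 1.3])           # unconstrained position estimates
P = np.diag([0.04, 0.09])          # estimate covariance
D = np.array([[-1.0, 1.0]])        # enforce x2 - x1 = 1.0 exactly
d = np.array([1.0])

x_c, P_c = project_estimate(x, P, D, d)
print(x_c)  # the projected estimates satisfy x_c[1] - x_c[0] == 1.0
```

After projection the constraint holds exactly, and the constrained covariance is singular along the constraint direction, reflecting that no uncertainty remains there.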

  11. Constrained State Estimation for Individual Localization in Wireless Body Sensor Networks

    Science.gov (United States)

    Feng, Xiaoxue; Snoussi, Hichem; Liang, Yan; Jiao, Lianmeng

    2014-01-01

    Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health-care, security, sports and entertainment. Accurate localization is a fundamental problem in realizing effective location-aware applications. In this paper the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. Prior knowledge about the geometry among the on-body nodes is incorporated into the traditional filtering system as an additional constraint. The analytical expression of the state estimate under a linear constraint, which exploits this additional information, is derived. Furthermore, for nonlinear constraints, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint into the linear case. Comparisons between the first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for the higher-order nonlinearity presented in this paper outperforms the first-order solution, and the constrained IMM-EKF obtains superior estimates compared with the IMM-EKF without the constraint. Another Brownian-motion individual localization example also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which achieves better filtering performance than NILS without the constraint. PMID:25390408

  12. Constrained state estimation for individual localization in wireless body sensor networks.

    Science.gov (United States)

    Feng, Xiaoxue; Snoussi, Hichem; Liang, Yan; Jiao, Lianmeng

    2014-11-10

    Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health-care, security, sports and entertainment. Accurate localization is a fundamental problem in realizing effective location-aware applications. In this paper the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. Prior knowledge about the geometry among the on-body nodes is incorporated into the traditional filtering system as an additional constraint. The analytical expression of the state estimate under a linear constraint, which exploits this additional information, is derived. Furthermore, for nonlinear constraints, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint into the linear case. Comparisons between the first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for the higher-order nonlinearity presented in this paper outperforms the first-order solution, and the constrained IMM-EKF obtains superior estimates compared with the IMM-EKF without the constraint. Another Brownian-motion individual localization example also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which achieves better filtering performance than NILS without the constraint.

  13. Regional Responses to Constrained Water Availability

    Science.gov (United States)

    Cui, Y.; Calvin, K. V.; Hejazi, M. I.; Clarke, L.; Kim, S. H.; Patel, P.

    2017-12-01

    There have been many concerns about water as a constraint to agricultural production, electricity generation, and many other human activities in the coming decades. Nevertheless, how different countries/economies would respond to such constraints has not been explored. Here, we examine the response mechanisms to binding water availability constraints at the water basin level and across a wide range of socioeconomic, climate and energy technology scenarios. Specifically, we look at the change in water withdrawals between energy, land-use and other sectors within an integrated framework, using the Global Change Assessment Model (GCAM), which endogenizes water use and allocation decisions based on costs. We find that, when water is taken into account as part of the production decision-making, countries/basins in general fall into three different categories, depending on the change of water withdrawals and water re-allocation between sectors. First, water is not a constraining factor for most of the basins. Second, advancements in water-saving cooling technologies for electricity generation are sufficient to reduce water withdrawals to meet binding water availability constraints, such as in China and the EU-15. Third, water-saving in the electricity sector alone is not sufficient and thus cannot compensate for the lowered water availability in the binding case; for example, many basins in Pakistan, the Middle East and India have to largely reduce irrigated water withdrawals by either switching to rain-fed agriculture or reducing production. The dominant response strategy for individual countries/basins is quite robust across the range of alternate scenarios that we test. The relative size of water withdrawals between the energy and agriculture sectors is one of the most important factors that affect the dominant mechanism.

  14. Factor analysis for exercise stress radionuclide ventriculography

    International Nuclear Information System (INIS)

    Hirota, Kazuyoshi; Yasuda, Mitsutaka; Oku, Hisao; Ikuno, Yoshiyasu; Takeuchi, Kazuhide; Takeda, Tadanao; Ochi, Hironobu

    1987-01-01

    Using factor analysis, a new image-processing technique in exercise stress radionuclide ventriculography, changes in factors associated with exercise were evaluated in 14 patients with angina pectoris or old myocardial infarction. The patients were imaged in the left anterior oblique projection, and three factor images were presented on a color-coded scale. Abnormal factors (AF) were observed in 6 patients before exercise, in 13 during exercise, and in 4 after exercise. In 7 patients, the occurrence of AF was associated with exercise. Five of them became free from AF after exercise. Three patients showing AF before exercise had aggravation of AF during exercise. Overall, the occurrence or aggravation of AF was associated with exercise in ten (71 %) of the patients. The other three patients, however, had disappearance of AF during exercise. In the last patient, no AF was observed throughout the study. In view of the high incidence of AF associated with exercise, factor analysis may have potential for evaluating cardiac reserve from the viewpoint of left ventricular wall motion abnormality. (Namekawa, K.)

  15. Prior image constrained image reconstruction in emerging computed tomography applications

    Science.gov (United States)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. 
Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation

  16. Neutron charge radius and the neutron electric form factor

    International Nuclear Information System (INIS)

    Gentile, T. R.; Crawford, C. B.

    2011-01-01

    For nearly forty years, the Galster parametrization has been employed to fit existing data for the neutron electric form factor, G_E^n, vs the square of the four-momentum transfer, Q^2. Typically this parametrization is constrained to be consistent with experimental data for the neutron charge radius. However, we find that the Galster form does not have sufficient freedom to accommodate reasonable values of the radius without constraining or compromising the fit. In addition, the G_E^n data are now at sufficient precision to motivate a two-parameter fit (or three parameters if we include thermal neutron data). Here we present a modified form of a two-dipole parametrization that allows this freedom and fits both G_E^n (including recent data at both low and high four-momentum transfer) and the charge radius well with simple, well-defined parameters. Analysis reveals that the Galster form is essentially a two-parameter approximation to the two-dipole form but becomes degenerate if we try to extend it naturally to three parameters.
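For orientation, a Galster-type form and the radius extraction from the slope at Q² = 0 can be written out directly. In the sketch below the dipole scale 0.71 GeV² and the relation ⟨r²⟩ = −6 dG_E/dQ² at Q² = 0 are standard, while the A, B values are representative later fit values, not parameters taken from this paper:

```python
HBARC2 = 0.0389379   # (hbar c)^2 in GeV^2 fm^2
M_N = 0.93957        # neutron mass, GeV

def galster(Q2, A=1.70, B=3.30):
    """Galster-type fit G_E^n(Q^2) = A*tau/(1 + B*tau) * G_D(Q^2),
    with tau = Q^2 / (4 M_n^2) and the standard dipole G_D.
    A, B are illustrative fit values, not the original 1971 numbers."""
    tau = Q2 / (4.0 * M_N ** 2)
    G_D = (1.0 + Q2 / 0.71) ** -2
    return A * tau / (1.0 + B * tau) * G_D

def charge_radius_sq(form_factor, h=1e-6):
    """<r^2> = -6 dG/dQ^2 at Q^2 = 0, by a forward finite difference."""
    slope = (form_factor(h) - form_factor(0.0)) / h
    return -6.0 * slope * HBARC2

r2 = charge_radius_sq(galster)
print(f"<r_n^2> = {r2:.4f} fm^2")   # about -0.112 fm^2 with these A, B
```

Because G_E^n(0) = 0 for the neutron, the entire radius constraint enters through the slope A/(4 M_n²), which is why a fixed-radius Galster fit has so little freedom left.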

  17. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
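The item-selection step can be illustrated with a deliberately simplified greedy rule: pick the most informative unused item whose content area still has room. This is only a stand-in for the model in the abstract, which assembles a full shadow test (a complete feasible test via mathematical programming) at each step; the 2PL items and content areas below are invented:

```python
import numpy as np

def item_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_next(theta, a, b, used, content, max_per_area):
    """Greedy pick: most informative unused item whose content
    area has not yet reached its quota."""
    counts = {}
    for i in used:
        counts[content[i]] = counts.get(content[i], 0) + 1
    best, best_info = None, -1.0
    for i in range(len(a)):
        if i in used or counts.get(content[i], 0) >= max_per_area:
            continue
        info = item_info(theta, a[i], b[i])
        if info > best_info:
            best, best_info = i, info
    return best

# hypothetical 4-item pool: discriminations a, difficulties b, content areas
a = np.array([1.0, 1.5, 2.0, 1.2])
b = np.array([0.0, 0.0, 0.0, 0.0])
content = ["algebra", "algebra", "geometry", "geometry"]

first = select_next(0.0, a, b, set(), content, max_per_area=1)
print(first)  # the a = 2.0 geometry item is most informative at theta = 0
```

Once the quota for an area is filled, the rule falls back to the best item elsewhere, which is the basic trade-off the full constrained model resolves optimally rather than greedily.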

  18. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  19. Models of Flux Tubes from Constrained Relaxation

    Indian Academy of Sciences (India)


    J. Astrophys. Astr. (2000) 21, 299-302. Models of Flux Tubes from Constrained Relaxation. A. Mangalam & V. Krishan, Indian Institute of Astrophysics, Koramangala, Bangalore 560 034, India. *e-mail: mangalam@iiap.ernet.in †e-mail: vinod@iiap.ernet.in. Abstract. We study the relaxation of a compressible plasma to ...

  20. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M.A. Wasiolek

    2003-07-25

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports (BSC 2003 [DIRS 160964]; BSC 2003 [DIRS 160965]; BSC 2003 [DIRS 160976]; BSC 2003 [DIRS 161239]; BSC 2003 [DIRS 161241]) contain detailed descriptions of the model input parameters. This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs and conversion factors for the TSPA. The BDCFs will be used in performance assessment for calculating annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity in groundwater and the annual dose from beta- and photon-emitting radionuclides.

  1. 21 CFR 888.3550 - Knee joint patellofemorotibial polymer/metal/metal constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint patellofemorotibial polymer/metal/metal... § 888.3550 Knee joint patellofemorotibial polymer/metal/metal constrained cemented prosthesis. (a) Identification. A knee joint patellofemorotibial polymer/metal/metal constrained cemented prosthesis is a device...

  2. 21 CFR 888.3490 - Knee joint femorotibial metal/composite non-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint femorotibial metal/composite non... § 888.3490 Knee joint femorotibial metal/composite non-constrained cemented prosthesis. (a) Identification. A knee joint femorotibial metal/composite non-constrained cemented prosthesis is a device...

  3. Hand function evaluation: a factor analysis study.

    Science.gov (United States)

    Jarus, T; Poremba, R

    1993-05-01

    The purpose of this study was to investigate hand function evaluations. Factor analysis with varimax rotation was used to assess the fundamental characteristics of the items included in the Jebsen Hand Function Test and the Smith Hand Function Evaluation. The study sample consisted of 144 subjects without disabilities and 22 subjects with Colles fracture. Results suggest a four factor solution: Factor I--pinch movement; Factor II--grasp; Factor III--target accuracy; and Factor IV--activities of daily living. These categories differentiated the subjects without Colles fracture from the subjects with Colles fracture. A hand function evaluation consisting of these four factors would be useful. Such an evaluation that can be used for current clinical purposes is provided.
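The varimax step used in this study can be sketched with numpy alone: extract principal-axis style loadings from a correlation matrix, then rotate them with Kaiser's classical varimax algorithm. The simulated two-factor item set below (three "pinch-like" and three "grasp-like" items) is invented for illustration and is not the study's data:

```python
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-8):
    """Kaiser's varimax rotation of a loading matrix L (numpy only)."""
    p, k = L.shape
    R = np.eye(k)
    total = 0.0
    for _ in range(max_iter):
        LR = L @ R
        grad = L.T @ (LR ** 3 - (gamma / p) * LR @ np.diag((LR ** 2).sum(axis=0)))
        u, s, vt = np.linalg.svd(grad)
        R = u @ vt
        if s.sum() - total < tol:
            break
        total = s.sum()
    return L @ R

# simulate 300 subjects: items 0-2 load on factor 1, items 3-5 on factor 2
rng = np.random.default_rng(2)
f = rng.standard_normal((300, 2))
load = np.zeros((6, 2))
load[:3, 0] = 0.8
load[3:, 1] = 0.8
X = f @ load.T + 0.5 * rng.standard_normal((300, 6))

R = np.corrcoef(X, rowvar=False)
vals, vecs = np.linalg.eigh(R)
idx = np.argsort(vals)[::-1][:2]
L = vecs[:, idx] * np.sqrt(vals[idx])   # unrotated loadings
L_rot = varimax(L)
print(np.round(L_rot, 2))
```

After rotation each item loads predominantly on a single factor, which is the "simple structure" that makes solutions like the four-factor hand-function result interpretable.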

  4. Salivary SPECT and factor analysis in Sjoegren's syndrome

    International Nuclear Information System (INIS)

    Nakamura, T.; Oshiumi, Y.; Yonetsu, K.; Muranaka, T.; Sakai, K.; Kanda, S.; National Fukuoka Central Hospital

    1991-01-01

    Salivary SPECT and factor analysis in Sjoegren's syndrome were performed in 17 patients and 6 volunteers as controls. The ability of SPECT to detect small differences in the level of uptake can be used to separate glands from background even when uptake is reduced, as in patients with Sjoegren's syndrome. In the control and probable Sjoegren's syndrome groups the uptake ratio of the submandibular gland to parotid gland on salivary SPECT (S/P ratio) was less than 1.0. However, in the definite Sjoegren's syndrome group, the ratio was more than 1.0. Moreover, the ratio in all patients with sialectasia, which is characteristic of Sjoegren's syndrome, was more than 1.0. Salivary factor analysis of normal parotid glands showed slowly increasing patterns of uptake and normal submandibular glands had rapidly increasing patterns of uptake. However, in the definite Sjoegren's syndrome group, the factor analysis patterns were altered, with slowly increasing patterns dominating in both the parotid and submandibular glands. These results suggest that the S/P ratio in salivary SPECT and salivary factor analysis provide additional radiologic criteria for diagnosing Sjoegren's syndrome. (orig.)

  5. Separating timing, movement conditions and individual differences in the analysis of human movement

    DEFF Research Database (Denmark)

    Raket, Lars Lau; Grimme, Britta; Schöner, Gregor

    2016-01-01

    A central task in the analysis of human movement behavior is to determine systematic patterns and differences across experimental conditions, participants and repetitions. This is possible because human movement is highly regular, being constrained by invariance principles. Movement timing ... mixed-effects models as viable alternatives to conventional analysis frameworks. The model is then combined with a novel factor-analysis model that estimates the low-dimensional subspace within which movements vary when the task demands vary. Our framework enables us to visualize different dimensions ...

  6. The Road Towards Lean Six Sigma: Sustainable Success Factors in Service Industry

    Directory of Open Access Journals (Sweden)

    Vouzas Fotis

    2014-11-01

    Full Text Available It has been widely investigated that the application of operations management techniques is not only based on technical factors, but is mainly associated with organisational factors such as culture, previous policies and procedures, etc. A prime example of promising operations practices is Lean Six Sigma (L6σ). The main research question for L6σ is related to its liabilities and constraints regarding its implementation. Therefore, this paper aims to explore the critical factors related to the application of L6σ. The context of the analysis is the service industry, since it seems to have been neglected by the literature, which mainly focuses on manufacturing. The methodology was based on the qualitative exploration of three case studies from the service industry. Secondary data were collected through an analysis of companies' documents, written procedures and quality assurance policies, and primary data were collected through a number of in-depth face-to-face interviews with managers and quality experts. The findings show that there are ten (10) particular factors that influence the implementation of L6σ in service organizations.

  7. 21 CFR 888.3320 - Hip joint metal/metal semi-constrained, with a cemented acetabular component, prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Hip joint metal/metal semi-constrained, with a... Devices § 888.3320 Hip joint metal/metal semi-constrained, with a cemented acetabular component, prosthesis. (a) Identification. A hip joint metal/metal semi-constrained, with a cemented acetabular...

  8. 21 CFR 888.3330 - Hip joint metal/metal semi-constrained, with an uncemented acetabular component, prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Hip joint metal/metal semi-constrained, with an... Devices § 888.3330 Hip joint metal/metal semi-constrained, with an uncemented acetabular component, prosthesis. (a) Identification. A hip joint metal/metal semi-constrained, with an uncemented acetabular...

  9. Dynamically constrained ensemble perturbations – application to tides on the West Florida Shelf

    Directory of Open Access Journals (Sweden)

    F. Lenartz

    2009-07-01

    Full Text Available A method is presented to create an ensemble of perturbations that satisfies linear dynamical constraints. A cost function is formulated defining the probability of each perturbation. It is shown that the perturbations created with this approach take the land-sea mask into account in a similar way to variational analysis techniques. The impact of the land-sea mask is illustrated with an idealized configuration of a barrier island. Perturbations with a spatially variable correlation length can also be created by this approach. The method is applied to a realistic configuration of the West Florida Shelf to create perturbations of the M2 tidal parameters for elevation and depth-averaged currents. The perturbations are weakly constrained to satisfy the linear shallow-water equations. Although the constraint is derived from an idealized assumption, it is shown that this approach is applicable to a non-linear and baroclinic model. The amplitude of spurious transient motions created by constrained perturbations of initial and boundary conditions is significantly lower compared to perturbing the variables independently or to using only the momentum equation to compute the velocity perturbations from the elevation.

  10. Sensitive Constrained Optimal PMU Allocation with Complete Observability for State Estimation Solution

    Directory of Open Access Journals (Sweden)

    R. Manam

    2017-12-01

    Full Text Available In this paper, a sensitive constrained integer linear programming approach is formulated for the optimal allocation of Phasor Measurement Units (PMUs) in a power system network to obtain state estimation. In this approach, sensitive buses along with zero injection buses (ZIB) are considered for optimal allocation of PMUs in the network to generate state estimation solutions. Sensitive buses are identified from the mean of bus voltages as the load is increased consistently by up to 50%. Sensitive buses are ranked in order to place PMUs. Sensitive constrained optimal PMU allocation in the cases of single-line and no-line contingency is considered in the observability analysis to ensure protection and control of the power system under abnormal conditions. Modeling of ZIB constraints is included to minimize the number of PMU network allocations. This paper presents optimal allocation of PMUs at sensitive buses with zero injection modeling, considering cost criteria and redundancy to increase the accuracy of the state estimation solution without losing observability of the whole system. Simulations are carried out on IEEE 14, 30 and 57 bus systems and the results obtained are compared with traditional and other state estimation methods available in the literature, to demonstrate the effectiveness of the proposed method.
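The complete-observability requirement behind PMU allocation can be illustrated with a brute-force search on a small hypothetical bus graph. The topology below is invented for illustration; a real implementation would encode the same coverage constraints in an integer linear program rather than enumerate placements:

```python
from itertools import combinations

# Hypothetical 7-bus topology; a PMU on bus b observes b and all adjacent buses
edges = [(1, 2), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (4, 5), (4, 7), (5, 6)]
buses = range(1, 8)
adj = {b: {b} for b in buses}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def observable(placement):
    """True if every bus is observed by at least one PMU in the placement."""
    seen = set().union(*(adj[b] for b in placement))
    return seen == set(buses)

# Exhaustively find a smallest placement giving complete observability
for k in range(1, 8):
    feasible = [p for p in combinations(buses, k) if observable(p)]
    if feasible:
        print(k, feasible[0])  # minimum PMU count and one optimal placement
        break
```

An ILP formulation would minimize the number of PMUs subject to one coverage constraint per bus, which is exactly the feasibility test `observable` performs here.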

  11. Balancing computation and communication power in power constrained clusters

    Science.gov (United States)

    Piga, Leonardo; Paul, Indrani; Huang, Wei

    2018-05-29

    Systems, apparatuses, and methods for balancing computation and communication power in power constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.
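A minimal sketch of the wait-prediction and power-reassignment logic described in this record; the threshold, node names and budget are illustrative assumptions, not values from the patent:

```python
# Power-gate nodes whose predicted wait exceeds a threshold, then reassign
# the freed power budget to the nodes that are still active.
IDLE_THRESHOLD_MS = 50.0
CLUSTER_BUDGET_W = 100.0  # total cluster power budget (illustrative units)

def power_state(predicted_wait_ms):
    return "reduced-power" if predicted_wait_ms > IDLE_THRESHOLD_MS else "active"

predicted_waits = {"node0": 5.0, "node1": 120.0, "node2": 80.0}
states = {n: power_state(w) for n, w in predicted_waits.items()}
active = [n for n, s in states.items() if s == "active"]
per_active_watts = CLUSTER_BUDGET_W / len(active) if active else 0.0
print(states)
print(per_active_watts)
```

The reassignment step is what lets the active nodes run at a higher power level and finish the workload sooner.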

  12. Synthesis of conformationally constrained peptidomimetics using multicomponent reactions

    NARCIS (Netherlands)

    Scheffelaar, R.; Klein Nijenhuis, R.A.; Paravidino, M.; Lutz, M.; Spek, A.L.; Ehlers, A.W.; de Kanter, F.J.J.; Groen, M.B.; Orru, R.V.A.; Ruijter, E.

    2009-01-01

    A novel modular synthetic approach toward constrained peptidomimetics is reported. The approach involves a highly efficient three-step sequence including two multicomponent reactions, thus allowing unprecedented diversification of both the peptide moieties and the turn-inducing scaffold. The

  13. Fuzzy chance constrained linear programming model for scrap charge optimization in steel production

    DEFF Research Database (Denmark)

    Rong, Aiying; Lahdelma, Risto

    2008-01-01

    the uncertainty based on fuzzy set theory and constrain the failure risk based on a possibility measure. Consequently, the scrap charge optimization problem is modeled as a fuzzy chance constrained linear programming problem. Since the constraints of the model mainly address the specification of the product...

  14. 21 CFR 888.3530 - Knee joint femorotibial metal/polymer semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint femorotibial metal/polymer semi... § 888.3530 Knee joint femorotibial metal/polymer semi-constrained cemented prosthesis. (a) Identification. A knee joint femorotibial metal/polymer semi-constrained cemented prosthesis is a device intended...

  15. 21 CFR 888.3540 - Knee joint patellofemoral polymer/metal semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint patellofemoral polymer/metal semi... § 888.3540 Knee joint patellofemoral polymer/metal semi-constrained cemented prosthesis. (a) Identification. A knee joint patellofemoral polymer/metal semi-constrained cemented prosthesis is a two-part...

  16. 21 CFR 888.3500 - Knee joint femorotibial metal/composite semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint femorotibial metal/composite semi... § 888.3500 Knee joint femorotibial metal/composite semi-constrained cemented prosthesis. (a) Identification. A knee joint femorotibial metal/composite semi-constrained cemented prosthesis is a two-part...

  17. 21 CFR 888.3520 - Knee joint femorotibial metal/polymer non-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint femorotibial metal/polymer non... § 888.3520 Knee joint femorotibial metal/polymer non-constrained cemented prosthesis. (a) Identification. A knee joint femorotibial metal/polymer non-constrained cemented prosthesis is a device intended to...

  18. Pleiotropy constrains the evolution of protein but not regulatory sequences in a transcription regulatory network influencing complex social behaviours

    Directory of Open Access Journals (Sweden)

    Daria eMolodtsova

    2014-12-01

    Full Text Available It is increasingly apparent that genes and networks that influence complex behaviour are evolutionarily conserved, which is paradoxical considering that behaviour is labile over evolutionary timescales. How does adaptive change in behaviour arise if behaviour is controlled by conserved, pleiotropic, and likely evolutionarily constrained genes? Pleiotropy and connectedness are known to constrain the general rate of protein evolution, prompting some to suggest that the evolution of complex traits, including behaviour, is fuelled by regulatory sequence evolution. However, we seldom have data on the strength of selection on mutations in coding and regulatory sequences, and this hinders our ability to study how pleiotropy influences coding and regulatory sequence evolution. Here we use population genomics to estimate the strength of selection on coding and regulatory mutations for a transcriptional regulatory network that influences complex behaviour of honey bees. We found that replacement mutations in highly connected transcription factors and target genes experience significantly stronger negative selection relative to weakly connected transcription factors and targets. Adaptively evolving proteins were significantly more likely to reside at the periphery of the regulatory network, while proteins with signs of negative selection were near the core of the network. Interestingly, connectedness and network structure had minimal influence on the strength of selection on putative regulatory sequences for both transcription factors and their targets. Our study indicates that adaptive evolution of complex behaviour can arise because of positive selection on protein-coding mutations in peripheral genes, and on regulatory sequence mutations in both transcription factors and their targets throughout the network.

  19. Using exploratory factor analysis in personality research: Best-practice recommendations

    Directory of Open Access Journals (Sweden)

    Sumaya Laher

    2010-11-01

    Research purpose: This article presents more objective methods to determine the number of factors, most notably parallel analysis and Velicer’s minimum average partial (MAP). The benefits of rotation are also discussed. The article argues for more consistent use of Procrustes rotation and congruence coefficients in factor analytic studies. Motivation for the study: Exploratory factor analysis is often criticised for not being rigorous and objective enough in terms of the methods used to determine the number of factors, the rotations to be used and ultimately the validity of the factor structure. Research design, approach and method: The article adopts a theoretical stance to discuss the best-practice recommendations for factor analytic research in the field of psychology. Following this, an example located within personality assessment and using the NEO-PI-R specifically is presented. A total of 425 students at the University of the Witwatersrand completed the NEO-PI-R. These responses were subjected to a principal components analysis using varimax rotation. The rotated solution was subjected to a Procrustes rotation with Costa and McCrae’s (1992) matrix as the target matrix. Congruence coefficients were also computed. Main findings: The example indicates the use of the methods recommended in the article and demonstrates an objective way of determining the number of factors. It also provides an example of Procrustes rotation with coefficients of agreement as an indication of how factor analytic results may be presented more rigorously in local research. Practical/managerial implications: It is hoped that the recommendations in this article will have best-practice implications for both researchers and practitioners in the field who employ factor analysis regularly. Contribution/value-add: This article will prove useful to all researchers employing factor analysis and has the potential to set the trend for better use of factor analysis in the South African context.
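Horn's parallel analysis, one of the objective factor-retention methods recommended in this record, can be sketched in a few lines of NumPy. The sample size echoes the record's 425 respondents, but the items, loadings and noise level below are illustrative assumptions:

```python
import numpy as np

def parallel_analysis(X, n_iter=100, seed=0):
    """Retain factors whose observed correlation-matrix eigenvalues exceed
    the 95th percentile of eigenvalues obtained from random normal data."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        R = rng.normal(size=(n, p))
        rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    return int(np.sum(obs > np.percentile(rand, 95, axis=0)))

# 425 simulated respondents, 6 items driven by 2 latent factors
rng = np.random.default_rng(1)
f = rng.normal(size=(425, 2))
X = np.hstack([f[:, [0]] + 0.3 * rng.normal(size=(425, 3)),
               f[:, [1]] + 0.3 * rng.normal(size=(425, 3))])
print(parallel_analysis(X))  # → 2
```

Unlike the eigenvalue-greater-than-one rule, the retention decision here is calibrated against what random data of the same shape would produce.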

  20. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M.A. Wasiolek

    2005-04-28

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed description of the model input parameters, their development, and the relationship between the parameters and specific features events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis

  1. Decentralizing constrained-efficient allocations in the Lagos–Wright pure currency economy

    OpenAIRE

    Bajaj, Ayushi; Hu, Tai Wei; Rocheteau, Guillaume; Silva, Mario Rafael

    2017-01-01

    This paper offers two ways to decentralize the constrained-efficient allocation of the Lagos–Wright (2005) pure currency economy. The first way has divisible money, take-it-or-leave-it offers by buyers, and a transfer scheme financed by money creation. If agents are sufficiently patient, the first best is achieved for finite money growth rates. If agents are impatient, the equilibrium allocation approaches the constrained-efficient allocation asymptotically as the money growth rate tends to i...

  2. A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography

    Science.gov (United States)

    Sun, S.; Chen, C.; WANG, H.; Wang, Q.

    2014-12-01

    The self-constrained inversion method of potential fields uses a priori information self-extracted from potential field data. Differing from external a priori information, the self-extracted information is generally a set of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography needs neither a priori information nor large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, especially those whose distribution is complex and irregular. Therefore, we attempt to use the a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data were taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ along their own directions, and this characteristic is also present in their probability tomography results. We therefore use a set of rules to combine the probability tomography results of ∂ΔΤ⁄∂x, ∂ΔΤ⁄∂y and ∂ΔΤ⁄∂z into a new result, which is used for extracting a priori information, and then incorporate the information into the model objective function as spatial weighting functions to invert the final magnetic susceptibility. Synthetic magnetic examples, inverted with and without a priori information extracted from the probability tomography results, were constructed for comparison; the results show that the former are more concentrated and resolve the source body edges with higher resolution. This method is finally applied in an iron mine in China with field-measured ΔΤ data and performs well. References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M

  3. The Two-stage Constrained Equal Awards and Losses Rules for Multi-Issue Allocation Situation

    NARCIS (Netherlands)

    Lorenzo-Freire, S.; Casas-Mendez, B.; Hendrickx, R.L.P.

    2005-01-01

    This paper considers two-stage solutions for multi-issue allocation situations. Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.

  4. Coherent states in constrained systems

    International Nuclear Information System (INIS)

    Nakamura, M.; Kojima, K.

    2001-01-01

    When quantizing constrained systems, quantum corrections often arise from the non-commutativity in the re-ordering of constraint operators in products of operators. For bosonic second-class constraints, furthermore, the quantum corrections caused by the uncertainty principle should be taken into account. In order to treat these corrections simultaneously, an alternative projection technique for operators is proposed by introducing the available minimal uncertainty states of the constraint operators. Using this projection technique together with the projection operator method (POM), these two kinds of quantum corrections were investigated

  5. GPS-based ionospheric tomography with a constrained adaptive ...

    Indian Academy of Sciences (India)

    A Gauss weighted function is introduced to constrain the tomography system in the new method. It can resolve the ... the research focus in the fields of space geodesy and ... development of GNSS such as GPS, Glonass, Galileo and Compass, as these ...

  6. Consequences of biomechanically constrained tasks in the design and interpretation of synergy analyses.

    Science.gov (United States)

    Steele, Katherine M; Tresch, Matthew C; Perreault, Eric J

    2015-04-01

    Matrix factorization algorithms are commonly used to analyze muscle activity and provide insight into neuromuscular control. These algorithms identify low-dimensional subspaces, commonly referred to as synergies, which can describe variation in muscle activity during a task. Synergies are often interpreted as reflecting underlying neural control; however, it is unclear how these analyses are influenced by biomechanical and task constraints, which can also lead to low-dimensional patterns of muscle activation. The aim of this study was to evaluate whether commonly used algorithms and experimental methods can accurately identify synergy-based control strategies. This was accomplished by evaluating synergies from five common matrix factorization algorithms using muscle activations calculated from 1) a biomechanically constrained task using a musculoskeletal model and 2) without task constraints using random synergy activations. Algorithm performance was assessed by calculating the similarity between estimated synergies and those imposed during the simulations; similarities ranged from 0 (random chance) to 1 (perfect similarity). Although some of the algorithms could accurately estimate specified synergies without biomechanical or task constraints (similarity >0.7), with these constraints the similarity of estimated synergies decreased significantly (0.3-0.4). The ability of these algorithms to accurately identify synergies was negatively impacted by correlation of synergy activations, which are increased when substantial biomechanical or task constraints are present. Increased variability in synergy activations, which can be captured using robust experimental paradigms that include natural variability in motor activation patterns, improved identification accuracy but did not completely overcome effects of biomechanical and task constraints. These results demonstrate that a biomechanically constrained task can reduce the accuracy of estimated synergies and highlight
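The simulation logic described in this record (imposing known synergies, re-estimating them by matrix factorization, and scoring similarity between estimated and imposed synergies) can be sketched with scikit-learn's NMF. The dimensions and data are illustrative assumptions, not the study's musculoskeletal simulations:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Impose 3 known synergies across 10 muscles with random non-negative activations
W_true = rng.random((10, 3))   # muscles x synergies
H_true = rng.random((3, 200))  # synergy activations over 200 time samples
EMG = W_true @ H_true          # simulated muscle activity (exact rank 3)

model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
W_est = model.fit_transform(EMG)  # estimated muscles-x-synergies matrix

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similarity of each estimated synergy to its best-matching imposed synergy
sims = [max(cosine(W_est[:, j], W_true[:, k]) for k in range(3)) for j in range(3)]
print([round(s, 2) for s in sims])  # values near 1 indicate accurate recovery
```

The record's central point is that when activations become correlated, as biomechanical or task constraints make them, this similarity drops even though the factorization still fits the data.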

  7. Factor Analysis for Clustered Observations.

    Science.gov (United States)

    Longford, N. T.; Muthen, B. O.

    1992-01-01

    A two-level model for factor analysis is defined, and formulas for a scoring algorithm for this model are derived. A simple noniterative method based on decomposition of total sums of the squares and cross-products is discussed and illustrated with simulated data and data from the Second International Mathematics Study. (SLD)

  8. Complementarity of flux- and biometric-based data to constrain parameters in a terrestrial carbon model

    Directory of Open Access Journals (Sweden)

    Zhenggang Du

    2015-03-01

    Full Text Available To improve models for accurate projections, data assimilation, an emerging statistical approach to combine models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential to predict future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information content of flux- and biometric-based data for constraining parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation–soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only or biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters by a probabilistic inversion application. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but provided almost no constraint on C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by only NEE or biometric data was probably attributable to either the lack of long-term C dynamic data or errors from measurements. Overall, our results suggest that flux- and biometric-based data, containing different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also
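A toy grid-based inversion can illustrate the complementarity idea: two data streams that each constrain a different parameter of a simple model y = a*x + b narrow the posterior far more together than alone. The model, noise level and sampling designs are illustrative assumptions, not the study's carbon model:

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true, sigma = 2.0, 1.0, 0.1
x1 = np.full(30, 0.01)            # stream 1: x near 0, informative mainly about b
x2 = np.linspace(-1.0, 1.0, 30)   # stream 2: wide x range, informative about a
y1 = a_true * x1 + b_true + sigma * rng.normal(size=30)
y2 = a_true * x2 + b_true + sigma * rng.normal(size=30)

grid = np.linspace(0.0, 3.0, 121)
A, B = np.meshgrid(grid, grid, indexing="ij")

def posterior_std_a(xs, ys):
    """Std of the marginal posterior of a on a grid, flat priors, Gaussian noise."""
    ll = sum(-(y - (A * x + B)) ** 2 / (2 * sigma ** 2) for x, y in zip(xs, ys))
    p = np.exp(ll - ll.max())
    p /= p.sum()
    marg = p.sum(axis=1)  # marginalise over b
    mean = np.sum(marg * grid)
    return float(np.sqrt(np.sum(marg * (grid - mean) ** 2)))

alone = posterior_std_a(x1, y1)                           # stream 1 only
combined = posterior_std_a(np.r_[x1, x2], np.r_[y1, y2])  # both streams
print(alone > 5 * combined)  # → True: combining streams sharply tightens a
```

This mirrors the record's finding that NEE and biometric data constrain different parameter subsets, so their combination constrains far more of the parameter space than either alone.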

  9. Inferring Aggregated Functional Traits from Metagenomic Data Using Constrained Non-negative Matrix Factorization: Application to Fiber Degradation in the Human Gut Microbiota.

    Science.gov (United States)

    Raguideau, Sébastien; Plancade, Sandra; Pons, Nicolas; Leclerc, Marion; Laroche, Béatrice

    2016-12-01

    Whole Genome Shotgun (WGS) metagenomics is increasingly used to study the structure and functions of complex microbial ecosystems, both from the taxonomic and functional point of view. Gene inventories of otherwise uncultured microbial communities make the direct functional profiling of microbial communities possible. The concept of community aggregated trait has been adapted from environmental and plant functional ecology to the framework of microbial ecology. Community aggregated traits are quantified from WGS data by computing the abundance of relevant marker genes. They can be used to study key processes at the ecosystem level and correlate environmental factors and ecosystem functions. In this paper we propose a novel model based approach to infer combinations of aggregated traits characterizing specific ecosystemic metabolic processes. We formulate a model of these Combined Aggregated Functional Traits (CAFTs) accounting for a hierarchical structure of genes, which are associated on microbial genomes, further linked at the ecosystem level by complex co-occurrences or interactions. The model is completed with constraints specifically designed to exploit available genomic information, in order to favor biologically relevant CAFTs. The CAFTs structure, as well as their intensity in the ecosystem, is obtained by solving a constrained Non-negative Matrix Factorization (NMF) problem. We developed a multicriteria selection procedure for the number of CAFTs. We illustrated our method on the modelling of ecosystemic functional traits of fiber degradation by the human gut microbiota. We used 1408 samples of gene abundances from several high-throughput sequencing projects and found that four CAFTs only were needed to represent the fiber degradation potential. This data reduction highlighted biologically consistent functional patterns while providing a high quality preservation of the original data. 
Our method is generic and can be applied to other metabolic processes in

  10. Inferring Aggregated Functional Traits from Metagenomic Data Using Constrained Non-negative Matrix Factorization: Application to Fiber Degradation in the Human Gut Microbiota.

    Directory of Open Access Journals (Sweden)

    Sébastien Raguideau

    2016-12-01

    Full Text Available Whole Genome Shotgun (WGS) metagenomics is increasingly used to study the structure and functions of complex microbial ecosystems, both from the taxonomic and functional point of view. Gene inventories of otherwise uncultured microbial communities make the direct functional profiling of microbial communities possible. The concept of community aggregated trait has been adapted from environmental and plant functional ecology to the framework of microbial ecology. Community aggregated traits are quantified from WGS data by computing the abundance of relevant marker genes. They can be used to study key processes at the ecosystem level and correlate environmental factors and ecosystem functions. In this paper we propose a novel model based approach to infer combinations of aggregated traits characterizing specific ecosystemic metabolic processes. We formulate a model of these Combined Aggregated Functional Traits (CAFTs) accounting for a hierarchical structure of genes, which are associated on microbial genomes, further linked at the ecosystem level by complex co-occurrences or interactions. The model is completed with constraints specifically designed to exploit available genomic information, in order to favor biologically relevant CAFTs. The CAFTs structure, as well as their intensity in the ecosystem, is obtained by solving a constrained Non-negative Matrix Factorization (NMF) problem. We developed a multicriteria selection procedure for the number of CAFTs. We illustrated our method on the modelling of ecosystemic functional traits of fiber degradation by the human gut microbiota. We used 1408 samples of gene abundances from several high-throughput sequencing projects and found that four CAFTs only were needed to represent the fiber degradation potential. This data reduction highlighted biologically consistent functional patterns while providing a high quality preservation of the original data. Our method is generic and can be applied to other

  11. Wavelet evolutionary network for complex-constrained portfolio rebalancing

    Science.gov (United States)

    Suganya, N. C.; Vijayalakshmi Pai, G. A.

    2012-07-01

    The portfolio rebalancing problem deals with resetting the proportions of different assets in a portfolio in response to changing market conditions. The constraints included in the portfolio rebalancing problem are basic, cardinality, bounding, class and proportional transaction cost. In this study, a new heuristic algorithm named wavelet evolutionary network (WEN) is proposed for the solution of the complex-constrained portfolio rebalancing problem. Initially, the empirical covariance matrix, one of the key inputs to the problem, is estimated using the wavelet shrinkage denoising technique to obtain better optimal portfolios. Secondly, the complex cardinality constraint is eliminated using k-means cluster analysis. Finally, the WEN strategy with logical procedures is employed to find the initial proportion of investment in the portfolio of assets and to rebalance them after a certain period. Experimental studies of WEN are undertaken on Bombay Stock Exchange, India (BSE200 index, period: July 2001-July 2006) and Tokyo Stock Exchange, Japan (Nikkei225 index, period: March 2002-March 2007) data sets. The results obtained using WEN are compared with those of its only existing counterpart, the Hopfield evolutionary network (HEN) strategy, and verify that WEN performs better than HEN. In addition, different performance metrics and data envelopment analysis are carried out to prove the robustness and efficiency of WEN over the HEN strategy.
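The cardinality-elimination step via k-means cluster analysis can be sketched as follows: assets are clustered on their return histories and one representative is retained per cluster before weights are computed. The data are simulated, and simple inverse-variance weighting stands in for the full WEN optimisation:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Simulated daily returns for 30 hypothetical assets over 250 trading days
returns = rng.normal(0.0005, 0.01, size=(250, 30))

# Cardinality constraint: hold at most k assets. Cluster assets on their
# return histories and keep the first asset found in each cluster.
k = 8
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(returns.T)
chosen = [int(np.flatnonzero(labels == c)[0]) for c in range(k)]

# Simple inverse-variance weights over the retained assets
var = returns[:, chosen].var(axis=0)
weights = (1.0 / var) / (1.0 / var).sum()
print(len(chosen), round(float(weights.sum()), 6))  # 8 assets, weights sum to 1
```

Clustering first shrinks the search space so that the subsequent weight optimisation no longer has to handle the combinatorial cardinality constraint directly.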

  12. 21 CFR 888.3310 - Hip joint metal/polymer constrained cemented or uncemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Hip joint metal/polymer constrained cemented or... Hip joint metal/polymer constrained cemented or uncemented prosthesis. (a) Identification. A hip joint... replace a hip joint. The device prevents dislocation in more than one anatomic plane and has components...

  13. Analysis and optimization of the TWINKLE factoring device

    NARCIS (Netherlands)

    Lenstra, A.K.; Shamir, A.; Preneel, B.

    2000-01-01

    We describe an enhanced version of the TWINKLE factoring device and analyse to what extent it can be expected to speed up the sieving step of the Quadratic Sieve and Number Field Sieve factoring algorithms. The bottom line of our analysis is that the TWINKLE-assisted factorization of 768-bit

  14. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-08

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA-LA model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the "Biosphere Model Report" in Figure 1-1, contain detailed descriptions of the model input parameters, their development, and the relationships between the parameters and specific features, events, and processes (FEPs). This report describes the biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario for the three climate states considered in the TSPA-LA, as well as conversion factors for evaluating compliance with the groundwater protection standard. The BDCFs will be used in performance assessment for calculating all-pathway annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle

  15. Nominal Performance Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA-LA model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the "Biosphere Model Report" in Figure 1-1, contain detailed descriptions of the model input parameters, their development, and the relationships between the parameters and specific features, events, and processes (FEPs). This report describes the biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario for the three climate states considered in the TSPA-LA, as well as conversion factors for evaluating compliance with the groundwater protection standard. The BDCFs will be used in performance assessment for calculating all-pathway annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity in groundwater and the annual dose

  16. Energy efficiency measures in China: A three-stage DEA analysis

    Science.gov (United States)

    Cai, Yu; Xiong, Siqin; Ma, Xiaoming

    2017-04-01

    This paper measures the energy efficiency of 30 regions in China during 2010-2014 using the three-stage data envelopment analysis (DEA) model. The results indicate that environmental factors and random error both have significant impacts on energy efficiency. After eliminating these influences, the results show that energy efficiency in developed regions is generally higher than that in undeveloped or resource-rich regions, and that low scale technical efficiency is the main constraining factor in inefficient regions. Based on these efficiency characteristics, this paper divides all regions into four types and provides differentiated energy strategies.
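For readers unfamiliar with DEA, the first stage of such an analysis is an ordinary linear program solved once per decision-making unit. The sketch below solves the input-oriented CCR envelopment LP with SciPy on invented data; the later stages of the three-stage method (regressing slacks on environmental factors and re-running) are not shown.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR DEA score for unit j0.
    X: (m_inputs, n_units), Y: (s_outputs, n_units). Score in (0, 1]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # minimize theta
    A_in = np.hstack([-X[:, [j0]], X])          # X @ lam <= theta * x0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # Y @ lam >= y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
    bounds = [(None, None)] + [(0, None)] * n   # theta free, lambda >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[0]

# Three units, one input, one output; unit 1 dominates the others.
X = np.array([[2.0, 1.0, 3.0]])   # inputs
Y = np.array([[1.0, 1.0, 1.0]])   # outputs
scores = [round(ccr_efficiency(X, Y, j), 3) for j in range(3)]
print(scores)   # unit 1 is efficient (score 1); the others are scaled down
```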

  17. Statistical mechanics of budget-constrained auctions

    OpenAIRE

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-01-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). Based on the cavity method of statistical mechanics, we introduce a message passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution,...

  18. Transforming Rubrics Using Factor Analysis

    Science.gov (United States)

    Baryla, Ed; Shelley, Gary; Trainor, William

    2012-01-01

    Student learning and program effectiveness are often assessed using rubrics. While much time and effort may go into their creation, it is equally important to assess how effective and efficient the rubrics actually are in terms of measuring competencies over a number of criteria. This study demonstrates the use of common factor analysis to identify…

  19. Constraining the interacting dark energy models from weak gravity conjecture and recent observations

    International Nuclear Information System (INIS)

    Chen Ximing; Wang Bin; Pan Nana; Gong Yungui

    2011-01-01

    We examine the effectiveness of the weak gravity conjecture in constraining the dark energy by comparing with observations. For general dark energy models with plausible phenomenological interactions between dark sectors, we find that although the weak gravity conjecture can constrain the dark energy, the constraint is looser than that from the observations.

  20. Theoretical calculation of reorganization energy for electron self-exchange reaction by constrained density functional theory and constrained equilibrium thermodynamics.

    Science.gov (United States)

    Ren, Hai-Sheng; Ming, Mei-Jun; Ma, Jian-Yi; Li, Xiang-Yuan

    2013-08-22

    Within the framework of constrained density functional theory (CDFT), the diabatic or charge-localized states of electron transfer (ET) have been constructed. Based on the diabatic states, the inner reorganization energy λin has been directly calculated. For the solvent reorganization energy λs, a novel and reasonable nonequilibrium solvation model is established by introducing a constrained equilibrium manipulation, and a new expression of λs has been formulated. It is found that λs is actually the cost of maintaining the residual polarization, which equilibrates with the extra electric field. On the basis of diabatic states constructed by CDFT, a numerical algorithm using the new formulations with the dielectric polarizable continuum model (D-PCM) has been implemented. As typical test cases, self-exchange ET reactions between tetracyanoethylene (TCNE) and tetrathiafulvalene (TTF) and their corresponding ionic radicals in acetonitrile are investigated. The calculated reorganization energies λ are 7293 cm⁻¹ for the TCNE/TCNE⁻ and 5939 cm⁻¹ for the TTF/TTF⁺ reactions, agreeing well with the available experimental results of 7250 cm⁻¹ and 5810 cm⁻¹, respectively.
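For context, the quantities in this abstract enter the standard Marcus-theory relations (a textbook result, not taken from this paper): the total reorganization energy is the sum of inner and solvent parts, and for a self-exchange reaction (ΔG⁰ = 0) the activation barrier reduces to λ/4:

```latex
\lambda = \lambda_{\mathrm{in}} + \lambda_{\mathrm{s}}, \qquad
\Delta G^{\ddagger} = \frac{\left(\Delta G^{0} + \lambda\right)^{2}}{4\lambda}
\quad\Longrightarrow\quad
\Delta G^{\ddagger}\Big|_{\Delta G^{0}=0} = \frac{\lambda}{4}.
```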

  1. A Factor Analysis of the BSRI and the PAQ.

    Science.gov (United States)

    Edwards, Teresa A.; And Others

    Factor analysis of the Bem Sex Role Inventory (BSRI) and the Personality Attributes Questionnaire (PAQ) was undertaken to study the independence of the masculine and feminine scales within each instrument. Both instruments were administered to undergraduate education majors. Analysis of primary first and second order factors of the BSRI indicated…

  2. Identification of noise in linear data sets by factor analysis

    International Nuclear Information System (INIS)

    Roscoe, B.A.; Hopke, Ph.K.

    1982-01-01

    A technique which has the ability to identify bad data points after the data have been generated is classical factor analysis. The ability of classical factor analysis to identify two different types of data errors makes it ideally suited for scanning large data sets. Since the results yielded by factor analysis indicate correlations between parameters, one must know something about the nature of the data set and the analytical techniques used to obtain it to confidently isolate errors. (author)

  3. A Local Weighted Nearest Neighbor Algorithm and a Weighted and Constrained Least-Squared Method for Mixed Odor Analysis by Electronic Nose Systems

    Directory of Open Access Journals (Sweden)

    Jyuo-Min Shyu

    2010-11-01

    A great deal of work has been done to develop techniques for odor analysis by electronic nose systems. These analyses mostly focus on identifying a particular odor by comparison with a known odor dataset. However, in many situations, it would be more practical if each individual odorant could be determined directly. This paper proposes two methods for such odor component analysis for electronic nose systems. First, a K-nearest neighbor (KNN)-based local weighted nearest neighbor (LWNN) algorithm is proposed to determine the components of an odor. According to the component analysis, the odor training data are first categorized into several groups, each of which is represented by its centroid. The examined odor is then classified as the class of the nearest centroid. The distance between the examined odor and a centroid is calculated based on a weighting scheme, which captures the local structure of each predefined group. To further determine the concentration of each component, odor models are built by regression. Then, a weighted and constrained least-squares (WCLS) method is proposed to estimate the component concentrations. Experiments were carried out to assess the effectiveness of the proposed methods. The LWNN algorithm is able to classify mixed odors with different mixing ratios, while the WCLS method can provide good estimates of component concentrations.
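The WCLS idea can be sketched as a weighted, non-negativity-constrained least-squares fit: weight each sensor, then solve for the concentration vector subject to non-negativity. The sensor model, the weights, and the use of `scipy.optimize.nnls` here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical sensor model: each column of A is one odorant's response
# pattern across 6 sensors; a mixture gives b = A @ c + noise.
rng = np.random.default_rng(0)
A = rng.random((6, 3))
c_true = np.array([0.8, 0.0, 1.5])            # true concentrations
b = A @ c_true + rng.normal(scale=0.01, size=6)

# Weighted, constrained least squares: down-weight noisier sensors by w,
# then solve min ||W^(1/2) (A c - b)|| subject to c >= 0.
w = np.array([1.0, 1.0, 1.0, 1.0, 0.5, 0.5])  # per-sensor weights (assumed)
sw = np.sqrt(w)
c_hat, _ = nnls(A * sw[:, None], b * sw)
print(np.round(c_hat, 2))
```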

  4. Applications of a constrained mechanics methodology in economics

    International Nuclear Information System (INIS)

    Janova, Jitka

    2011-01-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the undergraduate level and (ii) to enable the students to gain a deeper understanding of the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using the economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economic growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approach. The specifics of the solution interpretation in economics compared to mechanics are discussed in detail, a discussion of the nonholonomic and vakonomic approaches to constrained problems in mechanics and economics is provided, and an economic interpretation of the Lagrange multipliers (possibly surprising for students of physics) is carefully explained. This paper can be used by undergraduate students of physics interested in interdisciplinary physics applications to gain an understanding of the current scientific approach to economics based on a physical background, or by university teachers as an attractive supplement to classical mechanics lessons.

  5. Applications of a constrained mechanics methodology in economics

    Science.gov (United States)

    Janová, Jitka

    2011-11-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the undergraduate level and (ii) to enable the students to gain a deeper understanding of the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using the economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economic growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approach. The specifics of the solution interpretation in economics compared to mechanics are discussed in detail, a discussion of the nonholonomic and vakonomic approaches to constrained problems in mechanics and economics is provided, and an economic interpretation of the Lagrange multipliers (possibly surprising for students of physics) is carefully explained. This paper can be used by undergraduate students of physics interested in interdisciplinary physics applications to gain an understanding of the current scientific approach to economics based on a physical background, or by university teachers as an attractive supplement to classical mechanics lessons.

  6. Applications of a constrained mechanics methodology in economics

    Energy Technology Data Exchange (ETDEWEB)

    Janova, Jitka, E-mail: janova@mendelu.cz [Department of Theoretical Physics and Astrophysics, Faculty of Science, Masaryk University, Kotlarska 2, 611 37 Brno (Czech Republic); Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemedelska 1, 613 00 Brno (Czech Republic)

    2011-11-15

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the undergraduate level and (ii) to enable the students to gain a deeper understanding of the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using the economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economic growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approach. The specifics of the solution interpretation in economics compared to mechanics are discussed in detail, a discussion of the nonholonomic and vakonomic approaches to constrained problems in mechanics and economics is provided, and an economic interpretation of the Lagrange multipliers (possibly surprising for students of physics) is carefully explained. This paper can be used by undergraduate students of physics interested in interdisciplinary physics applications to gain an understanding of the current scientific approach to economics based on a physical background, or by university teachers as an attractive supplement to classical mechanics lessons.

  7. "Factor Analysis Using ""R"""

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2013-02-01

    R (R Development Core Team, 2011) is a very powerful tool for analyzing data that is gaining in popularity due to its cost (it is free) and flexibility (it is open source). This article gives a general introduction to using R (i.e., loading the program, using functions, importing data). Then, using data from Canivez, Konold, Collins, and Wilson (2009), this article walks the user through how to use the program to conduct factor analysis, from both an exploratory and a confirmatory approach.
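Although the article works in R, the same exploratory step can be sketched in Python with scikit-learn's `FactorAnalysis`. The two-factor synthetic dataset and loading pattern below are invented for illustration and have no connection to the Canivez et al. data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic data: two latent factors driving six observed variables.
rng = np.random.default_rng(0)
F = rng.normal(size=(500, 2))                       # latent factor scores
L = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
              [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])  # true loading pattern
X = F @ L.T + rng.normal(scale=0.3, size=(500, 6))  # observed variables

# Exploratory factor analysis with two factors.
fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
loadings = fa.components_.T                         # (n_variables, n_factors)
print(loadings.shape)
```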

  8. Robust model predictive control for constrained continuous-time nonlinear systems

    Science.gov (United States)

    Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong

    2018-02-01

    In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees that the actual trajectory remains contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear-system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and applications to a cart-damper-spring system and a one-link robot manipulator.
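The tube idea the abstract describes can be caricatured on a linear toy system: a nominal trajectory evolves undisturbed, while a feedback term keeps the disturbed state inside a tube around it. The double-integrator model and the hand-picked stabilizing gain below are assumptions for illustration only, not the paper's nonlinear design.

```python
import numpy as np

# Discrete double integrator x+ = A x + B u + w, with |w| bounded.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[-10.0, -5.0]])      # stabilizing feedback (chosen by hand)

rng = np.random.default_rng(0)
x_nom = np.array([[1.0], [0.0]])   # nominal (disturbance-free) state
x = x_nom.copy()                   # actual state, hit by disturbances
max_err = 0.0
for _ in range(200):
    u_nom = K @ x_nom                       # nominal control toward zero
    u = u_nom + K @ (x - x_nom)             # tube feedback around nominal
    w = rng.uniform(-0.01, 0.01, size=(2, 1))
    x_nom = A @ x_nom + B @ u_nom           # nominal evolves undisturbed
    x = A @ x + B @ u + w                   # actual evolves with disturbance
    max_err = max(max_err, float(np.linalg.norm(x - x_nom)))
print(round(max_err, 3))                    # tube radius stays small
```

The error dynamics are e⁺ = (A + BK)e + w, so with A + BK stable and w bounded, the tracking error stays bounded while the nominal state converges to zero.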

  9. Extended shadow test approach for constrained adaptive testing

    NARCIS (Netherlands)

    Veldkamp, Bernard P.; Ariel, A.

    2002-01-01

    Several methods have been developed for use in constrained adaptive testing. Item pool partitioning, multistage testing, and testlet-based adaptive testing are methods that perform well for specific cases of adaptive testing. The weighted deviation model and the Shadow Test approach can be more

  10. Quantification of GABAA receptors in the rat brain with [123I]Iomazenil SPECT from factor analysis-denoised images

    International Nuclear Information System (INIS)

    Tsartsalis, Stergios; Moulin-Sallanon, Marcelle; Dumas, Noé; Tournier, Benjamin B.; Ghezzi, Catherine; Charnay, Yves; Ginovart, Nathalie; Millet, Philippe

    2014-01-01

    Purpose: In vivo imaging of GABAA receptors is essential for the comprehension of psychiatric disorders in which the GABAergic system is implicated. Small-animal SPECT provides a modality for in vivo imaging of the GABAergic system in rodents using [123I]Iomazenil, an antagonist of the GABAA receptor. The goal of this work is to describe and evaluate different quantitative reference tissue methods that enable reliable binding potential (BP) estimations to be obtained in the rat brain. Methods: Five male Sprague-Dawley rats were used for [123I]Iomazenil brain SPECT scans. Binding parameters were obtained with a one-tissue compartment model (1TC), a constrained two-tissue compartment model (2TCc), the two-step Simplified Reference Tissue Model (SRTM2), Logan graphical analysis and analysis of delayed-activity images. In addition, we employed factor analysis (FA) to deal with noise in the data. Results: BPND obtained with SRTM2, Logan graphical analysis and delayed-activity analysis was highly correlated with BPF values obtained with the 2TCc (r = 0.954 and 0.945, respectively). BPND values were also highly correlated between the 2TCc and SRTM2 in raw and FA-denoised images (r = 0.961 and 0.909, respectively). Longer acquisitions are needed to obtain stable BPND values from raw images, while scans of only 70 min are sufficient from FA-denoised images. FA-denoised images are also associated with significantly lower standard errors of the 2TCc and SRTM2 BP values. Conclusion: Reference tissue methods such as SRTM2 and Logan graphical analysis can provide equally reliable BPND values from rat brain [123I]Iomazenil SPECT. Acquisitions, however, can be much less time-consuming either with analysis of delayed activity obtained from a 20-minute scan 50 min after tracer injection, or with FA denoising of images

  11. 21 CFR 888.3560 - Knee joint patellofemorotibial polymer/metal/polymer semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint patellofemorotibial polymer/metal... Devices § 888.3560 Knee joint patellofemorotibial polymer/metal/polymer semi-constrained cemented prosthesis. (a) Identification. A knee joint patellofemorotibial polymer/metal/polymer semi-constrained...

  12. The Recoverability of P-Technique Factor Analysis

    Science.gov (United States)

    Molenaar, Peter C. M.; Nesselroade, John R.

    2009-01-01

    It seems that just when we are about to lay P-technique factor analysis finally to rest as obsolete because of newer, more sophisticated multivariate time-series models using latent variables--dynamic factor models--it rears its head to inform us that an obituary may be premature. We present the results of some simulations demonstrating that even…

  13. Human factor analysis and preventive countermeasures in nuclear power plant

    International Nuclear Information System (INIS)

    Li Ye

    2010-01-01

    Based on human error analysis theory and the characteristics of maintenance in a nuclear power plant, the human factors of maintenance in NPPs are divided into three different areas: human, technology, and organization. The human area comprises individual factors, including psychological factors, physiological characteristics, health status, level of knowledge, and interpersonal skills; the technical factors include technology, equipment, tools, working order, etc.; the organizational factors include management, information exchange, education, working environment, team building, leadership management, etc. The analysis found that organizational factors can directly or indirectly affect the behavior of staff and the technical factors, and are the most basic human error factors. On this basis, countermeasures for reducing human error in nuclear power plants are proposed. (authors)

  14. The bounds of feasible space on constrained nonconvex quadratic programming

    Science.gov (United States)

    Zhu, Jinghao

    2008-03-01

    This paper presents a method to estimate the bounds of the radius of the feasible space for a class of constrained nonconvex quadratic programs. Results show that one may compute a bound of the radius of the feasible space by a linear program, which is known to be a P-problem [N. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica 4 (1984) 373-395]. It is proposed that one apply this method together with the canonical dual transformation [D.Y. Gao, Canonical duality theory and solutions to constrained nonconvex quadratic programming, J. Global Optimization 29 (2004) 377-399] for solving a standard quadratic programming problem.
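One simple way to bound the radius of a polyhedral feasible set by linear programming, in the spirit of the abstract, is to bound each coordinate with two LPs and take the norm of the enclosing box's corner. This particular construction is an illustrative assumption, not necessarily the paper's estimator.

```python
import numpy as np
from scipy.optimize import linprog

def box_radius_bound(A, b):
    """Bound the radius of {x : A x <= b} by solving 2n LPs that bound
    each coordinate, then taking the norm of the enclosing box's corner."""
    n = A.shape[1]
    lo, hi = np.empty(n), np.empty(n)
    for i in range(n):
        c = np.zeros(n)
        c[i] = 1.0
        # min x_i and max x_i over the feasible set (x unrestricted in sign):
        lo[i] = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * n).fun
        hi[i] = -linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * n).fun
    return float(np.linalg.norm(np.maximum(np.abs(lo), np.abs(hi))))

# Unit box |x_i| <= 1 in 2D: the bound is the corner norm sqrt(2).
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
print(round(box_radius_bound(A, b), 3))   # -> 1.414
```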

  15. The Compton-thick Growth of Supermassive Black Holes constrained

    Science.gov (United States)

    Buchner, J.; Georgakakis, A.; Nandra, K.

    2017-10-01

    A heavily obscured growth phase of supermassive black holes (SMBH) is thought to be important in the co-evolution with galaxies. X-rays provide a clean and efficient selection of unobscured and obscured AGN. Recent work with deeper observations and improved analysis methodology allowed us to extend constraints to Compton-thick number densities. We present the first luminosity function of Compton-thick AGN at z=0.5-4 and constrain the overall mass density locked into black holes over cosmic time, a fundamental constraint for cosmological simulations. Recent studies including ours find that the obscuration is redshift and luminosity-dependent in a complex way, which rules out entire sets of obscurer models. A new paradigm, the radiation-lifted torus model, is proposed, in which the obscurer is Eddington-rate dependent and accretion creates and displaces torus clouds. We place observational limits on the behaviour of this mechanism.

  16. Topology Optimization for Minimizing the Resonant Response of Plates with Constrained Layer Damping Treatment

    Directory of Open Access Journals (Sweden)

    Zhanpeng Fang

    2015-01-01

    A topology optimization method is proposed to minimize the resonant response of plates with constrained layer damping (CLD) treatment under specified broadband harmonic excitations. The topology optimization problem is formulated with the square of the displacement resonant response in the frequency domain at a specified point as the objective function. Two sensitivity analysis methods are investigated and discussed. The derivative of the modal damping ratio is not considered in the conventional sensitivity analysis method; an improved sensitivity analysis method that accounts for this derivative is therefore developed to improve the computational accuracy of the sensitivity. The evolutionary structural optimization (ESO) method is used to search for the optimal layout of CLD material on plates. Numerical examples and experimental results show that the optimal layout of the CLD treatment obtained from the proposed topology optimization, using either the conventional or the improved sensitivity analysis, can reduce the displacement resonant response. However, the optimization using the improved sensitivity analysis produces a higher modal damping ratio and a smaller displacement resonant response than that using the conventional sensitivity analysis.

  17. Seismic Input Motion Determined from a Surface-Downhole Pair of Sensors: A Constrained Deconvolution Approach

    OpenAIRE

    Dino Bindi; Stefano Parolai; M. Picozzi; A. Ansal

    2010-01-01

    We apply a deconvolution approach to the problem of determining the input motion at the base of an instrumented borehole using only a pair of recordings, one at the borehole surface and the other at its bottom. To stabilize the bottom-to-surface spectral ratio, we apply an iterative regularization algorithm that allows us to constrain the solution to be positive and to have a finite time duration. Through the analysis of synthetic data, we show that the method is capab...

  18. Binary classification posed as a quadratically constrained quadratic ...

    Indian Academy of Sciences (India)

    Binary classification is posed as a quadratically constrained quadratic problem and solved using the proposed method. Each class in the binary classification problem is modeled as a multidimensional ellipsoid to form a quadratic constraint in the problem. Particle swarms help in determining the optimal hyperplane or ...

  19. Security constrained optimal power flow by modern optimization tools

    African Journals Online (AJOL)

    Security constrained optimal power flow by modern optimization tools. ... International Journal of Engineering, Science and Technology ...

  20. Subspace Barzilai-Borwein Gradient Method for Large-Scale Bound Constrained Optimization

    International Nuclear Information System (INIS)

    Xiao Yunhai; Hu Qingjie

    2008-01-01

    An active-set subspace Barzilai-Borwein gradient algorithm for large-scale bound constrained optimization is proposed. The active sets are estimated by an identification technique. The search direction consists of two parts: some of the components are simply defined; the other components are determined by the Barzilai-Borwein gradient method. In this work, a nonmonotone line search strategy that guarantees global convergence is used. Preliminary numerical results show that the proposed method is promising, and competitive with the well-known method SPG on a subset of bound constrained problems from the CUTEr collection
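A minimal sketch of the Barzilai-Borwein ingredient, here as a projected BB gradient method on a box-constrained convex quadratic. The active-set identification and nonmonotone line search of the actual algorithm are omitted, and the test problem is invented.

```python
import numpy as np

def projected_bb(grad, x0, lo, hi, iters=500):
    """Projected Barzilai-Borwein gradient method for bound constraints:
    x_{k+1} = P_[lo,hi](x_k - a_k * grad(x_k)), with the BB step
    a_k = (s^T s) / (s^T y) from the last step pair s, y."""
    x = np.clip(x0, lo, hi)
    g = grad(x)
    a = 1e-2                                   # initial step length
    for _ in range(iters):
        x_new = np.clip(x - a * g, lo, hi)     # gradient step + projection
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:
            a = (s @ s) / (s @ y)              # BB1 step length
        x, g = x_new, g_new
    return x

# Minimize 0.5 x^T Q x - c^T x over the box [0, 1]^n.
rng = np.random.default_rng(0)
n = 10
M = rng.normal(size=(n, n))
Q = M.T @ M + np.eye(n)                        # positive definite
c = rng.normal(size=n)
grad = lambda x: Q @ x - c
x_star = projected_bb(grad, np.zeros(n), 0.0, 1.0)
# KKT check: the projected gradient should (nearly) vanish at the solution.
pg = x_star - np.clip(x_star - grad(x_star), 0.0, 1.0)
print(round(float(np.linalg.norm(pg)), 6))
```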

  1. Spatial factors as contextual qualifiers of information seeking

    Directory of Open Access Journals (Sweden)

    R. Savolainen

    2006-01-01

    Introduction. This paper investigates the ways in which spatial factors have been approached in information-seeking studies. The main attention is focused on studies discussing information seeking at the level of source selection and use. Method. Conceptual analysis of about 100 articles and books thematizing spatial issues of information seeking. For reasons of research economy, the main attention was paid to studies on everyday-life information seeking. Results. Three major viewpoints were identified with regard to the degree of objectivity of spatial factors. The objectifying approach conceives of spatial factors as external and entity-like qualifiers that primarily constrain information seeking. The realistic-pragmatic approach emphasizes the ways in which the availability of information sources in different places, such as daily work environments, orients information seeking. The perspectivist approach focuses on how people subjectively assess the significance of various sources by means of spatial constructs such as information horizons. Conclusion. Spatial factors are centrally important contextual qualifiers of information seeking. There is a need to further explore the potential of the above viewpoints by relating the spatial and temporal factors of information seeking.

  2. Constraining dark energy with clusters: Complementarity with other probes

    International Nuclear Information System (INIS)

    Cunha, Carlos; Huterer, Dragan; Frieman, Joshua A.

    2009-01-01

    The Figure of Merit Science Working Group recently forecast the constraints on dark energy that will be achieved prior to the Joint Dark Energy Mission by ground-based experiments that exploit baryon acoustic oscillations, type Ia supernovae, and weak gravitational lensing. We show that cluster counts from ongoing and near-future surveys should provide robust, complementary dark energy constraints. In particular, we find that optimally combined optical and Sunyaev-Zel'dovich effect cluster surveys should improve the Dark Energy Task Force figure of merit for pre-Joint Dark Energy Mission projects by a factor of 2 even without prior knowledge of the nuisance parameters in the cluster mass-observable relation. Comparable improvements are achieved in the forecast precision of parameters specifying the principal component description of the dark energy equation of state parameter, as well as in the growth index γ. These results indicate that cluster counts can play an important complementary role in constraining dark energy and modified gravity even if the associated systematic errors are not strongly controlled.

  3. THE DUBINS TRAVELING SALESMAN PROBLEM WITH CONSTRAINED COLLECTING MANEUVERS

    Directory of Open Access Journals (Sweden)

    Petr Váňa

    2016-11-01

    Full Text Available In this paper, we introduce a variant of the Dubins traveling salesman problem (DTSP) called the Dubins traveling salesman problem with constrained collecting maneuvers (DTSP-CM). In contrast to the ordinary formulation of the DTSP, in the proposed DTSP-CM the vehicle is requested to visit each target by a specified collecting maneuver to accomplish the mission. The proposed problem formulation is motivated by scenarios with unmanned aerial vehicles where particular maneuvers are necessary for accomplishing the mission, such as object dropping or data collection with a sensor sensitive to changes in vehicle heading. We consider existing methods for the DTSP and propose modifications that allow these methods to address a variant of the introduced DTSP-CM in which the collecting maneuvers are constrained to straight line segments.

  4. Current-State Constrained Filter Bank for Wald Testing of Spacecraft Conjunctions

    Science.gov (United States)

    Carpenter, J. Russell; Markley, F. Landis

    2012-01-01

    We propose a filter bank consisting of an ordinary current-state extended Kalman filter, and two similar but constrained filters: one is constrained by a null hypothesis that the miss distance between two conjuncting spacecraft is inside their combined hard body radius at the predicted time of closest approach, and one is constrained by an alternative complementary hypothesis. The unconstrained filter is the basis of an initial screening for close approaches of interest. Once the initial screening detects a possibly risky conjunction, the unconstrained filter also governs measurement editing for all three filters, and predicts the time of closest approach. The constrained filters operate only when conjunctions of interest occur. The computed likelihoods of the innovations of the two constrained filters form a ratio for a Wald sequential probability ratio test. The Wald test guides risk mitigation maneuver decisions based on explicit false alarm and missed detection criteria. Since only current-state Kalman filtering is required to compute the innovations for the likelihood ratio, the present approach does not require the mapping of probability density forward to the time of closest approach. Instead, the hard-body constraint manifold is mapped to the filter update time by applying a sigma-point transformation to a projection function. Although many projectors are available, we choose one based on Lambert-style differential correction of the current-state velocity. We have tested our method using a scenario based on the Magnetospheric Multi-Scale mission, scheduled for launch in late 2014. This mission involves formation flight in highly elliptical orbits of four spinning spacecraft equipped with antennas extending 120 meters tip-to-tip. Eccentricities range from 0.82 to 0.91, and close approaches generally occur in the vicinity of perigee, where rapid changes in geometry may occur. Testing the method using two 12,000-case Monte Carlo simulations, we found the

  5. Integrating human factors into process hazard analysis

    International Nuclear Information System (INIS)

    Kariuki, S.G.; Loewe, K.

    2007-01-01

    A comprehensive process hazard analysis (PHA) needs to address human factors. This paper describes an approach that systematically identifies human error in process design and the human factors that influence its production and propagation. It is deductive in nature and therefore considers human error as a top event. The combinations of different factors that may lead to this top event are analysed. The method is qualitative in nature and is used in combination with other PHA methods. It has the advantage that it does not treat operator error as the sole contributor to human failure within a system, but rather as a combination of all underlying factors.

  6. In vitro transcription of a torsionally constrained template

    DEFF Research Database (Denmark)

    Bentin, Thomas; Nielsen, Peter E

    2002-01-01

    of torsionally constrained DNA by free RNAP. We asked whether or not a newly synthesized RNA chain would limit transcription elongation. For this purpose we developed a method to immobilize covalently closed circular DNA to streptavidin-coated beads via a peptide nucleic acid (PNA)-biotin conjugate in principle...

  7. GPS-based ionospheric tomography with a constrained adaptive ...

    Indian Academy of Sciences (India)

    According to the continuous smoothness of the variations of ionospheric electron density (IED) among neighbouring voxels, a Gaussian weighting function is introduced to constrain the tomography system in the new method. It can resolve the dependence on the initial values for those voxels without any GPS rays traversing them ...

  8. On Segal-Wilson's construction for the τ-functions of the constrained KP hierarchies

    International Nuclear Information System (INIS)

    Zhang You-jin.

    1994-06-01

    In this letter we study the constrained KP hierarchies by employing Segal-Wilson's theory on the τ-functions of the KP hierarchy. We first describe the elements of the Grassmannian which correspond to solutions of the constrained KP hierarchy, and then we show how to construct its rational and soliton solutions from these elements of the Grassmannian. (author). 10 refs

  9. Geometrically constrained kinematic global navigation satellite systems positioning: Implementation and performance

    Science.gov (United States)

    Asgari, Jamal; Mohammadloo, Tannaz H.; Amiri-Simkooei, Ali Reza

    2015-09-01

    GNSS kinematic techniques are capable of providing precise coordinates in extremely short observation time-spans. These methods usually determine the coordinates of an unknown station with respect to a reference one. To enhance the precision, accuracy, reliability and integrity of the estimated unknown parameters, GNSS kinematic equations can be augmented by possible constraints. Such constraints may be derived from the geometric relation of the receiver positions in motion. This contribution presents the formulation of constrained kinematic global navigation satellite systems positioning. Constraints effectively restrict the definition domain of the unknown parameters from the three-dimensional space to a subspace defined by the equation of motion. To test the concept of the constrained kinematic positioning method, the equation of a circle is employed as a constraint. A device capable of moving on a circle was made, and the observations from 11 positions on the circle were analyzed. Relative positioning was conducted by considering the center of the circle as the reference station. The equation of the receiver's motion was rewritten in the ECEF coordinate system. Special attention is drawn to how a constraint is applied to kinematic positioning. Implementing the constraint in the positioning process provides much more precise results compared to the unconstrained case. This has been verified by the results obtained from the covariance matrix of the estimated parameters as well as by the empirical results using kinematic positioning samples. The theoretical standard deviations of the horizontal components are reduced by a factor ranging from 1.24 to 2.64. The improvement in the empirical standard deviation of the horizontal components ranges from 1.08 to 2.2.
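
The core idea of restricting the solution to the motion manifold can be illustrated with a toy sketch (an invented planar example, not the paper's ECEF formulation): hypothetical noisy position fixes of a receiver known to move on a circle are projected back onto that circle, which removes the error component normal to the constraint surface and so tightens the solution.

```python
import numpy as np

rng = np.random.default_rng(42)
center, radius = np.array([0.0, 0.0]), 10.0

# Unconstrained "kinematic" fixes: true positions on the circle plus noise.
theta = rng.uniform(0.0, 2.0 * np.pi, 500)
truth = center + radius * np.column_stack([np.cos(theta), np.sin(theta)])
fixes = truth + rng.normal(0.0, 0.3, truth.shape)

# Constrained solution: project each fix back onto the known circle,
# i.e. restrict the estimate to the motion manifold.
rel = fixes - center
constrained = center + radius * rel / np.linalg.norm(rel, axis=1, keepdims=True)

rmse = lambda est: np.sqrt(np.mean(np.sum((est - truth) ** 2, axis=1)))
print(rmse(fixes) > rmse(constrained))  # the constraint tightens the solution
```

The projection discards the radial error component, which for isotropic noise roughly halves the error variance, qualitatively matching the 1.08 to 2.64 improvement factors reported in the abstract.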

  10. Applications of factor analysis to electron and ion beam surface techniques

    International Nuclear Information System (INIS)

    Solomon, J.S.

    1987-01-01

    Factor analysis, a mathematical technique for extracting chemical information from matrices of data, is used to enhance Auger electron spectroscopy (AES), core level electron energy loss spectroscopy (EELS), ion scattering spectroscopy (ISS), and secondary ion mass spectroscopy (SIMS) in studies of interfaces, thin films, and surfaces. Several examples of factor analysis enhancement of chemical bonding variations in thin films and at interfaces studied with AES and SIMS are presented. Factor analysis is also shown to be of great benefit in quantifying electron and ion beam doses required to induce surface damage. Finally, examples are presented of the use of factor analysis to reconstruct elemental profiles when peaks of interest overlap each other during the course of depth profile analysis. (author)

  11. Online constrained model-based reinforcement learning

    CSIR Research Space (South Africa)

    Van Niekerk, B

    2017-08-01

    Full Text Available Constrained Model-based Reinforcement Learning Benjamin van Niekerk School of Computer Science University of the Witwatersrand South Africa Andreas Damianou∗ Amazon.com Cambridge, UK Benjamin Rosman Council for Scientific and Industrial Research, and School... MULTIPLE SHOOTING Using direct multiple shooting (Bock and Plitt, 1984), problem (1) can be transformed into a structured nonlinear program (NLP). First, the time horizon [t0, t0 + T ] is partitioned into N equal subintervals [tk, tk+1] for k = 0...

  12. Hydrologic and hydraulic flood forecasting constrained by remote sensing data

    Science.gov (United States)

    Li, Y.; Grimaldi, S.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2017-12-01

    Flooding is one of the most destructive natural disasters, resulting in many deaths and billions of dollars of damages each year. An indispensable tool to mitigate the effect of floods is to provide accurate and timely forecasts. An operational flood forecasting system typically consists of a hydrologic model, converting rainfall data into flood volumes entering the river system, and a hydraulic model, converting these flood volumes into water levels and flood extents. Such a system is prone to various sources of uncertainty from the initial conditions, meteorological forcing, topographic data, model parameters and model structure. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using ground-based streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed remote sensing (RS) data offers new opportunities to improve flood forecasting skill. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture to constrain a hydrologic model, and 2) RS flood extent and level to constrain a hydraulic model. The GRKAL hydrological model is calibrated through a joint calibration scheme using both ground-based streamflow and RS soil moisture observations. A lag-aware data assimilation approach is tested through a set of synthetic experiments to integrate RS soil moisture to constrain the streamflow forecasting in real time. The hydraulic model is LISFLOOD-FP, which solves the 2-dimensional inertial approximation of the Shallow Water Equations. Gauged water level time series and RS-derived flood extent and levels are used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrains the parameter space will be discussed.

  13. An SPSS R-Menu for Ordinal Factor Analysis

    Directory of Open Access Journals (Sweden)

    Mario Basto

    2012-01-01

    Full Text Available Exploratory factor analysis is a widely used statistical technique in the social sciences. It attempts to identify underlying factors that explain the pattern of correlations within a set of observed variables. A statistical software package is needed to perform the calculations. However, there are some limitations with popular statistical software packages, like SPSS. The R programming language is a free software environment for statistical and graphical computing. It offers many packages written by contributors from all over the world and programming resources that allow it to overcome the dialog limitations of SPSS. This paper offers an SPSS dialog written in the R programming language with the help of some packages, so that researchers with little or no knowledge of programming, or those who are accustomed to making their calculations based on statistical dialogs, have more options when applying factor analysis to their data and hence can adopt a better approach when dealing with ordinal, Likert-type data.

  14. Slow logarithmic relaxation in models with hierarchically constrained dynamics

    OpenAIRE

    Brey, J. J.; Prados, A.

    2000-01-01

    A general kind of models with hierarchically constrained dynamics is shown to exhibit logarithmic anomalous relaxation, similarly to a variety of complex strongly interacting materials. The logarithmic behavior describes most of the decay of the response function.

  15. A chance-constrained stochastic approach to intermodal container routing problems.

    Science.gov (United States)

    Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer times, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of the stochastic variables and chance constraints on the optimal solution and total cost.
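
To illustrate the kind of chance constraint involved (a simplified sketch, not the paper's hybrid heuristic): if a leg's travel time is assumed normally distributed, an on-time-delivery constraint of the form P(travel_time ≤ deadline) ≥ α has a well-known deterministic equivalent obtained from the normal quantile.

```python
from statistics import NormalDist

def deterministic_deadline(mean_travel, std_travel, alpha):
    """Deterministic equivalent of the chance constraint
    P(travel_time <= deadline) >= alpha for normal travel time:
    the route must budget mean + z_alpha * std for this leg."""
    z = NormalDist().inv_cdf(alpha)  # standard normal quantile z_alpha
    return mean_travel + z * std_travel

# A route is feasible at the 95% service level only if the buffered
# travel time of each leg fits within its delivery window.
budget = deterministic_deadline(mean_travel=40.0, std_travel=5.0, alpha=0.95)
print(round(budget, 2))  # ≈ 48.22
```

Raising the pre-specified probability α inflates the buffer, which is exactly the trade-off between service level and expected total cost that the two case studies in the abstract explore.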

  16. Formal language constrained path problems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context free language is solvable efficiently in polynomial time, when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth bounded graphs, they show that (i) the problem of finding a regular language constrained simple path between source and a destination is solvable in polynomial time and (ii) the extension to finding context free language constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
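
The polynomial-time result for regular-language constraints rests on a standard product construction: run a shortest-path search over pairs (graph vertex, automaton state), advancing the automaton on each edge label. A minimal sketch with invented data (not from the paper):

```python
import heapq

def rl_constrained_shortest_path(graph, dfa, start, accept, source, target):
    """Shortest path whose edge-label sequence is accepted by a DFA:
    Dijkstra on the product of the labeled graph and the automaton.
    graph: {u: [(v, label, weight), ...]}
    dfa:   {(state, label): next_state} (missing key = no transition)
    """
    dist = {(source, start): 0.0}
    pq = [(0.0, source, start)]
    while pq:
        d, u, q = heapq.heappop(pq)
        if u == target and q in accept:
            return d
        if d > dist.get((u, q), float("inf")):
            continue
        for v, label, w in graph.get(u, []):
            q2 = dfa.get((q, label))
            if q2 is None:
                continue  # edge label not allowed by the language here
            nd = d + w
            if nd < dist.get((v, q2), float("inf")):
                dist[(v, q2)] = nd
                heapq.heappush(pq, (nd, v, q2))
    return None  # no feasible path

# Mode-choice language: any number of 'road' edges, then exactly one 'rail'.
graph = {
    "A": [("B", "road", 1.0), ("C", "rail", 5.0)],
    "B": [("C", "rail", 1.0), ("C", "road", 0.5)],
}
dfa = {(0, "road"): 0, (0, "rail"): 1}
print(rl_constrained_shortest_path(graph, dfa, 0, {1}, "A", "C"))  # 2.0
```

The direct rail edge A→C (cost 5.0) is beaten by road-then-rail A→B→C (cost 2.0), while the cheaper all-road path is rejected because it never takes the required rail leg.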

  17. Applications of a Constrained Mechanics Methodology in Economics

    Science.gov (United States)

    Janova, Jitka

    2011-01-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the…

  18. Vacuum expectation values in a scalar constrained theory

    International Nuclear Information System (INIS)

    Alonso, F.; Julve, J.; Tiemblo, A.

    1985-01-01

    A class of finite Green functions in the context of a scalar constrained theory is studied. In a particular model the one-point GFs show that the vacuum expectation values for some fields vanish while one of them remains finite, a feature exhibited by the Goldstone and Higgs fields respectively. (orig.)

  19. Effective Teaching of Economics: A Constrained Optimization Problem?

    Science.gov (United States)

    Hultberg, Patrik T.; Calonge, David Santandreu

    2017-01-01

    One of the fundamental tenets of economics is that decisions are often the result of optimization problems subject to resource constraints. Consumers optimize utility, subject to constraints imposed by prices and income. As economics faculty, instructors attempt to maximize student learning while being constrained by their own and students'…

  20. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on March 3, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate Q1 and Q2 in analysis/reanalysis, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid scale upward transport of moist static energy in the lower troposphere.
With the uncertainties from large-scale forcing data and observation specified, we compare SCM results and observations and find that models have large biases on cloud properties which could not be fully explained by the uncertainty from the large-scale forcing

  1. Metal artifact reduction in x-ray computed tomography (CT) by constrained optimization

    International Nuclear Information System (INIS)

    Zhang Xiaomeng; Wang Jing; Xing Lei

    2011-01-01

    Purpose: The streak artifacts caused by metal implants have long been recognized as a problem that limits various applications of CT imaging. In this work, the authors propose an iterative metal artifact reduction algorithm based on constrained optimization. Methods: After the shape and location of metal objects in the image domain are determined automatically by the binary metal identification algorithm and the segmentation of ''metal shadows'' in the projection domain is done, constrained optimization is used for image reconstruction. It minimizes a predefined function that reflects a priori knowledge of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available metal-shadow-excluded projection data, with image non-negativity enforced. The minimization problem is solved through the alternation of projection-onto-convex-sets and the steepest gradient descent of the objective function. The constrained optimization algorithm is evaluated with a penalized smoothness objective. Results: The study shows that the proposed method is capable of significantly reducing metal artifacts, suppressing noise, and improving soft-tissue visibility. It outperforms FBP-type methods as well as ART and EM methods, and yields artifact-free images. Conclusions: Constrained optimization is an effective way to deal with CT reconstruction with embedded metal objects. Although the method is presented in the context of metal artifacts, it is applicable to general ''missing data'' image reconstruction problems.

  2. Conditions for the Solvability of the Linear Programming Formulation for Constrained Discounted Markov Decision Processes

    Energy Technology Data Exchange (ETDEWEB)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Institut de Mathématiques de Bordeaux, INRIA Bordeaux Sud Ouest, Team: CQFD, and IMB (France); Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es [UNED, Department of Statistics and Operations Research (Spain)

    2016-08-15

    We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
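
The linear programming formulation referred to here optimizes over discounted occupation measures x(s,a): discounted cost is linear in x, the dynamics become flow-balance equalities, and the extra cost criterion becomes a linear inequality. A finite toy instance (the two-state MDP below is an invented example, far simpler than the Borel-space setting of the paper) can be solved with an off-the-shelf LP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Two states, two actions; action 0 stays put, action 1 switches state.
gamma, mu = 0.9, np.array([1.0, 0.0])   # discount factor, initial distribution
P = np.zeros((2, 2, 2))                 # transition tensor P[s, a, s']
P[0, 0, 0] = P[1, 0, 1] = 1.0           # stay
P[0, 1, 1] = P[1, 1, 0] = 1.0           # switch
c = np.array([1.0, 2.0, 0.0, 0.0])      # running cost c(s,a), flattened as s*2+a
d = np.array([0.0, 0.0, 1.0, 1.0])      # constraint cost: charged while in state 1
budget = 5.0                            # bound on discounted constraint cost

# Flow balance on occupation measures, one equality per state s':
#   sum_a x(s',a) - gamma * sum_{s,a} P(s'|s,a) x(s,a) = mu(s')
A_eq = np.zeros((2, 4))
for sp in range(2):
    for s in range(2):
        for a in range(2):
            A_eq[sp, s * 2 + a] = (1.0 if s == sp else 0.0) - gamma * P[s, a, sp]

res = linprog(c, A_ub=[d], b_ub=[budget], A_eq=A_eq, b_eq=mu, bounds=(0, None))
print(round(res.fun, 4))  # ≈ 5.5556 (= 50/9)
```

Without the budget the optimal policy parks in the zero-cost state 1 forever; the constraint forces a randomized mix of staying and switching, and the optimal stationary policy can be read off as x(s,a) normalized over a for each state.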

  3. Disruptive Event Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-07-21

    This analysis report, ''Disruptive Event Biosphere Dose Conversion Factor Analysis'', is one of the technical reports containing documentation of the ERMYN (Environmental Radiation Model for Yucca Mountain Nevada) biosphere model for the geologic repository at Yucca Mountain, its input parameters, and the application of the model to perform the dose assessment for the repository. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of the two reports that develop biosphere dose conversion factors (BDCFs), which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the conceptual model as well as the mathematical model and lists its input parameters. Model input parameters are developed and described in detail in five analysis report (BSC 2003 [DIRS 160964], BSC 2003 [DIRS 160965], BSC 2003 [DIRS 160976], BSC 2003 [DIRS 161239], and BSC 2003 [DIRS 161241]). The objective of this analysis was to develop the BDCFs for the volcanic ash exposure scenario and the dose factors (DFs) for calculating inhalation doses during volcanic eruption (eruption phase of the volcanic event). The volcanic ash exposure scenario is hereafter referred to as the volcanic ash scenario. For the volcanic ash scenario, the mode of radionuclide release into the biosphere is a volcanic eruption through the repository with the resulting entrainment of contaminated waste in the tephra and the subsequent atmospheric transport and dispersion of contaminated material in

  4. Disruptive Event Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    This analysis report, ''Disruptive Event Biosphere Dose Conversion Factor Analysis'', is one of the technical reports containing documentation of the ERMYN (Environmental Radiation Model for Yucca Mountain Nevada) biosphere model for the geologic repository at Yucca Mountain, its input parameters, and the application of the model to perform the dose assessment for the repository. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of the two reports that develop biosphere dose conversion factors (BDCFs), which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the conceptual model as well as the mathematical model and lists its input parameters. Model input parameters are developed and described in detail in five analysis report (BSC 2003 [DIRS 160964], BSC 2003 [DIRS 160965], BSC 2003 [DIRS 160976], BSC 2003 [DIRS 161239], and BSC 2003 [DIRS 161241]). The objective of this analysis was to develop the BDCFs for the volcanic ash exposure scenario and the dose factors (DFs) for calculating inhalation doses during volcanic eruption (eruption phase of the volcanic event). The volcanic ash exposure scenario is hereafter referred to as the volcanic ash scenario. For the volcanic ash scenario, the mode of radionuclide release into the biosphere is a volcanic eruption through the repository with the resulting entrainment of contaminated waste in the tephra and the subsequent atmospheric transport and dispersion of contaminated material in the biosphere. The biosphere process

  5. Constrained minimization in C++ environment

    International Nuclear Information System (INIS)

    Dymov, S.N.; Kurbatov, V.S.; Silin, I.N.; Yashchenko, S.V.

    1998-01-01

    Based on the ideas proposed by one of the authors (I.N. Silin), suitable software was developed for constrained data fitting. Constraints may be of arbitrary type: equalities and inequalities. The simplest of possible ways was used. The widely known program FUMILI was reimplemented in the C++ language. Constraints in the form of inequalities φ(θ_i) ≥ a were taken into account by converting them into equalities φ(θ_i) = t together with simple inequalities of the type t ≥ a. The equalities were taken into account by means of quadratic penalty functions. The software was tested on model data of the ANKE setup (COSY accelerator, Forschungszentrum Juelich, Germany).
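
The quadratic-penalty idea described above can be sketched in a few lines (a generic illustration, not the FUMILI code itself): each equality constraint g(θ) = 0 is folded into the objective as w·g(θ)², and w is increased over a sequence of unconstrained solves, each warm-started from the previous solution.

```python
import numpy as np
from scipy.optimize import minimize

# Objective to fit and an equality constraint g(p) = 0 (toy example).
f = lambda p: (p[0] - 2.0) ** 2 + (p[1] - 1.0) ** 2
g = lambda p: p[0] + p[1] - 2.0

# Quadratic-penalty iteration: unconstrained solves with a growing
# penalty weight, warm-starting each solve from the last solution.
p = np.zeros(2)
for weight in [1.0, 10.0, 100.0, 1e4, 1e6]:
    penalized = lambda p, w=weight: f(p) + w * g(p) ** 2
    p = minimize(penalized, p, method="BFGS").x
print(np.round(p, 3))  # → [1.5 0.5], the constrained minimum
```

An inequality φ(θ) ≥ a would be handled exactly as the abstract describes: introduce a slack variable t with φ(θ) = t treated by the same penalty, plus the simple bound t ≥ a.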

  6. A replication of a factor analysis of motivations for trapping

    Science.gov (United States)

    Schroeder, Susan; Fulton, David C.

    2015-01-01

    Using a 2013 sample of Minnesota trappers, we employed confirmatory factor analysis to replicate an exploratory factor analysis of trapping motivations conducted by Daigle, Muth, Zwick, and Glass (1998). We employed the same 25 items used by Daigle et al. and tested the same five-factor structure using a recent sample of Minnesota trappers. We also compared motivations in our sample to those reported by Daigle et al.

  7. A Local Search Modeling for Constrained Optimum Paths Problems (Extended Abstract)

    Directory of Open Access Journals (Sweden)

    Quang Dung Pham

    2009-10-01

    Full Text Available Constrained Optimum Path (COP) problems appear in many real-life applications, especially on communication networks. Some of these problems have been considered and solved by specific techniques, which are usually difficult to extend. In this paper, we introduce a novel local search modeling for solving some COPs by local search. The modeling features compositionality, modularity and reuse, and strengthens the benefits of Constraint-Based Local Search. We also apply the modeling to the edge-disjoint paths problem (EDP). We show that side constraints can easily be added in the model. Computational results show the significance of the approach.

  8. A human factor analysis of a radiotherapy accident

    International Nuclear Information System (INIS)

    Thellier, S.

    2009-01-01

    Since September 2005, I.R.S.N. has studied radiotherapy treatment activities from the angle of human and organizational factors in order to improve the reliability of radiotherapy treatment. Drawing on its experience of incident analysis in the nuclear industry, I.R.S.N. analysed and published in March 2008, for the first time in France, a detailed study of a radiotherapy accident from the angle of human and organizational factors. The method used for the analysis is based on interviews and on documents kept by the hospital. The analysis aimed at identifying the causes of the difference recorded between the dose prescribed by the radiotherapist and the dose effectively received by the patient. Neither verbal nor written communication (intra-service meetings and treatment protocols) allowed information to be transmitted well enough for radiographers to adjust the irradiation zones correctly. The analysis highlighted the fact that, during the preparation and the carrying out of the treatment, various factors led planned controls to not be performed. Finally, it highlighted the fact that unresolved questions persist in the report on this accident, due to a lack of traceability of a certain number of key actions. The article concludes that there must be improvement in three areas: cooperation between practitioners, control of actions and traceability of actions. (author)

  9. Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems

    National Research Council Canada - National Science Library

    Abramson, Mark A; Audet, Charles; Dennis, Jr, J. E

    2004-01-01

    .... This class combines and extends the Audet-Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization, and their GPS-filter algorithms for general nonlinear constraints...

  10. Capturing Hotspots For Constrained Indoor Movement

    DEFF Research Database (Denmark)

    Ahmed, Tanvir; Pedersen, Torben Bach; Lu, Hua

    2013-01-01

    Finding the hotspots in large indoor spaces is very important for identifying overloaded locations, and for security, crowd management, indoor navigation and guidance. Tracking data from indoor tracking systems are huge in volume and not readily usable for finding hotspots. This paper presents a graph-based model for constrained indoor movement that can map the tracking records into mapping records, which represent the entry and exit times of an object in a particular location. It then discusses the hotspot extraction technique based on the mapping records.
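
A minimal sketch of the described pipeline (a hypothetical record layout, not the paper's actual graph model): raw tracking rows (object, location, timestamp) are collapsed into mapping records carrying entry and exit times, from which per-location dwell time identifies the hotspot.

```python
from collections import defaultdict

def to_mapping_records(tracking):
    """Collapse raw tracking rows (object, location, time) into
    mapping records (object, location, entry_time, exit_time)."""
    records, current = [], {}  # current: object -> [location, entry, last_seen]
    for obj, loc, t in sorted(tracking, key=lambda r: r[2]):
        if obj in current and current[obj][0] == loc:
            current[obj][2] = t              # still in the same location
        else:
            if obj in current:               # close the previous interval
                prev_loc, entry, last = current[obj]
                records.append((obj, prev_loc, entry, last))
            current[obj] = [loc, t, t]
    for obj, (loc, entry, last) in current.items():
        records.append((obj, loc, entry, last))  # flush open intervals
    return records

def hotspot(mapping_records):
    """Location with the largest total dwell time across all objects."""
    dwell = defaultdict(float)
    for _, loc, entry, exit_time in mapping_records:
        dwell[loc] += exit_time - entry
    return max(dwell, key=dwell.get)

tracking = [("o1", "hall", 0), ("o1", "hall", 4), ("o1", "room2", 5),
            ("o2", "hall", 1), ("o2", "hall", 6), ("o2", "room1", 8)]
print(hotspot(to_mapping_records(tracking)))  # hall
```

Here both objects dwell longest in "hall" (4 and 5 time units), so it is reported as the hotspot; other aggregates, such as visit counts, drop out of the same mapping records just as easily.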

  11. CP asymmetries in penguin-dominated, hadronic B{sub d} decays: Constraining new physics at NLO

    Energy Technology Data Exchange (ETDEWEB)

    Vickers, Stefan [Excellence Cluster Universe, TU Muenchen (Germany)

    2012-07-01

    CP asymmetries in penguin-dominated, hadronic B{sub d} decays into CP eigenstates ({pi}, {eta}, {eta}', {phi}, {omega}, {rho})Ks are predicted to be small in the standard model. These observables will be measured in future facilities (Belle II, SuperB) with very high precision and therefore could be used to test CP violating couplings beyond the Standard Model. We investigate such additional contributions for a general class of models in the framework of QCD factorization at next-to-leading order precision. As an example, we demonstrate how these observables can constrain the parameter space of a generic modification of the Z-penguin.

  12. Lagrangian formalism for constrained systems. 2. Gauge symmetries

    International Nuclear Information System (INIS)

    Pyatov, P.N.

    1990-01-01

    Using the Lagrangian formalism for constrained systems, all gauge symmetries peculiar to a given Lagrangian system are constructed, and the relation between them and the constraints is established. In addition, the question of the possible dependence of gauge transformations on accelerations and other higher-order time derivatives of the coordinates is clarified. 14 refs

  13. Factor Analysis of the Brazilian Version of UPPS Impulsive Behavior Scale

    Science.gov (United States)

    Sediyama, Cristina Y. N.; Moura, Ricardo; Garcia, Marina S.; da Silva, Antonio G.; Soraggi, Carolina; Neves, Fernando S.; Albuquerque, Maicon R.; Whiteside, Setephen P.; Malloy-Diniz, Leandro F.

    2017-01-01

    Objective: To examine the internal consistency and factor structure of the Brazilian adaptation of the UPPS Impulsive Behavior Scale. Methods: The UPPS is a self-report scale composed of 40 items assessing four factors of impulsivity: (a) urgency, (b) lack of premeditation; (c) lack of perseverance; (d) sensation seeking. In the present study 384 participants (278 women and 106 men), recruited from schools, universities, leisure centers and workplaces, completed the UPPS scale. An exploratory factor analysis was performed using Varimax factor rotation and Kaiser normalization, and we also conducted two confirmatory analyses to test the independence of the UPPS components found in previous analyses. Results: Results showed a decrease in mean UPPS total scores with age: the youngest participants (below 30 years) scored significantly higher than the groups over 30 years. No gender difference was found. Cronbach’s alpha results indicated satisfactory values for all subscales, with similarly high values across subscales, while confirmatory factor analysis indexes indicated a poor model fit. The results of the two exploratory factor analyses were satisfactory. Conclusion: Our results showed that the Portuguese version has the same four-factor structure as the original and previous translations of the UPPS. PMID:28484414
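
    The varimax-rotated exploratory factor analysis described in the abstract can be sketched on simulated data. The sample size (384) and item count (40) follow the abstract, but the simulated loadings, the noise level, and the use of scikit-learn's FactorAnalysis are illustrative assumptions, not the authors' actual pipeline:

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n_respondents, n_items, n_factors = 384, 40, 4

    # Simulate item responses driven by 4 latent traits (10 items per trait);
    # the block structure and loading range are assumptions for illustration.
    loadings = np.zeros((n_items, n_factors))
    for f in range(n_factors):
        loadings[f * 10:(f + 1) * 10, f] = rng.uniform(0.5, 0.9, 10)
    scores = rng.normal(size=(n_respondents, n_factors))
    X = scores @ loadings.T + 0.5 * rng.normal(size=(n_respondents, n_items))

    # Varimax-rotated exploratory factor analysis, as in the abstract.
    fa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(X)
    est = fa.components_.T                 # (items x factors) loading matrix
    dominant = np.abs(est).argmax(axis=1)  # factor each item loads on most
    print(dominant.reshape(n_factors, 10))  # items should cluster by block
    ```

    With strong block loadings, each group of ten items recovers a single dominant rotated factor, which is the pattern a clean four-factor solution would show.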

  14. Factor Analysis of the Brazilian Version of UPPS Impulsive Behavior Scale

    Directory of Open Access Journals (Sweden)

    Leandro F. Malloy-Diniz

    2017-04-01

    Full Text Available Objective: To examine the internal consistency and factor structure of the Brazilian adaptation of the UPPS Impulsive Behavior Scale. Methods: The UPPS is a self-report scale composed of 40 items assessing four factors of impulsivity: (a) urgency, (b) lack of premeditation; (c) lack of perseverance; (d) sensation seeking. In the present study 384 participants (278 women and 106 men), recruited from schools, universities, leisure centers and workplaces, completed the UPPS scale. An exploratory factor analysis was performed using Varimax factor rotation and Kaiser normalization, and we also conducted two confirmatory analyses to test the independence of the UPPS components found in previous analyses. Results: Results showed a decrease in mean UPPS total scores with age: the youngest participants (below 30 years) scored significantly higher than the groups over 30 years. No gender difference was found. Cronbach’s alpha results indicated satisfactory values for all subscales, with similarly high values across subscales, while confirmatory factor analysis indexes indicated a poor model fit. The results of the two exploratory factor analyses were satisfactory. Conclusion: Our results showed that the Portuguese version has the same four-factor structure as the original and previous translations of the UPPS.

  16. Factors affecting the HIV/AIDS epidemic: An ecological analysis of ...

    African Journals Online (AJOL)

    Factors affecting the HIV/AIDS epidemic: An ecological analysis of global data. ... Backward multiple linear regression analysis identified the proportion of Muslims, physician density, and adolescent fertility rate as the three most prominent factors linked with the national HIV epidemic. Conclusions: The findings support ...

  17. Analysis of Economic Factors Affecting Stock Market

    OpenAIRE

    Xie, Linyin

    2010-01-01

    This dissertation concentrates on the analysis of economic factors affecting the Chinese stock market by examining the relationship between the stock market index and economic factors. Six economic variables are examined: industrial production, money supply 1, money supply 2, exchange rate, long-term government bond yield and real estate total value. The stock market comprises fixed-interest stocks and equity shares; in this dissertation, the stock market is restricted to the equity market. The stock price in thi...

  18. Fracture zones constrained by neutral surfaces in a fault-related fold: Insights from the Kelasu tectonic zone, Kuqa Depression

    Science.gov (United States)

    Sun, Shuai; Hou, Guiting; Zheng, Chunfang

    2017-11-01

    Stress variation associated with folding is one of the controlling factors in the development of tectonic fractures; however, little attention has been paid to the influence of neutral surfaces during folding on fracture distribution in a fault-related fold. In this study, we take the Cretaceous Bashijiqike Formation in the Kuqa Depression as an example and analyze the distribution of tectonic fractures in fault-related folds by core observation and logging data analysis. Three fracture zones are identified in a fault-related fold: a tensile zone, a transition zone and a compressive zone, which may be constrained by two neutral surfaces of the fold. Well correlation reveals that the tensile zone and the transition zone reach their maximum thickness at the fold hinge and get thinner in the fold limbs. A 2D viscoelastic stress field model of a fault-related fold was constructed to further investigate the mechanism of fracturing. Statistical and numerical analyses reveal that the tensile zone and the transition zone become thicker with decreasing interlimb angle. Stress variation associated with folding is the first level of control over the general pattern of fracture distribution, while faulting is a secondary control over the development of local fractures in a fault-related fold.

  19. Deterministic factor analysis: methods of integro-differentiation of non-integral order

    Directory of Open Access Journals (Sweden)

    Valentina V. Tarasova

    2016-12-01

    Full Text Available Objective: to summarize the methods of deterministic factor economic analysis, namely the differential calculus and the integral method. Methods: mathematical methods for integro-differentiation of non-integral order; the theory of derivatives and integrals of fractional (non-integral) order. Results: the basic concepts are formulated and the new methods are developed that take into account the memory and non-locality effects in the quantitative description of the influence of individual factors on the change in the effective economic indicator. Two methods are proposed for integro-differentiation of non-integral order for the deterministic factor analysis of economic processes with memory and non-locality. It is shown that the method of integro-differentiation of non-integral order can give more accurate results compared with standard methods (the method of differentiation using first-order derivatives and the integral method using first-order integration) for a wide class of functions describing effective economic indicators. Scientific novelty: new methods of deterministic factor analysis are proposed: the method of differential calculus of non-integral order and the integral method of non-integral order. Practical significance: the basic concepts and formulas of the article can be used in scientific and analytical activity for factor analysis of economic processes. The proposed method of integro-differentiation of non-integral order extends the capabilities of deterministic factor economic analysis. The new quantitative method of deterministic factor analysis may become the beginning of quantitative studies of the behavior of economic agents with memory (hereditarity) and spatial non-locality. The proposed methods of deterministic factor analysis can be used in the study of economic processes which follow the exponential law, in which the indicators (endogenous variables) are power functions of the factors (exogenous variables), including the processes

  20. Solution for state constrained optimal control problems applied to power split control for hybrid vehicles

    NARCIS (Netherlands)

    Keulen, van T.A.C.; Gillot, J.; Jager, de A.G.; Steinbuch, M.

    2014-01-01

    This paper presents a numerical solution for scalar state constrained optimal control problems. The algorithm rewrites the constrained optimal control problem as a sequence of unconstrained optimal control problems which can be solved recursively as a two point boundary value problem. The solution

  1. Modification and analysis of engineering hot spot factor of HFETR

    International Nuclear Information System (INIS)

    Hu Yuechun; Deng Caiyu; Li Haitao; Xu Taozhong; Mo Zhengyu

    2014-01-01

    This paper presents the modification and analysis of the engineering hot spot factors of HFETR. The new factors are applied in the fuel temperature analysis and in estimating the safety allowable operating power of HFETR. The result shows that the maximum cladding temperature of the fuel is lower when the new factors are used, and the safety allowable operating power of HFETR is higher, thus improving the economic efficiency of HFETR. (authors)

  2. Comparative Analysis of Uninhibited and Constrained Avian Wing Aerodynamics

    Science.gov (United States)

    Cox, Jordan A.

    The flight of birds has intrigued and motivated man for many years. Bird flight served as the primary inspiration of flying machines developed by Leonardo Da Vinci, Otto Lilienthal, and even the Wright brothers. Avian flight has once again drawn the attention of the scientific community as unmanned aerial vehicles (UAV) are not only becoming more popular, but smaller. Birds are once again influencing the designs of aircraft. Small UAVs operating within flight conditions and low Reynolds numbers common to birds are not yet capable of the high levels of control and agility that birds display with ease. Many researchers believe the potential to improve small UAV performance can be obtained by applying features common to birds such as feathers and flapping flight to small UAVs. Although the effects of feathers on a wing have received some attention, the effects of localized transient feather motion and surface geometry on the flight performance of a wing have been largely overlooked. In this research, the effects of freely moving feathers on a preserved red tailed hawk wing were studied. A series of experiments were conducted to measure the aerodynamic forces on a hawk wing with varying levels of feather movement permitted. Angle of attack and air speed were varied within the natural flight envelope of the hawk. Subsequent identical tests were performed with the feather motion constrained through the use of externally-applied surface treatments. Additional tests involved the study of an absolutely fixed geometry mold-and-cast wing model of the original bird wing. Final tests were also performed after applying surface coatings to the cast wing. High speed videos taken during tests revealed the extent of the feather movement between wing models. Images of the microscopic surface structure of each wing model were analyzed to establish variations in surface geometry between models. Recorded aerodynamic forces were then compared to the known feather motion and surface

  3. Ranking insurance firms using AHP and Factor Analysis

    Directory of Open Access Journals (Sweden)

    Mohammad Khodaei Valahzaghard

    2013-03-01

    Full Text Available The insurance industry includes a significant part of the economy, and it is important to learn more about the capabilities of the different firms active in this industry. In this paper, we present an empirical study to rank insurance firms using the analytical hierarchy process as well as factor analysis. The study considers four criteria: capital adequacy, quality of earning, quality of cash flow and quality of firms’ assets. The results of the implementation of factor analysis (FA) have been verified using Kaiser-Meyer-Olkin (KMO = 0.573) and Bartlett's Chi-Square (443.267, P-value = 0.000) tests. According to the results of FA, the first important factor, capital adequacy, represents 21.557% of total variance, and the second factor, quality of income, represents 20.958% of total variance. In addition, the third factor, quality of cash flow, represents 19.417% of total variance, and the last factor, quality of assets, represents 18.641% of total variance. The study has also used the analytical hierarchy process (AHP) to rank insurance firms. The results of our survey indicate that capital adequacy (0.559) is the most important factor, followed by quality of income (0.235), quality of cash flow (0.144) and quality of assets (0.061). The results of AHP are consistent with the results of FA, which somewhat validates the overall study.
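
    The AHP step described here can be sketched as the classical principal-eigenvector computation over a pairwise comparison matrix. The matrix below is hypothetical, chosen only to roughly reproduce the criterion ordering reported in the abstract (capital adequacy first, quality of assets last); it is not the authors' elicited matrix:

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix over the four criteria:
    # capital adequacy, quality of income, quality of cash flow, quality of assets.
    A = np.array([
        [1,   3,   4,   6],
        [1/3, 1,   2,   4],
        [1/4, 1/2, 1,   3],
        [1/6, 1/4, 1/3, 1],
    ], dtype=float)

    # AHP priority vector: normalized principal (Perron) eigenvector of A.
    eigvals, eigvecs = np.linalg.eig(A)
    k = eigvals.real.argmax()
    w = eigvecs[:, k].real
    w = w / w.sum()

    # Saaty consistency ratio: CI / RI, with random index RI = 0.90 for n = 4.
    CI = (eigvals.real.max() - len(A)) / (len(A) - 1)
    CR = CI / 0.90
    print(np.round(w, 3), round(CR, 3))
    ```

    A consistency ratio below 0.1 is the usual acceptance threshold; the weights come out strictly descending, matching the ranking pattern in the abstract.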

  4. Nonmonotonic Skeptical Consequence Relation in Constrained Default Logic

    Directory of Open Access Journals (Sweden)

    Mihaiela Lupea

    2010-12-01

    Full Text Available This paper presents a study of the nonmonotonic consequence relation which models the skeptical reasoning formalised by constrained default logic. The nonmonotonic skeptical consequence relation is defined using the sequent calculus axiomatic system. We study the formal properties desirable for a good nonmonotonic relation: supraclassicality, cut, cautious monotony, cumulativity, absorption, distribution. 

  5. Optimal Power Constrained Distributed Detection over a Noisy Multiaccess Channel

    Directory of Open Access Journals (Sweden)

    Zhiwen Hu

    2015-01-01

    Full Text Available The problem of optimal power constrained distributed detection over a noisy multiaccess channel (MAC) is addressed. Under local power constraints, we define the transformation function for each sensor to realize the mapping from local decision to transmitted waveform. Deflection coefficient maximization (DCM) is used to optimize the performance of the power constrained fusion system. Using optimality conditions, we derive the closed-form solution to the considered problem. Monte Carlo simulations are carried out to evaluate the performance of the proposed method. Simulation results show that the proposed method can significantly improve the detection performance of the fusion system at low signal-to-noise ratio (SNR). We also show that the proposed method has robust detection performance over a broad SNR region.
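
    The deflection coefficient being maximized here is, in its generic form, the squared separation of the fusion statistic's means under the two hypotheses divided by its variance under H0. A minimal numerical sketch follows; the sensor detection/false-alarm probabilities, the ±1 waveform mapping, and the Gaussian channel noise are illustrative assumptions, not the paper's model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_sensors, n_trials = 10, 200_000
    p_d, p_fa, noise_sd = 0.8, 0.1, 1.0   # assumed sensor/channel parameters

    def fusion_statistic(h1: bool) -> np.ndarray:
        """Sum of local binary decisions over a noisy multiaccess channel."""
        p = p_d if h1 else p_fa
        decisions = rng.random((n_trials, n_sensors)) < p   # local detections
        # Each sensor maps its decision to a +/-1 waveform; the MAC sums the
        # waveforms coherently and the receiver adds Gaussian noise.
        waveforms = np.where(decisions, 1.0, -1.0)
        return waveforms.sum(axis=1) + noise_sd * rng.normal(size=n_trials)

    t0, t1 = fusion_statistic(False), fusion_statistic(True)
    # Deflection coefficient: squared mean separation over H0 variance.
    D = (t1.mean() - t0.mean()) ** 2 / t0.var()
    print(round(D, 2))
    ```

    For these parameters the means are about -8 under H0 and +6 under H1 with H0 variance near 4.6, so the empirical deflection lands around 42; changing the waveform mapping or power allocation changes D, which is what DCM optimizes over.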

  6. Antifungal susceptibility testing method for resource constrained laboratories

    Directory of Open Access Journals (Sweden)

    Khan S

    2006-01-01

    Full Text Available Purpose: In resource-constrained laboratories of developing countries, determination of antifungal susceptibility by the NCCLS/CLSI method is not always feasible. We describe herein a simple yet comparable method for antifungal susceptibility testing. Methods: Reference MICs of 72 fungal isolates, including two quality control strains, were determined by NCCLS/CLSI methods against fluconazole, itraconazole, voriconazole, amphotericin B and cancidas. Dermatophytes were also tested against terbinafine. Subsequently, on selection of optimum conditions, MICs were determined for all the fungal isolates by the semisolid antifungal agar susceptibility method in brain heart infusion broth supplemented with 0.5% agar (BHIA) without oil overlay, and the results were compared with those obtained by the reference NCCLS/CLSI methods. Results: Comparable results were obtained by the NCCLS/CLSI and semisolid agar susceptibility (SAAS) methods against the quality control strains. MICs for the 72 isolates did not differ by more than one dilution for all drugs by SAAS. Conclusions: SAAS using BHIA without oil overlay provides a simple and reproducible method for obtaining MICs against yeasts, filamentous fungi and dermatophytes in resource-constrained laboratories.

  7. Toward Cognitively Constrained Models of Language Processing: A Review

    Directory of Open Access Journals (Sweden)

    Margreet Vogelzang

    2017-09-01

    Full Text Available Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained computational models, which simulate the cognitive processes involved in language processing. The theoretical claims implemented in cognitive models interact with general architectural constraints such as memory limitations. In this way, such a model generates new predictions that can be tested in experiments, thus generating new data that can give rise to new theoretical insights. This theory-model-experiment cycle is a promising method for investigating aspects of language processing that are difficult to investigate with more traditional experimental techniques. This review specifically examines the language processing models of Lewis and Vasishth (2005), Reitter et al. (2011), and Van Rij et al. (2010), all implemented in the cognitive architecture Adaptive Control of Thought—Rational (Anderson et al., 2004). These models are all limited by the assumptions about cognitive capacities provided by the cognitive architecture, but use different linguistic approaches. Because of this, their comparison provides insight into the extent to which assumptions about general cognitive resources influence concretely implemented models of linguistic competence. For example, the sheer speed and accuracy of human language processing is a current challenge in the field of cognitive modeling, as it does not seem to adhere to the same memory and processing capacities that have been found in other cognitive processes. Architecture-based cognitive models of language processing may be able to make explicit which language-specific resources are needed to acquire and process natural language. The review sheds light on cognitively constrained models of language processing from two angles: we

  8. Hierarchical Factoring Based On Image Analysis And Orthoblique Rotations.

    Science.gov (United States)

    Stankov, L

    1979-07-01

    The procedure for hierarchical factoring suggested by Schmid and Leiman (1957) is applied within the framework of image analysis and orthoblique rotational procedures. It is shown that this approach necessarily leads to correlated higher order factors. Also, one can obtain a smaller number of factors than produced by typical hierarchical procedures.

  9. Toward cognitively constrained models of language processing : A review

    NARCIS (Netherlands)

    Vogelzang, Margreet; Mills, Anne C.; Reitter, David; van Rij, Jacolien; Hendriks, Petra; van Rijn, Hedderik

    2017-01-01

    Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained

  10. Automated Precision Maneuvering and Landing in Extreme and Constrained Environments

    Data.gov (United States)

    National Aeronautics and Space Administration — Autonomous, precise maneuvering and landing in extreme and constrained environments is a key enabler for future NASA missions. Missions to map the interior of a...

  11. Quantization of soluble classical constrained systems

    International Nuclear Information System (INIS)

    Belhadi, Z.; Menas, F.; Bérard, A.; Mohrbach, H.

    2014-01-01

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From these, all brackets of the dynamical variables of the system can be deduced in a straightforward way
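
    As a minimal illustration of this idea (a textbook-style example, not taken from the paper), consider a free particle of mass m: solving the equation of motion and demanding that the canonical bracket hold at all times fixes the bracket between the constants of integration, from which all other brackets follow.

    ```latex
    \ddot{x} = 0 \;\Rightarrow\; x(t) = a + b\,t, \qquad p(t) = m\dot{x} = m b,
    \]
    \[
    \{x(t), p(t)\} = \{a + b\,t,\; m b\} = m\,\{a, b\} \overset{!}{=} 1
    \;\Rightarrow\; \{a, b\} = \frac{1}{m}.
    ```

    Every equal-time bracket of the dynamical variables can then be read off from the single bracket {a, b} = 1/m, without introducing Dirac constraints.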

  12. Quantization of soluble classical constrained systems

    Energy Technology Data Exchange (ETDEWEB)

    Belhadi, Z. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Laboratoire de physique théorique, Faculté des sciences exactes, Université de Bejaia, 06000 Bejaia (Algeria); Menas, F. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Ecole Nationale Préparatoire aux Etudes d’ingéniorat, Laboratoire de physique, RN 5 Rouiba, Alger (Algeria); Bérard, A. [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France); Mohrbach, H., E-mail: herve.mohrbach@univ-lorraine.fr [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France)

    2014-12-15

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From these, all brackets of the dynamical variables of the system can be deduced in a straightforward way.

  13. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of the average work productivity by the factors affecting it is conducted by means of the u-substitution method.

  14. Item-level factor analysis of the Self-Efficacy Scale.

    Science.gov (United States)

    Bunketorp Käll, Lina

    2014-03-01

    This study explores the internal structure of the Self-Efficacy Scale (SES) using item response analysis. The SES was previously translated into Swedish and modified to encompass all types of pain, not exclusively back pain. Data on perceived self-efficacy in 47 patients with subacute whiplash-associated disorders were derived from a previously conducted randomized-controlled trial. The item-level factor analysis was carried out using a six-step procedure. To further study the item inter-relationships and to determine the underlying structure empirically, the 20 items of the SES were also subjected to principal component analysis with varimax rotation. The analyses showed two underlying factors, named 'social activities' and 'physical activities', with seven items loading on each factor. The remaining six items of the SES appeared to measure somewhat different constructs and need to be analysed further.

  15. Constraint-Based Local Search for Constrained Optimum Paths Problems

    Science.gov (United States)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  16. Identification of associations between genotypes and longitudinal phenotypes via temporally-constrained group sparse canonical correlation analysis.

    Science.gov (United States)

    Hao, Xiaoke; Li, Chanxiu; Yan, Jingwen; Yao, Xiaohui; Risacher, Shannon L; Saykin, Andrew J; Shen, Li; Zhang, Daoqiang

    2017-07-15

    Neuroimaging genetics identifies the relationships between genetic variants (i.e., single nucleotide polymorphisms) and brain imaging data to reveal the associations from genotypes to phenotypes. So far, most existing machine-learning approaches are used to detect effective associations between genetic variants and brain imaging data at a single time-point. However, those associations are based on static phenotypes and ignore the temporal dynamics of phenotypical changes. The phenotypes across multiple time-points may exhibit temporal patterns that can be used to facilitate the understanding of the degenerative process. In this article, we propose a novel temporally constrained group sparse canonical correlation analysis (TGSCCA) framework to identify genetic associations with longitudinal phenotypic markers. The proposed TGSCCA method is able to capture the temporal changes in brain from longitudinal phenotypes by incorporating the fused penalty, which requires that the differences between two consecutive canonical weight vectors from adjacent time-points should be small. A new efficient optimization algorithm is designed to solve the objective function. Furthermore, we demonstrate the effectiveness of our algorithm on both synthetic and real data (i.e., the Alzheimer's Disease Neuroimaging Initiative cohort, including progressive mild cognitive impairment (MCI), stable MCI and normal control participants). In comparison with conventional SCCA, our proposed method can achieve strong associations and discover phenotypic biomarkers across multiple time-points to guide the interpretation of disease progression. Availability: The Matlab code is available at https://sourceforge.net/projects/ibrain-cn/files/ . Contact: dqzhang@nuaa.edu.cn or shenli@iu.edu.

  17. Optimization of an implicit constrained multi-physics system for motor wheels of electric vehicle

    International Nuclear Information System (INIS)

    Lei, Fei; Du, Bin; Liu, Xin; Xie, Xiaoping; Chai, Tian

    2016-01-01

    In this paper, an implicit constrained multi-physics model of a motor wheel for an electric vehicle is built and then optimized. A novel optimization approach is proposed to resolve the incompatibility between implicit constraints and stochastic global optimization. First, the multi-physics model of the motor wheel is built from the theories of structural mechanics, electromagnetism and thermal physics. Implicit constraints are then applied from the vehicle performance requirements and magnetic characteristics. The implicit constrained optimization is carried out as a series of unconstrained optimizations and verifications. In practice, sequentially updated subspaces are designed to completely substitute the original design space in local areas. In each subspace, a solution is obtained and then verified against the implicit constraints. Solutions which satisfy the implicit constraints are accepted as final candidates, and the global optimum is selected from those candidates. Discussions are carried out to discover the differences between the optimal solutions of the unconstrained problem and of different implicit constrained problems. Results show that the implicit constraints have significant influence on the optimal solution and that the proposed approach is effective in finding the optima. - Highlights: • An implicit constrained multi-physics model is built for sizing a motor wheel. • Vehicle dynamic performances are applied as implicit constraints for the nonlinear system. • An efficient novel optimization is proposed to explore the constrained design space. • The motor wheel is optimized to achieve maximum efficiency on vehicle dynamics. • Influences of implicit constraints on vehicle performances are compared and analyzed.

  18. Evaluation of Parallel Analysis Methods for Determining the Number of Factors

    Science.gov (United States)

    Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.

    2010-01-01

    Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…
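
    Both retention criteria compared in this record (the mean and the 95th-percentile random eigenvalue) can be sketched with Horn-style parallel analysis on synthetic data; the dimensions, the three-component structure, and the number of random replicates below are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_obs, n_vars, n_true = 500, 12, 3   # assumed dimensions for illustration

    # Synthetic data: three genuine components, four correlated variables each.
    latent = rng.normal(size=(n_obs, n_true))
    X = np.repeat(latent, 4, axis=1) + rng.normal(size=(n_obs, n_vars))

    obs_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

    # Eigenvalue benchmark from equally-sized uncorrelated random data sets.
    rand_eigs = np.array([
        np.sort(np.linalg.eigvalsh(
            np.corrcoef(rng.normal(size=(n_obs, n_vars)), rowvar=False)))[::-1]
        for _ in range(200)
    ])

    # Retain components whose observed eigenvalue exceeds the random benchmark.
    keep_mean = int((obs_eigs > rand_eigs.mean(axis=0)).sum())               # mean criterion
    keep_p95 = int((obs_eigs > np.percentile(rand_eigs, 95, axis=0)).sum())  # 95th percentile
    print(keep_mean, keep_p95)
    ```

    With structure this strong both criteria agree on three components; the studies summarized in the abstract examine how the two criteria diverge in harder, less well-separated cases.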

  19. Assimilation of repeated woody biomass observations constrains decadal ecosystem carbon cycle uncertainty in aggrading forests

    Science.gov (United States)

    Smallman, T. L.; Exbrayat, J.-F.; Mencuccini, M.; Bloom, A. A.; Williams, M.

    2017-03-01

    Forest carbon sink strengths are governed by plant growth, mineralization of dead organic matter, and disturbance. Across landscapes, remote sensing can provide information about aboveground states of forests and this information can be linked to models to estimate carbon cycling in forests close to steady state. For aggrading forests this approach is more challenging and has not been demonstrated. Here we apply a Bayesian approach, linking a simple model to a range of data, to evaluate their information content, for two aggrading forests. We compare high information content analyses using local observations with retrievals using progressively sparser remotely sensed information (repeated, single, and no woody biomass observations). The net biome productivity of both forests is constrained to be a net sink with litter dynamics at one forest, while at the second forest total dead organic matter estimates are within observational uncertainty. The uncertainty of retrieved ecosystem traits in the repeated biomass analysis is reduced by up to 50% compared to analyses with less biomass information. This study quantifies the importance of repeated woody observations in constraining the dynamics of both wood and dead organic matter, highlighting the benefit of proposed remote sensing missions.

  20. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Directory of Open Access Journals (Sweden)

    Jan Hasenauer

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  1. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Science.gov (United States)

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
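    The core idea of an ODE constrained mixture model is that the mixture components' means are not free parameters but trajectories of a mechanistic ODE. A toy sketch under assumed settings (a hypothetical two-subpopulation exponential-decay model with known rates, not the paper's Erk1/2 pathway model) that estimates only the mixture weight by EM:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.special import expit
    from scipy.stats import norm

    def ode_mean(k, times, x0=1.0):
        """Component mean trajectory from the mechanistic ODE dx/dt = -k*x."""
        sol = solve_ivp(lambda t, x: -k * x, (times[0], times[-1]), [x0],
                        t_eval=times)
        return sol.y[0]

    def em_fit(obs, times, k1, k2, sigma=0.05, n_iter=50):
        """EM for the mixture weight of subpopulation 1.
        obs: (n_cells, n_times) single-cell time courses."""
        m1, m2 = ode_mean(k1, times), ode_mean(k2, times)
        w = 0.5
        for _ in range(n_iter):
            # E-step: responsibility of subpopulation 1 for each cell
            l1 = norm.logpdf(obs, m1, sigma).sum(axis=1) + np.log(w)
            l2 = norm.logpdf(obs, m2, sigma).sum(axis=1) + np.log(1 - w)
            r = expit(l1 - l2)
            # M-step: update the mixture weight
            w = r.mean()
        return w
    ```

    In the full method the kinetic rates and noise parameters are estimated jointly with the weights; here they are fixed to keep the sketch short.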

  2. On the notion of Jacobi fields in constrained calculus of variations

    Directory of Open Access Journals (Sweden)

    Massa Enrico

    2016-12-01

    In variational calculus, the minimality of a given functional under arbitrary deformations with fixed end-points is established through an analysis of the so-called second variation. In this paper, the argument is examined in the context of constrained variational calculus, assuming piecewise differentiable extremals, commonly referred to as extremaloids. The approach relies on the existence of a fully covariant representation of the second variation of the action functional, based on a family of local gauge transformations of the original Lagrangian and on a set of scalar attributes of the extremaloid, called the corners' strengths [16]. In discussing the positivity of the second variation, a relevant role is played by the Jacobi fields, defined as infinitesimal generators of 1-parameter groups of diffeomorphisms preserving the extremaloids. Along a piecewise differentiable extremal, these fields are generally discontinuous across the corners. A thorough analysis of this point is presented. An alternative characterization of the Jacobi fields as solutions of a suitable accessory variational problem is established.

  3. Constrained Quantum Mechanics: Chaos in Non-Planar Billiards

    Science.gov (United States)

    Salazar, R.; Tellez, G.

    2012-01-01

    We illustrate some of the techniques to identify chaos signatures at the quantum level using as guiding examples some systems where a particle is constrained to move on a radial symmetric, but non-planar, surface. In particular, two systems are studied: the case of a cone with an arbitrary contour or "dunce hat billiard" and the rectangular…

  4. Constraining continuous rainfall simulations for derived design flood estimation

    Science.gov (United States)

    Woldemeskel, F. M.; Sharma, A.; Mehrotra, R.; Westra, S.

    2016-11-01

    Stochastic rainfall generation is important for a range of hydrologic and water resources applications. Stochastic rainfall can be generated using a number of models; however, preserving relevant attributes of the observed rainfall-including rainfall occurrence, variability and the magnitude of extremes-continues to be difficult. This paper develops an approach to constrain stochastically generated rainfall with an aim of preserving the intensity-duration-frequency (IFD) relationships of the observed data. Two main steps are involved. First, the generated annual maximum rainfall is corrected recursively by matching the generated intensity-frequency relationships to the target (observed) relationships. Second, the remaining (non-annual maximum) rainfall is rescaled such that the mass balance of the generated rain before and after scaling is maintained. The recursive correction is performed at selected storm durations to minimise the dependence between annual maximum values of higher and lower durations for the same year. This ensures that the resulting sequences remain true to the observed rainfall as well as represent the design extremes that may have been developed separately and are needed for compliance reasons. The method is tested on simulated 6 min rainfall series across five Australian stations with different climatic characteristics. The results suggest that the annual maximum and the IFD relationships are well reproduced after constraining the simulated rainfall. While our presentation focusses on the representation of design rainfall attributes (IFDs), the proposed approach can also be easily extended to constrain other attributes of the generated rainfall, providing an effective platform for post-processing of stochastic rainfall generators.
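    The two steps can be illustrated for a single storm duration (the paper applies the correction recursively across durations). A simplified sketch with hypothetical names, assuming one fixed duration and one target maximum per year:

    ```python
    import numpy as np

    def constrain_annual_maxima(sim, target_maxima):
        """Step 1: match the ranked simulated annual maxima to the target
        (observed) annual-maximum values. Step 2: rescale each year's
        remaining rainfall so the annual total (mass balance) is preserved.
        sim: (n_years, n_steps) simulated rainfall amounts."""
        sim = np.asarray(sim, dtype=float)
        out = sim.copy()
        targets = np.sort(np.asarray(target_maxima, dtype=float))
        order = np.argsort(sim.max(axis=1))    # years ranked by their maximum
        for rank, year in enumerate(order):
            step = sim[year].argmax()
            total = sim[year].sum()
            old_max = sim[year, step]
            new_max = targets[rank]
            out[year, step] = new_max
            rest = total - old_max
            if rest > 0:                       # rescale the non-maximum steps
                scale = (total - new_max) / rest
                mask = np.arange(sim.shape[1]) != step
                out[year, mask] = sim[year, mask] * scale
        return out
    ```

    After the correction the annual totals are unchanged while the sorted annual maxima match the target frequency curve.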

  5. Disruptive Event Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-08

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2004 [DIRS 169671]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis''. The objective of this

  6. Disruptive Event Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2004 [DIRS 169671]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis''. The objective of this analysis was to develop the BDCFs for the volcanic ash

  7. An inter-battery factor analysis of the comrey personality scales and the 16 personality factor questionnaire

    OpenAIRE

    Gideon P. de Bruin

    2000-01-01

    The scores of 700 Afrikaans-speaking university students on the Comrey Personality Scales and the 16 Personality Factor Questionnaire were subjected to an inter-battery factor analysis. This technique uses only the correlations between two sets of variables and reveals only the factors that they have in common. Three of the Big Five personality factors were revealed, namely Extroversion, Neuroticism and Conscientiousness. However, the Conscientiousness factor contained a relatively strong uns...

  8. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming

    2017-05-18

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed novel approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
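    The two building blocks named in the abstract, per-cluster PCA factors and the RV coefficient between clusters, can be sketched directly (function names are ours):

    ```python
    import numpy as np

    def cluster_factors(x, n_factors):
        """PCA factor time series for one cluster; PCA gives the optimal
        squared-loss reconstruction of the signals x (n_time, n_nodes)."""
        xc = x - x.mean(axis=0)
        u, s, _ = np.linalg.svd(xc, full_matrices=False)
        return u[:, :n_factors] * s[:n_factors]

    def rv_coefficient(a, b):
        """RV coefficient in [0, 1] between two factor sets (n_time, k):
        tr(Sab Sab') / sqrt(tr(Sa^2) tr(Sb^2)) with Sab = A'B."""
        a = a - a.mean(axis=0)
        b = b - b.mean(axis=0)
        sab = a.T @ b
        num = np.trace(sab @ sab.T)
        den = np.sqrt(np.trace((a.T @ a) @ (a.T @ a)) *
                      np.trace((b.T @ b) @ (b.T @ b)))
        return num / den
    ```

    Two clusters driven by a shared latent signal yield an RV coefficient near one; independent clusters yield a value near zero.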

  9. Cement-in-cement acetabular revision with a constrained tripolar component.

    Science.gov (United States)

    Leonidou, Andreas; Pagkalos, Joseph; Luscombe, Jonathan

    2012-02-17

    Dislocation of a total hip replacement (THR) is common following total hip arthroplasty (THA). When nonoperative management fails to maintain reduction, revision surgery is considered. The use of constrained acetabular liners has been extensively described. Complete removal of the old cement mantle during revision THA can be challenging and is associated with significant complications. Cement-in-cement revision is an established technique. However, the available clinical and experimental studies focus on femoral stem revision. The purpose of this study was to present a case of cement-in-cement acetabular revision with a constrained component for recurrent dislocations and to investigate the current best evidence for this technique. This article describes the case of a 74-year-old woman who underwent revision of a Charnley THR for recurrent low-energy dislocations. A tripolar constrained acetabular component was cemented over the primary cement mantle following removal of the original liner by reaming, roughening the surface, and thoroughly irrigating and drying the primary cement. Clinical and radiological results were good, with the Oxford Hip Score improving from 11 preoperatively to 24 at 6 months postoperatively. The good short-term results of this case and the current clinical and biomechanical data encourage the use of the cement-in-cement technique for acetabular revision. Careful irrigation, drying, and roughening of the primary surface are necessary. Copyright 2012, SLACK Incorporated.

  10. Constraining neutron star matter with Quantum Chromodynamics

    CERN Document Server

    Kurkela, Aleksi; Schaffner-Bielich, Jurgen; Vuorinen, Aleksi

    2014-01-01

    In recent years, there have been several successful attempts to constrain the equation of state of neutron star matter using input from low-energy nuclear physics and observational data. We demonstrate that significant further restrictions can be placed by additionally requiring the pressure to approach that of deconfined quark matter at high densities. Remarkably, the new constraints turn out to be highly insensitive to the amount --- or even presence --- of quark matter inside the stars.

  11. Quantitative EDXS analysis of organic materials using the ζ-factor method

    International Nuclear Information System (INIS)

    Fladischer, Stefanie; Grogger, Werner

    2014-01-01

    In this study we successfully applied the ζ-factor method to perform quantitative X-ray analysis of organic thin films consisting of light elements. With its ability to intrinsically correct for X-ray absorption, this method significantly improved the quality of the quantification as well as the accuracy of the results compared to conventional techniques in particular regarding the quantification of light elements. We describe in detail the process of determining sensitivity factors (ζ-factors) using a single standard specimen and the involved parameter optimization for the estimation of ζ-factors for elements not contained in the standard. The ζ-factor method was then applied to perform quantitative analysis of organic semiconducting materials frequently used in organic electronics. Finally, the results were verified and discussed concerning validity and accuracy. - Highlights: • The ζ-factor method is used for quantitative EDXS analysis of light elements. • We describe the process of determining ζ-factors from a single standard in detail. • Organic semiconducting materials are successfully quantified
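    The basic ζ-factor relations can be shown in a short sketch. Note this omits the absorption correction that the study highlights as the method's key advantage, and the function name is ours:

    ```python
    import numpy as np

    def zeta_quantify(intensities, zetas, dose):
        """Thin-specimen ζ-factor quantification, absorption correction
        omitted: mass fractions C_i = ζ_i·I_i / Σ_j ζ_j·I_j and mass
        thickness ρt = Σ_j ζ_j·I_j / D_e, for total electron dose D_e."""
        weighted = np.asarray(zetas, float) * np.asarray(intensities, float)
        return weighted / weighted.sum(), weighted.sum() / dose
    ```

    Unlike Cliff-Lorimer k-factors, the ζ-factors also yield the local mass thickness, which is what makes the self-consistent absorption correction possible in the full method.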

  12. Invariant set computation for constrained uncertain discrete-time systems

    NARCIS (Netherlands)

    Athanasopoulos, N.; Bitsoris, G.

    2010-01-01

    In this article a novel approach to the determination of polytopic invariant sets for constrained discrete-time linear uncertain systems is presented. First, the problem of stabilizing a prespecified initial condition set in the presence of input and state constraints is addressed. Second, the

  13. HotpathVM: An Effective JIT for Resource-constrained Devices

    DEFF Research Database (Denmark)

    Gal, Andreas; Franz, Michael; Probst, Christian

    2006-01-01

    We present a just-in-time compiler for a Java VM that is small enough to fit on resource-constrained devices, yet surprisingly effective. Our system dynamically identifies traces of frequently executed bytecode instructions (which may span several basic blocks across several methods) and compiles...

  14. Simulating the Range Expansion of Spartina alterniflora in Ecological Engineering through Constrained Cellular Automata Model and GIS

    Directory of Open Access Journals (Sweden)

    Zongsheng Zheng

    2015-01-01

    Environmental factors play an important role in the range expansion of Spartina alterniflora in estuarine salt marshes. CA models focusing on the neighbor effect often fail to account for the influence of environmental factors. This paper proposes a constrained CA (CCA) model that enhances the CA model by integrating constraining factors of tidal elevation, vegetation density, vegetation classification, and tidal channels in Chongming Dongtan wetland, China. Meanwhile, a positive feedback loop between vegetation and sedimentation is also considered in the CCA model through altering the tidal accretion rate in different vegetation communities. After being validated and calibrated, the CCA model is more accurate than a CA model that only takes account of the neighbor effect. By overlaying remote sensing classification and the simulation results, the average accuracy increases to 80.75% compared with the previous CA model. Through scenario simulation, the future expansion of Spartina alterniflora was analyzed. The CCA model provides a new technical idea and method for salt marsh species expansion and control strategies research.
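    The way a constrained CA couples the neighbor effect with environmental suitability can be sketched generically. This is an illustrative toy update rule, not the paper's calibrated model; names and the spread probability are assumptions:

    ```python
    import numpy as np

    def cca_step(occupied, suitability, rng, p_spread=0.3):
        """One update of a constrained cellular automaton: an empty cell
        becomes occupied with probability
        p_spread * (occupied neighbours / 8) * suitability,
        where suitability in [0, 1] encodes the environmental constraints
        (e.g. tidal elevation, channels, vegetation)."""
        h, w = occupied.shape
        padded = np.pad(occupied, 1)
        # count the 8 Moore neighbours of every cell
        neigh = sum(padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
        p = p_spread * (neigh / 8.0) * suitability
        return occupied | (rng.random(occupied.shape) < p)
    ```

    Setting suitability to zero along, say, a tidal channel makes the channel act as a hard barrier to expansion, which a neighbor-only CA cannot represent.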

  15. Bayesian Sensitivity Analysis of a Nonlinear Dynamic Factor Analysis Model with Nonparametric Prior and Possible Nonignorable Missingness.

    Science.gov (United States)

    Tang, Niansheng; Chow, Sy-Miin; Ibrahim, Joseph G; Zhu, Hongtu

    2017-12-01

    Many psychological concepts are unobserved and usually represented as latent factors apprehended through multiple observed indicators. When multiple-subject multivariate time series data are available, dynamic factor analysis models with random effects offer one way of modeling patterns of within- and between-person variations by combining factor analysis and time series analysis at the factor level. Using the Dirichlet process (DP) as a nonparametric prior for individual-specific time series parameters further allows the distributional forms of these parameters to deviate from commonly imposed (e.g., normal or other symmetric) functional forms, arising as a result of these parameters' restricted ranges. Given the complexity of such models, a thorough sensitivity analysis is critical but computationally prohibitive. We propose a Bayesian local influence method that allows for simultaneous sensitivity analysis of multiple modeling components within a single fitting of the model of choice. Five illustrations and an empirical example are provided to demonstrate the utility of the proposed approach in facilitating the detection of outlying cases and common sources of misspecification in dynamic factor analysis models, as well as identification of modeling components that are sensitive to changes in the DP prior specification.

  16. Multiple timescale analysis and factor analysis of energy ecological footprint growth in China 1953-2006

    International Nuclear Information System (INIS)

    Chen Chengzhong; Lin Zhenshan

    2008-01-01

    Scientific analysis of energy consumption and its influencing factors is of great importance for energy strategy and policy planning. The energy consumption in China 1953-2006 is estimated by applying the energy ecological footprint (EEF) method, and the fluctuation periods of annual China's per capita EEF (EEF_cpc) growth rate are analyzed with the empirical mode decomposition (EMD) method in this paper. EEF intensity is analyzed to depict energy efficiency in China. The main timescales of the 37 factors that affect the annual growth rate of EEF_cpc are also discussed based on EMD and factor analysis methods. Results show three obvious undulation cycles of the annual growth rate of EEF_cpc, i.e., 4.6, 14.4 and 34.2 years over the last 53 years. The analysis findings from the common synthesized factors of IMF1, IMF2 and IMF3 timescales of the 37 factors suggest that China's energy policy-makers should attach more importance to stabilizing economic growth, optimizing industrial structure, regulating domestic petroleum exploitation and improving transportation efficiency

  17. Constrained variability of modeled T:ET ratio across biomes

    Science.gov (United States)

    Fatichi, Simone; Pappas, Christoforos

    2017-07-01

    A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent from mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic model-based evidence to the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.

  18. Analysis of related risk factors for pancreatic fistula after pancreaticoduodenectomy

    Directory of Open Access Journals (Sweden)

    Qi-Song Yu

    2016-08-01

    Objective: To explore the related risk factors for pancreatic fistula after pancreaticoduodenectomy to provide a theoretical evidence for effectively preventing the occurrence of pancreatic fistula. Methods: A total of 100 patients who were admitted in our hospital from January, 2012 to January, 2015 and had undergone pancreaticoduodenectomy were included in the study. The related risk factors for developing pancreatic fistula were collected for single-factor and logistic multi-factor analysis. Results: Among the included patients, 16 had pancreatic fistula, and the total occurrence rate was 16% (16/100). The single-factor analysis showed that the upper abdominal operation history, preoperative bilirubin, pancreatic texture, pancreatic duct diameter, intraoperative amount of bleeding, postoperative hemoglobin, and application of somatostatin after operation were the risk factors for developing pancreatic fistula (P<0.05). The multi-factor analysis showed that the upper abdominal operation history, the soft pancreatic texture, small pancreatic duct diameter, and low postoperative hemoglobin were the dependent risk factors for developing pancreatic fistula (OR=4.162, 6.104, 5.613, 4.034, P<0.05). Conclusions: The occurrence of pancreatic fistula after pancreaticoduodenectomy is closely associated with the upper abdominal operation history, the soft pancreatic texture, small pancreatic duct diameter, and low postoperative hemoglobin; therefore, effective measures should be taken to reduce the occurrence of pancreatic fistula according to the patients’ own conditions.

  19. Transient thermal stresses in a circular cylinder with constrained ends

    International Nuclear Information System (INIS)

    Goshima, Takahito; Miyao, Kaju

    1986-01-01

    This paper deals with the transient thermal stresses in a finite circular cylinder constrained at both end surfaces and subjected to axisymmetric temperature distribution on the lateral surface. The thermoelastic problem is formulated in terms of a thermoelastic displacement potential and three harmonic stress functions. Numerical calculations are carried out for the case of the uniform temperature distribution on the lateral surface. The stress distributions on the constrained end and the free surface are shown graphically, and the singularity in stresses appearing at the circumferential edge is considered. Moreover, the approximate solution based upon the plane strain theory is introduced in order to compare with the rigorous one, and it is considered how the length of the cylinder and the elapsed time affect the accuracy of the approximation. (author)

  20. On meeting capital requirements with a chance-constrained optimization model.

    Science.gov (United States)

    Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan

    2016-01-01

    This paper deals with a capital to risk asset ratio chance-constrained optimization model in the presence of loans, treasury bill, fixed assets and non-interest earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to development of a deterministic convex counterpart of the capital to risk asset ratio chance constraint. We pursue the scope of analyzing our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed by applying numerical procedures, in order to derive valuable insights from a financial outlook. Our results suggest that our capital to risk asset ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in future market value of assets.
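    The standard way such a chance constraint becomes a deterministic convex constraint can be shown with a generic sketch. This assumes normally distributed losses rather than the paper's modified CreditMetrics dynamics, and all names are ours:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def chance_constrained_weights(ret, mu_loss, cov_loss, limit, eps=0.05):
        """Maximise expected return subject to the deterministic convex
        counterpart of the chance constraint P(loss'x <= limit) >= 1 - eps
        for loss ~ N(mu_loss, cov_loss):
            mu_loss'x + z_{1-eps} * sqrt(x' cov_loss x) <= limit."""
        z = norm.ppf(1.0 - eps)
        n = len(ret)
        cons = [
            {"type": "eq", "fun": lambda x: x.sum() - 1.0},
            {"type": "ineq",
             "fun": lambda x: limit - (mu_loss @ x
                                       + z * np.sqrt(x @ cov_loss @ x))},
        ]
        res = minimize(lambda x: -(ret @ x), np.full(n, 1.0 / n),
                       bounds=[(0.0, 1.0)] * n, constraints=cons)
        return res.x
    ```

    With eps = 0.05 the constraint holds with the 95% likelihood quoted in the abstract; the second-order-cone term z·sqrt(x'Σx) is what makes the counterpart convex.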

  1. Constrained variational calculus for higher order classical field theories

    Energy Technology Data Exchange (ETDEWEB)

    Campos, Cedric M; De Leon, Manuel; De Diego, David MartIn, E-mail: cedricmc@icmat.e, E-mail: mdeleon@icmat.e, E-mail: david.martin@icmat.e [Instituto de Ciencias Matematicas, CSIC-UAM-UC3M-UCM, Serrano 123, 28006 Madrid (Spain)

    2010-11-12

    We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular to the geometrical description of optimal control theory for partial differential equations.

  2. Constrained variational calculus for higher order classical field theories

    International Nuclear Information System (INIS)

    Campos, Cedric M; De Leon, Manuel; De Diego, David MartIn

    2010-01-01

    We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular to the geometrical description of optimal control theory for partial differential equations.

  3. Chance-constrained optimization of demand response to price signals

    DEFF Research Database (Denmark)

    Dorini, Gianluca Fabio; Pinson, Pierre; Madsen, Henrik

    2013-01-01

    within a recursive least squares (RLS) framework using data measurable at the grid level, in an adaptive fashion. Optimal price signals are generated by embedding the FIR models within a chance-constrained optimization framework. The objective is to keep the price signal as unchanged as possible from...

  4. Evaluation of constrained mobility for programmability in network management

    NARCIS (Netherlands)

    Bohoris, C.; Liotta, A.; Pavlou, G.; Ambler, A.P.; Calo, S.B.; Kar, G.

    2000-01-01

    In recent years, a significant amount of research work has addressed the use of code mobility in network management. In this paper, we first introduce three aspects of code mobility and argue that constrained mobility offers a natural and easy approach to network management programmability. While

  5. Two Expectation-Maximization Algorithms for Boolean Factor Analysis

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.

    2014-01-01

    Roč. 130, 23 April (2014), s. 83-97 ISSN 0925-2312 R&D Projects: GA ČR GAP202/10/0262 Grant - others:GA MŠk(CZ) ED1.1.00/02.0070; GA MŠk(CZ) EE.2.3.20.0073 Program:ED Institutional research plan: CEZ:AV0Z10300504 Keywords : Boolean Factor analysis * Binary Matrix factorization * Neural networks * Binary data model * Dimension reduction * Bars problem Subject RIV: IN - Informatics, Computer Science Impact factor: 2.083, year: 2014
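    The binary data model behind Boolean factor analysis writes each entry of the observed binary matrix as an OR of ANDs: X[i,j] = ⋁ₖ A[i,k] ∧ F[k,j]. A minimal NumPy illustration with a planted factorization follows; this is not the paper's EM algorithms, and the matrices are hypothetical:

```python
import numpy as np
from itertools import product

def boolean_product(A, F):
    """Boolean matrix product: X[i, j] = OR_k (A[i, k] AND F[k, j])."""
    return ((A.astype(int) @ F.astype(int)) > 0).astype(int)

# Planted factorization: 6 objects, 2 latent Boolean factors, 8 attributes.
A = np.array([[1, 0], [0, 1], [1, 1], [1, 0], [0, 1], [1, 1]])
F = np.array([[1, 1, 0, 0, 1, 0, 0, 0],
              [0, 0, 1, 1, 0, 1, 0, 0]])
X = boolean_product(A, F)

def boolean_rank_le_1(X):
    """Brute-force check: is X the Boolean product of a single factor?"""
    n, m = X.shape
    for a in product([0, 1], repeat=n):
        for f in product([0, 1], repeat=m):
            if np.array_equal(np.outer(a, f), X):
                return True
    return False

print(boolean_rank_le_1(X))  # False: no single factor reproduces X
```

The planted two-factor product reconstructs X exactly, while the brute-force check confirms that no single Boolean factor does, so the Boolean rank of this X is 2.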

  6. Mesh dependence in PDE-constrained optimisation an application in tidal turbine array layouts

    CERN Document Server

    Schwedes, Tobias; Funke, Simon W; Piggott, Matthew D

    2017-01-01

    This book provides an introduction to PDE-constrained optimisation using finite elements and the adjoint approach. The practical impact of the mathematical insights presented here is demonstrated using the realistic scenario of the optimal placement of marine power turbines, thereby illustrating the real-world relevance of best-practice Hilbert space aware approaches to PDE-constrained optimisation problems. Many optimisation problems that arise in a real-world context are constrained by partial differential equations (PDEs). That is, the system whose configuration is to be optimised follows physical laws given by PDEs. This book describes general Hilbert space formulations of optimisation algorithms, thereby facilitating optimisations whose controls are functions of space. It demonstrates the importance of methods that respect the Hilbert space structure of the problem by analysing the mathematical drawbacks of failing to do so. The approaches considered are illustrated using the optimisation problem arisin...

  7. Factor analysis of symptom profile in early onset and late onset OCD.

    Science.gov (United States)

    Grover, Sandeep; Sarkar, Siddharth; Gupta, Gourav; Kate, Natasha; Ghosh, Abhishek; Chakrabarti, Subho; Avasthi, Ajit

    2018-04-01

    This study aimed to assess the factor structure of early and late onset OCD. Additionally, cluster analysis was conducted in the same sample to assess the applicability of the factors. 345 participants were assessed with the Yale Brown Obsessive Compulsive Scale symptom checklist. Patients were classified as early onset (onset of symptoms at age ≤ 18 years) and late onset (onset at age > 18 years) OCD depending upon the age of onset of the symptoms. Factor analysis and cluster analysis of early-onset and late-onset OCD were conducted. The study sample comprised 91 early onset and 245 late onset OCD subjects. Males were more common in the early onset group. Differences in the frequency of contamination-related symptoms and of checking, repeating, counting and ordering/arranging compulsions were present across the early and late onset groups. Factor analysis of the YBOCS revealed a 3-factor solution for both groups, which largely concurred with each other. These factors were named hoarding and symmetry (factor 1), contamination (factor 2) and aggressive, sexual and religious (factor 3). To conclude, this study shows that the factor structure of OCD symptoms appears to be similar between early-onset and late-onset OCD. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Tensor-Dictionary Learning with Deep Kruskal-Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Andrew J.; Pu, Yunchen; Sun, Yannan; Spell, Gregory; Carin, Lawrence

    2017-04-20

    We introduce new dictionary learning methods for tensor-variate data of any order. We represent each data item as a sum of Kruskal decomposed dictionary atoms within the framework of beta-process factor analysis (BPFA). Our model is nonparametric and can infer the tensor-rank of each dictionary atom. This Kruskal-Factor Analysis (KFA) is a natural generalization of BPFA. We also extend KFA to a deep convolutional setting and develop online learning methods. We test our approach on image processing and classification tasks, achieving state-of-the-art results for 2D & 3D inpainting and Caltech 101. The experiments also show that atom-rank impacts both overcompleteness and sparsity.

  9. Multivariate factor analysis of Girgentana goat milk composition

    Directory of Open Access Journals (Sweden)

    Pietro Giaccone

    2010-01-01

    Full Text Available The interpretation of the several variables that contribute to defining milk quality is difficult due to the high degree of correlation among them. In this case, one of the best methods of statistical processing is factor analysis, which belongs to the multivariate group; for our study this particular statistical approach was employed. A total of 1485 individual goat milk samples from 117 Girgentana goats were collected fortnightly from January to July, and analysed for physical and chemical composition, and clotting properties. Milk pH and titratable acidity were within the normal range for fresh goat milk. Morning milk yield was 704 ± 323 g, with 3.93 ± 1.23% and 3.48 ± 0.38% for fat and protein percentages, respectively. The milk urea content was 43.70 ± 8.28 mg/dl. The clotting ability of Girgentana milk was quite good, with a renneting time equal to 16.96 ± 3.08 minutes, a rate of curd formation of 2.01 ± 1.63 minutes and a curd firmness of 25.08 ± 7.67 millimetres. Factor analysis was performed by applying axis orthogonal rotation (rotation type VARIMAX); the analysis grouped the milk components into three latent or common factors. The first, which explained 51.2% of the total covariance, was defined as “slow milks”, because it was linked to r and pH. The second latent factor, which explained 36.2% of the total covariance, was defined as “milk yield”, because it is positively correlated to the morning milk yield and to the urea content, whilst negatively correlated to the fat percentage. The third latent factor, which explained 12.6% of the total covariance, was defined as “curd firmness”, because it is linked to protein percentage, a30 and titratable acidity. With the aim of evaluating the influence of environmental effects (stage of kidding, parity and type of kidding), factor scores were analysed with the mixed linear model. Results showed significant effects of the season of
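    The orthogonal VARIMAX rotation used in this record can be sketched with the standard SVD-based iteration; this is our own NumPy illustration on a synthetic loading matrix, not the authors' analysis code:

```python
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-8):
    """Orthogonal varimax rotation of a loading matrix L (standard SVD iteration)."""
    n, k = L.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - (gamma / n) * Lr @ np.diag((Lr ** 2).sum(axis=0))))
        R = u @ vt                       # nearest orthogonal rotation
        if s.sum() < crit_old * (1 + tol):
            break
        crit_old = s.sum()
    return L @ R, R

# Synthetic simple-structure loadings, deliberately mixed by a 30-degree rotation.
L0 = np.array([[.8, 0], [.7, 0], [.9, 0], [0, .8], [0, .7], [0, .9]])
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
L_mixed = L0 @ np.array([[c, -s], [s, c]])
L_rot, R = varimax(L_mixed)
# After rotation, each variable again loads on exactly one factor
# (up to sign and permutation of the factors).
print((np.abs(L_rot) > 0.5).sum(axis=1))
```

Varimax maximises the variance of the squared loadings within each column, which is what drives the solution back toward simple structure.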

  10. Confirmatory factor analysis using Microsoft Excel.

    Science.gov (United States)

    Miles, Jeremy N V

    2005-11-01

    This article presents a method for using Microsoft (MS) Excel for confirmatory factor analysis (CFA). CFA is often seen as an impenetrable technique, and thus, when it is taught, there is frequently little explanation of the mechanisms or underlying calculations. The aim of this article is to demonstrate that this is not the case; it is relatively straightforward to produce a spreadsheet in MS Excel that can carry out simple CFA. It is possible, with few or no programming skills, to effectively program a CFA analysis and, thus, to gain insight into the workings of the procedure.
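    The mechanics the article exposes are easy to replicate outside a spreadsheet: a one-factor CFA just minimises a discrepancy between the model-implied covariance Σ(θ) = λλ′ + Ψ and the sample covariance S. The Python sketch below uses an unweighted least squares fit function on simulated data; the simulated loadings and the choice of fit function are ours, not the article's spreadsheet:

```python
import numpy as np
from scipy.optimize import minimize

# Simulate a one-factor model x = lam*f + e with unit-variance indicators.
rng = np.random.default_rng(0)
lam_true = np.array([0.9, 0.8, 0.7, 0.6])
f = rng.normal(size=2000)
X = np.outer(f, lam_true) + rng.normal(size=(2000, 4)) * np.sqrt(1 - lam_true ** 2)
S = np.cov(X, rowvar=False)

def uls(theta):
    """Unweighted least squares fit: || S - (lam lam' + diag(psi)) ||_F^2."""
    lam, psi = theta[:4], theta[4:]
    Sigma = np.outer(lam, lam) + np.diag(psi)
    return ((S - Sigma) ** 2).sum()

res = minimize(uls, x0=np.full(8, 0.5),
               bounds=[(None, None)] * 4 + [(1e-6, None)] * 4)
lam_hat = np.abs(res.x[:4])   # loadings are identified only up to sign
print(np.round(lam_hat, 2))
```

A maximum likelihood fit function (log|Σ| + tr(SΣ⁻¹)) could be swapped in; ULS keeps the analogy to a hand-built spreadsheet minimisation closest.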

  11. Constrained choices? Linking employees' and spouses' work time to health behaviors.

    Science.gov (United States)

    Fan, Wen; Lam, Jack; Moen, Phyllis; Kelly, Erin; King, Rosalind; McHale, Susan

    2015-02-01

    There are extensive literatures on work conditions and health and on family contexts and health, but less research asks how a spouse's or partner's work conditions may affect health behaviors. Drawing on the constrained choices framework, we theorized health behaviors as a product of one's own and one's spouse's work time as well as gender expectations. We examined fast food consumption and exercise behaviors using survey data from 429 employees in an Information Technology (IT) division of a U.S. Fortune 500 firm and from their spouses. We found fast food consumption is affected by men's work hours-both male employees' own work hours and the hours worked by husbands of women respondents-in a nonlinear way. The groups most likely to eat fast food are men working 50 h/week and women whose husbands work 45-50 h/week. Second, exercise is better explained if work time is conceptualized at the couple, rather than individual, level. In particular, neo-traditional arrangements (where husbands work longer than their wives) constrain women's ability to engage in exercise but increase odds of men exercising. Women in couples where both partners are working long hours have the highest odds of exercise. In addition, women working long hours with high schedule control are more apt to exercise and men working long hours whose wives have high schedule flexibility are as well. Our findings suggest different health behaviors may have distinct antecedents but gendered work-family expectations shape time allocations in ways that promote men's and constrain women's health behaviors. They also suggest the need to expand the constrained choices framework to recognize that long hours may encourage exercise if both partners are looking to sustain long work hours and that work resources, specifically schedule control, of one partner may expand the choices of the other. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Dynamic Convex Duality in Constrained Utility Maximization

    OpenAIRE

    Li, Yusong; Zheng, Harry

    2016-01-01

    In this paper, we study a constrained utility maximization problem following the convex duality approach. After formulating the primal and dual problems, we construct the necessary and sufficient conditions for both the primal and dual problems in terms of FBSDEs plus additional conditions. Such formulation then allows us to explicitly characterize the primal optimal control as a function of the adjoint process coming from the dual FBSDEs in a dynamic fashion and vice versa. Moreover, we also...

  13. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  14. MOOC Success Factors: Proposal of an Analysis Framework

    Directory of Open Access Journals (Sweden)

    Margarida M. Marques

    2017-10-01

    Full Text Available Aim/Purpose: From an idea of lifelong-learning-for-all to a phenomenon affecting higher education, Massive Open Online Courses (MOOCs) can be the next step to a truly universal education. Indeed, MOOC enrolment rates can be astoundingly high; still, their completion rates are frequently disappointingly low. Nevertheless, as courses, the participants’ enrolment and learning within the MOOCs must be considered when assessing their success. In this paper, the authors’ aim is to reflect on what makes a MOOC successful and to propose an analysis framework of MOOC success factors. Background: A literature review was conducted to identify reported MOOC success factors and to propose an analysis framework. Methodology: This literature-based framework was tested against data of a specific MOOC and refined, within a qualitative interpretivist methodology. The data were collected from the ‘As alterações climáticas nos média escolares - Clima@EduMedia’ course, which was developed by the project Clima@EduMedia and was submitted to content analysis. This MOOC aimed to support science and school media teachers in the use of media to teach climate change. Contribution: By proposing a MOOC success factors framework the authors attempt to fill a literature gap regarding the criteria for considering a specific MOOC successful. Findings: This work’s major finding is a literature-based and empirically-refined MOOC success factors analysis framework. Recommendations for Practitioners: The proposed framework is also a set of best practices relevant to MOOC developers, particularly when targeting teachers as potential participants. Recommendation for Researchers: This work’s relevance is also based on its contribution to increasing empirical research on MOOCs. Impact on Society: By providing a proposal of a framework on factors to make a MOOC successful, the authors hope to contribute to the quality of MOOCs. Future Research: Future

  15. Constraining the ensemble Kalman filter for improved streamflow forecasting

    Science.gov (United States)

    Maxwell, Deborah H.; Jackson, Bethanna M.; McGregor, James

    2018-05-01

    Data assimilation techniques such as the Ensemble Kalman Filter (EnKF) are often applied to hydrological models with minimal state volume/capacity constraints enforced during ensemble generation. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this paper, we investigate the effect of constraining the EnKF on forecast performance. A "free run" in which no assimilation is applied is compared to a completely unconstrained EnKF implementation, a 'typical' hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then to a more tightly constrained implementation where flux as well as mass constraints are imposed to force the rate of water movement to/from ensemble states to be within physically consistent boundaries. A three year period (2008-2010) was selected from the available data record (1976-2010). This was specifically chosen as it had no significant data gaps and represented well the range of flows observed in the longer dataset. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Mass constraints alone did little to improve forecast performance; in fact, several were significantly degraded compared to the free run. In contrast, the combined use of mass and flux constraints significantly improved forecast performance in six events relative to all other implementations, while the remaining two events showed no significant difference in performance. Placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state estimation and results in more accurate and reliable forward predictions of streamflow for robust decision-making. We also
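    The distinction between mass and flux constraints can be made concrete in a toy scalar EnKF analysis step: mass constraints clip the updated states into [0, capacity], while flux constraints cap the size of each assimilation increment. The model, numbers and function below are our own illustration, not the paper's hydrological model:

```python
import numpy as np

rng = np.random.default_rng(42)

def constrained_enkf_update(x, obs, obs_err, capacity, max_flux):
    """EnKF analysis step for a directly observed scalar store (H = 1),
    with a flux constraint on the increment and a mass constraint on the state."""
    P = np.var(x, ddof=1)                         # forecast ensemble variance
    K = P / (P + obs_err ** 2)                    # Kalman gain
    perturbed_obs = obs + rng.normal(0.0, obs_err, size=x.size)
    inc = K * (perturbed_obs - x)
    inc = np.clip(inc, -max_flux, max_flux)       # flux constraint
    return np.clip(x + inc, 0.0, capacity)        # mass constraint

ens = rng.normal(40.0, 5.0, size=100)             # forecast storage ensemble (mm)
ana = constrained_enkf_update(ens, obs=55.0, obs_err=2.0,
                              capacity=50.0, max_flux=10.0)
print(round(ana.mean(), 1), ana.min() >= 0.0, ana.max() <= 50.0)
```

With an observation above the storage capacity, the unconstrained update would push states past 50 mm; the constrained version still moves the ensemble toward the observation but keeps every member physically plausible.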

  16. DISRUPTIVE EVENT BIOSPHERE DOSE CONVERSION FACTOR ANALYSIS

    International Nuclear Information System (INIS)

    M.A. Wasiolek

    2005-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The Biosphere Model Report (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the "Biosphere Model Report" (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the "Biosphere Dose Conversion Factor Importance and Sensitivity Analysis" (Figure 1-1).
The objective of this analysis was to develop the BDCFs for the volcanic

  17. Bounds on the capacity of constrained two-dimensional codes

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Justesen, Jørn

    2000-01-01

    Bounds on the capacity of constrained two-dimensional (2-D) codes are presented. The bounds of Calkin and Wilf apply to first-order symmetric constraints. The bounds are generalized in a weaker form to higher order and nonsymmetric constraints. Results are given for constraints specified by run-l...
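    For one-dimensional constraints the capacity is exactly the base-2 logarithm of the largest eigenvalue of the constraint graph's transfer matrix; the Calkin-Wilf-style bounds in this record sandwich the analogous, and much harder, two-dimensional quantity. A quick 1-D illustration for the (1, ∞)-RLL constraint (no two adjacent 1s):

```python
import numpy as np

# States = previous symbol; the transition 1 -> 1 is forbidden.
T = np.array([[1, 1],    # after a 0 we may emit 0 or 1
              [1, 0]])   # after a 1 we may emit only 0
capacity = np.log2(max(np.linalg.eigvals(T).real))
print(round(capacity, 4))  # log2 of the golden ratio, ~0.6942 bits/symbol
```

The eigenvalues of T are (1 ± √5)/2, so the number of admissible length-n strings grows like the Fibonacci numbers and the capacity is log₂((1 + √5)/2).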

  18. Constrained control of a once-through boiler with recirculation

    DEFF Research Database (Denmark)

    Trangbæk, K

    2008-01-01

    There is an increasing need to operate power plants at low load for longer periods of time. When a once-through boiler operates at a sufficiently low load, recirculation is introduced, significantly altering the control structure. This paper illustrates the possibilities for using constrained con...

  19. Use of segmented constrained layer damping treatment for improved helicopter aeromechanical stability

    Science.gov (United States)

    Liu, Qiang; Chattopadhyay, Aditi; Gu, Haozhong; Zhou, Xu

    2000-08-01

    The use of a special type of smart material, known as segmented constrained layer (SCL) damping, is investigated for improved rotor aeromechanical stability. The rotor blade load-carrying member is modeled using a composite box beam with arbitrary wall thickness. The SCLs are bonded to the upper and lower surfaces of the box beam to provide passive damping. A finite-element model based on a hybrid displacement theory is used to accurately capture the transverse shear effects in the composite primary structure and the viscoelastic and the piezoelectric layers within the SCL. Detailed numerical studies are presented to assess the influence of the number of actuators and their locations for improved aeromechanical stability. Ground and air resonance analysis models are implemented in the rotor blade built around the composite box beam with segmented SCLs. A classic ground resonance model and an air resonance model are used in the rotor-body coupled stability analysis. The Pitt dynamic inflow model is used in the air resonance analysis under hover condition. Results indicate that the surface bonded SCLs significantly increase rotor lead-lag regressive modal damping in the coupled rotor-body system.

  20. Psycho-social factors determining success in high-performance triathlon: compared perception in the coach-athlete pair.

    Science.gov (United States)

    Ruiz-Tendero, Germán; Salinero Martín, Juan José

    2012-12-01

    High-level sport can be analyzed using the complex system model, in which performance is constrained by many factors. Coaches' and athletes' perceptions of important positive and negative factors affecting performance were compared. Participants were 48 high-level international triathletes (n = 34) and their coaches (n = 14). They were personally interviewed via a questionnaire designed by four accredited experts, who selected groups of both positive and negative factors affecting performance. A list of factors was developed, in order of greater to lesser importance in the opinion of athletes and coaches, for subsequent analysis. Two ranked lists (positive and negative factors) indicated that athletes appear to rate personal environment factors (family, teammates, lack of support from relatives) higher, while the coaches tended to give more importance to technical and institutional aspects (institutional support, coach, medical support). There was complete agreement between coaches and triathletes about the top five positive factors. Negative factor agreement was somewhat lower (agreement on 3/5 factors). The most important positive factor for coaches and athletes was "dedication/engagement," while the most important factor adversely affecting performance was "injuries".

  1. Capital Cost Optimization for Prefabrication: A Factor Analysis Evaluation Model

    Directory of Open Access Journals (Sweden)

    Hong Xue

    2018-01-01

    Full Text Available High capital cost is a significant hindrance to the promotion of prefabrication. In order to optimize cost management and reduce capital cost, this study aims to explore the latent factors and a factor analysis evaluation model. Semi-structured interviews were conducted to explore potential variables, and then a questionnaire survey was employed to collect professionals’ views on their effects. After data collection, exploratory factor analysis was adopted to explore the latent factors. Seven latent factors were identified, including “Management Index”, “Construction Dissipation Index”, “Productivity Index”, “Design Efficiency Index”, “Transport Dissipation Index”, “Material Increment Index” and “Depreciation Amortization Index”. With these latent factors, a factor analysis evaluation model (FAEM), divided into a factor analysis model (FAM) and a comprehensive evaluation model (CEM), was established. The FAM was used to explore the effect of observed variables on the high capital cost of prefabrication, while the CEM was used to evaluate the comprehensive cost management level of prefabrication projects. Case studies were conducted to verify the models. The results revealed that collaborative management had a positive effect on the capital cost of prefabrication. Material increment costs and labor costs had significant impacts on production cost. This study demonstrated the potential of on-site management and standardization design to reduce capital cost. Hence, collaborative management is necessary for cost management of prefabrication. Innovation and detailed design are needed to improve cost performance. New forms of precast component factories can be explored to reduce transportation cost. Meanwhile, targeted strategies can be adopted for different prefabrication projects. The findings optimized the capital cost and improved cost performance by providing an evaluation and optimization model, which helps managers to

  2. Changes of gait parameters following constrained-weight shift training in patients with stroke

    OpenAIRE

    Nam, Seok Hyun; Son, Sung Min; Kim, Kyoung

    2017-01-01

    [Purpose] This study aimed to investigate the effects of training involving compelled weight shift on the paretic lower limb on gait parameters and plantar pressure distribution in patients with stroke. [Subjects and Methods] Forty-five stroke patients participated in the study and were randomly divided into: group with a 5-mm lift on the non-paretic side for constrained weight shift training (5: constrained weight shift training) (n=15); group with a 10-mm lift on the non-paretic side for co...

  3. A social work study using factor analysis on detecting important factors creating stress: A case study of hydro-power employees

    Directory of Open Access Journals (Sweden)

    Batoul Aminjafari

    2012-08-01

    Full Text Available This paper presents an empirical study based on factor analysis to detect the factors that cause stress among employees of a hydropower unit located in the city of Esfahan, Iran. The survey covered all 81 people working in the customer service section of this company and consisted of two parts: in the first part, we gathered personal information such as age, gender, education and job experience through seven questions; in the second part, there were 66 questions covering all the relevant factors impacting employees' stress. Cronbach's alpha was calculated as 0.946, which is well above the minimum acceptable level. The factor analysis detected 16 important groups of factors, and each factor was given an appropriate name. The results show that, among the different factors, difficulty of working conditions and work pressure are the two most important factors increasing stress among employees.

  4. Testing all six person-oriented principles in dynamic factor analysis.

    Science.gov (United States)

    Molenaar, Peter C M

    2010-05-01

    All six person-oriented principles identified by Sterba and Bauer's Keynote Article can be tested by means of dynamic factor analysis in its current form. In particular, it is shown how complex interactions and interindividual differences/intraindividual change can be tested in this way. In addition, the necessity to use single-subject methods in the analysis of developmental processes is emphasized, and attention is drawn to the possibility to optimally treat developmental psychopathology by means of new computational techniques that can be integrated with dynamic factor analysis.

  5. Exploratory factor analysis and reliability analysis with missing data: A simple method for SPSS users

    Directory of Open Access Journals (Sweden)

    Bruce Weaver

    2014-09-01

    Full Text Available Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. But unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
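    The EM idea underlying the macros can be sketched outside SPSS: iterate between imputing missing entries with their conditional means under a multivariate normal and re-estimating the mean and covariance, carrying the conditional covariance of the imputations. The implementation below is generic EM in NumPy, not the SPSS macros themselves, and the demo data are simulated:

```python
import numpy as np

def em_mean_cov(X, n_iter=50):
    """EM estimates of the mean and covariance of a multivariate normal
    fitted to data with missing entries coded as NaN."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    mu = np.nanmean(X, axis=0)
    Sigma = np.diag(np.nanvar(X, axis=0))
    for _ in range(n_iter):
        S = np.zeros((p, p))
        M = np.zeros(p)
        for row in X:
            miss = np.isnan(row)
            x = row.copy()
            C = np.zeros((p, p))
            if miss.any():
                obs = ~miss
                B = Sigma[np.ix_(miss, obs)] @ np.linalg.inv(Sigma[np.ix_(obs, obs)])
                x[miss] = mu[miss] + B @ (row[obs] - mu[obs])   # conditional mean
                C[np.ix_(miss, miss)] = (Sigma[np.ix_(miss, miss)]
                                         - B @ Sigma[np.ix_(obs, miss)])
            M += x
            S += np.outer(x, x) + C
        mu = M / n
        Sigma = S / n - np.outer(mu, mu)
    return mu, Sigma

# Demo: knock out ~20% of two columns of correlated normal data (MCAR).
rng = np.random.default_rng(3)
C_true = np.array([[1.0, 0.6, 0.3],
                   [0.6, 1.0, 0.4],
                   [0.3, 0.4, 1.0]])
X = rng.multivariate_normal(np.zeros(3), C_true, size=500)
mask = rng.random(X.shape) < 0.2
mask[:, 0] = False        # keep one column complete (avoids all-missing rows)
X[mask] = np.nan
mu_hat, Sigma_hat = em_mean_cov(X)
d = np.sqrt(np.diag(Sigma_hat))
corr_hat = Sigma_hat / np.outer(d, d)
print(np.round(corr_hat, 2))
```

The resulting EM correlation matrix is exactly the kind of object the article feeds to FACTOR or RELIABILITY in place of listwise- or pairwise-deleted correlations.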

  6. A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality

    Science.gov (United States)

    Cheung, KW; So, HC; Ma, W.-K.; Chan, YT

    2006-12-01

    The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above described measurement cases. The advantages of CWLS include performance optimality and capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
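    The TOA branch of the problem is easy to sketch: squaring each range equation rᵢ = ||x − sᵢ|| makes it linear in (x, R) with the auxiliary unknown R = ||x||². The plain linearized least squares below ignores that coupling; the paper's CWLS is precisely the version that enforces the R = ||x||² constraint and weights the equations. Anchor positions, target and noise level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - target, axis=1) + rng.normal(0.0, 0.01, size=4)

# Squaring r_i = ||x - s_i|| gives the linear equations
#   -2 s_i . x + R = r_i^2 - ||s_i||^2,  with extra unknown R = ||x||^2.
A = np.hstack([-2.0 * anchors, np.ones((4, 1))])
b = ranges ** 2 - (anchors ** 2).sum(axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
x_hat = sol[:2]
print(np.round(x_hat, 2))
```

Treating R as free costs accuracy when noise grows; constraining it to equal ||x||², as CWLS does, is what recovers the near-optimal performance reported in the paper.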

  7. A One-Layer Recurrent Neural Network for Constrained Complex-Variable Convex Optimization.

    Science.gov (United States)

    Qin, Sitian; Feng, Jiqiang; Song, Jiahui; Wen, Xingnan; Xu, Chen

    2018-03-01

    In this paper, based on calculus and penalty method, a one-layer recurrent neural network is proposed for solving constrained complex-variable convex optimization. It is proved that for any initial point from a given domain, the state of the proposed neural network reaches the feasible region in finite time and converges to an optimal solution of the constrained complex-variable convex optimization finally. In contrast to existing neural networks for complex-variable convex optimization, the proposed neural network has a lower model complexity and better convergence. Some numerical examples and application are presented to substantiate the effectiveness of the proposed neural network.
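    The penalty-based dynamics of such networks can be imitated with a forward-Euler integration of dx/dt = −(∇f(x) + σ∇P(x)), where P penalizes constraint violation. Below is a real-valued toy stand-in for the complex-variable setting; the problem, penalty and constants are ours, not the paper's network:

```python
import numpy as np

def grad_flow(c, sigma=50.0, dt=1e-3, steps=20000):
    """Euler-discretized 'one-layer network' dynamics dx/dt = -(grad f + sigma*grad P)
    for f(x) = ||x - c||^2 subject to 0 <= x <= 1, via an exterior quadratic penalty."""
    x = np.zeros_like(c)
    for _ in range(steps):
        grad_f = 2 * (x - c)
        grad_P = 2 * np.maximum(x - 1, 0) - 2 * np.maximum(-x, 0)
        x -= dt * (grad_f + sigma * grad_P)
    return x

x_star = grad_flow(np.array([2.0, 0.5]))
print(np.round(x_star, 2))
```

With a finite penalty weight the equilibrium (here 104/102 ≈ 1.02 in the first coordinate) sits just outside the box; increasing σ moves it arbitrarily close to the true projection [1, 0.5], mirroring the finite-time feasibility result the paper proves for its exact-penalty dynamics.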

  8. A Simply Constrained Optimization Reformulation of KKT Systems Arising from Variational Inequalities

    International Nuclear Information System (INIS)

    Facchinei, F.; Fischer, A.; Kanzow, C.; Peng, J.-M.

    1999-01-01

    The Karush-Kuhn-Tucker (KKT) conditions can be regarded as optimality conditions for both variational inequalities and constrained optimization problems. In order to overcome some drawbacks of recently proposed reformulations of KKT systems, we propose casting KKT systems as a minimization problem with nonnegativity constraints on some of the variables. We prove that, under fairly mild assumptions, every stationary point of this constrained minimization problem is a solution of the KKT conditions. Based on this reformulation, a new algorithm for the solution of the KKT conditions is suggested and shown to have some strong global and local convergence properties
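    The reformulation idea can be tried on the smallest instance: the KKT conditions of min ½xᵀQx + cᵀx subject to x ≥ 0 read x ≥ 0, F(x) = Qx + c ≥ 0, xᵀF(x) = 0, and can be cast as minimizing the complementarity gap under the two nonnegativity conditions. We solve this below with SciPy's SLSQP rather than the authors' algorithm, and Q and c are made-up data:

```python
import numpy as np
from scipy.optimize import minimize

# KKT/complementarity system for min 0.5 x'Qx + c'x s.t. x >= 0:
#   x >= 0,  F(x) = Qx + c >= 0,  x . F(x) = 0.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, 1.0])
F = lambda x: Q @ x + c

res = minimize(lambda x: x @ F(x),             # complementarity gap, 0 at a solution
               x0=np.array([0.5, 0.5]),
               bounds=[(0, None), (0, None)],  # nonnegativity on x
               constraints=[{'type': 'ineq', 'fun': F}],  # F(x) >= 0
               method='SLSQP')
print(np.round(res.x, 3), round(res.fun, 6))
```

Any feasible point with zero objective solves the KKT system; here the solver recovers x = (1, 0), where the first constraint is active and the second multiplier-like residual F₂ stays strictly positive.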

  9. Likelihood-based Dynamic Factor Analysis for Measurement and Forecasting

    NARCIS (Netherlands)

    Jungbacker, B.M.J.P.; Koopman, S.J.

    2015-01-01

    We present new results for the likelihood-based analysis of the dynamic factor model. The latent factors are modelled by linear dynamic stochastic processes. The idiosyncratic disturbance series are specified as autoregressive processes with mutually correlated innovations. The new results lead to

  10. Extracting foreground-obscured μ-distortion anisotropies to constrain primordial non-Gaussianity

    Science.gov (United States)

    Remazeilles, M.; Chluba, J.

    2018-04-01

    Correlations between cosmic microwave background (CMB) temperature, polarization and spectral distortion anisotropies can be used as a probe of primordial non-Gaussianity. Here, we perform a reconstruction of μ-distortion anisotropies in the presence of Galactic and extragalactic foregrounds, applying the so-called Constrained ILC component separation method to simulations of proposed CMB space missions (PIXIE, LiteBIRD, CORE, PICO). Our sky simulations include Galactic dust, Galactic synchrotron, Galactic free-free, thermal Sunyaev-Zeldovich effect, as well as primary CMB temperature and μ-distortion anisotropies, the latter being added as a correlated field. The Constrained ILC method allows us to null the CMB temperature anisotropies in the reconstructed μ-map (and vice versa), in addition to mitigating the contaminations from astrophysical foregrounds and instrumental noise. We compute the cross-power spectrum between the reconstructed (CMB-free) μ-distortion map and the (μ-free) CMB temperature map, after foreground removal and component separation. Since the cross-power spectrum is proportional to the primordial non-Gaussianity parameter, fNL, on scales k ≃ 740 Mpc^{-1}, this allows us to derive fNL-detection limits for the aforementioned future CMB experiments. Our analysis shows that foregrounds degrade the theoretical detection limits (based mostly on instrumental noise) by more than one order of magnitude, with PICO standing the best chance at placing upper limits on scale-dependent non-Gaussianity. We also discuss the dependence of the constraints on the channel sensitivities and chosen bands. Like for B-mode polarization measurements, extended coverage at frequencies ν ≲ 40 GHz and ν ≳ 400 GHz provides more leverage than increased channel sensitivity.

  11. Fringe instability in constrained soft elastic layers.

    Science.gov (United States)

    Lin, Shaoting; Cohen, Tal; Zhang, Teng; Yuk, Hyunwoo; Abeyaratne, Rohan; Zhao, Xuanhe

    2016-11-04

    Soft elastic layers with top and bottom surfaces adhered to rigid bodies are abundant in biological organisms and engineering applications. As the rigid bodies are pulled apart, the stressed layer can exhibit various modes of mechanical instabilities. In cases where the layer's thickness is much smaller than its length and width, the dominant modes that have been studied are the cavitation, interfacial and fingering instabilities. Here we report a new mode of instability which emerges if the thickness of the constrained elastic layer is comparable to or smaller than its width. In this case, the middle portion along the layer's thickness elongates nearly uniformly while the constrained fringe portions of the layer deform nonuniformly. When the applied stretch reaches a critical value, the exposed free surfaces of the fringe portions begin to undulate periodically without debonding from the rigid bodies, giving the fringe instability. We use experiments, theory and numerical simulations to quantitatively explain the fringe instability and derive scaling laws for its critical stress, critical strain and wavelength. We show that in a force controlled setting the elastic fingering instability is associated with a snap-through buckling that does not exist for the fringe instability. The discovery of the fringe instability will not only advance the understanding of mechanical instabilities in soft materials but also have implications for biological and engineered adhesives and joints.

  12. Changes in epistemic frameworks: Random or constrained?

    Directory of Open Access Journals (Sweden)

    Ananka Loubser

    2012-11-01

    Since the emergence of a solid anti-positivist approach in the philosophy of science, an important question has been to understand how and why epistemic frameworks change in time, are modified or even substituted. In contemporary philosophy of science, three main approaches to framework change can be detected in the humanist tradition:
    1. In both the pre-theoretical and theoretical domains, changes occur according to a rather constrained, predictable or even pre-determined pattern (e.g. Holton).
    2. Changes occur in a way that is more random or unpredictable and free from constraints (e.g. Kuhn, Feyerabend, Rorty, Lyotard).
    3. Between these approaches, a middle position can be found, attempting some kind of synthesis (e.g. Popper, Lakatos).
    Because this situation calls for clarification and systematisation, this article tried to achieve more clarity on how changes in pre-scientific frameworks occur, and provided transcendental criticism of the above positions. The article suggests that these positions are not fully satisfactory, as change and constancy are not sufficiently integrated. An alternative model is proposed in which changes in epistemic frameworks occur according to a pattern that is neither completely random nor rigidly constrained, so that change is dynamic but not arbitrary. This alternative model is integral rather than dialectical, and therefore does not correspond to position three.

  13. Low-lying excited states by constrained DFT

    Science.gov (United States)

    Ramos, Pablo; Pavanello, Michele

    2018-04-01

    Exploiting the machinery of Constrained Density Functional Theory (CDFT), we propose a variational method for calculating low-lying excited states of molecular systems. We dub this method eXcited CDFT (XCDFT). Excited states are obtained by self-consistently constraining a user-defined population of electrons, Nc, in the virtual space of a reference set of occupied orbitals. By imposing this population to be Nc = 1.0, we computed the first excited state of 15 molecules from a test set. Our results show that XCDFT achieves an accuracy in the predicted excitation energy only slightly worse than linear-response time-dependent DFT (TDDFT), but without incurring the problems of variational collapse typical of the more commonly adopted ΔSCF method. In addition, we selected a few challenging processes to test the limits of applicability of XCDFT. We find that, in contrast to TDDFT, XCDFT is capable of reproducing energy surfaces featuring conical intersections (azobenzene and H3) with correct topology and correct overall energetics also away from the intersection. Venturing to condensed-phase systems, XCDFT reproduces the TDDFT solvatochromic shift of benzaldehyde when it is embedded in a cluster of water molecules. Thus, we find XCDFT to be a competitive method among single-reference methods for computations of excited states in terms of time to solution, rate of convergence, and accuracy of the result.

  14. Factor Structure and Gender Stability of the Brazilian Version of the Pornography Consumption Inventory.

    Science.gov (United States)

    Baltieri, Danilo Antonio; de Oliveira, Vitor Henrique; de Souza Gatti, Ana Luísa; Junqueira Aguiar, Ana Saito; de Souza Aranha E Silva, Renata Almeida

    2016-10-02

    Few instruments are available to measure pornography consumption-related constructs, and this lack of instruments can compromise the validity of research findings. The Pornography Consumption Inventory (PCI) assesses four motivations for pornography consumption, and it has been validated in hypersexual men and medical students. However, whether the psychometric properties of this instrument are comparable across genders remains unclear. Multigroup confirmatory factor analysis (MGCFA) was used to verify the invariance of the structure of the PCI across male (n = 100) and female (n = 105) university students. The confirmatory factor analysis (CFA) for each group showed a reasonably good fit of the data to the four-factor model. The MGCFA model included only factor loadings constrained to be equal between both genders (ΔCFI 0.05). However, the ΔCFI did not support strong and strict factorial invariance, ΔCFI > 0.01. Although both genders seemed to agree with the conceptualization of pornography and the motivations for consuming it, the PCI was not gender-invariant, as men showed a stronger degree of motivation to consume pornographic material than women did. The implications of these findings for the measurement of motivations for pornography use are outlined.

  15. Splines and polynomial tools for flatness-based constrained motion planning

    Science.gov (United States)

    Suryawan, Fajar; De Doná, José; Seron, María

    2012-08-01

    This article addresses the problem of trajectory planning for flat systems with constraints. Flat systems have the useful property that the input and the state can be completely characterised by the so-called flat output. We propose a spline parametrisation for the flat output, the performance output, the states and the inputs. Using this parametrisation, the problem of constrained trajectory planning can be cast into a simple quadratic programming problem. An important result is that the B-spline parametrisation used gives exact results for constrained linear continuous-time systems. The result is exact in the sense that the constrained signal can be made arbitrarily close to the boundary without having intersampling issues (as one would have in sampled-data systems). Simulation examples are presented, involving the generation of rest-to-rest trajectories. In addition, an experimental result of the method is also presented, where two methods to generate trajectories for a magnetic-levitation (maglev) system in the presence of constraints are compared and each method's performance is discussed. The first method uses the nonlinear model of the plant, which turns out to belong to the class of flat systems. The second method uses a linearised version of the plant model around an operating point. In every case, a continuous-time description is used. The experimental results on a real maglev system reported here show that, in most scenarios, the nonlinear and linearised models produce similar, almost indistinguishable trajectories.
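
The core idea, parametrising the flat output so that boundary and path conditions become linear conditions on the coefficients, can be illustrated in the simplest case: a polynomial rest-to-rest trajectory. This is a stand-in for the article's B-spline parametrisation, with an arbitrary horizon T:

```python
import numpy as np

# Rest-to-rest flat-output trajectory: quintic y(t) = sum_{k=0}^{5} c_k t^k
# with y(0)=0, y(T)=1 and zero velocity/acceleration at both ends.  For a
# flat system, states and inputs follow algebraically from y and its
# derivatives, so constraints on them become linear constraints on c.
T = 2.0

def basis_row(t, deriv):
    """Row of the linear system: d^deriv/dt^deriv of each monomial at t."""
    return np.array([np.prod([k - j for j in range(deriv)]) * t**(k - deriv)
                     if k >= deriv else 0.0 for k in range(6)])

A = np.array([basis_row(0.0, 0), basis_row(T, 0),    # positions 0 and 1
              basis_row(0.0, 1), basis_row(T, 1),    # zero end velocities
              basis_row(0.0, 2), basis_row(T, 2)])   # zero end accelerations
b = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0])
c = np.linalg.solve(A, b)                            # 6 conditions, 6 unknowns

ts = np.linspace(0.0, T, 201)
y = np.polyval(c[::-1], ts)                          # polyval wants high-to-low
v = np.polyval(np.polyder(c[::-1]), ts)
print(y[-1], v.max())                                # endpoint 1; peak 15/(8T)
```

With a B-spline basis instead of monomials, inequality bounds on y and its derivatives stay linear in the coefficients, which is what turns the planning problem into the quadratic program described above.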

  16. Reduction of false positives in the detection of architectural distortion in mammograms by using a geometrically constrained phase portrait model

    International Nuclear Information System (INIS)

    Ayres, Fabio J.; Rangayyan, Rangaraj M.

    2007-01-01

    Objective One of the commonly missed signs of breast cancer is architectural distortion. We have developed techniques for the detection of architectural distortion in mammograms, based on the analysis of oriented texture through the application of Gabor filters and a linear phase portrait model. In this paper, we propose constraining the shape of the general phase portrait model as a means to reduce the false-positive rate in the detection of architectural distortion. Material and methods The methods were tested with one set of 19 cases of architectural distortion and 41 normal mammograms, and with another set of 37 cases of architectural distortion. Results Sensitivity rates of 84% with 4.5 false positives per image and 81% with 10 false positives per image were obtained for the two sets of images. Conclusion The adoption of a constrained phase portrait model with a symmetric matrix and the incorporation of its condition number in the analysis resulted in a reduction in the false-positive rate in the detection of architectural distortion. The proposed techniques, dedicated for the detection and localization of architectural distortion, should lead to efficient detection of early signs of breast cancer. (orig.)
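
A linear phase portrait models the local orientation field as v(x) = A x + b, and the eigenvalues of A classify the pattern. The constraint proposed above, a symmetric A, forces real eigenvalues and makes the condition number a usable feature. A toy sketch (the matrices are illustrative, not fitted to mammograms):

```python
import numpy as np

def phase_portrait_features(A):
    """Classify a linear phase portrait v(x) = A x + b via the eigenvalues
    of A.  Projecting A onto its symmetric part forces real eigenvalues
    (node or saddle, never a spiral); the condition number measures how
    anisotropic the pattern is."""
    A_sym = 0.5 * (A + A.T)               # symmetric projection of the fit
    eigvals = np.linalg.eigvalsh(A_sym)   # real, sorted ascending
    kind = "saddle" if eigvals[0] * eigvals[-1] < 0 else "node"
    cond = np.abs(eigvals).max() / np.abs(eigvals).min()
    return kind, cond

print(phase_portrait_features(np.array([[2.0, 0.1], [0.1, 1.0]])))   # node
print(phase_portrait_features(np.array([[1.0, 0.0], [0.0, -1.0]])))  # saddle
```

Thresholding the condition number then rejects degenerate fits, which is the mechanism behind the false-positive reduction reported above.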

  17. Training department's role in human factor analysis during post-trip reviews

    International Nuclear Information System (INIS)

    Goodman, D.

    1987-01-01

    'Provide training' is a frequent corrective action specified in post-trip review reports. This corrective action is most often decided upon by technical and operational staff, not training staff, without a detailed analysis of whether training can resolve the immediate problem or enhance employees' future performance. A more specific human factor or performance problem analysis would often reveal that training cannot impact or resolve the concern so as to avoid future occurrences. This human factor analysis is similar to Thomas Gilbert's Behavior Engineering Model (Human Competence, McGraw-Hill, 1978) or Robert Mager's and Peter Pipe's Performance Analysis (Analyzing Performance Problems, Pitman Learning, 1984). At Palo Verde Nuclear Generating Station, training analysts participate in post-trip reviews in order to conduct or provide input to this type of human factor and performance problem analysis. Their goal is to keep 'provide training' out of corrective action statements unless training can in fact impact or resolve the problem. The analysts follow a plant-specific logic diagram to identify human factors and to determine whether changes to the environment or to the person would best resolve the concern.

  18. The Balance of Payment-Constrained Economic Growth in Ethiopia ...

    African Journals Online (AJOL)

    Administrator

    financial liberalization and export promotion strategy necessarily lead to better growth performance. Rather, one should consider not only exports of goods and services, but also the income elasticity of imports. The balance of payments-constrained growth model postulates that the rate of growth in any country is ...
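
In its simplest form, the balance-of-payments-constrained growth model (Thirlwall's rule) ties the sustainable growth rate to export growth divided by the income elasticity of import demand, which is why the abstract stresses that elasticity alongside exports. A one-line illustration with made-up numbers, not Ethiopian data:

```python
# Thirlwall's balance-of-payments-constrained growth rule, simplest form:
# y_B = x / pi, with x the real export growth rate and pi the income
# elasticity of import demand.  Numbers below are purely illustrative.
def bop_constrained_growth(export_growth, import_income_elasticity):
    return export_growth / import_income_elasticity

print(bop_constrained_growth(0.08, 1.6))  # 8% export growth, elasticity 1.6
```

A higher import elasticity lowers the constrained growth rate even when export growth is strong, which is the mechanism the abstract points to.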

  19. Anomalous ortho-para conversion of solid hydrogen in constrained geometries

    International Nuclear Information System (INIS)

    Rall, M.; Brison, J.P.; Sullivan, N.S.

    1991-01-01

    Using cw NMR techniques, we have measured the ortho-para conversion of solid hydrogen constrained to the interior of the molecular cages of zeolite. The conversion observed in the constrained geometry is very different from that of bulk solid hydrogen. Two distinct conversion rates were observed for short and long times. An apparently bimolecular conversion rate of 0.43% h⁻¹ (one-fourth of the bulk value) dominates during the first 500 h, and the rate then increases to 2.2% h⁻¹. The initial slow rate is explained in terms of a reduced number of nearest neighbors and possible wall effects, and the fast rate is attributed to the formation of small ortho-H₂ clusters at later times. Surface effects due to magnetic impurities do not appear to determine the conversion rate in the samples studied.
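
A bimolecular conversion rate means dX/dt = -kX² for the ortho fraction X, with closed form X(t) = X0/(1 + kX0t). A quick numerical check of that closed form, taking k = 0.0043 h⁻¹ as an assumed reading of the 0.43% h⁻¹ figure and X0 = 0.75 for normal hydrogen:

```python
# Bimolecular ortho-para conversion: dX/dt = -k X^2 for ortho fraction X.
# Assumed illustrative values: k = 0.0043 per hour, X0 = 0.75 (normal H2).
k, X0 = 0.0043, 0.75

def ortho_fraction(t_hours):
    """Closed-form solution X(t) = X0 / (1 + k X0 t)."""
    return X0 / (1.0 + k * X0 * t_hours)

# forward-Euler integration of the rate law over the first 500 h
X, dt = X0, 0.1
for _ in range(5000):
    X += -k * X * X * dt

print(ortho_fraction(500.0), X)  # the two agree to about 1e-4
```

The self-slowing decay (the rate falls as X² shrinks) is what makes a late-time speed-up, as observed here, a signature of something beyond simple bimolecular kinetics, such as ortho-cluster formation.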

  20. The Combinatorial Multi-Mode Resource Constrained Multi-Project Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Denis Pinha

    2016-11-01

    This paper presents the formulation and solution of the Combinatorial Multi-Mode Resource Constrained Multi-Project Scheduling Problem. The focus of the proposed method is not on finding a single optimal solution, but on presenting multiple feasible solutions, with cost and duration information, to the project manager. The motivation for developing such an approach is due in part to practical situations where the definition of optimal changes on a regular basis. The proposed approach empowers the project manager to determine what is optimal, on a given day, under the current constraints, such as a change of priorities or a lack of skilled workers. The proposed method utilizes a simulation approach to determine feasible solutions under the current constraints. Resources can be non-consumable, consumable, or doubly constrained. The paper also presents a real-life case study dealing with the scheduling of ship repair activities.
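
The simulation approach described above repeatedly builds feasible schedules under resource constraints; its core step can be sketched as a serial schedule-generation scheme. A minimal sketch with one renewable resource (the activity names, durations, and demands are illustrative, not from the paper's shipyard case study):

```python
# Serial schedule generation for resource-constrained scheduling:
# activities with durations, predecessors, and demand on one renewable
# resource of capacity CAP.  All numbers below are illustrative.
CAP = 4
acts = {            # name: (duration, resource_demand, predecessors)
    "A": (3, 2, []),
    "B": (2, 3, []),
    "C": (4, 2, ["A"]),
    "D": (2, 2, ["A", "B"]),
}

usage = {}          # time slot -> resource units in use
finish = {}         # activity -> finish time

def fits(start, dur, demand):
    return all(usage.get(t, 0) + demand <= CAP for t in range(start, start + dur))

for name, (dur, dem, preds) in acts.items():   # insertion order = priority list
    start = max([finish[p] for p in preds], default=0)
    while not fits(start, dur, dem):           # shift right until feasible
        start += 1
    for t in range(start, start + dur):
        usage[t] = usage.get(t, 0) + dem
    finish[name] = start + dur

print(finish)  # one feasible schedule; other priority lists yield others
```

Re-running such a generator with different priority lists, modes, or resource assignments is what produces the menu of feasible schedules the paper hands to the project manager.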