Method for exploiting bias in factor analysis using constrained alternating least squares algorithms
Keenan, Michael R.
2008-12-30
Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
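The constrained alternating least squares idea described above can be sketched in a few lines. This is a generic illustration, not the patented biased-ALS algorithm itself: the nonnegativity constraint is applied by simply clipping each unconstrained least-squares update, and the function name and shapes are assumptions for the example.

```python
import numpy as np

def constrained_als(D, k, iters=200, seed=0):
    """Alternating least squares for D ~= C @ S.T with a nonnegativity
    constraint imposed by clipping (a generic sketch of constrained ALS,
    not the biased-ALS method of the invention)."""
    rng = np.random.default_rng(seed)
    C = rng.random((D.shape[0], k))
    S = rng.random((D.shape[1], k))
    for _ in range(iters):
        # Solve for C given S, then clip to enforce C >= 0 (the constraint bias)
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)
        # Solve for S given C, same clipping
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
    return C, S
```

When the underlying components are collinear, many (C, S) pairs fit the data comparably well under these constraints, which is exactly the ambiguity the abstract describes.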
Client's Constraining Factors to Construction Project Management
African Journals Online (AJOL)
factors as a significant system that constrains project management success of public and ... finance for the project and prompt payment for work executed; clients .... consideration of the loading patterns of these variables, the major factor is ...
Directory of Open Access Journals (Sweden)
Blerta Dragusha (Spahija)

2013-01-01
Determining the factors that attract FDI, and furthermore identify the main characteristics of the host country’s economy, are essential to understand the reason of FDI inflows to a country or region. In the empirical perspective, various studies give different results. More specifically, this paper has focused on determining the factors for and against FDI in Albania.
Sakyi, E Kojo
2008-01-01
Ghana has undertaken many public service management reforms in the past two decades, but the implementation of the reforms has been constrained by many factors. This paper undertakes a retrospective study of research works on the challenges to the implementation of reforms in the public health sector. It points out that most of the studies identified: (1) a centralised, weak and fragmented management system; (2) a poor implementation strategy; (3) lack of motivation; (4) a weak institutional framework; (5) lack of financial and human resources; and (6) staff attitude and behaviour as the major causes of ineffective reform implementation. The analysis further revealed that quite a number of crucial factors obstructing reform implementation, particularly those internal to the health system, have either not been thoroughly studied or have been overlooked. The analysis identified lack of leadership, weak communication and consultation, lack of stakeholder participation, and corruption and unethical professional behaviour as some of the missing variables in the literature. The study therefore indicated that there are gaps in the literature that need to be filled through rigorous reform evaluation based on empirical research, particularly at district, sub-district and community levels. It further suggested that future research should be concerned with the effects of both systems and structures and behavioural factors on reform implementation.
Directory of Open Access Journals (Sweden)
Wenhao Yu
The urban facility, one of the most important service providers, is usually represented in GIS applications by sets of points using the POI (Point of Interest) model, associated with particular human social activities. Knowledge about the distribution intensity and pattern of facility POIs is of great significance in spatial analysis, including urban planning, business location choice and social recommendation. Kernel Density Estimation (KDE), an efficient spatial statistics tool for facilitating these processes, plays an important role in spatial density evaluation, because the KDE method accounts for the decaying impact of services and enriches the information from a very simple input scatter plot to a smooth output density surface. However, traditional KDE is based mainly on Euclidean distance, ignoring the fact that in an urban street network the service function of a POI operates over a network-constrained structure rather than in a continuous Euclidean space. To address this issue, this study proposes a computational method for KDE on a network and adopts a new visualization method using a 3-D "wall" surface. Real-world conditioning factors, such as traffic capacity, road direction and facility type, are also taken into account. The proposed method is applied to real POI data from Shenzhen, China, to depict the distribution characteristics of services under the impact of multiple factors.
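The key difference from ordinary KDE is that distances are measured along the street network (shortest paths) rather than as straight lines. A minimal sketch, assuming a weighted adjacency-dict graph and a triangular kernel; the function names and graph format are illustrative assumptions, not the paper's implementation:

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra over a weighted adjacency dict {node: {neighbor: length}}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def network_kde(graph, pois, node, bandwidth):
    """Density at `node`: sum of triangular kernels of the NETWORK
    (not Euclidean) distance to each POI node."""
    dist = shortest_paths(graph, node)
    return sum(max(0.0, 1.0 - dist.get(p, float("inf")) / bandwidth)
               for p in pois)
```

On a simple chain A-B-C-D with POIs at A and B, the density is highest near A and falls to zero at D once the network distance exceeds the bandwidth.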
Client's constraining factors to construction project management ...
African Journals Online (AJOL)
This study analyzed client's related factors that constrain project management success of public and private sector construction in Nigeria. Issues that concern clients in any project can not be undermined as they are the owners and the initiators of project proposals. It is assumed that success, failure or abandonment of ...
Directory of Open Access Journals (Sweden)
Tanya Baycheva-Merger
2018-03-01
Adequate and accessible expert-based forest information has become increasingly in demand for effective decisions and informed policies in the forest and forest-related sectors in Europe. Such accessibility requires a collaborative environment and constant information exchange between various actors at different levels and across sectors. However, information exchange in complex policy environments is challenging, and is often constrained by various institutional, actor-oriented and technical factors. In forest policy research, no study has yet attempted to simultaneously account for these multiple factors influencing expert-based forest information exchange. By employing a policy analysis from an actor-centred institutionalist perspective, this paper aims to provide an overview of the most salient institutional and actor-oriented factors that are perceived as constraining forest information exchange at the national level across European countries. We employ an exploratory research approach, and utilise both qualitative and quantitative methods to analyse our data. The data were collected through a semi-structured survey targeted at forest and forest-related composite actors in 21 European countries. The results revealed that expert-based forest information exchange is constrained by a number of compound and closely interlinked institutional and actor-oriented factors, reflecting the complex interplay of institutions and actors at the national level. The most salient institutional factors that stand out include restrictive or ambiguous data protection policies, inter-organisational information arrangements, different organisational cultures, and a lack of incentives. Forest information exchange becomes even more complex when actors are confronted with actor-oriented factors such as issues of distrust, diverging preferences and perceptions, intellectual property rights, and technical capabilities. We conclude that expert-based forest information
Constrained principal component analysis and related techniques
Takane, Yoshio
2013-01-01
In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre
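The core of CPCA combines the two steps the abstract names: a regression of the data onto external variables ("external analysis"), followed by PCA of the fitted part and of the residuals ("internal analysis"). A minimal sketch under those assumptions; the function name and shapes are illustrative, not Takane's notation:

```python
import numpy as np

def cpca(Z, G, k):
    """Minimal CPCA sketch: project data Z (n x p) onto the column space of
    external variables G (n x m), then run PCA (via SVD, uncentered so the
    decomposition Z = Z_fit + Z_res is exact) on each part separately."""
    P = G @ np.linalg.pinv(G)           # orthogonal projector onto col(G)
    Z_fit, Z_res = P @ Z, Z - P @ Z     # regression fit and residual
    def pca(X):
        _, s, Vt = np.linalg.svd(X, full_matrices=False)
        return Vt[:k], s[:k]            # top-k loadings and singular values
    return pca(Z_fit), pca(Z_res)
```

The projected part captures variability explainable by G, while PCA of the residuals reveals structure orthogonal to it.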
Factors constraining accessibility and usage of information among ...
African Journals Online (AJOL)
Various factors may negatively impact on information acquisition and utilisation. To improve understanding of the determinants of information acquisition and utilisation, this study investigated the factors constraining accessibility and usage of poultry management information in three rural districts of Tanzania. The findings ...
Multiplicative algorithms for constrained non-negative matrix factorization
Peng, Chengbin
2012-12-01
Non-negative matrix factorization (NMF) provides the advantage of parts-based data representation through additive-only combinations. It has been widely adopted in areas like item recommendation, text mining, data clustering, speech denoising, etc. In this paper, we provide an algorithm that allows the factorization to have linear or approximately linear constraints with respect to each factor. We prove that if the constraint function is linear, algorithms within our multiplicative framework will converge. This theory supports a large variety of equality and inequality constraints, and can facilitate application of NMF to a much larger domain. Taking the recommender system as an example, we demonstrate how a specialized weighted and constrained NMF algorithm can be developed to fit the problem exactly, and the tests confirm that our constraints improve the performance for both weighted and unweighted NMF algorithms under several different metrics. In particular, on the MovieLens data with 94% of items, the constrained NMF improves the recall rate by 3% compared to SVD50 and by 45% compared to SVD150, which were reported as the best two in the top-N metric. © 2012 IEEE.
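The multiplicative framework the abstract refers to extends the classic Lee-Seung updates, which keep both factors nonnegative because every update is a ratio of nonnegative terms. The constrained variant of the paper is not reproduced here; this is only the unconstrained baseline those updates build on:

```python
import numpy as np

def nmf_multiplicative(V, k, iters=500, seed=0, eps=1e-9):
    """Plain Lee-Seung multiplicative NMF: V ~= W @ H with W, H >= 0.
    Nonnegativity is preserved because each update multiplies by a
    ratio of nonnegative matrices."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + eps
    H = rng.random((k, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Linear constraints on each factor, as in the paper, are incorporated by modifying these ratios so that each update step still cannot leave the feasible region.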
Constrained mathematics evaluation in probabilistic logic analysis
Energy Technology Data Exchange (ETDEWEB)
Arlin Cooper, J
1998-06-01
A challenging problem in mathematically processing uncertain operands is that constraints inherent in the problem definition can require computations that are difficult to implement. Examples of possible constraints are that the sum of the probabilities of partitioned possible outcomes must be one, and repeated appearances of the same variable must all have the identical value. The latter, called the 'repeated variable problem', will be addressed in this paper in order to show how interval-based probabilistic evaluation of Boolean logic expressions, such as those describing the outcomes of fault trees and event trees, can be facilitated in a way that can be readily implemented in software. We will illustrate techniques that can be used to transform complex constrained problems into trivial problems in most tree logic expressions, and into tractable problems in most other cases.
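The repeated variable problem is easy to demonstrate numerically. If the leaf probability of event A lies in an interval, naively evaluating "A or A" as if the two appearances were independent operands gives a biased interval, while honouring the constraint that both appearances take the same value collapses the expression back to A. The numbers below are illustrative:

```python
def or_prob(p, q):
    """P(A or B) for independent events A, B."""
    return p + q - p * q

# Interval for P(A) from a fault-tree leaf:
lo, hi = 0.2, 0.4

# Naive evaluation treats the two appearances of A in "A or A" as
# independent operands, inflating the result:
naive = (or_prob(lo, lo), or_prob(hi, hi))   # approx (0.36, 0.64)

# The repeated-variable constraint (both appearances share one value)
# reduces "A or A" to A, so the correct interval is just:
constrained = (lo, hi)                        # (0.2, 0.4)
```

Transforming the Boolean expression so each variable appears once (here, A or A to A) is exactly the kind of simplification that turns a constrained problem into a trivial one.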
Kugelberg, Susanna; Jonsdottir, Svandis; Faxelid, Elisabeth; Jönsson, Kristina; Fox, Ann; Thorsdottir, Inga; Yngve, Agneta
2012-11-01
Little is known about current public health nutrition workforce development in Europe. The present study aimed to understand constraining and enabling factors for workforce development in seven European countries. A qualitative study comprising semi-structured face-to-face interviews was conducted, and content analysis was used to analyse the transcribed interview data. The study was carried out in Finland, Iceland, Ireland, Slovenia, Spain, Sweden and the UK. Sixty key informants participated in the study. There are constraining and enabling factors for public health nutrition workforce development. The main constraining factors relate to the lack of a supportive policy environment, fragmented organizational structures and a workforce that is not cohesive enough to implement public health nutrition strategic initiatives. Enabling factors were identified as the presence of skilled and dedicated individuals who assume roles as leaders and change agents. There is a need to strengthen coordination between policy and the implementation of programmes, which may operate across the national-to-local spectrum. Public health organizations are advised to further define aims and objectives relevant to public health nutrition. Leaders and agents of change will play important roles in fostering intersectoral partnerships, advocating for policy change, establishing professional competencies and developing education and training programmes.
Factorization of Constrained Energy K-Network Reliability with Perfect Nodes
Burgos, Juan Manuel
2013-01-01
This paper proves a new general K-network constrained energy reliability global factorization theorem. As in the unconstrained case, besides its theoretical mathematical importance the theorem shows how to do parallel processing in exact network constrained energy reliability calculations in order to reduce the processing time of this NP-hard problem. Followed by a new simple factorization formula for its calculation, we propose a new definition of constrained energy network reliability motiva...
Constrained relationship agency as the risk factor for intimate ...
African Journals Online (AJOL)
We used structural equation modelling to identify and measure constrained relationship agency (CRA) as a latent variable, and then tested the hypothesis that CRA plays a significant role in the pathway between IPV and transactional sex. After controlling for CRA, receiving more material goods from a sexual partner was ...
Dimensionally constrained energy confinement analysis of W7-AS data
International Nuclear Information System (INIS)
Dose, V.; Preuss, R.; Linden, W. von der
1998-01-01
A recently assembled W7-AS stellarator database has been subjected to dimensionally constrained confinement analysis. The analysis employs Bayesian inference. Dimensional information is taken from the Connor-Taylor (CT) similarity transformation theory, which provides six possible physical scenarios with associated dimensional conditions. Bayesian theory allows calculation of the probability of each model, and it is found that the present W7-AS data are most probably described by the collisionless high-β case. Probabilities for all models and the associated exponents of a power-law scaling function are presented. (author)
Gorsuch, Richard L
2013-01-01
Comprehensive and comprehensible, this classic covers the basic and advanced topics essential for using factor analysis as a scientific tool in psychology, education, sociology, and related areas. Emphasizing the usefulness of the techniques, it presents sufficient mathematical background for understanding and sufficient discussion of applications for effective use. This includes not only theory but also the empirical evaluations of the importance of mathematical distinctions for applied scientific analysis.
Constrained independent component analysis approach to nonobtrusive pulse rate measurements
Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.
2012-07-01
Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We show how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams, using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the well-known ICA framework. We introduce an alternative approach based on the null space component analysis (NCA) framework and refer to it as the c-NCA approach. We also present the c-NCA algorithm, which uses signal-dependent semidefinite operators, defined by bilinear mappings, as signatures for operator design in the c-NCA approach. Theoretically, we show that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators on the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed by the optimization community for solving the BSS problem. As examples, we demonstrate that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
Multiplicative algorithms for constrained non-negative matrix factorization
Peng, Chengbin; Wong, Kachun; Rockwood, Alyn; Zhang, Xiangliang; Jiang, Jinling; Keyes, David E.
2012-01-01
Non-negative matrix factorization (NMF) provides the advantage of parts-based data representation through additive only combinations. It has been widely adopted in areas like item recommending, text mining, data clustering, speech denoising, etc
Constrained Fisher Scoring for a Mixture of Factor Analyzers
2016-09-01
where ω ∈ [0, 4π). Each observation of the spiral is corrupted by additive white Gaussian noise with unit variance. This model was used in previous works... from different aspects and then learn a joint statistical model for the object manifold. We employ a mixture of factor analyzers model and derive a
Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.
Böing-Messing, Florian; Mulder, Joris
2018-05-03
In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.
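For intuition, a Bayes factor for an equality constraint on variances can be computed in closed form when the data are zero-mean normal and the variances get conjugate inverse-gamma priors. This toy sketch is NOT one of the three automatic Bayes factors of the article (it uses fixed subjective priors rather than data-based automatic priors); the function names and prior parameters a, b are assumptions for the example:

```python
from math import lgamma, log, pi, exp

def log_marginal(ys, a=1.0, b=1.0):
    """Log marginal likelihood of zero-mean normal data with an
    inverse-gamma(a, b) prior on the variance (conjugate, closed form)."""
    n = len(ys)
    s = sum(y * y for y in ys)
    return (a * log(b) + lgamma(a + n / 2) - lgamma(a)
            - (n / 2) * log(2 * pi) - (a + n / 2) * log(b + s / 2))

def bf01_equal_variances(y1, y2):
    """Bayes factor for H0: sigma1^2 == sigma2^2 (one shared variance)
    against H1: unrestricted variances (independent priors per group)."""
    log_m0 = log_marginal(list(y1) + list(y2))       # pooled under H0
    log_m1 = log_marginal(y1) + log_marginal(y2)     # separate under H1
    return exp(log_m0 - log_m1)
```

Groups with similar spreads push the Bayes factor above 1 (evidence for equality), while wildly different spreads push it far below 1, which is the behaviour any reasonable variance test should show.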
Jääskelä, Päivikki; Häkkinen, Päivi; Rasku-Puttonen, Helena
2017-01-01
Higher education calls for reform, but deeper knowledge about the prerequisites for teaching development and pedagogical change is missing. In this study, 51 university teachers' experiences of supportive or constraining factors in teaching development were investigated in the context of Finland's multidisciplinary network. The findings reveal…
Stegeman, Alwin; De Almeida, Andre L. F.
2009-01-01
In this paper, we derive uniqueness conditions for a constrained version of the parallel factor (Parafac) decomposition, also known as canonical decomposition (Candecomp). Candecomp/Parafac (CP) decomposes a three-way array into a prespecified number of outer product arrays. The constraint is that
Agbowuro G.O
2012-01-01
The objective of this work is to identify and examine major factors constraining pawpaw production and marketing in Ekiti State, Southwestern Nigeria. Questionnaire schedule and personal interviews were used to collect data from ten Local Government Areas in the state. A total of 76 pawpaw farmers were randomly interviewed for this study. The study identified poor patronage in the market, poor marketing system, inadequate capital, poor price, inadequate extension services, poor transportation...
Generation and Analysis of Constrained Random Sampling Patterns
DEFF Research Database (Denmark)
Pierzchlewski, Jacek; Arildsen, Thomas
2016-01-01
Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper, we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose an algorithm that generates random sampling patterns dedicated to event-driven ADCs better than existing sampling pattern generators. Finally, implementation issues of random sampling patterns are discussed.
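A typical ADC-driven constraint is a minimum spacing between sampling instants (the converter needs time to settle between conversions). A minimal sketch of a constrained pattern generator under that assumption; the function name, grid model and rejection strategy are illustrative, not the paper's algorithm:

```python
import random

def constrained_pattern(n_points, grid_size, min_gap, seed=0):
    """Draw up to n_points sampling instants from a time grid of
    grid_size slots, enforcing a minimum gap between any two instants
    (a stand-in for an ADC's minimum conversion interval)."""
    rng = random.Random(seed)
    pattern = []
    candidates = list(range(grid_size))
    while len(pattern) < n_points and candidates:
        t = rng.choice(candidates)
        pattern.append(t)
        # Remove every slot too close to the chosen instant
        candidates = [c for c in candidates if abs(c - t) >= min_gap]
    return sorted(pattern)
```

Statistical evaluation of such a generator would then examine, for example, how uniformly the instants cover the grid across many seeds.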
Portuguese pellets market: Analysis of the production and utilization constrains
International Nuclear Information System (INIS)
Monteiro, Eliseu; Mantha, Vishveshwar; Rouboa, Abel
2012-01-01
In contrast to the situation in Portugal, the wood pellets market is booming elsewhere in Europe. In this work, possible reasons for this market behaviour are examined according to the key indicators of biomass availability, costs and legal framework. Two major constraints are found in the Portuguese pellets market. The first is the lack of internal consumption, the market being based on exports. The second is the shortage of raw material, mainly due to competition with biomass power plants. Therefore, combining biomass power plants with pellet production plants seems to be the best option for pellet production in the current Portuguese scenario. The main difficulty for the pellets market has been to convince small-scale customers that pellets are a good alternative fuel, mainly because of the investment needed and the strong competition with natural gas. Although there are some benefits for the acquisition of new renewable energy equipment, they are insufficient to cover the large discrepancy in the investment required for pellet heating. However, pellets are already economically attractive for large-scale use. In order to cover a large share of households, additional public support is needed to cover the supplementary costs of pellet heating systems. - Highlights: ► There is a lack of internal consumption, the pellets market being based on exports. ► The shortage of raw material is mainly due to the biomass power plants. ► Combining pellet plants with biomass power plants seems to be a wise solution. ► The tax benefits for renewable energy equipment are not enough to cover the higher investment. ► Pellets are already economically attractive for large-scale use in the Portuguese scenario.
Lu, Na; Li, Tengfei; Pan, Jinjin; Ren, Xiaodong; Feng, Zuren; Miao, Hongyu
2015-05-01
Electroencephalogram (EEG) provides a non-invasive approach to measure the electrical activities of brain neurons and has long been employed for the development of brain-computer interfaces (BCI). For this purpose, various patterns/features of EEG data need to be extracted and associated with specific events like cue-paced motor imagery. However, this is a challenging task since EEG data are usually non-stationary time series with a low signal-to-noise ratio. In this study, we propose a novel method, called structure-constrained semi-nonnegative matrix factorization (SCS-NMF), to extract the key patterns of EEG data in the time domain by imposing the mean envelopes of event-related potentials (ERPs) as constraints on the semi-NMF procedure. The proposed method is applicable to general EEG time series, and the temporal features extracted by SCS-NMF can also be combined with other features in the frequency domain to improve the performance of motor imagery classification. Real data experiments have been performed using the SCS-NMF approach for motor imagery classification, and the results clearly suggest the superiority of the proposed method. Comparison experiments have also been conducted against ICA, PCA, Semi-NMF, Wavelets, EMD and CSP, which further verified the effectiveness of SCS-NMF. The SCS-NMF method obtained performance better than or competitive with the state-of-the-art methods, providing a novel solution for brain pattern analysis from the perspective of structure constraints. Copyright © 2015 Elsevier Ltd. All rights reserved.
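Semi-NMF is the natural base model here because EEG amplitudes are signed: the basis F may take mixed signs while the encoding G stays nonnegative. Below is a sketch of plain semi-NMF in the style of Ding et al.'s multiplicative updates; the ERP mean-envelope structure constraint of SCS-NMF is NOT reproduced, and the shapes and names are assumptions for the example:

```python
import numpy as np

def semi_nmf(X, k, iters=300, seed=0, eps=1e-9):
    """Plain semi-NMF: X ~= F @ G.T with F unconstrained (mixed signs,
    as EEG requires) and G >= 0. F is updated by least squares; G by a
    multiplicative rule split into positive/negative parts."""
    rng = np.random.default_rng(seed)
    G = rng.random((X.shape[1], k)) + eps
    for _ in range(iters):
        F = X @ G @ np.linalg.pinv(G.T @ G)     # unconstrained LS update
        A, B = X.T @ F, F.T @ F
        Ap, An = (np.abs(A) + A) / 2, (np.abs(A) - A) / 2
        Bp, Bn = (np.abs(B) + B) / 2, (np.abs(B) - B) / 2
        G *= np.sqrt((Ap + G @ Bn) / (An + G @ Bp + eps))
    return F, G
```

SCS-NMF adds a penalty tying columns of the factorization to ERP mean envelopes, which biases the recovered temporal patterns toward physiologically plausible shapes.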
Giva, Karen R N; Duma, Sinegugu E
2015-08-31
Problem-based learning (PBL) was introduced in Malawi in 2002 in order to improve the nursing education system and respond to the acute nursing human resources shortage. However, its implementation has been very slow throughout the country. The objectives of the study were to explore and describe the goals that were identified by the college to facilitate the implementation of PBL, the resources of the organisation that facilitated the implementation of PBL, the factors related to sources of students that facilitated the implementation of PBL, and the influence of the external system of the organisation on facilitating the implementation of PBL, and to identify critical success factors that could guide the implementation of PBL in nursing education in Malawi. This is an ethnographic, exploratory and descriptive qualitative case study. Purposive sampling was employed to select the nursing college, participants and documents for review. Three data collection methods, including semi-structured interviews, participant observation and document reviews, were used to collect data. The four steps of thematic analysis were used to analyse data from all three sources. Four themes and related subthemes emerged from the triangulated data sources. The first three themes and their subthemes relate to the characteristics of successful implementation of PBL in a human resource-constrained nursing college, whilst the last theme relates to critical success factors that contribute to successful implementation of PBL in a human resource-constrained country like Malawi. This article shows that implementation of PBL is possible in a human resource-constrained country if there is political commitment and support.
Chen, Jun; Bushman, Frederic D; Lewis, James D; Wu, Gary D; Li, Hongzhe
2013-04-01
Motivated by studying the association between nutrient intake and human gut microbiome composition, we developed a method for structure-constrained sparse canonical correlation analysis (ssCCA) in a high-dimensional setting. ssCCA takes into account the phylogenetic relationships among bacteria, which provides important prior knowledge on evolutionary relationships among bacterial taxa. Our ssCCA formulation utilizes a phylogenetic structure-constrained penalty function to impose certain smoothness on the linear coefficients according to the phylogenetic relationships among the taxa. An efficient coordinate descent algorithm is developed for optimization. A human gut microbiome data set is used to illustrate this method. Both simulations and real data applications show that ssCCA performs better than the standard sparse CCA in identifying meaningful variables when there are structures in the data.
Directory of Open Access Journals (Sweden)
Tsui-Er Lee
2014-01-01
The effects of cooperative learning and traditional learning on the effectiveness and constraining factors of physical fitness teaching under various teaching conditions were studied. Sixty female students in Grades 7–8 were sampled to evaluate their learning of health and physical education (PE) according to the curriculum for Grades 1–9 in Taiwan. The data were collected and analysed both quantitatively and qualitatively. The overall physical fitness of the cooperative learning group exhibited substantial progress between the pretest and posttest, in which the differences in the sit-and-reach and bent-knee sit-up exercises achieved statistical significance. The performance of the cooperative learning group in the bent-knee sit-up and 800 m running exercises far exceeded that of the traditional learning group. Our qualitative data indicated that the number of students per group in a cooperative learning session, effective administrative support, comprehensive teaching preparation, media reinforcement, constant feedback and introspection regarding cooperative learning strategies, and heterogeneous grouping are constraining factors for teaching PE using cooperative learning strategies. Cooperative learning is considered an effective route for attaining physical fitness among students. PE teachers should consider providing extrinsic motivation to develop learning effectiveness.
Comparative Analysis of Uninhibited and Constrained Avian Wing Aerodynamics
Cox, Jordan A.
The flight of birds has intrigued and motivated man for many years. Bird flight served as the primary inspiration of flying machines developed by Leonardo Da Vinci, Otto Lilienthal, and even the Wright brothers. Avian flight has once again drawn the attention of the scientific community as unmanned aerial vehicles (UAV) are not only becoming more popular, but smaller. Birds are once again influencing the designs of aircraft. Small UAVs operating within flight conditions and low Reynolds numbers common to birds are not yet capable of the high levels of control and agility that birds display with ease. Many researchers believe the potential to improve small UAV performance can be obtained by applying features common to birds such as feathers and flapping flight to small UAVs. Although the effects of feathers on a wing have received some attention, the effects of localized transient feather motion and surface geometry on the flight performance of a wing have been largely overlooked. In this research, the effects of freely moving feathers on a preserved red tailed hawk wing were studied. A series of experiments were conducted to measure the aerodynamic forces on a hawk wing with varying levels of feather movement permitted. Angle of attack and air speed were varied within the natural flight envelope of the hawk. Subsequent identical tests were performed with the feather motion constrained through the use of externally-applied surface treatments. Additional tests involved the study of an absolutely fixed geometry mold-and-cast wing model of the original bird wing. Final tests were also performed after applying surface coatings to the cast wing. High speed videos taken during tests revealed the extent of the feather movement between wing models. Images of the microscopic surface structure of each wing model were analyzed to establish variations in surface geometry between models. Recorded aerodynamic forces were then compared to the known feather motion and surface
Directory of Open Access Journals (Sweden)
RADEN RIJANTA
2013-01-01
Local food crops are believed to be important alternatives in facing the problem of continuously rising worldwide food prices. National agricultural development policy in Indonesia has been strongly biased towards the production of rice as the staple food. Local food crops have been neglected in agricultural development policy over the last 50 years, leading to dependency on imported commodities and creating vulnerability in national food security. This paper aims at assessing the factors constraining local food production in Indonesia based on empirical experience drawn from research in Kulon Progo Regency, Yogyakarta Province. The government of Kulon Progo Regency has declared its commitment to the development of local food commodities as a part of its agricultural development policy, as mentioned in its long-term and medium-term development planning documents. There is also a regency head's decree mandating the use of local food commodities at any official events organized by government organisations. The research shows that there are at least six policy-related problems and nine technical factors constraining local food crop production in the regency. The policy-related and structural factors hampering the production of local food crops include (1) long-term policy biases towards rice, (2) strong biases towards a rice diet in the community, (3) difficulties in linking policy to practice, (4) lack of information on the availability of local food crops across the regency, (5) external threat from readily available instant food on local markets and (6) past counter-productive policies on the production of local food crops. The technical factors constraining local food production comprise (1) inferiority of the food stuff versus instantly prepared food, (2) difficulty in preparation and risk of contagion of some crops, lack of technology for processing, (3) continuity of supply (some crops are seasonally
Foundations of factor analysis
Mulaik, Stanley A
2009-01-01
Introduction: Factor Analysis and Structural Theories; Brief History of Factor Analysis as a Linear Model; Example of Factor Analysis. Mathematical Foundations for Factor Analysis: Introduction; Scalar Algebra; Vectors; Matrix Algebra; Determinants; Treatment of Variables as Vectors; Maxima and Minima of Functions. Composite Variables and Linear Transformations: Introduction; Composite Variables; Unweighted Composite Variables; Differentially Weighted Composites; Matrix Equations; Multi
Analysis and Transformation Tools for Constrained Horn Clause Verification
DEFF Research Database (Denmark)
Kafle, Bishoksan; Gallagher, John Patrick
2014-01-01
Several techniques and tools have been developed for verification of properties expressed as Horn clauses with constraints over a background theory (CHC). Current CHC verification tools implement intricate algorithms and are often limited to certain subclasses of CHC problems. Our aim in this work is to investigate the use of a combination of off-the-shelf techniques from the literature in analysis and transformation of Constraint Logic Programs (CLPs) to solve challenging CHC verification problems. We find that many problems can be solved using a combination of tools based on well-known techniques from abstract interpretation, semantics-preserving transformations, program specialisation and query-answer transformations. This gives insights into the design of automatic, more general CHC verification tools based on a library of components.
Constrained posture in dentistry - a kinematic analysis of dentists.
Ohlendorf, Daniela; Erbe, Christina; Nowak, Jennifer; Hauck, Imke; Hermanns, Ingo; Ditchen, Dirk; Ellegast, Rolf; Groneberg, David A
2017-07-05
How a dentist works, such as the patterns of movements performed daily, is largely affected by the workstation. Dental tasks are often executed in awkward body positions, thereby causing a very high degree of strain on the corresponding muscles. The objective of this study is to detect the dental tasks during which awkward postures occur most frequently. The isolated analysis of static postures examines the duration for which these postures are maintained during the corresponding dental and non-dental activities. 21 dentists (11 female, 10 male; age: 40.1 ± 10.4 years) participated in this study. An average dental workday was recorded for every subject. The CUELA system was used to collect kinematic data for all activities. In parallel with the kinematic examination, a detailed computer-based task analysis was conducted. Afterwards, both data sets were synchronized based on the chronological order of the postures assumed in the trunk and head regions. All tasks performed were assigned to the categories "treatment" (I), "office" (II) and "other activities" (III). The angle values of each body region (evaluation parameter) were examined and assessed against ergonomic standards. Moreover, this study placed a particular focus on positions held statically for 4 s or longer. During "treatment" (I), the entire head and trunk area is anteriorly tilted while the back is twisted to the right; in (II) and (III) the back is anteriorly tilted and twisted to the right (non-neutral position). In (I), static positions lasting 4-10 s account for approximately 60% of static postures, while in (II) and (III) static positions of the back held for more than 30 s are most common. Moreover, in (II) the back is twisted to the right for more than 60 s in 26.8% of cases. Awkward positions are a major part of a dentist's work, mainly pertaining to static positions of the trunk and head, in contrast to "office work." These insights facilitate the quantitative
k-t PCA: temporally constrained k-t BLAST reconstruction using principal component analysis
DEFF Research Database (Denmark)
Pedersen, Henrik; Kozerke, Sebastian; Ringgaard, Steffen
2009-01-01
in applications exhibiting a broad range of temporal frequencies such as free-breathing myocardial perfusion imaging. We show that temporal basis functions calculated by subjecting the training data to principal component analysis (PCA) can be used to constrain the reconstruction such that the temporal resolution is improved. The presented method is called k-t PCA.
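The idea of constraining a reconstruction to a PCA-derived temporal subspace can be illustrated with a minimal sketch. Everything below is a synthetic toy (invented "cardiac-like" and "respiratory-like" modes), not the k-t BLAST/k-t PCA pipeline itself:

```python
import numpy as np

rng = np.random.default_rng(0)
T, P = 64, 100                         # time frames, pixels
t = np.linspace(0, 1, T)
# Toy dynamic data built from two temporal modes
modes = np.stack([np.sin(2*np.pi*8*t), np.sin(2*np.pi*1.5*t)])   # (2, T)
weights = rng.normal(size=(2, P))
X = modes.T @ weights                  # (T, P) "training" data

# PCA along time: left singular vectors serve as temporal basis functions
U, s, Vt = np.linalg.svd(X, full_matrices=False)
B = U[:, :2]                           # constrained temporal subspace

# Reconstruction is then confined to span(B)
X_rec = B @ (B.T @ X)
err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
print(f"relative reconstruction error with 2 temporal PCs: {err:.1e}")
```

Because the toy data have rank two, projecting onto the first two principal components recovers them essentially exactly; with real undersampled k-t data the same projection acts as a temporal regularizer.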
Directory of Open Access Journals (Sweden)
Mark Borman
2006-11-01
In a wide range of industries, services are increasingly being developed, or evolving, to support groups of organisations. Not all such joint service initiatives, though, have been successful. This paper aims to highlight potential issues that need to be addressed when investigating the introduction of a joint service by identifying the motivators and constraints. The approach outlined draws upon network externality theory to provide the motivation for a joint service, and resource-based and dependency theories to highlight the constraining factors. Three instances of joint services - in the Banking, Telecommunications and Travel sectors - are subsequently examined. It is concluded that as well as providing externality benefits, joint service initiatives can also improve the terms of access to a service, in particular through realising economies of scale. Furthermore, it would appear that organisations will have to think carefully about the best way to create, structure and manage a joint service initiative - including whom to partner with - given their own particular circumstances, as multiple alternative approaches, with potentially differing ramifications, are available.
Analysis of multi cloud storage applications for resource constrained mobile devices
Directory of Open Access Journals (Sweden)
Rajeev Kumar Bedi
2016-09-01
Cloud storage, which can act as a surrogate for all physical hardware storage devices, is a term which reflects an enormous advancement in engineering (Hung et al., 2012). However, there are many issues that need to be handled when accessing cloud storage on resource constrained mobile devices due to their inherent limitations, such as limited storage capacity, processing power and battery backup (Yeo et al., 2014). There are many multi cloud storage applications available, which handle issues faced by single cloud storage applications. In this paper, we provide an analysis of different multi cloud storage applications developed for resource constrained mobile devices to check their performance on the basis of parameters such as battery consumption, CPU usage, data usage and time consumed, using a Sony Xperia ZL smartphone on a WiFi network. Lastly, conclusions and open research challenges in these multi cloud storage apps are discussed.
International Nuclear Information System (INIS)
Guo, Rui-Yun; Zhang, Xin
2016-01-01
The nature of dark energy affects the Hubble expansion rate (namely, the expansion history) H(z) by an integral over w(z). However, the usual observables are the luminosity distances or the angular diameter distances, which measure the distance-redshift relation. Actually, the property of dark energy affects the distances (and the growth factor) by a further integration over functions of H(z). Thus, direct measurements of the Hubble parameter H(z) at different redshifts are of great importance for constraining the properties of dark energy. In this paper, we show how the typical dark energy models, for example, the ΛCDM, wCDM, CPL, and holographic dark energy models, can be constrained by the current direct measurements of H(z) (31 data points used in total in this paper, covering the redshift range z ∈ [0.07, 2.34]). In fact, the future redshift-drift observations (also referred to as the Sandage-Loeb test) can also directly measure H(z) at higher redshifts, covering the range z ∈ [2, 5]. We thus discuss what role the redshift-drift observations can play in constraining dark energy with the Hubble parameter measurements. We show that the constraints on dark energy can be improved greatly with the H(z) data from only a 10-year observation of redshift drift.
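As a rough illustration of how H(z) data constrain dark energy parameters, the following sketch grids a chi-square over (Ωm, w) for a wCDM expansion rate against synthetic measurements. The fiducial values, error bars, and grid are invented for the example; real analyses fit actual H(z) compilations and marginalize over H0:

```python
import numpy as np

def H(z, H0, Om, w):
    # wCDM expansion rate: H(z) = H0*sqrt(Om(1+z)^3 + (1-Om)(1+z)^(3(1+w)))
    return H0*np.sqrt(Om*(1+z)**3 + (1-Om)*(1+z)**(3*(1+w)))

# Synthetic H(z) data generated from a fiducial flat LambdaCDM (w = -1)
z = np.linspace(0.07, 2.34, 31)
rng = np.random.default_rng(1)
sigma = 5.0                                   # assumed uniform error, km/s/Mpc
Hobs = H(z, 70.0, 0.3, -1.0) + rng.normal(0, sigma, z.size)

# Grid chi-square over (Om, w), with H0 held fixed for simplicity
Oms = np.linspace(0.1, 0.5, 81)
ws = np.linspace(-1.6, -0.4, 121)
chi2 = np.array([[np.sum((Hobs - H(z, 70.0, Om, w))**2)/sigma**2
                  for w in ws] for Om in Oms])
i, j = np.unravel_index(chi2.argmin(), chi2.shape)
print(f"best fit: Om = {Oms[i]:.3f}, w = {ws[j]:.3f}")
```

The recovered minimum should land near the fiducial (0.3, -1), illustrating that H(z) alone, without any distance integration, already localizes the dark energy parameters.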
CSIR Research Space (South Africa)
Ouma, S
2011-11-01
...the primary healthcare levels in order to improve the delivery of services within various communities. They further provide the issues that m-health service providers should take into account when providing m-health solutions to the resource constrained...
The Smoothing Artifact of Spatially Constrained Canonical Correlation Analysis in Functional MRI
Directory of Open Access Journals (Sweden)
Dietmar Cordes
2012-01-01
A wide range of studies show the capacity of multivariate statistical methods for fMRI to improve mapping of brain activations in a noisy environment. An advanced method uses local canonical correlation analysis (CCA) to encompass a group of neighboring voxels instead of looking at the single voxel time course. The value of a suitable test statistic is used as a measure of activation. It is customary to assign the value to the center voxel; however, this is a choice of convenience and without constraints introduces artifacts, especially in regions of strong localized activation. To compensate for these deficiencies, different spatial constraints in CCA have been introduced to enforce dominance of the center voxel. However, even if the dominance condition for the center voxel is satisfied, constrained CCA can still lead to a smoothing artifact, often called the “bleeding artifact of CCA”, in fMRI activation patterns. In this paper a new method is introduced to measure and correct for the smoothing artifact for constrained CCA methods. It is shown that constrained CCA methods corrected for the smoothing artifact lead to more plausible activation patterns in fMRI as shown using data from a motor task and a memory task.
International Nuclear Information System (INIS)
Smeets, Niels
2017-01-01
In 2009, the Russian government set its first quantitative renewable energy target at 4.5% of the total electricity produced and consumed by 2020. In 2013, the Government launched its capacity-based renewable energy support scheme (CRESS); however, the Ministry expects it will add merely 0.3% to the current 0.67% share of renewables (Ministry of Energy, 2016c). This raises the question of what factors might explain this implementation gap. On the basis of field research in Moscow, the article offers an in-depth policy analysis of the resource-geographic, financial, institutional and ecological enabling and constraining factors of Russia's CRESS between 2009 and 2015. To avoid the trap that policy intentions remain on paper, the entire policy cycle – from goal setting to implementation – has been covered. The article concludes that wind energy, which would have contributed the lion's share of new renewable energy capacity, lags behind, jeopardizing the quantitative renewable energy target. The depreciation of the rouble decreased return on investment, and the Local Content Requirement discouraged investors given the lack of Russian wind production facilities. Contrary to resource-geographic and financial expectations, solar projects have been commissioned more accurately, benefitting from access to major business groups and existing production facilities. - Highlights: • The support scheme is focused on the oversupplied integrated electricity market. • The scheme disregards the technical and economic potential in isolated areas. • The solar industry develops at the fastest rate, wind and small hydro lag behind. • Access to business groups and production facilities condition implementation. • The devaluation of the rouble necessitated a revision of the policy design.
Factor analysis and scintigraphy
International Nuclear Information System (INIS)
Di Paola, R.; Penel, C.; Bazin, J.P.; Berche, C.
1976-01-01
The goal of factor analysis is usually to achieve reduction of a large set of data, extracting essential features without prior hypotheses. Due to the development of computerized systems, the use of larger samples, the possibility of sequential data acquisition and the increase in dynamic studies, the problem of data compression can now be encountered in routine practice. Thus, results obtained for compression of scintigraphic images are presented first. Then the possibilities offered by factor analysis for scan processing are discussed. Finally, the use of this analysis for multidimensional studies, and especially dynamic studies, is considered for compression and processing.
International Nuclear Information System (INIS)
Garvie, R.C.
1985-01-01
A thermodynamic analysis was made of a simple model comprising a transforming t-ZrO2 microcrystal of size d constrained in a matrix subjected to a hydrostatic tensile stress field. The field generated a critical size range such that a t-particle transformed if d_cl < d < d_cu. The lower limit d_cl exists because at this point the maximum energy (supplied by the applied stress) which can be taken up by the crystal is insufficient to drive the transformation. The upper limit d_cu is a consequence of the microcrystal being so large that it transforms spontaneously when the material is cooled to room temperature. Using the thermodynamic (Griffith) approach and assuming that transformation toughening is due to the dilational strain energy, this mechanism accounted for about one-third of the total observed effective surface energy in a peak-aged Ca-PSZ alloy.
Near-surface compressional and shear wave speeds constrained by body-wave polarization analysis
Park, Sunyoung; Ishii, Miaki
2018-06-01
A new technique to constrain near-surface seismic structure that relates body-wave polarization direction to the wave speed immediately beneath a seismic station is presented. The P-wave polarization direction is only sensitive to shear wave speed but not to compressional wave speed, while the S-wave polarization direction is sensitive to both wave speeds. The technique is applied to data from the High-Sensitivity Seismograph Network in Japan, and the results show that the wave speed estimates obtained from polarization analysis are compatible with those from borehole measurements. The lateral variations in wave speeds correlate with geological and physical features such as topography and volcanoes. The technique requires minimal computation resources, and can be used on any number of three-component teleseismic recordings, opening opportunities for non-invasive and inexpensive study of the shallowest (~100 m) crustal structures.
Druken, Bridget Kinsella
Lesson study, a teacher-led vehicle for inquiring into teacher practice through creating, enacting, and reflecting on collaboratively designed research lessons, has been shown to improve mathematics teacher practice in the United States, such as improving knowledge about mathematics, changing teacher practice, and developing communities of teachers. Though it has been described as a sustainable form of professional development, little research exists on what might support teachers in continuing to engage in lesson study after a grant ends. This qualitative and multi-case study investigates the sustainability of lesson study as mathematics teachers engage in a district scale-up lesson study professional experience after participating in a three-year California Mathematics Science Partnership (CaMSP) grant to improve algebraic instruction. To do so, I first provide a description of material (e.g. curricular materials and time), human (attending district trainings and interacting with mathematics coaches), and social (qualities like trust, shared values, common goals, and expectations developed through relationships with others) resources present in the context of two school districts as reported by participants. I then describe practices of lesson study reported to have continued. I also report on teachers' conceptions of what it means to engage in lesson study. I conclude by describing how these results suggest factors that supported and constrained teachers in continuing lesson study. To accomplish this work, I used qualitative methods of grounded theory informed by a modified sustainability framework on interview, survey, and case study data about teachers, principals, and Teachers on Special Assignment (TOSAs). Four cases were selected to show the varying levels of lesson study practices that continued past the conclusion of the grant. Analyses reveal varying levels of integration, linkage, and synergy among both formally and informally arranged groups of
Liu, Qiang; Chattopadhyay, Aditi
2000-06-01
Aeromechanical stability plays a critical role in helicopter design and lead-lag damping is crucial to this design. In this paper, the use of segmented constrained damping layer (SCL) treatment and composite tailoring is investigated for improved rotor aeromechanical stability using a formal optimization technique. The principal load-carrying member in the rotor blade is represented by a composite box beam, of arbitrary thickness, with surface bonded SCLs. A comprehensive theory is used to model the smart box beam. A ground resonance analysis model and an air resonance analysis model are implemented in the rotor blade built around the composite box beam with SCLs. The Pitt-Peters dynamic inflow model is used in air resonance analysis under hover condition. A hybrid optimization technique is used to investigate the optimum design of the composite box beam with surface bonded SCLs for improved damping characteristics. Parameters such as stacking sequence of the composite laminates and placement of SCLs are used as design variables. Detailed numerical studies are presented for aeromechanical stability analysis. It is shown that optimum blade design yields significant increase in rotor lead-lag regressive modal damping compared to the initial system.
Constraining cosmic scatter in the Galactic halo through a differential analysis of metal-poor stars
Reggiani, Henrique; Meléndez, Jorge; Kobayashi, Chiaki; Karakas, Amanda; Placco, Vinicius
2017-12-01
Context. The chemical abundances of metal-poor halo stars are important to understanding key aspects of Galactic formation and evolution. Aims: We aim to constrain Galactic chemical evolution with precise chemical abundances of metal-poor stars (-2.8 ≤ [Fe/H] ≤ -1.5). Methods: Using high resolution and high S/N UVES spectra of 23 stars and employing the differential analysis technique we estimated stellar parameters and obtained precise LTE chemical abundances. Results: We present the abundances of Li, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Co, Ni, Zn, Sr, Y, Zr, and Ba. The differential technique allowed us to obtain an unprecedentedly low level of scatter in our analysis, with standard deviations as low as 0.05 dex, and mean errors as low as 0.05 dex for [X/Fe]. Conclusions: By expanding our metallicity range with precise abundances from other works, we were able to precisely constrain Galactic chemical evolution models in a wide metallicity range (-3.6 ≤ [Fe/H] ≤ -0.4). The agreements and discrepancies found are key for further improvement of both models and observations. We also show that the LTE analysis of Cr II is a much more reliable source of abundance for chromium, as Cr I has important NLTE effects. These effects can be clearly seen when we compare the observed abundances of Cr I and Cr II with GCE models. While Cr I has a clear disagreement between model and observations, Cr II is very well modeled. We confirm tight increasing trends of Co and Zn toward lower metallicities, and a tight flat evolution of Ni relative to Fe. Our results strongly suggest inhomogeneous enrichment from hypernovae. Our precise stellar parameters result in a low star-to-star scatter (0.04 dex) in the Li abundances of our sample, with a mean value about 0.4 dex lower than the prediction from standard Big Bang nucleosynthesis; we also study the relation between lithium depletion and stellar mass, but it is difficult to assess a correlation due to the limited mass range. We
Directory of Open Access Journals (Sweden)
A. Alexander Beaujean
2013-02-01
R (R Development Core Team, 2011) is a very powerful tool to analyze data that is gaining in popularity due to its cost (it's free) and flexibility (it's open-source). This article gives a general introduction to using R (i.e., loading the program, using functions, importing data). Then, using data from Canivez, Konold, Collins, and Wilson (2009), this article walks the user through how to use the program to conduct factor analysis, from both an exploratory and confirmatory approach.
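The cited article works in R; as a language-neutral illustration of one common exploratory step (inspecting eigenvalues of the correlation matrix and retaining factors by the Kaiser criterion), here is a small Python sketch on simulated two-factor data. The loadings and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 6
# Two latent factors, each loading on three observed variables
L = np.zeros((p, 2))
L[:3, 0] = 0.8
L[3:, 1] = 0.7
F = rng.normal(size=(n, 2))
X = F @ L.T + 0.5*rng.normal(size=(n, p))     # observed data

# Eigenvalues of the correlation matrix, largest first
R = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]
n_factors = int(np.sum(eigvals > 1.0))        # Kaiser criterion
print("eigenvalues:", np.round(eigvals, 2))
print("retained factors:", n_factors)
```

With two genuine latent factors, exactly two eigenvalues exceed 1, matching the exploratory decision a researcher would make before rotating or confirming a factor structure.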
International Nuclear Information System (INIS)
Rasheed, Tahir; Lee, Young-Koo; Lee, Soo Yeol; Kim, Tae-Seong
2009-01-01
Integration of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) allows analysis of brain activities at superior temporal and spatial resolution. However, simultaneous acquisition of EEG and fMRI is hindered by the enhancement of artifacts in EEG, the most prominent of which are the ballistocardiogram (BCG) and electro-oculogram (EOG) artifacts. The situation gets even worse if evoked potentials are measured inside the MRI, given their minute responses in comparison to spontaneous brain responses. In this study, we propose a new method of attenuating these artifacts from spontaneous and evoked EEG data acquired inside an MRI scanner using constrained independent component analysis, with a priori information about the artifacts as constraints. With the proposed techniques of reference function generation for the BCG and EOG artifacts as constraints, our new approach performs significantly better than the averaged artifact subtraction (AAS) method. The proposed method could be an alternative to the conventional ICA method for artifact attenuation, with some advantages. As a performance measure, we achieved much improved normalized power spectrum ratios (INPS) for continuous EEG, and correlation coefficient (cc) values with visual evoked potentials recorded outside the MRI for visual evoked EEG, compared to those obtained with the AAS method. The results show that our new approach is more effective than the conventional methods, almost fully automatic, and requires no extra ECG signal measurements.
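Full constrained ICA is more involved, but the core idea of exploiting a reference function that tracks the artifact can be caricatured with a simple least-squares projection onto a simulated cardiac-locked reference. All signals below are synthetic stand-ins, and plain regression is used here in place of the constrained-ICA machinery:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, T = 250, 10
t = np.arange(fs*T)/fs
neural = np.sin(2*np.pi*10*t)               # 10 Hz alpha-like activity
bcg_ref = np.sin(2*np.pi*1.2*t)**3          # cardiac-locked reference function
eeg = neural + 3.0*bcg_ref + 0.1*rng.normal(size=t.size)

# Attenuate the artifact by projecting out the reference (least-squares fit)
a = np.dot(eeg, bcg_ref)/np.dot(bcg_ref, bcg_ref)
cleaned = eeg - a*bcg_ref

corr_before = np.corrcoef(eeg, neural)[0, 1]
corr_after = np.corrcoef(cleaned, neural)[0, 1]
print(f"correlation with neural signal: {corr_before:.2f} -> {corr_after:.2f}")
```

The reference-driven subtraction recovers the underlying oscillation almost perfectly in this toy; constrained ICA generalizes the idea by letting the reference only guide, rather than fully specify, the artifact component.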
Yu, W.; Ai, T.
2014-11-01
Accessibility analysis usually requires special models of spatial location analysis based on geometric constructions, such as the Voronoi diagram (abbreviated to VD). There are many achievements in classic Voronoi model research; however, they suffer from the following limitations for location-based services (LBS) applications. (1) It is difficult to objectively reflect the actual service areas of facilities by using traditional planar VDs, because human activities in LBS are usually constrained to the network portion of planar space. (2) Although some researchers have adopted network distance to construct VDs, their approaches are used in a static environment, where unrealistic measures of shortest-path distance based on assumptions about constant travel speeds through the network are often used. (3) Due to the computational complexity of calculating shortest-path distances, previous approaches tend to be very time consuming, especially for large datasets and if multiple runs are required. To solve the above problems, a novel algorithm is developed in this paper. We apply a network-based quadrat system and 1-D sequential expansion to find the corresponding subnetwork for each focus. The idea is inspired by the natural phenomenon that water flow extends along certain linear channels until it meets others or arrives at the end of its route. In order to accommodate changes in traffic conditions, the length of a network quadrat is set according to the traffic condition of the corresponding street. The method has the advantage over Dijkstra's algorithm in that the time cost is avoided and replaced with a linear-time operation.
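The network-constrained service areas described above amount to a network Voronoi partition. A minimal sketch using multi-source Dijkstra (a standard baseline, not the authors' quadrat-based expansion) assigns each node to its nearest facility by shortest-path length; the toy street network is invented:

```python
import heapq
from collections import defaultdict

def network_voronoi(edges, facilities):
    """Assign every node to its nearest facility by shortest-path
    distance along the network (multi-source Dijkstra)."""
    graph = defaultdict(list)
    for u, v, w in edges:
        graph[u].append((v, w))
        graph[v].append((u, w))
    dist, owner = {}, {}
    pq = [(0.0, f, f) for f in facilities]   # (distance, node, seed facility)
    while pq:
        d, node, fac = heapq.heappop(pq)
        if node in dist:                      # already claimed by a closer seed
            continue
        dist[node], owner[node] = d, fac
        for nbr, w in graph[node]:
            if nbr not in dist:
                heapq.heappush(pq, (d + w, nbr, fac))
    return owner

# Toy street network: chain A-B-C-D-E with facilities at A and E
edges = [("A", "B", 1), ("B", "C", 1), ("C", "D", 1), ("D", "E", 1)]
owner = network_voronoi(edges, ["A", "E"])
print(owner)
```

Each frontier expands outward from its facility until it meets another frontier, which is exactly the "water flow along channels" intuition the paper invokes.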
CMT: a constrained multi-level thresholding approach for ChIP-Seq data analysis.
Directory of Open Access Journals (Sweden)
Iman Rezaeian
Genome-wide profiling of DNA-binding proteins using ChIP-Seq has emerged as an alternative to ChIP-chip methods. ChIP-Seq technology offers many advantages over ChIP-chip arrays, including but not limited to less noise, higher resolution, and more coverage. Several algorithms have been developed to take advantage of these abilities and find enriched regions by analyzing ChIP-Seq data. However, the complexity of analyzing various patterns of ChIP-Seq signals still demands the development of new algorithms. Most current algorithms use various heuristics to detect regions accurately. However, despite how many formulations are available, it is still difficult to accurately determine individual peaks corresponding to each binding event. We developed Constrained Multi-level Thresholding (CMT), an algorithm used to detect enriched regions in ChIP-Seq data. CMT employs a constraint-based module that can target regions within a specific range. We show that CMT has higher accuracy in detecting enriched regions (peaks) by objectively assessing its performance relative to other previously proposed peak finders. This is shown by testing three algorithms on the well-known FoxA1 dataset, four transcription factors (with a total of six antibodies) for Drosophila melanogaster, and the H3K4ac antibody dataset.
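CMT's actual multi-level thresholding is more sophisticated, but the role of a width constraint in peak calling can be sketched with a single threshold plus a constrained region length; the coverage values, threshold, and length bounds below are invented:

```python
def enriched_regions(coverage, threshold, min_len, max_len):
    """Find runs of coverage at or above threshold whose length lies
    within the constrained range [min_len, max_len]."""
    regions, start = [], None
    for i, c in enumerate(coverage + [0]):   # sentinel flushes the last run
        if c >= threshold and start is None:
            start = i
        elif c < threshold and start is not None:
            if min_len <= i - start <= max_len:
                regions.append((start, i))   # half-open interval [start, i)
            start = None
    return regions

# Toy read-coverage track: one valid peak, one too narrow, one too wide
cov = [0, 1, 5, 6, 7, 5, 1, 0, 9, 9, 0, 2, 8, 8, 8, 8, 8, 8, 8, 8, 0]
regions = enriched_regions(cov, threshold=5, min_len=3, max_len=6)
print(regions)
```

Only the run of length four survives; the two-bin spike and the eight-bin plateau are rejected by the length constraint, mimicking how a constraint-based module can target binding events of a specific scale.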
Masalmah, Yahya M.; Vélez-Reyes, Miguel
2007-04-01
The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF based on the Gauss-Seidel and penalty approaches to solve optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing in HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme. Good initialization schemes can improve convergence speed, whether or not a global minimum is found, and whether or not spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest norm pixels, and standard endmembers selection routines are studied and compared using simulated and real data.
Constraining processes of landscape change with combined in situ cosmogenic 14C-10Be analysis
Hippe, Kristina
2017-10-01
Reconstructing Quaternary landscape evolution today frequently builds upon cosmogenic-nuclide surface exposure dating. However, the study of complex surface exposure chronologies on the 10^2-10^4 year timescale remains challenging with the commonly used long-lived radionuclides (10Be, 26Al, 36Cl). In glacial settings, key points are the inheritance of nuclides accumulated in a rock surface during a previous exposure episode and (partial) shielding of a rock surface after the main deglaciation event, e.g. during phases of glacier readvance. Combining the short-lived in situ cosmogenic 14C isotope with 10Be dating provides a valuable approach to resolve and quantify complex exposure histories and burial episodes within Lateglacial and Holocene timescales. The first studies applying the in situ 14C-10Be pair have demonstrated the great benefit of in situ 14C analysis for unravelling complex glacier chronologies in various glacial environments worldwide. Moreover, emerging research on in situ 14C in sedimentary systems highlights the capacity of combined in situ 14C-10Be analysis to quantify sediment transfer times in fluvial catchments or to constrain changes in surface erosion rates. Nevertheless, further methodological advances are needed to make in situ 14C analysis truly routine and widely available. Future development in analytical techniques has to focus on improving analytical reproducibility, reducing the background level and determining more accurate muonic production rates. These improvements should allow extending the field of applications for combined in situ 14C-10Be analysis in Earth surface sciences and open up a number of promising applications for dating young sedimentary deposits and quantifying recent changes in surface erosion dynamics.
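A highly simplified two-nuclide sketch shows why pairing short-lived 14C with long-lived 10Be resolves exposure and burial: 10Be barely decays over Holocene-length burial and so records exposure, while the 14C deficit records burial. The production rates below are placeholder numbers, not calibrated values, and the model ignores erosion and muogenic production:

```python
import math

# Illustrative surface production rates (atoms g^-1 yr^-1) and decay constants
P14, P10 = 12.0, 4.0
lam14 = math.log(2)/5700.0           # 14C half-life ~5.7 kyr
lam10 = math.log(2)/1.387e6          # 10Be half-life ~1.39 Myr

def concentrations(t_exp, t_bur):
    """Nuclide build-up during exposure, then pure decay during burial."""
    n14 = P14/lam14*(1 - math.exp(-lam14*t_exp))*math.exp(-lam14*t_bur)
    n10 = P10/lam10*(1 - math.exp(-lam10*t_exp))*math.exp(-lam10*t_bur)
    return n14, n10

# Forward-model a sample: 15 kyr of exposure followed by 8 kyr of burial
n14, n10 = concentrations(15e3, 8e3)

# Invert: 10Be decay during burial is negligible, so 10Be fixes t_exp;
# the 14C deficit relative to that exposure then gives the burial time.
t_exp = -math.log(1 - n10*lam10/P10)/lam10
t_bur = -math.log(n14*lam14/(P14*(1 - math.exp(-lam14*t_exp))))/lam14
print(f"recovered exposure ~{t_exp/1e3:.1f} kyr, burial ~{t_bur/1e3:.1f} kyr")
```

Both parameters are recovered to within a fraction of a percent, which is the essence of why the 14C-10Be pair can separate exposure from burial on Lateglacial and Holocene timescales where 10Be alone cannot.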
Nalette, Ernest
2010-06-01
Constrained practice is routinely encountered by physical therapists and may limit the physical therapist's primary moral responsibility, which is to help the patient become well again. Ethical practice under such conditions requires a certain moral character of the practitioner. The purposes of this article are: (1) to provide an ethical analysis of a typical patient case of constrained clinical practice, (2) to discuss the moral implications of constrained clinical practice, and (3) to identify key moral principles and virtues fostering ethical physical therapist practice. The case represents a common scenario of discharge planning in acute care health facilities in the northeastern United States. An applied ethics approach was used for case analysis. The decision following analysis of the dilemma was to provide the needed care to the patient as required by compassion, professional ethical standards, and organizational mission. Constrained clinical practice creates a moral dilemma for physical therapists. Being responsive to the patient's needs moves the physical therapist's practice toward the professional ideal of helping vulnerable patients become well again. Meeting the patient's needs is a professional requirement of the physical therapist as moral agent. Acting otherwise requires an alternative position be ethically justified based on systematic analysis of a particular case. Skepticism of status quo practices is required to modify conventional individual, organizational, and societal practices toward meeting the patient's best interest.
CA-Markov Analysis of Constrained Coastal Urban Growth Modeling: Hua Hin Seaside City, Thailand
Directory of Open Access Journals (Sweden)
Rajendra Shrestha
2013-04-01
Full Text Available Thailand, a developing country in Southeast Asia, is experiencing rapid development, particularly urban growth as a response to the expansion of the tourism industry. Hua Hin city provides an excellent example of an area where urbanization has flourished due to tourism. This study focuses on how the dynamic urban horizontal expansion of the seaside city of Hua Hin is constrained by the coast, thus making sustainability for this popular tourist destination—managing and planning for its local inhabitants, its visitors, and its sites—an issue. The study examines the association of land use type and land use change by integrating Geo-Information technology, a statistical model, and CA-Markov analysis for sustainable land use planning. The study identifies that land use types changed from 1999 to 2008 as a result of increased mobility; this trend, in turn, is closely tied to urban horizontal expansion. The sequence of land use change has run from forest to agriculture, from agriculture to grassland, and then to bare land and built-up areas. Coastal urban growth has, for a decade, been expanding horizontally from the downtown center along the beach to the western area around the golf course, the southern area along the beach, the southwest grassland area, and then the northern area near the airport.
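As an illustrative aside (not the study's own model), the Markov component of a CA-Markov projection can be sketched in a few lines: a row-stochastic transition matrix projects land-use class shares forward in time. The classes and transition probabilities below are invented, chosen only to mimic the forest-to-built-up sequence described in the abstract.

```python
classes = ["forest", "agriculture", "grassland", "built-up"]

# Hypothetical row-stochastic transition matrix: P[i][j] is the probability
# that a cell in class i moves to class j over one time step, following the
# forest -> agriculture -> grassland -> built-up sequence in the abstract.
P = [
    [0.80, 0.15, 0.03, 0.02],  # forest
    [0.00, 0.70, 0.20, 0.10],  # agriculture
    [0.00, 0.00, 0.60, 0.40],  # grassland
    [0.00, 0.00, 0.00, 1.00],  # built-up (absorbing)
]

def step(shares, P):
    """Advance the vector of land-use class shares by one Markov step."""
    n = len(shares)
    return [sum(shares[i] * P[i][j] for i in range(n)) for j in range(n)]

shares = [0.50, 0.30, 0.15, 0.05]  # hypothetical initial class shares
for _ in range(3):                 # project three time steps forward
    shares = step(shares, P)
```

In a full CA-Markov model the cellular-automaton part then allocates these projected shares spatially using neighborhood suitability rules; the Markov step alone only gives the aggregate trend.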
Markov chain Monte Carlo analysis to constrain dark matter properties with directional detection
International Nuclear Information System (INIS)
Billard, J.; Mayet, F.; Santos, D.
2011-01-01
Directional detection is a promising dark matter search strategy. Indeed, weakly interacting massive particle (WIMP)-induced recoils would present a direction dependence toward the Cygnus constellation, while background-induced recoils exhibit an isotropic distribution in the Galactic rest frame. Taking advantage of these characteristic features, and even in the presence of a sizeable background, it has recently been shown that data from forthcoming directional detectors could lead either to a competitive exclusion or to a conclusive discovery, depending on the value of the WIMP-nucleon cross section. However, it is possible to further exploit these upcoming data by using the strong dependence of the WIMP signal on the WIMP mass and the local WIMP velocity distribution. Using a Markov chain Monte Carlo analysis of recoil events, we show for the first time the possibility of constraining the unknown WIMP parameters, both from particle physics (mass and cross section) and the Galactic halo (velocity dispersion along the three axes), leading to an identification of non-baryonic dark matter.
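As an illustration of the general technique (not the paper's likelihood or halo model), a minimal Metropolis-Hastings sampler can recover two unknown parameters, a location and a dispersion, from synthetic "recoil" data. All numbers below are hypothetical.

```python
# Minimal Metropolis-Hastings sketch: recover (mu, sigma) of a Gaussian
# data model from synthetic data. This illustrates MCMC parameter
# estimation in general, not the paper's directional-detection likelihood.
import math
import random

random.seed(0)
true_mu, true_sigma = 5.0, 2.0
data = [random.gauss(true_mu, true_sigma) for _ in range(200)]

def log_like(mu, sigma):
    """Gaussian log-likelihood (up to a constant)."""
    if sigma <= 0:
        return float("-inf")
    return sum(-0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma) for x in data)

mu, sigma = 0.0, 1.0           # deliberately poor starting point
ll = log_like(mu, sigma)
chain = []
for _ in range(4000):
    mu_p = mu + random.gauss(0, 0.15)       # random-walk proposal
    sigma_p = sigma + random.gauss(0, 0.15)
    ll_p = log_like(mu_p, sigma_p)
    if math.log(random.random()) < ll_p - ll:  # Metropolis acceptance
        mu, sigma, ll = mu_p, sigma_p, ll_p
    chain.append((mu, sigma))

posterior = chain[2000:]       # discard burn-in
mu_hat = sum(m for m, _ in posterior) / len(posterior)
sigma_hat = sum(s for _, s in posterior) / len(posterior)
```

In the paper's setting the parameter vector would instead include the WIMP mass, the cross section and the three velocity dispersions, with the likelihood built from the directional recoil distribution.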
Brannon, Sean; Kankelborg, Charles
2017-08-01
Coronal jets typically appear as thin, collimated structures in EUV and X-ray wavelengths, and are understood to be initiated by magnetic reconnection in the lower corona or upper chromosphere. Plasma that is heated and accelerated upward into coronal jets may therefore carry indirect information on conditions in the reconnection region and current sheet located at the jet base. On 2017 October 14, the Interface Region Imaging Spectrograph (IRIS) and Solar Dynamics Observatory Atmospheric Imaging Assembly (SDO/AIA) observed a series of jet eruptions originating from NOAA AR 12599. The jet structure has a length-to-width ratio that exceeds 50, and remains remarkably straight throughout its evolution. Several times during the observation, bright blobs of plasma are seen to erupt upward, ascending and subsequently descending along the structure. These blobs are cotemporal with footpoint and arcade brightenings, which we believe indicates multiple episodes of reconnection at the base of the structure. Through imaging and spectroscopic analysis of the jet and footpoint plasma, we determine a number of properties, including the line-of-sight inclination, the temperature and density structure, and the lift-off velocities and accelerations of jet eruptions. We use these properties to constrain the geometry of the jet structure and the conditions in the reconnection region.
Modeling and analysis of rotating plates by using self sensing active constrained layer damping
Energy Technology Data Exchange (ETDEWEB)
Xie, Zheng Chao; Wong, Pak Kin; Chong, Ian Ian [Univ. of Macau, Macau (China)
2012-10-15
This paper proposes a new finite element model for an active constrained layer damped (CLD) rotating plate with a self-sensing technique. Constrained layer damping can effectively reduce vibration in rotating structures. Unfortunately, most existing research models rotating structures as beams, which is often not the case. It is meaningful to model the rotating part as a plate because of improvements in both accuracy and versatility. At the same time, existing research shows that active constrained layer damping provides a more effective vibration control approach than passive constrained layer damping. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate. Unlike previous models, this finite element treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Also, the constraining layer is made of piezoelectric material so that it works as both the self-sensing sensor and the actuator. Then, a proportional control strategy is implemented to effectively control the displacement of the tip end of the rotating plate. Additionally, a parametric study is conducted to explore the impact of some design parameters on the structure's modal characteristics.
Directory of Open Access Journals (Sweden)
MAG Darroch
2014-05-01
Full Text Available The 48 organic-certified members of the Ezemvelo Farmers’ Organisation in KwaZulu-Natal were surveyed during October-November 2004 to assess what factors they perceive constrain the competitiveness of a formal supply chain that markets their amadumbe, potatoes and sweet potatoes. They identified uncertain climate, tractor not available when needed, delays in payments for crops sent to the pack-house, lack of cash and credit to finance inputs, and more work than the family can handle as the current top five constraints. Principal Component Analysis further identified three valid institutional dimensions of perceived constraints and two valid farm-level dimensions. Potential solutions to better manage these constraints are discussed, including the need for the farmers to renegotiate the terms of their incomplete business contract with the pack-house agent.
DEFF Research Database (Denmark)
Kjaerulff, Søren; Andersen, Nicoline Resen; Borup, Mia Trolle
2007-01-01
Eukaryotic cells normally differentiate from G(1); here we investigate the mechanism preventing expression of differentiation-specific genes outside G(1). In fission yeast, induction of the transcription factor Ste11 triggers sexual differentiation. We find that Ste11 is only active in G(1) when...... Cdk activity is low. In the remaining part of the cell cycle, Ste11 becomes Cdk-phosphorylated at Thr 82 (T82), which inhibits its DNA-binding activity. Since the ste11 gene is autoregulated and the Ste11 protein is highly unstable, this Cdk switch rapidly extinguishes Ste11 activity when cells enter...... S phase. When we mutated T82 to aspartic acid, mimicking constant phosphorylation, cells no longer underwent differentiation. Conversely, changing T82 to alanine rendered Ste11-controlled transcription constitutive through the cell cycle, and allowed mating from S phase with increased frequency...
Approximate L0 constrained Non-negative Matrix and Tensor Factorization
DEFF Research Database (Denmark)
Mørup, Morten; Madsen, Kristoffer Hougaard; Hansen, Lars Kai
2008-01-01
Non-negative matrix factorization (NMF), i.e. V = WH where V, W and H are all non-negative, has become a widely used blind source separation technique due to its part-based representation. The NMF decomposition is not in general unique and a part-based representation is not guaranteed. However...... constraint. In general, solving for a given L0 norm is an NP-hard problem, thus convex relaxation to regularization by the L1 norm is often considered, i.e., minimizing (1/2)||V - WH||^2 + lambda*||H||_1. An open problem is to control the degree of sparsity imposed. We here demonstrate that a full regularization......, the L1 regularization strength lambda that best approximates a given L0 can be directly accessed and in effect used to control the sparsity of H. The MATLAB code for the NLARS algorithm is available for download.
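The objective discussed above, minimizing (1/2)||V - WH||^2 + lambda*||H||_1 under non-negativity, can be sketched with standard multiplicative updates. This is an illustration of the L1-regularized objective only, not the paper's NLARS algorithm; the dimensions and lambda are arbitrary.

```python
# Sketch of L1-regularized NMF via multiplicative updates:
#   H <- H * (W^T V) / (W^T W H + lambda)   (lambda enforces sparsity on H)
#   W <- W * (V H^T) / (W H H^T)
# Pure-Python matrices; sizes and lambda are arbitrary illustration values.
import random

random.seed(1)
n, m, k, lam = 4, 6, 2, 0.1

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

V = [[random.random() + 0.1 for _ in range(m)] for _ in range(n)]
W = [[random.random() + 0.1 for _ in range(k)] for _ in range(n)]
H = [[random.random() + 0.1 for _ in range(m)] for _ in range(k)]

def objective(V, W, H):
    """(1/2)||V - WH||^2 + lambda * sum(H)."""
    R = matmul(W, H)
    fit = sum((V[i][j] - R[i][j]) ** 2 for i in range(n) for j in range(m))
    return 0.5 * fit + lam * sum(sum(row) for row in H)

obj_start = objective(V, W, H)
for _ in range(200):
    Wt = transpose(W)
    WtV, WtWH = matmul(Wt, V), matmul(matmul(Wt, W), H)
    H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + lam) for j in range(m)]
         for i in range(k)]
    Ht = transpose(H)
    VHt, WHHt = matmul(V, Ht), matmul(W, matmul(H, Ht))
    W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + 1e-12) for j in range(k)]
         for i in range(n)]
obj_end = objective(V, W, H)
```

Larger lambda drives more entries of H toward zero; the open problem the abstract raises is exactly how to pick lambda so that H reaches a target L0 (number of non-zeros).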
Directory of Open Access Journals (Sweden)
H. Fang
2018-04-01
Full Text Available Due to the limited spatial resolution of remote hyperspectral sensors, pixels are usually highly mixed in hyperspectral images. Endmember extraction refers to the process of identifying the pure endmember signatures from the mixture, which is an important step towards the utilization of hyperspectral data. Nonnegative matrix factorization (NMF) is a widely used method of endmember extraction due to its effectiveness and convenience. However, most NMF-based methods have single-layer structures, which may have difficulty in effectively learning the structure of highly mixed and complex data. On the other hand, multilayer algorithms have shown great advantages in learning data features and have been widely studied in many fields. In this paper, we present an L1 sparsity-constrained multilayer NMF method for endmember extraction from highly mixed data. First, the multilayer NMF structure is obtained by unfolding NMF into a certain number of layers. In each layer, the abundance matrix is decomposed into the endmember matrix and the abundance matrix of the next layer. In addition, to improve the performance of NMF, we incorporate sparsity constraints into the multilayer NMF model by adding an L1 regularizer on the abundance matrix to each layer. Finally, a layer-wise optimization method based on NeNMF is proposed to train the multilayer NMF structure. Experiments were conducted on both synthetic data and real data. The results demonstrate that our proposed algorithm achieves better results than several state-of-the-art approaches.
The Accelerating And Constraining Factors Of The Coordinated And Balanced Development Of Regions
Directory of Open Access Journals (Sweden)
Vladimir Stepanovich Bochko
2015-03-01
Full Text Available In this article, we argue that the modern industrial-technological process complicates socio-economic space and strengthens its integrity, which in turn creates the need for coordinated and balanced development. The complication of economic space is traced to the growing number of linkages created by new enterprises and organizations, to changes in the structure of production, and to the rising educational level of the population. The characteristics of this new quality of economic space are given, and the factors of coordinated and balanced territorial development are identified. The content of the notion of a "commercial combination" is shown. The necessity of a transition to systemic innovation thinking under the conditions of an increasingly complex economic space is argued. The idea of "rebalancing the economy" is offered as a new vision of equilibrium in crisis situations. It is concluded that the result of these theoretical and practical searches should be the resilient development of territories, supported by the intellectual-technological and moral-ethical level of the population living on them.
Lapping, Karin; Frongillo, Edward A; Nguyen, Phuong H; Coates, Jennifer; Webb, Patrick; Menon, Purnima
2014-09-01
Translating national policies and guidelines into effective action at the subnational level (e.g., province or region) is a prerequisite for ensuring an impact on nutrition. In several countries, including Vietnam, the focus of this paper, this process is affected by the quality of the decentralized process of planning and action. This study examined how provincial planning processes for nutrition occurred in Vietnam during 2009 and 2010. Key goals were to understand variability in processes across provinces, identify factors that influenced the process, and assess the usefulness of the process for individuals involved in planning and action. A qualitative case-study methodology was used. Data were drawn from interviews with 51 government officials in eight provinces. The study found little variability in the planning process among these eight provinces, probably due to a planning process that was predominantly a fiscal exercise within the confines of a largely centralized structure. Respondents were almost unanimous about the main barriers: a top-down approach to planning, limited human capacity for effective planning at subnational levels, and difficulty in integrating actions from multiple sectors. Provincial-level actors were deeply dissatisfied with the nature of their role in the process. Despite the rhetoric to the contrary, too much power is probably still retained at the central level. A strategic multiyear approach is needed to strengthen the provincial planning process and address many of the key barriers identified in this study.
Energy Security Analysis: The case of constrained oil supply for Ireland
International Nuclear Information System (INIS)
Glynn, James; Chiodi, Alessandro; Gargiulo, Maurizio; Deane, J.P.; Bazilian, Morgan; Gallachóir, Brian Ó
2014-01-01
Ireland imports 88% of its energy requirements. Oil makes up 59% of total final energy consumption (TFC). Import dependency, low fuel diversity and volatile prices leave Ireland vulnerable in terms of energy security. This work models energy security scenarios for Ireland using long term macroeconomic forecasts to 2050, with oil production and price scenarios from the International Monetary Fund, within the Irish TIMES energy systems model. The analysis focuses on developing a least cost optimum energy system for Ireland under scenarios of constrained oil supply (0.8% annual import growth, and –2% annual import decline) and subsequent sustained long term price shocks to oil and gas imports. The results point to gas becoming the dominant fuel source for Ireland, at 54% total final energy consumption in 2020, supplanting oil from reference projections of 57% to 10.8% TFC. In 2012, the cost of net oil imports stood at €3.6 billion (2.26% GDP). The modelled high oil and gas price scenarios show an additional annual cost in comparison to a reference of between €2.9bn and €7.5bn by 2020 (1.9–4.9% of GDP) to choose to develop a least cost energy system. Investment and ramifications for energy security are discussed. - Highlights: • We investigate energy security within a techno-economic model of Ireland to 2050. • We impose scenarios constraints of volume and price derived from IMF forecasting. • Continued high oil prices lead to natural gas supplanting oil at 54% TFC by 2020. • Declining oil production induces additional energy system costs of 7.9% GDP by 2020. • High oil and gas prices are likely to strain existing Irish gas import infrastructure
Does skull morphology constrain bone ornamentation? A morphometric analysis in the Crocodylia.
Clarac, F; Souter, T; Cubo, J; de Buffrénil, V; Brochu, C; Cornette, R
2016-08-01
Previous quantitative assessments of crocodylian dermal bone ornamentation (this ornamentation consists of pits and ridges) have shown that bone sculpture results in a gain in area that differs between anatomical regions: it tends to be higher on the skull table than on the snout. Therefore, a comparative phylogenetic analysis across 17 adult crocodylian specimens representative of the morphological diversity of the 24 extant species was performed, in order to test whether the gain in area due to ornamentation depends on skull morphology, i.e. shape and size. Quantitative assessment of skull size and shape through geometric morphometrics, and of skull ornamentation through surface analyses, produced a dataset that was analyzed using phylogenetic least-squares regression. The analyses reveal that none of the variables that quantify ornamentation, be they on the snout or the skull table, is correlated with the size of the specimens. Conversely, there is more disparity in the relationships between skull conformations (longirostrine vs. brevirostrine) and ornamentation. Indeed, both parameters GApit (i.e. pit depth and shape) and OArelat (i.e. relative area of the pit set) are negatively correlated with snout elongation, whereas none of the values quantifying ornamentation on the skull table is correlated with skull conformation. It can be concluded that bone sculpture on the snout is influenced by different developmental constraints than that on the skull table and is sensitive to differences in the local growth 'context' (allometric processes) prevailing in distinct skull parts. Whatever the functional role of bone ornamentation on the skull, if any, it seems to be restricted to some anatomical regions, at least for the longirostrine forms that tend to lose ornamentation on the snout. © 2016 Anatomical Society.
Factors affecting construction performance: exploratory factor analysis
Soewin, E.; Chinda, T.
2018-04-01
The present work attempts to develop a multidimensional performance evaluation framework for a construction company by considering all relevant measures of performance. Based on previous studies, this study hypothesizes nine key factors, with a total of 57 associated items. The hypothesized factors, with their associated items, are then used to develop a questionnaire survey to gather data. Exploratory factor analysis (EFA) applied to the collected data gave rise to 10 factors, with 57 items, affecting construction performance. The findings reveal ten key performance factors (KPIs), namely: 1) Time, 2) Cost, 3) Quality, 4) Safety & Health, 5) Internal Stakeholder, 6) External Stakeholder, 7) Client Satisfaction, 8) Financial Performance, 9) Environment, and 10) Information, Technology & Innovation. The analysis helps to develop a multi-dimensional performance evaluation framework for effective measurement of construction performance. The 10 key performance factors can be broadly categorized into economic, social, environmental, and technological aspects. It is important to build a multi-dimensional performance evaluation framework that includes all key factors affecting a company's construction performance, so that management can implement a performance development plan that matches the mission and vision of the company.
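As a toy illustration of what EFA exploits (not the study's data or items): survey items driven by the same latent factor correlate strongly, while items driven by different factors do not. The four invented "items" below load on two hypothetical factors, cost and time.

```python
# EFA groups items by their correlation structure. Here synthetic responses
# are generated from two latent factors; items sharing a factor end up
# highly correlated, items on different factors nearly uncorrelated.
# Item names and loadings are invented for illustration.
import random

random.seed(2)
N = 500
rows = []
for _ in range(N):
    f_cost, f_time = random.gauss(0, 1), random.gauss(0, 1)
    rows.append([
        f_cost + 0.3 * random.gauss(0, 1),  # item 1: cost overrun
        f_cost + 0.3 * random.gauss(0, 1),  # item 2: budget control
        f_time + 0.3 * random.gauss(0, 1),  # item 3: schedule slip
        f_time + 0.3 * random.gauss(0, 1),  # item 4: milestone delay
    ])

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

cols = list(zip(*rows))
R = [[corr(cols[i], cols[j]) for j in range(4)] for i in range(4)]
```

A factor extraction step (eigendecomposition of R, followed by rotation) would then recover the two-factor structure directly from this correlation matrix.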
Analysis of neutron and x-ray reflectivity data by constrained least-squares methods
DEFF Research Database (Denmark)
Pedersen, J.S.; Hamley, I.W.
1994-01-01
...... The coefficients in the series are determined by constrained nonlinear least-squares methods, in which the smoothest solution that agrees with the data is chosen. In the second approach the profile is expressed as a series of sine and cosine terms. A smoothness constraint is used which reduces the coefficients...
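A minimal sketch of a smoothness-constrained least-squares fit in the same spirit (not the reflectivity code itself): minimizing ||x - y||^2 + alpha * sum of squared first differences of x leads to a tridiagonal normal-equation system, solved exactly here with the Thomas algorithm. The signal and alpha are invented.

```python
# Smoothness-constrained least squares:
#   min_x ||x - y||^2 + alpha * sum_i (x[i+1] - x[i])^2
# Normal equations: (I + alpha * D^T D) x = y, with D the first-difference
# matrix; the system is tridiagonal and solved by the Thomas algorithm.
import math
import random

random.seed(3)
n, alpha = 50, 5.0
y = [math.sin(2 * math.pi * i / n) + random.gauss(0, 0.3) for i in range(n)]

a = [-alpha] * n             # sub-diagonal  (a[0] unused)
c = [-alpha] * n             # super-diagonal (c[-1] unused)
b = [1 + 2 * alpha] * n      # main diagonal
b[0] = b[-1] = 1 + alpha     # boundary rows of D^T D

# Thomas algorithm: forward sweep, then back substitution.
cp, dp = [0.0] * n, [0.0] * n
cp[0], dp[0] = c[0] / b[0], y[0] / b[0]
for i in range(1, n):
    denom = b[i] - a[i] * cp[i - 1]
    cp[i] = c[i] / denom
    dp[i] = (y[i] - a[i] * dp[i - 1]) / denom
x = [0.0] * n
x[-1] = dp[-1]
for i in range(n - 2, -1, -1):
    x[i] = dp[i] - cp[i] * x[i + 1]

rough_y = sum((y[i + 1] - y[i]) ** 2 for i in range(n - 1))
rough_x = sum((x[i + 1] - x[i]) ** 2 for i in range(n - 1))
```

Larger alpha trades fidelity to the data for smoothness, which is the "smoothest solution that agrees with the data" criterion the abstract describes.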
Abaka, Gamze; Bıyıkoğlu, Türker; Erten, Cesim
2013-07-01
Given a pair of metabolic pathways, an alignment of the pathways corresponds to a mapping between similar substructures of the pair. Successful alignments may provide useful applications in phylogenetic tree reconstruction, drug design and overall may enhance our understanding of cellular metabolism. We consider the problem of providing one-to-many alignments of reactions in a pair of metabolic pathways. We first provide a constrained alignment framework applicable to the problem. We show that the constrained alignment problem even in a primitive setting is computationally intractable, which justifies efforts for designing efficient heuristics. We present our Constrained Alignment of Metabolic Pathways (CAMPways) algorithm designed for this purpose. Through extensive experiments involving a large pathway database, we demonstrate that when compared with a state-of-the-art alternative, the CAMPways algorithm provides better alignment results on metabolic networks as far as measures based on same-pathway inclusion and biochemical significance are concerned. The execution speed of our algorithm constitutes yet another important improvement over alternative algorithms. Open source codes, executable binary, useful scripts, all the experimental data and the results are freely available as part of the Supplementary Material at http://code.google.com/p/campways/. Supplementary data are available at Bioinformatics online.
Factor analysis of multivariate data
Digital Repository Service at National Institute of Oceanography (India)
Fernandes, A.A.; Mahadevan, R.
A brief introduction to factor analysis is presented. A FORTRAN program, which can perform Q-mode and R-mode factor analysis and the singular value decomposition of a given data matrix, is presented in Appendix B. This computer program uses...
Directory of Open Access Journals (Sweden)
Zhensheng Wang
2017-02-01
Full Text Available The spatial variation of geographical phenomena is a classical problem in spatial data analysis and can provide insight into underlying processes. Traditional exploratory methods mostly depend on the planar distance assumption, but many spatial phenomena are constrained to a subset of Euclidean space. In this study, we apply a method based on a hierarchical Bayesian model to analyse the spatial variation of network-constrained phenomena represented by a link attribute in conjunction with two experiments based on a simplified hypothetical network and a complex road network in Shenzhen that includes 4212 urban facility points of interest (POIs for leisure activities. Then, the methods named local indicators of network-constrained clusters (LINCS are applied to explore local spatial patterns in the given network space. The proposed method is designed for phenomena that are represented by attribute values of network links and is capable of removing part of random variability resulting from small-sample estimation. The effects of spatial dependence and the base distribution are also considered in the proposed method, which could be applied in the fields of urban planning and safety research.
Performance Analysis of Constrained Loosely Coupled GPS/INS Integration Solutions
Directory of Open Access Journals (Sweden)
Fabio Dovis
2012-11-01
Full Text Available The paper investigates approaches for loosely coupled GPS/INS integration. Error performance is calculated using a reference trajectory. A performance improvement can be obtained by exploiting additional map information (for example, a road boundary. A constrained solution has been developed and its performance compared with an unconstrained one. The case of GPS outages is also investigated showing how a Kalman filter that operates on the last received GPS position and velocity measurements provides a performance benefit. Results are obtained by means of simulation studies and real data.
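As a hedged sketch of the outage behavior described above (a 1-D toy, not the paper's loosely coupled filter): a constant-velocity Kalman filter fuses noisy position fixes, coasts on prediction alone during a simulated GPS outage, and reconverges when fixes resume. All noise levels are hypothetical.

```python
# 1-D constant-velocity Kalman filter with position-only "GPS" fixes.
# State [position, velocity]; F = [[1, dt], [0, 1]], H = [1, 0].
# During the outage only the prediction step runs, so uncertainty grows.
import random

random.seed(4)
dt, q, r = 1.0, 0.01, 4.0
x = [0.0, 0.0]                    # state estimate [position, velocity]
P = [[10.0, 0.0], [0.0, 10.0]]    # estimate covariance

def predict(x, P):
    """Propagate state and covariance; simple process noise q on the diagonal."""
    x = [x[0] + dt * x[1], x[1]]
    P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1],
          P[1][1] + q]]
    return x, P

def update(x, P, z):
    """Fuse a position measurement z with variance r."""
    S = P[0][0] + r                   # innovation variance
    K = [P[0][0] / S, P[1][0] / S]    # Kalman gain
    innov = z - x[0]
    x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
    P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x, P

true_pos, true_vel = 0.0, 1.0
cov_hist, err_hist = [], []
for t in range(60):
    true_pos += true_vel * dt
    x, P = predict(x, P)
    if not 30 <= t < 40:              # simulated GPS outage for t in [30, 40)
        z = true_pos + random.gauss(0, r ** 0.5)
        x, P = update(x, P, z)
    cov_hist.append(P[0][0])
    err_hist.append(abs(x[0] - true_pos))
```

The covariance trace shows the qualitative effect the abstract reports: position uncertainty inflates steadily through the outage and collapses again once measurements resume; a map constraint would bound the drift further.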
Bompard, E.; Ma, Y. C.; Ragazzi, E.
2006-03-01
Competition has been introduced in electricity markets with the goal of reducing prices and improving efficiency. The basic idea behind this choice is that, in competitive markets, a greater quantity of the good is exchanged at a lower price, leading to higher market efficiency. Electricity markets are quite different from other commodity markets, mainly due to the physical constraints related to the network structure that may impact market performance. The network structure of the system on which the economic transactions need to be undertaken poses strict physical and operational constraints. Strategic interactions among producers that game the market with the objective of maximizing their producer surplus must be taken into account when modeling competitive electricity markets. The physical constraints specific to electricity markets provide additional opportunities for gaming by the market players. Game theory provides a tool to model such a context. This paper discusses the application of game theory to physically constrained electricity markets with the goal of providing tools for assessing market performance and pinpointing the critical network constraints that may impact market efficiency. The basic models of game theory specifically designed to represent electricity markets are presented. The IEEE 30-bus test system is used to show the network impacts on market performance in the presence of strategic bidding behavior by the producers.
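As an illustrative aside, the simplest strategic-interaction model of this kind is a Cournot game: with linear demand P(Q) = a - b*Q and constant marginal costs, iterating each producer's best response q_i = (a - c_i - b*q_j) / (2b) converges to the Nash equilibrium. The numbers are hypothetical and the network constraints discussed in the paper are omitted.

```python
# Two-producer Cournot game under linear demand P(Q) = a - b*Q.
# Best-response iteration converges to the Nash equilibrium quantities;
# demand and cost parameters are invented illustration values.
a, b = 100.0, 1.0      # demand intercept and slope
c1, c2 = 10.0, 20.0    # marginal costs of producers 1 and 2

q1 = q2 = 0.0
for _ in range(100):
    q1 = (a - c1 - b * q2) / (2 * b)  # best response of producer 1
    q2 = (a - c2 - b * q1) / (2 * b)  # best response of producer 2

price = a - b * (q1 + q2)
```

In the network-constrained setting the paper studies, transmission limits would cap or distort these best responses, which is exactly how congestion creates extra opportunities for strategic bidding.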
CSIR Research Space (South Africa)
Britz, K
2011-09-01
Full Text Available their basic properties and relationship. In Section 3 we present a modal instance of these constructions which also illustrates with an example how to reason abductively with constrained entailment in a causal or action oriented context. In Section 4 we... of models with the former approach, whereas in Section 3.3 we give an example illustrating ways in which C can be defined with both. Here we employ the following versions of local consequence: Definition 3.4. Given a model M = ⟨W, R, V⟩ and formulas...
Kuwata, Yoshiaki; Pavone, Marco; Balaram, J. (Bob)
2012-01-01
This paper presents a novel risk-constrained multi-stage decision-making approach to the architectural analysis of planetary rover missions. In particular, focusing on a 2018 Mars rover concept, which was considered as part of a potential Mars Sample Return campaign, we model the entry, descent, and landing (EDL) phase and the rover traverse phase as four sequential decision-making stages. The problem is to find a sequence of divert and driving maneuvers so that the rover drive distance is minimized and the probability of mission failure (e.g., due to a failed landing) is below a user-specified bound. By solving this problem for several different values of the model parameters (e.g., divert authority), this approach enables rigorous, accurate and systematic trade-offs for the EDL system vs. the mobility system and, more generally, cross-domain trade-offs for the different phases of a space mission. The overall optimization problem can be seen as a chance-constrained dynamic programming problem, with the additional complexity that 1) in some stages the disturbances do not have any probabilistic characterization, and 2) the state space is extremely large (i.e., hundreds of millions of states for trade-offs with high-resolution Martian maps). To this purpose, we solve the problem by performing an unconventional combination of average and minimax cost analysis and by leveraging highly efficient computation tools from the image processing community. Preliminary trade-off results are presented.
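A toy, single-stage version of the chance-constrained trade-off (the paper's problem is multi-stage and vastly larger): each hypothetical divert option carries a failure probability and an expected drive distance, and we pick the cheapest option whose failure probability respects a user-specified bound.

```python
# Chance-constrained selection in miniature: minimize expected drive
# distance subject to P(failure) <= p_max. Options and numbers are invented.
options = [
    # (name, P(failure), expected drive distance in km)
    ("no divert",    0.10, 2.0),
    ("small divert", 0.05, 5.0),
    ("large divert", 0.01, 9.0),
]

def best_plan(options, p_max):
    """Cheapest option whose failure probability meets the risk bound."""
    feasible = [o for o in options if o[1] <= p_max]
    if not feasible:
        return None  # no plan satisfies the chance constraint
    return min(feasible, key=lambda o: o[2])

plan = best_plan(options, p_max=0.05)
```

Tightening p_max forces larger diverts (more driving for less risk), which is the EDL-versus-mobility trade-off the abstract describes; the full problem chains such choices over four stages via dynamic programming.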
Directory of Open Access Journals (Sweden)
Nurullah Emir Ekinci
2014-10-01
Full Text Available The purpose of this study was to analyze which recreational sport or non-sport (cultural/art) activities university students prefer in their leisure time, and the underlying reasons that constrain participation in these activities, with regard to different variables. Randomly chosen 339 students from the Faculty of Arts and the Faculty of Sciences and Engineering at Dumlupınar University volunteered for the study. The "Leisure Constraint Scale" was used as the data collection tool. In addition to descriptive statistical methods such as percentage (%) and frequency (f), independent-samples t-tests and one-way ANOVA were used to evaluate the data. As a result, it was found that 19.2% of participants choose recreational sport activities in their leisure time. In addition, significant differences emerged between participants' gender and leisure constraints in the "lack of information", "lack of friends" and "time" sub-dimensions; between age and leisure constraints in the "time" sub-dimension; and between average monthly income levels and leisure constraints in the "individual psychology" and "facilities/services" sub-dimensions (p < 0.05). No significant differences were found according to the activities chosen in leisure time.
First course in factor analysis
Comrey, Andrew L
2013-01-01
The goal of this book is to foster a basic understanding of factor analytic techniques so that readers can use them in their own research and critically evaluate their use by other researchers. Both the underlying theory and correct application are emphasized. The theory is presented through the mathematical basis of the most common factor analytic models and several methods used in factor analysis. On the application side, considerable attention is given to the extraction problem, the rotation problem, and the interpretation of factor analytic results. Hence, readers are given a background of
Constraining the shape of the CMB: A peak-by-peak analysis
International Nuclear Information System (INIS)
Oedman, Carolina J.; Hobson, Michael P.; Lasenby, Anthony N.; Melchiorri, Alessandro
2003-01-01
The recent measurements of the power spectrum of cosmic microwave background anisotropies are consistent with the simplest inflationary scenario and big bang nucleosynthesis constraints. However, these results rely on the assumption of a class of models based on primordial adiabatic perturbations, cold dark matter and a cosmological constant. In this paper we investigate the need for deviations from the Λ-CDM scenario by first characterizing the spectrum using a phenomenological function in a 15-dimensional parameter space. Using a Monte Carlo Markov chain approach to Bayesian inference and a low-curvature model template, we then check for the presence of new physics and/or systematics in the CMB data. We find an almost perfect consistency between the phenomenological fits and the standard Λ-CDM models. The curvature of the secondary peaks is weakly constrained by the present data, but they are well located. The improved spectral resolution expected from future satellite experiments is warranted for a definitive test of the scenario.
International Nuclear Information System (INIS)
Wu, Dufan; Li, Liang; Zhang, Li
2013-01-01
In computed tomography (CT), incomplete-data problems such as limited-angle projections often cause artifacts in the reconstruction results. Additional prior knowledge of the image has shown potential for better results, as in the prior image constrained compressed sensing algorithm. While a prior full scan of the same patient is not always available, massive numbers of well-reconstructed images of different patients can easily be obtained from clinical multi-slice helical CTs. In this paper, a feature constrained compressed sensing (FCCS) image reconstruction algorithm is proposed to improve image quality by using prior knowledge extracted from the clinical database. The database consists of instances which are similar to the target image but not necessarily the same. Robust principal component analysis is employed to retrieve features of the training images to sparsify the target image. The features form a low-dimensional linear space, and a constraint on the distance between the image and that space is used. A bi-criterion convex program which combines the feature constraint and a total variation constraint is proposed for the reconstruction procedure, and a flexible method is adopted to obtain a good solution. Numerical simulations on both phantom and real clinical patient images were performed to validate our algorithm. Promising results are shown for limited-angle problems. (paper)
Lithuanian Population Aging Factors Analysis
Directory of Open Access Journals (Sweden)
Agnė Garlauskaitė
2015-05-01
Full Text Available The aim of this article is to identify the factors that determine the aging of Lithuania’s population and to assess their influence. The article presents an analysis of Lithuanian population aging factors consisting of two main parts: the first describes population aging and its characteristics in theoretical terms; the second is dedicated to assessing the trends and demographic factors that influence population aging and to analysing the determinants of the aging of the population of Lithuania. The article concludes that the decline in the birth rate and the increase in the number of emigrants relative to immigrants have the greatest impact on population aging, so, in order to slow the aging of the population, considerable attention should be paid to the management of these demographic processes.
Factor Analysis for Clustered Observations.
Longford, N. T.; Muthen, B. O.
1992-01-01
A two-level model for factor analysis is defined, and formulas for a scoring algorithm for this model are derived. A simple noniterative method based on the decomposition of total sums of squares and cross-products is discussed and illustrated with simulated data and data from the Second International Mathematics Study. (SLD)
Transforming Rubrics Using Factor Analysis
Baryla, Ed; Shelley, Gary; Trainor, William
2012-01-01
Student learning and program effectiveness are often assessed using rubrics. While much time and effort may go into their creation, it is equally important to assess how effective and efficient the rubrics actually are in terms of measuring competencies over a number of criteria. This study demonstrates the use of common factor analysis to identify…
Giulio PALOMBA
2006-01-01
In a typical tactical asset allocation setup, managers generally make their investment decisions by inserting private information into an optimisation mechanism used to beat a benchmark portfolio; in this context the approach à la Markowitz (1959) alone does not use all the available information about expected excess returns and, in particular, does not take two main factors into account: first, asset returns often show changes in volatility, and second, the manager's private information plays no ...
Gangadharan, Sridhar
2013-01-01
This book serves as a hands-on guide to timing constraints in integrated circuit design. Readers will learn to maximize the performance of their IC designs by specifying timing requirements correctly. Coverage includes key aspects of the design flow impacted by timing constraints, including synthesis, static timing analysis, and placement and routing. Concepts needed for specifying timing requirements are explained in detail and then applied to specific stages in the design flow, all within the context of Synopsys Design Constraints (SDC), the industry-leading format for specifying constraints. · Provides a hands-on guide to synthesis and timing analysis, using Synopsys Design Constraints (SDC), the industry-leading format for specifying constraints; · Includes key topics of interest to a synthesis, static timing analysis or place-and-route engineer; · Explains which constraint commands to use for ease of maintenance and reuse, given several options pos...
Directory of Open Access Journals (Sweden)
S. Prasanna
2014-03-01
Full Text Available Most e-commerce and m-commerce applications in the current e-business world have adopted asymmetric-key cryptography in their authentication protocols to provide efficient authentication of the involved parties. This paper presents a performance analysis of distinct authentication protocols that implement public-key cryptosystems such as RSA, ECC and HECC. The comparison is based on key generation, signature generation and signature verification. The results show that the performance achieved by the HECC-based authentication protocol is better than that of the ECC- and RSA-based protocols.
Directory of Open Access Journals (Sweden)
William I. Sellers
2017-07-01
Full Text Available The running ability of Tyrannosaurus rex has been intensively studied due to its relevance to interpretations of feeding behaviour and the biomechanics of scaling in giant predatory dinosaurs. Different studies using differing methodologies have produced a very wide range of top speed estimates and there is therefore a need to develop techniques that can improve these predictions. Here we present a new approach that combines two separate biomechanical techniques (multibody dynamic analysis and skeletal stress analysis) to demonstrate that true running gaits would probably lead to unacceptably high skeletal loads in T. rex. Combining these two approaches reduces the high level of uncertainty in previous predictions associated with unknown soft tissue parameters in dinosaurs, and demonstrates that the relatively long limb segments of T. rex—long argued to indicate competent running ability—would actually have mechanically limited this species to walking gaits. Being limited to walking speeds contradicts arguments of high-speed pursuit predation for the largest bipedal dinosaurs like T. rex, and demonstrates the power of multiphysics approaches for locomotor reconstructions of extinct animals.
Modeling and Economic Analysis of Power Grid Operations in a Water Constrained System
Zhou, Z.; Xia, Y.; Veselka, T.; Yan, E.; Betrie, G.; Qiu, F.
2016-12-01
The power sector is the largest water user in the United States. Depending on the cooling technology employed at a facility, steam-electric power stations withdraw and consume large amounts of water for each megawatt-hour of electricity generated. The amounts depend on many factors, including ambient air and water temperatures, cooling technology, etc. Water demands from most economic sectors are typically highest during summertime. For most systems, this coincides with peak electricity demand and consequently a high demand for thermal power plant cooling water. Supplies, however, are sometimes limited due to seasonal precipitation fluctuations, including sporadic droughts that lead to water scarcity. When this occurs, there is an impact on both unit commitments and the real-time dispatch. In this work, we model the cooling efficiency of several different types of thermal power generation technologies as a function of power output level and daily temperature profiles. Unit-specific relationships are then integrated in a power grid operational model that minimizes total grid production cost while reliably meeting hourly loads. Grid operation is subject to power plant physical constraints, transmission limitations, water availability and environmental constraints such as power plant water exit temperature limits. The model is applied to a standard IEEE 118-bus system under various water availability scenarios. Results show that water availability has a significant impact on power grid economics.
Shen, I. Y.
1997-02-01
This paper studies vibration control of a shell structure through the use of an active constrained layer (ACL) damping treatment. A deep-shell theory that assumes arbitrary Lamé parameters is first developed. Application of Hamilton's principle leads to the governing Love equations, the charge equation of electrostatics, and the associated boundary conditions. The Love equations and boundary conditions imply that the control action of the ACL for shell treatments consists of two components: free-end boundary actuation and membrane actuation. The free-end boundary actuation is identical to that of beam and plate ACL treatments, while the membrane actuation is unique to shell treatments as a result of the curvatures of the shells. In particular, the membrane actuation may reinforce or counteract the boundary actuation, depending on the location of the ACL treatment. Finally, an energy analysis is developed to determine the proper control law that guarantees the stability of ACL shell treatments. Moreover, the energy analysis results in a simple rule predicting whether or not the membrane actuation reinforces the boundary actuation.
A multi-fidelity analysis selection method using a constrained discrete optimization formulation
Stults, Ian C.
The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model
Time series analysis of Mexico City subsidence constrained by radar interferometry
Doin, Marie-Pierre; Lopez-Quiroz, Penelope; Yan, Yajing; Bascou, Pascale; Pinel, Virginie
2010-05-01
unwrapping errors for each pixel and show that they are strongly decreased by iterations in the unwrapping process. (3) Finally, we present a new algorithm for time series analysis that differs from classical SVD decomposition and is best suited to the present data base. Accurate deformation time series are then derived over the metropolitan area of the city with a spatial resolution of 30 × 30 m. We also use the Gamma-PS software on the same data set. The phase differences are unwrapped within small patches with respect to a reference point chosen in each patch, whose phase is in turn unwrapped relative to a reference point common to the whole area of interest. After removing the modelled contribution of the linear displacement rate and DEM error, some residual interferograms, presenting unwrapping errors because of a strong residual orbital ramp or atmospheric phase screen, are spatially unwrapped by a minimum cost-flow algorithm. The next steps are to estimate and remove the residual orbital ramp and to apply a temporal low-pass filter to remove atmospheric contributions. The step-by-step comparison of the SBAS and PS approaches shows the complementarity of the two methods. The SBAS analysis provides subsidence rates with an accuracy of a mm/yr over the whole basin in a large area, together with the non-linear behaviour of the subsidence through time, however at the expense of some spatial regularization. The PS method provides locally accurate, point-wise deformation rates, but fails in this case to yield a good large-scale map and the non-linear temporal behaviour of the subsidence. We conclude that the relative contrast in subsidence between individual buildings and infrastructure must be relatively small, on average of the order of 5 mm/yr.
International Nuclear Information System (INIS)
Ivanenko, V.N.; Zybin, V.A.
1988-01-01
In this paper, the different ways in which the BN-800 maximum credible accident could develop in the case of primary sodium loss and fire are examined. The most constraining scenario is presented. During the scenario analysis, the accidental release of radioactive materials into the environment has been studied. These releases are below the authorized values. [fr]
Siemann, Julia; Herrmann, Manfred; Galashan, Daniela
2018-01-25
The present study examined whether feature-based cueing affects early or late stages of flanker conflict processing using EEG and fMRI. Feature cues either directed participants' attention to the upcoming colour of the target or were neutral. Validity-specific modulations during interference processing were investigated using the N200 event-related potential (ERP) component and BOLD signal differences. Additionally, both data sets were integrated using an fMRI-constrained source analysis. Finally, the results were compared with a previous study in which spatial instead of feature-based cueing was applied to an otherwise identical flanker task. Feature-based and spatial attention recruited a common fronto-parietal network during conflict processing. Irrespective of attention type (feature-based; spatial), this network responded to focussed attention (valid cueing) as well as context updating (invalid cueing), hinting at domain-general mechanisms. However, spatially and non-spatially directed attention also demonstrated domain-specific activation patterns for conflict processing that were observable in distinct EEG and fMRI data patterns as well as in the respective source analyses. Conflict-specific activity in visual brain regions was comparable between both attention types. We assume that the distinction between spatially and non-spatially directed attention types primarily applies to temporal differences (domain-specific dynamics) between signals originating in the same brain regions (domain-general localization).
Directory of Open Access Journals (Sweden)
Xiang Chen
2015-01-01
Full Text Available Singularity is an inherent characteristic of parallel robots and is also a typical mathematical problem in engineering applications. In general, to identify a singularity configuration, the singular solution in mathematics should be derived. This work introduces an alternative approach to the singularity identification of lower-mobility parallel robots considering the motion/force transmissibility and constrainability. The theory of screws is used as the mathematical tool to define the transmission and constraint indices of parallel robots. The singularity is hereby classified into four types concerning both input and output members of a parallel robot, that is, input transmission singularity, output transmission singularity, input constraint singularity, and output constraint singularity. Furthermore, we take several typical parallel robots as examples to illustrate the process of singularity analysis. In particular, the input and output constraint singularities, which are first proposed in this work, are depicted in detail. The results demonstrate that the method can not only identify all possible singular configurations but also explain their physical meanings. Therefore, the proposed approach is shown to be comprehensible and effective in solving singularity problems in parallel mechanisms.
An easy guide to factor analysis
Kline, Paul
2014-01-01
Factor analysis is a statistical technique widely used in psychology and the social sciences. With the advent of powerful computers, factor analysis and other multivariate methods are now available to many more people. An Easy Guide to Factor Analysis presents and explains factor analysis as clearly and simply as possible. The author, Paul Kline, carefully defines all statistical terms and demonstrates step-by-step how to work out a simple example of principal components analysis and rotation. He further explains other methods of factor analysis, including confirmatory and path analysis, a
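The extraction and rotation steps that the book walks through can be sketched in a few lines of numpy: principal-component extraction from a correlation matrix, followed by a varimax rotation. The synthetic two-factor data set below is hypothetical and only illustrative of the general technique.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic responses driven by two latent factors (hypothetical data).
true_loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                          [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
scores = rng.standard_normal((500, 2))
X = scores @ true_loadings.T + 0.3 * rng.standard_normal((500, 6))

# Extraction: principal components of the correlation matrix.
R = np.corrcoef(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(R)
top = np.argsort(eigval)[::-1][:2]
loadings = eigvec[:, top] * np.sqrt(eigval[top])

# Rotation: classic iterative varimax via SVD of the criterion gradient.
def varimax(L, n_iter=100):
    p, k = L.shape
    rot = np.eye(k)
    for _ in range(n_iter):
        Lr = L @ rot
        u, _, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        rot = u @ vt
    return L @ rot

rotated = varimax(loadings)
```

After rotation each variable loads mainly on one factor (simple structure), which is what makes the solution interpretable.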
Hartman, Brett D; Cleveland, David A
2018-03-01
Restoration ecology holds promise for addressing land degradation in impoverished rural environments, provided the approach is adapted to rural development settings. While there is a need for increased integration of social dynamics in land restoration, few systematic studies exist. We explored the socioeconomic factors that influence restoration management, including local motives and perceived benefits, incentives, land tenancy, institutional factors, conflict resolution, accessibility, off-farm labor, and outmigration. The study area is a successful watershed rehabilitation and wet meadow restoration project in the Bolivian Andes that began in 1992. We used household survey methods (n = 237) to compare the communities that had conducted the most restoration management with those that had conducted the least. Results suggest that several factors facilitate investments in land restoration, including aligning restoration objectives with local motives and perceived benefits, ensuring incentives are in place to stimulate long-term investments, conflict resolution, private land tenancy, and accessibility. However, higher levels of organization and active leadership can facilitate land restoration on communal lands. Increased livelihood benefits from land restoration helped slow the rate of rural to urban migration, with 24.5% outmigration in the highest restoration management communities compared to 62.1% in the lowest restoration management communities. Results suggest that land restoration projects that integrate community development into project planning and implementation will achieve greater success. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hao, Xiaoke; Li, Chanxiu; Yan, Jingwen; Yao, Xiaohui; Risacher, Shannon L; Saykin, Andrew J; Shen, Li; Zhang, Daoqiang
2017-07-15
Neuroimaging genetics identifies the relationships between genetic variants (i.e., single nucleotide polymorphisms) and brain imaging data to reveal the associations from genotypes to phenotypes. So far, most existing machine-learning approaches are widely used to detect the effective associations between genetic variants and brain imaging data at a single time-point. However, those associations are based on static phenotypes and ignore the temporal dynamics of the phenotypical changes. The phenotypes across multiple time-points may exhibit temporal patterns that can be used to facilitate the understanding of the degenerative process. In this article, we propose a novel temporally constrained group sparse canonical correlation analysis (TGSCCA) framework to identify genetic associations with longitudinal phenotypic markers. The proposed TGSCCA method is able to capture the temporal changes in the brain from longitudinal phenotypes by incorporating the fused penalty, which requires that the differences between two consecutive canonical weight vectors from adjacent time-points be small. A new efficient optimization algorithm is designed to solve the objective function. Furthermore, we demonstrate the effectiveness of our algorithm on both synthetic and real data (i.e., the Alzheimer's Disease Neuroimaging Initiative cohort, including progressive mild cognitive impairment (MCI), stable MCI and normal control participants). In comparison with conventional SCCA, our proposed method can achieve strong associations and discover phenotypic biomarkers across multiple time-points to guide disease-progression interpretation. The Matlab code is available at https://sourceforge.net/projects/ibrain-cn/files/. Contact: dqzhang@nuaa.edu.cn or shenli@iu.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
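The fused penalty at the heart of TGSCCA asks the canonical weight vectors at adjacent time-points to differ little. A minimal numpy sketch of that penalty term, on hypothetical weight matrices (not the authors' Matlab code):

```python
import numpy as np

def fused_penalty(W):
    """Sum of L1 differences between canonical weight vectors at
    adjacent time-points (columns of W), i.e. the fused term."""
    return np.abs(np.diff(W, axis=1)).sum()

rng = np.random.default_rng(2)
p, T = 10, 4                      # toy sizes: weights x time-points
base = rng.standard_normal((p, 1))
smooth = np.tile(base, (1, T)) + 0.01 * rng.standard_normal((p, T))
rough = rng.standard_normal((p, T))

# A temporally smooth weight trajectory incurs a far smaller penalty,
# which is what the fused term rewards during optimization.
penalty_smooth = fused_penalty(smooth)
penalty_rough = fused_penalty(rough)
```

Minimizing the SCCA objective plus this term therefore biases the solution toward weights that evolve gradually across visits.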
Czech Academy of Sciences Publication Activity Database
Collett, S.; Štípská, P.; Kusbach, Vladimír; Schulmann, K.; Marciniak, G.
2017-01-01
Roč. 35, č. 3 (2017), s. 253-280 ISSN 0263-4929 Institutional support: RVO:67985530 Keywords : eclogite * Bohemian Massif * thermodynamic modelling * micro-fabric analysis * subduction and exhumation dynamics Subject RIV: DB - Geology ; Mineralogy OBOR OECD: Geology Impact factor: 3.594, year: 2016
Dorman, Michael F; Cook, Sarah; Spahr, Anthony; Zhang, Ting; Loiselle, Louise; Schramm, David; Whittingham, JoAnne; Gifford, Rene
2015-04-01
Many studies have documented the benefits to speech understanding when cochlear implant (CI) patients can access low-frequency acoustic information from the ear opposite the implant. In this study we assessed the role of three factors in determining the magnitude of bimodal benefit: (i) the level of CI-only performance, (ii) the magnitude of the hearing loss in the ear with low-frequency acoustic hearing, and (iii) the type of test material. The patients had low-frequency PTAs (average of 125, 250 and 500 Hz) varying over a large range (70 dB HL) in the ear contralateral to the implant. The patients were tested with (i) CNC words presented in quiet (n = 105), (ii) AzBio sentences presented in quiet (n = 102), (iii) AzBio sentences in noise at +10 dB signal-to-noise ratio (SNR) (n = 69), and (iv) AzBio sentences at +5 dB SNR (n = 64). We find maximum bimodal benefit when (i) CI scores are less than 60 percent correct, (ii) hearing loss is less than 60 dB HL in the low frequencies, and (iii) the test material is sentences presented against a noise background. When these criteria are met, some bimodal patients can gain 40-60 percentage points in performance relative to performance with a CI alone. This article is part of a Special Issue entitled . Copyright © 2014 Elsevier B.V. All rights reserved.
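The low-frequency pure-tone average (PTA) used to characterize the non-implanted ear is simply the mean audiometric threshold at 125, 250 and 500 Hz. A tiny worked example with illustrative (hypothetical) thresholds:

```python
# Low-frequency pure-tone average (PTA): mean of the thresholds at
# 125, 250 and 500 Hz. Threshold values below are hypothetical.
thresholds_db_hl = {125: 40, 250: 50, 500: 60}
low_freq_pta = sum(thresholds_db_hl.values()) / len(thresholds_db_hl)

# A PTA below 60 dB HL would satisfy criterion (ii) from the abstract.
meets_hearing_criterion = low_freq_pta < 60
```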
Baloković, M.; Brightman, M.; Harrison, F. A.; Comastri, A.; Ricci, C.; Buchner, J.; Gandhi, P.; Farrah, D.; Stern, D.
2018-02-01
The basic unified model of active galactic nuclei (AGNs) invokes an anisotropic obscuring structure, usually referred to as a torus, to explain AGN obscuration as an angle-dependent effect. We present a new grid of X-ray spectral templates based on radiative transfer calculations in neutral gas in an approximately toroidal geometry, appropriate for CCD-resolution X-ray spectra (FWHM ≥ 130 eV). Fitting the templates to broadband X-ray spectra of AGNs provides constraints on two important geometrical parameters of the gas distribution around the supermassive black hole: the average column density and the covering factor. Compared to the currently available spectral templates, our model is more flexible, and capable of providing constraints on the main torus parameters in a wider range of AGNs. We demonstrate the application of this model using hard X-ray spectra from NuSTAR (3–79 keV) for four AGNs covering a variety of classifications: 3C 390.3, NGC 2110, IC 5063, and NGC 7582. This small set of examples was chosen to illustrate the range of possible torus configurations, from disk-like to sphere-like geometries with column densities below, as well as above, the Compton-thick threshold. This diversity of torus properties challenges the simple assumption of a standard geometrically and optically thick toroidal structure commonly invoked in the basic form of the unified model of AGNs. Finding broad consistency between our constraints and those from infrared modeling, we discuss how the approach from the X-ray band complements similar measurements of AGN structures at other wavelengths.
International Nuclear Information System (INIS)
Niu, Zhi; Zhao, Yanzhi; Zhao, Tieshi; Cao, Yachao; Liu, Menghua
2017-01-01
An over-constrained, parallel six-dimensional force sensor has various advantages, including its ability to bear heavy loads and provide redundant force measurement information. These advantages render the sensor valuable in important aerospace applications (space docking tests, etc.). The stiffness of each component in the over-constrained structure has a considerable influence on the internal force distribution of the structure. Thus, the measurement model changes when the measurement branches of the sensor are under tensile or compressive force. This study establishes a general measurement model for an over-constrained parallel six-dimensional force sensor considering the different branch tension and compression stiffness values. Numerical calculations and analyses are performed using practical examples. Based on the parallel mechanism, an over-constrained, orthogonal structure is proposed for a six-dimensional force sensor. Hence, a prototype is designed and developed, and a calibration experiment is conducted. The measurement accuracy of the sensor is improved based on the measurement model under different branch tension and compression stiffness values. Moreover, the largest class I error is reduced from 5.81 to 2.23% full scale (FS), and the largest class II error is reduced from 3.425 to 1.871% FS. (paper)
Mozaffar, A.; Schoon, N.; Digrado, A.; Bachy, A.; Delaplace, P.; du Jardin, P.; Fauconnier, M.-L.; Aubinet, M.; Heinesch, B.; Amelynck, C.
2017-03-01
Because of its high abundance and long lifetime compared to other volatile organic compounds in the atmosphere, methanol (CH3OH) plays an important role in atmospheric chemistry. Even though agricultural crops are believed to be a large source of methanol, emission inventories from those crop ecosystems are still scarce and little information is available concerning the driving mechanisms for methanol production and emission at different developmental stages of the plants/leaves. This study focuses on methanol emissions from Zea mays L. (maize), which is widely cultivated throughout the world. Flux measurements have been performed on young plants, almost fully grown leaves and fully grown leaves, enclosed in dynamic flow-through enclosures in a temperature- and light-controlled environmental chamber. Strong differences in the response of methanol emissions to variations in PPFD (Photosynthetic Photon Flux Density) were noticed between the young plants, almost fully grown and fully grown leaves. Moreover, young maize plants showed strong emission peaks following light/dark transitions, for which guttation can be put forward as a hypothetical pathway. Young plants' average daily methanol fluxes exceeded those of almost fully grown and fully grown leaves by a factor of 17 when expressed per leaf area. Absolute flux values were found to be smaller than those reported in the literature, but in fair agreement with recent ecosystem-scale flux measurements above a maize field of the same variety as used in this study. The flux measurements in the current study were used to evaluate the dynamic biogenic volatile organic compound (BVOC) emission model of Niinemets and Reichstein. The modelled and measured fluxes from almost fully grown leaves were found to agree best when a temperature- and light-dependent methanol production function was applied. However, this production function turned out not to be suitable for modelling the observed emissions from the young plants.
Guyen, Olivier; Lewallen, David G; Cabanela, Miguel E
2008-07-01
The Osteonics constrained tripolar implant has been one of the most commonly used options to manage recurrent instability after total hip arthroplasty. Mechanical failures were expected and have been reported. The purpose of this retrospective review was to identify the observed modes of failure of this device. Forty-three failed Osteonics constrained tripolar implants were revised at our institution between September 1997 and April 2005. All revisions related to the constrained acetabular component only were considered as failures. All of the devices had been inserted for recurrent or intraoperative instability during revision procedures. Seven different methods of implantation were used. Operative reports and radiographs were reviewed to identify the modes of failure. The average time to failure of the forty-three implants was 28.4 months. A total of five modes of failure were observed: failure at the bone-implant interface (type I), which occurred in eleven hips; failure at the mechanisms holding the constrained liner to the metal shell (type II), in six hips; failure of the retaining mechanism of the bipolar component (type III), in ten hips; dislocation of the prosthetic head at the inner bearing of the bipolar component (type IV), in three hips; and infection (type V), in twelve hips. The mode of failure remained unknown in one hip that had been revised at another institution. The Osteonics constrained tripolar total hip arthroplasty implant is a complex device involving many parts. We showed that failure of this device can occur at most of its interfaces. It would therefore appear logical to limit its application to salvage situations.
A factor analysis to detect factors influencing building national brand
Directory of Open Access Journals (Sweden)
Naser Azad
Developing a national brand is one of the most important issues in brand development. In this study, we present a factor analysis to detect the most important factors in building a national brand. The proposed study uses factor analysis to extract the most influential factors; the sample was drawn from two major auto makers in Iran, Iran Khodro and Saipa. The questionnaire was designed on a Likert scale and distributed among 235 experts. Cronbach's alpha is calculated as 0.84, which is well above the minimum desirable limit of 0.70. The implementation of factor analysis provides six factors including “cultural image of customers”, “exciting characteristics”, “competitive pricing strategies”, “perception image” and “previous perceptions”.
The Infinitesimal Jackknife with Exploratory Factor Analysis
Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.
2012-01-01
The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…
Raguideau, Sébastien; Plancade, Sandra; Pons, Nicolas; Leclerc, Marion; Laroche, Béatrice
2016-12-01
Whole Genome Shotgun (WGS) metagenomics is increasingly used to study the structure and functions of complex microbial ecosystems, both from the taxonomic and functional point of view. Gene inventories of otherwise uncultured microbial communities make the direct functional profiling of microbial communities possible. The concept of community aggregated trait has been adapted from environmental and plant functional ecology to the framework of microbial ecology. Community aggregated traits are quantified from WGS data by computing the abundance of relevant marker genes. They can be used to study key processes at the ecosystem level and correlate environmental factors and ecosystem functions. In this paper we propose a novel model based approach to infer combinations of aggregated traits characterizing specific ecosystemic metabolic processes. We formulate a model of these Combined Aggregated Functional Traits (CAFTs) accounting for a hierarchical structure of genes, which are associated on microbial genomes, further linked at the ecosystem level by complex co-occurrences or interactions. The model is completed with constraints specifically designed to exploit available genomic information, in order to favor biologically relevant CAFTs. The CAFTs structure, as well as their intensity in the ecosystem, is obtained by solving a constrained Non-negative Matrix Factorization (NMF) problem. We developed a multicriteria selection procedure for the number of CAFTs. We illustrated our method on the modelling of ecosystemic functional traits of fiber degradation by the human gut microbiota. We used 1408 samples of gene abundances from several high-throughput sequencing projects and found that four CAFTs only were needed to represent the fiber degradation potential. This data reduction highlighted biologically consistent functional patterns while providing a high quality preservation of the original data. Our method is generic and can be applied to other metabolic processes in
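The constrained NMF step at the heart of this approach can be illustrated with a plain (unconstrained) factorization; the paper's genomic constraints and multicriteria selection of the number of CAFTs are not reproduced here. A minimal sketch using scikit-learn, with synthetic data standing in for gene abundances:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Synthetic gene-abundance matrix: 50 samples x 30 marker genes,
# generated from 4 latent "traits" so the factorization is recoverable.
W_true = rng.random((50, 4))
H_true = rng.random((4, 30))
X = W_true @ H_true

model = NMF(n_components=4, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)   # trait intensity per sample
H = model.components_        # gene composition of each trait

X_hat = W @ H
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.4f}")
```

Both factors are non-negative by construction, which is what makes the decomposition interpretable as additive trait contributions.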
Analysis of Bernstein's factorization circuit
Lenstra, A.K.; Shamir, A.; Tomlinson, J.; Tromer, E.; Zheng, Y.
2002-01-01
In [1], Bernstein proposed a circuit-based implementation of the matrix step of the number field sieve factorization algorithm. These circuits offer an asymptotic cost reduction under the measure "construction cost x run time". We evaluate the cost of these circuits, in agreement with [1], but argue
Energy Technology Data Exchange (ETDEWEB)
Kartha, Sivan; Baer, Paul; Athanasiou, Tom; Kemp-Benedict, Eric
2008-10-15
This report presents an analysis of the Greenhouse Development Rights framework applied to the case of Sweden. Its objective is to provide useful quantitative guidance on Sweden's role as a leader in our climate constrained world. It presents guidance that is rigorous from the standpoint of climate science and framed in the context of a right to development for the world's poor. This analysis fully accounts for Sweden's true responsibility, by looking beyond territorial emissions alone, and reckoning emissions in terms of Sweden's net 'carbon footprint.' Accounting for carbon embedded in imports, exports and international transport reveals that Sweden's responsibility is 17% larger than would be inferred by considering Sweden's territorial emissions alone. Sweden will naturally have significant obligations under any burden-sharing regime that is based on capacity and responsibility, and only more so under a regime that honors a right to development. Under the GDR framework, our indicative quantification suggests that Sweden's share of responsibility and capacity, and hence its obligation under a politically viable climate regime, will be approximately 0.51% of the global total in 2010. This can be compared to the US's 33%, the EU's 26%, Japan's 7.8%, China's 5.5%, and India's 0.5%. Sweden's 0.51% share of the global total is thus not large in absolute terms, though it is rather large relative to Sweden's small size (0.14% of the global population). These national shares shift over time, as countries' relative proportion of income and emissions change. In light of the emergence of rapidly growing developing country economies, Sweden's share of the global total obligation is projected to decline to 0.43% by 2020, and to 0.35% by 2030. This quantification of Sweden's obligation is useful in two complementary ways. First, if the total global costs of an emergency climate
Multiple factor analysis by example using R
Pagès, Jérôme
2014-01-01
Multiple factor analysis (MFA) enables users to analyze tables of individuals and variables in which the variables are structured into quantitative, qualitative, or mixed groups. Written by the co-developer of this methodology, Multiple Factor Analysis by Example Using R brings together the theoretical and methodological aspects of MFA. It also includes examples of applications and details of how to implement MFA using an R package (FactoMineR).The first two chapters cover the basic factorial analysis methods of principal component analysis (PCA) and multiple correspondence analysis (MCA). The
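MFA's core mechanism — standardize each group of variables, down-weight each group by its first singular value so no group dominates, then run a global PCA — can be sketched directly in NumPy. This is a simplified illustration with invented data, not the FactoMineR implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
# Two groups of variables describing the same 40 individuals,
# e.g. a 3-variable sensory group and a 5-variable chemical group.
groups = [rng.standard_normal((n, 3)), rng.standard_normal((n, 5))]

def mfa(groups, n_components=2):
    """Multiple factor analysis: weight each centred/scaled group by
    1 / (its first singular value), then run a global PCA."""
    blocks = []
    for X in groups:
        Z = (X - X.mean(0)) / X.std(0, ddof=0)      # standardise within group
        s1 = np.linalg.svd(Z, compute_uv=False)[0]  # first singular value
        blocks.append(Z / s1)                       # balance group influence
    G = np.hstack(blocks)
    U, S, Vt = np.linalg.svd(G, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]  # individual coordinates
    return scores, S

scores, S = mfa(groups)
print(scores.shape)
```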
International Nuclear Information System (INIS)
Fang, Guochang; Tian, Lixin; Fu, Min; Sun, Mei
2014-01-01
This paper explores a novel selective-constrained energy-saving and emission-reduction (ESER) dynamic evolution system, analyzing the impact of cost of conserved energy (CCE), government control, low carbon lifestyle and investment in new technology of ESER on energy intensity and economic growth. Based on artificial neural network, the quantitative coefficients of the actual system are identified. Taking the real situation in China for instance, an empirical study is undertaken by adjusting the parameters of the actual system. The dynamic evolution behavior of energy intensity and economic growth in reality are observed, with the results in perfect agreement with actual situation. The research shows that the introduction of CCE into ESER system will have certain restrictive effect on energy intensity in the earlier period. However, with the further development of the actual system, carbon emissions could be better controlled and energy intensity would decline. In the long run, the impacts of CCE on economic growth are positive. Government control and low carbon lifestyle play a decisive role in controlling ESER system and declining energy intensity. But the influence of government control on economic growth should be considered at the same time and the controlling effect of low carbon lifestyle on energy intensity should be strengthened gradually, while the investment in new technology of ESER can be neglected. Two different cases of ESER are proposed after a comprehensive analysis. The relations between variables and constraint conditions in the ESER system are harmonized remarkably. A better solution to carry out ESER is put forward at last, with numerical simulations being carried out to demonstrate the results. - Highlights: • Use of nonlinear dynamical method to model the selective-constrained ESER system. • Monotonic evolution curves of energy intensity and economic growth are obtained. • Detailed analysis of the game between government control and low
Ackermann, M.; Ajello, M.; Albert, A.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.;
2011-01-01
Satellite galaxies of the Milky Way are among the most promising targets for dark matter searches in gamma rays. We present a search for dark matter consisting of weakly interacting massive particles, applying a joint likelihood analysis to 10 satellite galaxies with 24 months of data of the Fermi Large Area Telescope. No dark matter signal is detected. Including the uncertainty in the dark matter distribution, robust upper limits are placed on dark matter annihilation cross sections. The 95% confidence level upper limits range from about 10^-26 cm^3 s^-1 at 5 GeV to about 5 x 10^-23 cm^3 s^-1 at 1 TeV, depending on the dark matter annihilation final state. For the first time, using gamma rays, we are able to rule out models with the most generic cross section (approx. 3 x 10^-26 cm^3 s^-1 for a purely s-wave cross section), without assuming additional boost factors.
Constraining Dark Matter Models from a Combined Analysis of Milky Way Satellites with the Fermi Large Area Telescope
Energy Technology Data Exchange (ETDEWEB)
Ackermann, M.; Ajello, M.; Albert, A.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; et al. (Fermi LAT Collaboration)
2012-09-14
Satellite galaxies of the Milky Way are among the most promising targets for dark matter searches in gamma rays. We present a search for dark matter consisting of weakly interacting massive particles, applying a joint likelihood analysis to 10 satellite galaxies with 24 months of data of the Fermi Large Area Telescope. No dark matter signal is detected. Including the uncertainty in the dark matter distribution, robust upper limits are placed on dark matter annihilation cross sections. The 95% confidence level upper limits range from about 10^-26 cm^3 s^-1 at 5 GeV to about 5 x 10^-23 cm^3 s^-1 at 1 TeV, depending on the dark matter annihilation final state. For the first time, using gamma rays, we are able to rule out models with the most generic cross section (approx. 3 x 10^-26 cm^3 s^-1 for a purely s-wave cross section), without assuming additional boost factors.
Analysis of technological, institutional and socioeconomic factors ...
African Journals Online (AJOL)
Analysis of technological, institutional and socioeconomic factors that influences poor reading culture among secondary school students in Nigeria. ... Proliferation and availability of smart phones, chatting culture and social media were identified as technological factors influencing poor reading culture among secondary ...
Directory of Open Access Journals (Sweden)
Melody K Morris
2011-03-01
Predictive understanding of cell signaling network operation based on general prior knowledge but consistent with empirical data in a specific environmental context is a current challenge in computational biology. Recent work has demonstrated that Boolean logic can be used to create context-specific network models by training proteomic pathway maps to dedicated biochemical data; however, the Boolean formalism is restricted to characterizing protein species as either fully active or inactive. To advance beyond this limitation, we propose a novel form of fuzzy logic sufficiently flexible to model quantitative data but also sufficiently simple to efficiently construct models by training pathway maps on dedicated experimental measurements. Our new approach, termed constrained fuzzy logic (cFL), converts a prior knowledge network (obtained from literature or interactome databases) into a computable model that describes graded values of protein activation across multiple pathways. We train a cFL-converted network to experimental data describing hepatocytic protein activation by inflammatory cytokines and demonstrate the application of the resultant trained models for three important purposes: (a) generating experimentally testable biological hypotheses concerning pathway crosstalk, (b) establishing capability for quantitative prediction of protein activity, and (c) prediction and understanding of the cytokine release phenotypic response. Our methodology systematically and quantitatively trains a protein pathway map summarizing curated literature to context-specific biochemical data. This process generates a computable model yielding successful prediction of new test data and offering biological insight into complex datasets that are difficult to fully analyze by intuition alone.
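The gist of a cFL-style model — graded transfer functions on network edges combined by logic gates — can be sketched as follows. This is a toy illustration with assumed Hill parameters and an invented two-input node, not the implementation used in the paper:

```python
def hill(x, k=0.5, n=3):
    """Normalised Hill transfer function mapping input activity in [0, 1]
    to graded downstream activation in [0, 1] (f(1) = 1 by construction)."""
    return (x ** n / (k ** n + x ** n)) * (k ** n + 1.0)

# Toy two-input pathway node: two cytokine inputs feed one node; cFL-style
# gates combine the graded edge outputs with min (AND) or max (OR).
def node_activity(in_a, in_b, gate="OR"):
    a, b = hill(in_a), hill(in_b)
    return max(a, b) if gate == "OR" else min(a, b)

print(round(node_activity(1.0, 0.0, "OR"), 3))   # one active input suffices
print(round(node_activity(1.0, 0.0, "AND"), 3))  # AND requires both inputs
```

Training a cFL model then amounts to fitting the gate choices and Hill parameters (k, n) against measured protein activation data.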
Hand function evaluation: a factor analysis study.
Jarus, T; Poremba, R
1993-05-01
The purpose of this study was to investigate hand function evaluations. Factor analysis with varimax rotation was used to assess the fundamental characteristics of the items included in the Jebsen Hand Function Test and the Smith Hand Function Evaluation. The study sample consisted of 144 subjects without disabilities and 22 subjects with Colles fracture. Results suggest a four factor solution: Factor I--pinch movement; Factor II--grasp; Factor III--target accuracy; and Factor IV--activities of daily living. These categories differentiated the subjects without Colles fracture from the subjects with Colles fracture. A hand function evaluation consisting of these four factors would be useful. Such an evaluation that can be used for current clinical purposes is provided.
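A varimax-rotated factor analysis of the kind described can be sketched with scikit-learn. The simulated item data and the two-factor structure below are assumptions for illustration, standing in for the hand-function test scores:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Simulate 144 subjects x 8 test items driven by 2 latent abilities
# (e.g. a "pinch/grasp" factor and a "target accuracy" factor).
L = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0], [0.8, 0.2],
              [0.1, 0.9], [0.0, 0.8], [0.2, 0.7], [0.0, 0.8]])
F = rng.standard_normal((144, 2))
X = F @ L.T + 0.3 * rng.standard_normal((144, 8))

fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(X)
loadings = fa.components_.T          # 8 items x 2 rotated factors
print(np.round(np.abs(loadings), 2))
```

Varimax rotation drives each item toward a single dominant factor, which is what lets the rotated loadings be read off as item groupings like "pinch", "grasp", and so on.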
Cao, Guangxi; Zhang, Minjia; Li, Qingchen
2017-04-01
This study focuses on multifractal detrended cross-correlation analysis of the different volatility intervals of Mainland China, US, and Hong Kong stock markets. A volatility-constrained multifractal detrended cross-correlation analysis (VC-MF-DCCA) method is proposed to study the volatility conductivity of Mainland China, US, and Hong Kong stock markets. Empirical results indicate that fluctuation may be related to important activities in real markets. The Hang Seng Index (HSI) stock market is more influential than the Shanghai Composite Index (SCI) stock market. Furthermore, the SCI stock market is more influential than the Dow Jones Industrial Average stock market. The conductivity between the HSI and SCI stock markets is the strongest. HSI was the most influential market in the large fluctuation interval of 1991 to 2014. The autoregressive fractionally integrated moving average method is used to verify the validity of VC-MF-DCCA. Results show that VC-MF-DCCA is effective.
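The detrended cross-correlation building block underlying VC-MF-DCCA can be sketched for a single window size; the multifractal q-order moments and the volatility-constrained window selection of the paper are omitted here, and the two synthetic series are invented stand-ins for index returns:

```python
import numpy as np

def dcca_fluctuation(x, y, n):
    """Detrended cross-covariance fluctuation F(n) for one window size n
    (plain DCCA; VC-MF-DCCA adds q-order moments and volatility
    constraints on top of this building block)."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    t = np.arange(n)
    covs = []
    for start in range(0, len(X) - n + 1, n):
        xs, ys = X[start:start + n], Y[start:start + n]
        # Remove a linear trend from each profile segment.
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        covs.append(np.mean(rx * ry))
    return np.sqrt(np.abs(np.mean(covs)))

rng = np.random.default_rng(2)
common = rng.standard_normal(1000)       # shared driver of both series
x = common + 0.5 * rng.standard_normal(1000)
y = common + 0.5 * rng.standard_normal(1000)
for n in (10, 40, 160):
    print(n, round(dcca_fluctuation(x, y, n), 3))
```

The scaling of F(n) with n (on log-log axes) gives the cross-correlation exponent; repeating this for a range of moment orders q yields the multifractal spectrum.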
Gow, David W; Olson, Bruna B
2015-07-01
Phonotactic frequency effects play a crucial role in a number of debates over language processing and representation. It is unclear, however, whether these effects reflect prelexical sensitivity to phonotactic frequency or lexical "gang effects" in speech perception. In this paper, we use Granger causality analysis of MR-constrained MEG/EEG data to understand how phonotactic frequency influences neural processing dynamics during auditory lexical decision. Effective connectivity analysis showed weaker feedforward influence from brain regions involved in acoustic-phonetic processing (superior temporal gyrus) to lexical areas (supramarginal gyrus) for high phonotactic frequency words, but stronger top-down lexical influence for the same items. Low entropy nonwords (nonwords judged to closely resemble real words) showed a similar pattern of interactions between brain regions involved in lexical and acoustic-phonetic processing. These results contradict the predictions of a feedforward model of phonotactic frequency facilitation, but support the predictions of a lexically mediated account.
Integrating human factors into process hazard analysis
International Nuclear Information System (INIS)
Kariuki, S.G.; Loewe, K.
2007-01-01
A comprehensive process hazard analysis (PHA) needs to address human factors. This paper describes an approach that systematically identifies human error in process design and the human factors that influence its production and propagation. It is deductive in nature and therefore considers human error as a top event. The combinations of different factors that may lead to this top event are analysed. The approach is qualitative and is used in combination with other PHA methods. The method has the advantage that it does not treat operator error as the sole contributor to human failure within a system, but as the outcome of a combination of underlying factors.
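The deductive, top-event structure described can be sketched as a miniature fault tree with OR/AND gates over performance-shaping factors. All factor names and probabilities below are invented for illustration:

```python
# Minimal fault-tree sketch of a deductive human-error analysis:
# the top event "operator error" is decomposed with OR/AND gates over
# independent underlying factors (probabilities are assumed).

def or_gate(*p):   # at least one cause occurs
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):  # all causes must occur together
    q = 1.0
    for pi in p:
        q *= pi
    return q

fatigue, poor_hmi, time_pressure, no_training = 0.05, 0.02, 0.10, 0.01
# Error occurs if (fatigue AND time pressure) OR poor HMI OR missing training.
p_top = or_gate(and_gate(fatigue, time_pressure), poor_hmi, no_training)
print(f"P(top event) = {p_top:.4f}")
```

In the qualitative use described by the paper, only the gate structure (the minimal cut sets) matters; the probabilities here merely show how the same tree quantifies.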
Athwal, Kiron K; El Daou, Hadi; Inderhaug, Eivind; Manning, William; Davies, Andrew J; Deehan, David J; Amis, Andrew A
2017-08-01
The aim of this study was to quantify the medial soft tissue contributions to stability following constrained condylar (CC) total knee arthroplasty (TKA) and determine whether a medial reconstruction could restore stability to a soft tissue-deficient, CC-TKA knee. Eight cadaveric knees were mounted in a robotic system and tested at 0°, 30°, 60°, and 90° of flexion with ±50 N anterior-posterior force, ±8 Nm varus-valgus, and ±5 Nm internal-external torque. The deep and superficial medial collateral ligaments (dMCL, sMCL) and posteromedial capsule (PMC) were transected and their relative contributions to stabilising the applied loads were quantified. After complete medial soft tissue transection, a reconstruction using a semitendinosus tendon graft was performed, and the effect on kinematic behaviour under equivocal conditions was measured. In the CC-TKA knee, the sMCL was the major medial restraint in anterior drawer, internal-external, and valgus rotation. No significant differences were found between the rotational laxities of the reconstructed knee to the pre-deficient state for the arc of motion examined. The relative contribution of the reconstruction was higher in valgus rotation at 60° than the sMCL; otherwise, the contribution of the reconstruction was similar to that of the sMCL. There is contention whether a CC-TKA can function with medial deficiency or more constraint is required. This work has shown that a CC-TKA may not provide enough stability with an absent sMCL. However, in such cases, combining the CC-TKA with a medial soft tissue reconstruction may be considered as an alternative to a hinged implant.
Crosley, M. K.; Osten, R. A.
2018-03-01
Stellar coronal mass ejections remain experimentally unconstrained, unlike their stellar flare counterparts, which are observed ubiquitously across the electromagnetic spectrum. Low-frequency radio bursts in the form of a type II burst offer the best means of identifying and constraining the rate and properties of stellar CMEs. CME properties can be further improved through the use of proposed solar-stellar scaling relations and multi-wavelength observations of CMEs through the use of type II bursts and the associated flares expected to occur alongside them. We report on 20 hr of observation of the nearby, magnetically active, and well-characterized M dwarf star EQ Peg. The observations were made simultaneously with the Jansky Very Large Array in P band (230–470 MHz) and at the Apache Point Observatory in the SDSS u′ filter (λ = 3557 Å). Dynamic spectra of the P-band data, constructed to search for signals in the frequency-time domain, did not reveal evidence of drifting radio bursts that could be ascribed to type II bursts. Given the sensitivity of our observations, we are able to place limits on the brightness temperature and source size of any bursts that may have occurred. Using solar scaling relations on four observed stellar flares, we predict CME parameters. Given the constraints on coronal density and photospheric field strength, our models suggest that the observed flares would have been insufficient to produce detectable type II bursts at our observed frequencies. We consider the implications of these results, and other recent findings, on stellar mass loss.
Evolutionary constrained optimization
Deb, Kalyanmoy
2015-01-01
This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single and multi-objective optimization; penalty function based methodology; multi-objective based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining popularity due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...
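A minimal example of one covered theme, penalty-function constraint handling inside a simple (mu + lambda) evolution strategy, might look like this. The problem, penalty weight, and strategy parameters are all invented for illustration, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(p):  # objective: minimize distance to (1, 1)
    return (p[0] - 1.0) ** 2 + (p[1] - 1.0) ** 2

def g(p):  # constraint g(p) <= 0, here: x + y <= 1
    return p[0] + p[1] - 1.0

def penalized(p, r=100.0):
    """Static penalty: add r * violation^2 for infeasible points."""
    return f(p) + r * max(0.0, g(p)) ** 2

# Simple (mu + lambda) evolution strategy with a decaying mutation step.
mu, lam, sigma = 10, 40, 0.3
pop = rng.uniform(-2, 2, size=(mu, 2))
for gen in range(200):
    parents = pop[rng.integers(0, mu, size=lam)]
    offspring = parents + sigma * rng.standard_normal((lam, 2))
    both = np.vstack([pop, offspring])
    both = both[np.argsort([penalized(p) for p in both])]
    pop = both[:mu]          # survivor selection on penalized fitness
    sigma *= 0.98            # cool the mutation step

best = pop[0]
print(np.round(best, 3), round(f(best), 3))  # near (0.5, 0.5), f near 0.5
```

The constrained optimum lies on the boundary at (0.5, 0.5); a static penalty leaves a small bias toward slight infeasibility, which is exactly the weakness the book's more advanced constraint-handling mechanisms address.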
Analysis of Economic Factors Affecting Stock Market
Xie, Linyin
2010-01-01
This dissertation concentrates on the analysis of economic factors affecting the Chinese stock market by examining the relationship between the stock market index and economic factors. Six economic variables are examined: industrial production, money supply 1, money supply 2, exchange rate, long-term government bond yield, and real estate total value. The stock market comprises fixed-interest stocks and equity shares; in this dissertation, the stock market is restricted to the equity market. The stock price in thi...
Trautmann-Lengsfeld, Sina Alexa; Domínguez-Borràs, Judith; Escera, Carles; Herrmann, Manfred; Fehr, Thorsten
2013-01-01
A recent functional magnetic resonance imaging (fMRI) study by our group demonstrated that dynamic emotional faces are more accurately recognized and evoked more widespread patterns of hemodynamic brain responses than static emotional faces. Based on this experimental design, the present study aimed at investigating the spatio-temporal processing of static and dynamic emotional facial expressions in 19 healthy women by means of multi-channel electroencephalography (EEG), event-related potentials (ERP) and fMRI-constrained regional source analyses. ERP analysis showed an increased amplitude of the LPP (late posterior positivity) over centro-parietal regions for static facial expressions of disgust compared to neutral faces. In addition, the LPP was more widespread and temporally prolonged for dynamic compared to static faces of disgust and happiness. fMRI constrained source analysis on static emotional face stimuli indicated the spatio-temporal modulation of predominantly posterior regional brain activation related to the visual processing stream for both emotional valences when compared to the neutral condition in the fusiform gyrus. The spatio-temporal processing of dynamic stimuli yielded enhanced source activity for emotional compared to neutral conditions in temporal (e.g., fusiform gyrus), and frontal regions (e.g., ventromedial prefrontal cortex, medial and inferior frontal cortex) in early and again in later time windows. The present data support the view that dynamic facial displays trigger more information reflected in complex neural networks, in particular because of their changing features potentially triggering sustained activation related to a continuing evaluation of those faces. A combined fMRI and EEG approach thus provides an advanced insight to the spatio-temporal characteristics of emotional face processing, by also revealing additional neural generators, not identifiable by the only use of an fMRI approach. PMID:23818974
Factor Economic Analysis at Forestry Enterprises
Directory of Open Access Journals (Sweden)
M.Yu. Chik
2018-03-01
The article examines the importance of economic analysis, drawing on the scientific works of domestic and foreign scientists. The influence of factors on the change in the cost of harvesting timber products is calculated by cost item. The influence of factors on costs per 1 UAH of sold products is determined using the full cost of sold products. Variable and fixed costs are separated, and their distribution affects the calculation of the impact of factors on cost changes per 1 UAH of sold products. The paper summarizes the overall results of calculating the influence of factors on cost changes per 1 UAH of sold products. Based on the results of the analysis, a list of reserves for reducing the cost of production at forestry enterprises is proposed. The main sources of reserves for reducing the prime cost of forest products at forestry enterprises are investigated on the basis of the conducted factor analysis.
An SPSS R-Menu for Ordinal Factor Analysis
Directory of Open Access Journals (Sweden)
Mario Basto
2012-01-01
Exploratory factor analysis is a widely used statistical technique in the social sciences. It attempts to identify underlying factors that explain the pattern of correlations within a set of observed variables. A statistical software package is needed to perform the calculations. However, there are some limitations with popular statistical software packages, like SPSS. The R programming language is a free software package for statistical and graphical computing. It offers many packages written by contributors from all over the world and programming resources that allow it to overcome the dialog limitations of SPSS. This paper offers an SPSS dialog written in the R programming language with the help of some packages, so that researchers with little or no knowledge in programming, or those who are accustomed to making their calculations based on statistical dialogs, have more options when applying factor analysis to their data and hence can adopt a better approach when dealing with ordinal, Likert-type data.
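A rank-based stand-in for the ordinal factor analysis such a dialog performs can be sketched as follows. A polychoric correlation matrix would be the ideal input for Likert-type data; Spearman correlations are used here as a simpler proxy, and the simulated responses are invented for illustration:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Simulate 300 respondents x 6 Likert items (1-5) from one latent trait.
latent = rng.standard_normal(300)
items = np.column_stack([
    np.digitize(latent + 0.8 * rng.standard_normal(300),
                [-1.5, -0.5, 0.5, 1.5]) + 1
    for _ in range(6)
])

# For ordinal data, a rank-based (or, ideally, polychoric) correlation
# matrix replaces Pearson correlations before factoring.
R, _ = spearmanr(items)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

loadings = eigvecs[:, :1] * np.sqrt(eigvals[:1])  # one-factor solution
print(np.round(np.abs(loadings.ravel()), 2))
```

Because all six items share one latent trait, each item's loading on the first factor comes out substantial, matching what the one-factor simulation built in.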
International Nuclear Information System (INIS)
Guzzardo, T.; Livesay, J.
2011-01-01
Researchers at Oak Ridge National Laboratory (ORNL) developed the Adaptable, Multiplatform, Real-Time Analysis Package (AMRAP) for the continuous measurement of environmental radionuclide decay. AMRAP is a completely open source visualization and analysis package capable of combining a variety of data streams into an array of real-time plots. Once acquired, data streams are analyzed to store static images and extract data based on previously defined thresholds. AMRAP is currently used at ORNL to combine data streams from an Ortec Detective high-purity germanium (HPGe) detector, a TSA Systems radiation portal monitor (RPM), and an Orion weather station. The combined data are used to study the rain-induced increase in RPM background radiation levels. RPMs experience an increase in background radiation during precipitation due to the deposition of atmospheric radionuclides on the ground. Using AMRAP results in a real-time analysis workstation specifically dedicated to the study of RPM background radiation levels. By means of an editable library of common inputs, AMRAP is adaptable to remote monitoring applications that would benefit from the real-time visualization and analysis of radiation measurements.
DEFF Research Database (Denmark)
Backman, Tyler W.H.; Ando, David; Singh, Jahnavi
2018-01-01
Determination of internal metabolic fluxes is crucial for fundamental and applied biology because they map how carbon and electrons flow through metabolism to enable cell function. 13C Metabolic Flux Analysis (13C MFA) and Two-Scale 13C Metabolic Flux Analysis (2S-13C MFA) are two techniques used… for a minimum of fluxes into core metabolism to satisfy these experimental constraints. Together, these methods accelerate and automate the identification of a biologically reasonable set of core reactions for use with 13C MFA or 2S-13C MFA, as well as provide for a substantially lower set of flux bounds…
Exploring Constrained Creative Communication
DEFF Research Database (Denmark)
Sørensen, Jannick Kirk
2017-01-01
Creative collaboration via online tools offers a less ‘media rich’ exchange of information between participants than face-to-face collaboration. The participants’ freedom to communicate is restricted in the means of communication, and rectified in terms of the possibilities offered in the interface. How do… these constraints influence the creative process and the outcome? In order to isolate the communication problem from the interface and technology problem, we examine, via a design game, the creative communication on an open-ended task in a highly constrained setting. Via an experiment, the relation… between communicative constraints and participants’ perception of dialogue and creativity is examined. Four batches of the game, with students preparing to form semester project groups, were conducted and documented. Students were asked to create an unspecified object without any exchange of communication except…
Choosing health, constrained choices.
Chee Khoon Chan
2009-12-01
In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Directory of Open Access Journals (Sweden)
Carmen BOGHEAN
2013-12-01
Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the system of relationships and interdependence between factors), which differ in each economic sector and give rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the factors underlying average labour productivity in agriculture, forestry and fishing. The analysis takes into account data on the economically active population and the gross value added in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of average labour productivity over the factors affecting it is carried out by means of the u-substitution method.
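If the u-substitution method here denotes the familiar chain-substitution decomposition of a two-factor ratio (an assumption on my part, not stated in the abstract), the calculation can be sketched with invented figures:

```python
# Chain-substitution sketch: split the change in average labour productivity
# W = Q / L into a gross-value-added effect and an active-population effect.
# q0/l0 and q1/l1 are invented base-year and current-year figures.
q0, l0 = 100.0, 50.0
q1, l1 = 120.0, 48.0

w0, w1 = q0 / l0, q1 / l1
effect_q = q1 / l0 - q0 / l0  # substitute Q first, holding L at its base value
effect_l = q1 / l1 - q1 / l0  # then substitute L
total = w1 - w0               # the two effects sum exactly to the total change
```

The decomposition is exact by construction: the two substitution effects telescope to the total change in productivity.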
Nominal Performance Biosphere Dose Conversion Factor Analysis
International Nuclear Information System (INIS)
M. Wasiolek
2004-01-01
This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed description of the model input parameters, their development, and the relationship between the parameters and specific features, events, and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario for the three climate states considered in the TSPA-LA, as well as conversion factors for evaluating compliance with the groundwater protection standard. The BDCFs will be used in performance assessment for calculating all-pathway annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity in groundwater and the annual dose from beta- and photon-emitting radionuclides.
Regression and kriging analysis for grid power factor estimation
Directory of Open Access Journals (Sweden)
Rajesh Guntaka
2014-12-01
The measurement of power factor (PF) in electrical utility grids is a mainstay of load balancing and a critical element of transmission and distribution efficiency. The measurement of PF dates back to the earliest periods of electrical power distribution to public grids. In the wide-area distribution grid, measurement of current waveforms is trivial and may be accomplished at any point in the grid using a current tap transformer. Voltage measurement, however, requires a reference to ground and so is more problematic; measurements are normally constrained to points that have ready and easy access to a ground source. We present two mathematical analysis methods, based on kriging and on linear least-squares estimation (LLSE, i.e., regression), to derive the PF at nodes with unknown voltages that lie within a perimeter of sampled nodes with ground reference across a selected power grid. Our results indicate an average error of 1.884%, which is within acceptable tolerances for PF measurements used in load-balancing tasks.
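A minimal sketch of the LLSE (regression) half of this approach: fit PF as a linear function of node position from the grounded sample nodes, then predict at a node without a ground reference. The coordinates, PF readings, and choice of regressors below are invented for illustration; the paper's actual regressors are not specified here.

```python
import numpy as np

# Sample nodes with ground reference: (x, y) position and measured PF (invented).
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 1.5]])
pf = np.array([0.95, 0.93, 0.96, 0.92, 0.94])

# LLSE: fit PF ~ a + b*x + c*y by ordinary least squares.
A = np.column_stack([np.ones(len(coords)), coords])
beta, *_ = np.linalg.lstsq(A, pf, rcond=None)

# Estimate PF at an interior node that has no ground reference.
pf_est = np.array([1.0, 0.75, 0.5]) @ beta
```

Kriging would replace the global linear trend with a spatial-covariance-weighted interpolator, but the fit-then-predict workflow is the same.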
Tokariev, Anton; Vanhatalo, Sampsa; Palva, J Matias
2016-01-01
To assess how the recording montage in the neonatal EEG influences the detection of cortical source signals and their phase interactions. Scalp EEG was simulated by forward modeling 20-200 simultaneously active sources covering the cortical surface of a realistic neonatal head model. We assessed systematically how the number of scalp electrodes (11-85), analysis montage, or the size of cortical sources affect the detection of cortical phase synchrony. Statistical metrics were developed for quantifying the resolution and reliability of the montages. The findings converge to show that an increase in the number of recording electrodes leads to a systematic improvement in the detection of true cortical phase synchrony. While there is always a ceiling effect with respect to discernible cortical details, we show that the average and Laplacian montages exhibit superior specificity and sensitivity as compared to other conventional montages. Reliability in assessing true neonatal cortical synchrony is directly related to the choice of EEG recording and analysis configurations. Because of the high conductivity of the neonatal skull, the conventional neonatal EEG recordings are spatially far too sparse for pertinent studies, and this loss of information cannot be recovered by re-montaging during analysis. Future neonatal EEG studies will need prospective planning of recording configuration to allow analysis of spatial details required by each study question. Our findings also advise on the level of detail in brain synchrony that can be studied with existing datasets or by using conventional EEG recordings. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Kerschke, Pascal
2017-01-01
Choosing the best-performing optimizer(s) out of a portfolio of optimization algorithms is usually a difficult and complex task. It gets even worse, if the underlying functions are unknown, i.e., so-called Black-Box problems, and function evaluations are considered to be expensive. In the case of continuous single-objective optimization problems, Exploratory Landscape Analysis (ELA) - a sophisticated and effective approach for characterizing the landscapes of such problems by means of numeric...
Directory of Open Access Journals (Sweden)
Jyuo-Min Shyu
2010-11-01
A great deal of work has been done to develop techniques for odor analysis by electronic nose systems. These analyses mostly focus on identifying a particular odor by comparison with a known odor dataset. However, in many situations it would be more practical if each individual odorant could be determined directly. This paper proposes two methods for such odor-component analysis in electronic nose systems. First, a K-nearest-neighbor (KNN)-based local weighted nearest neighbor (LWNN) algorithm is proposed to determine the components of an odor. The odor training data are first categorized into several groups, each of which is represented by its centroid, and the examined odor is classified as the class of the nearest centroid. The distance between the examined odor and a centroid is calculated with a weighting scheme that captures the local structure of each predefined group. To further determine the concentration of each component, odor models are built by regression, and a weighted and constrained least-squares (WCLS) method is proposed to estimate the component concentrations. Experiments were carried out to assess the effectiveness of the proposed methods. The LWNN algorithm is able to classify mixed odors with different mixing ratios, while the WCLS method provides good estimates of component concentrations.
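The constrained estimation step can be sketched with plain nonnegative least squares standing in for the paper's WCLS (the local weighting scheme is omitted); the sensor-response matrix and the mixing ratio below are invented:

```python
import numpy as np
from scipy.optimize import nnls

# Columns: responses of three sensors to two pure odorants (invented values).
S = np.array([[1.0, 0.2],
              [0.3, 0.9],
              [0.5, 0.5]])
true_conc = np.array([0.7, 0.3])
mixture = S @ true_conc  # idealized linear mixing of the two odorants, no noise

# Constrained least squares: component concentrations must be nonnegative.
conc, rnorm = nnls(S, mixture)
```

With noise-free linear mixing and a full-rank sensor matrix, the nonnegativity-constrained solution recovers the original mixing ratio.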
Factor analysis for exercise stress radionuclide ventriculography
International Nuclear Information System (INIS)
Hirota, Kazuyoshi; Yasuda, Mitsutaka; Oku, Hisao; Ikuno, Yoshiyasu; Takeuchi, Kazuhide; Takeda, Tadanao; Ochi, Hironobu
1987-01-01
Using factor analysis, a new image processing technique in exercise stress radionuclide ventriculography, changes in factors associated with exercise were evaluated in 14 patients with angina pectoris or old myocardial infarction. The patients were imaged in the left anterior oblique projection, and three factor images were presented on a color-coded scale. Abnormal factors (AF) were observed in 6 patients before exercise, 13 during exercise, and 4 after exercise. In 7 patients, the occurrence of AF was associated with exercise. Five of them became free from AF after exercise. Three patients showing AF before exercise had aggravation of AF during exercise. Overall, the occurrence or aggravation of AF was associated with exercise in ten (71%) of the patients. The other three patients, however, had disappearance of AF during exercise. In the last patient, no AF was observed throughout the study. In view of the high incidence of AF associated with exercise, factor analysis may have potential for evaluating cardiac reserve from the viewpoint of left ventricular wall motion abnormality. (Namekawa, K.)
Correction factor for hair analysis by PIXE
International Nuclear Information System (INIS)
Montenegro, E.C.; Baptista, G.B.; Castro Faria, L.V. de; Paschoa, A.S.
1980-01-01
The application of the Particle Induced X-ray Emission (PIXE) technique to analyse quantitatively the elemental composition of hair specimens brings about some difficulties in the interpretation of the data. The present paper proposes a correction factor to account for the effects of the energy loss of the incident particle with penetration depth, and X-ray self-absorption when a particular geometrical distribution of elements in hair is assumed for calculational purposes. The correction factor has been applied to the analysis of hair contents Zn, Cu and Ca as a function of the energy of the incident particle. (orig.)
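The shape of such a correction factor can be sketched numerically for a uniform element distribution. The attenuation coefficient, probed depth, and cross-section energy dependence below are placeholder assumptions of mine, not values from the paper:

```python
import numpy as np

# Ideal yield assumes surface conditions throughout; the real yield falls off
# with depth because the beam loses energy (lowering the ionization cross
# section) and outgoing X-rays are self-absorbed. All numbers are invented.
mu = 50.0                      # X-ray attenuation coefficient (1/cm), assumed
depth = 0.01                   # probed depth / particle range (cm), assumed
x = np.linspace(0.0, depth, 1000)
dx = x[1] - x[0]
energy = 1.0 - x / depth       # normalized beam energy vs depth (linear slowing, assumed)
sigma = energy ** 1.5          # schematic cross-section energy dependence
yield_real = np.sum(sigma * np.exp(-mu * x)) * dx
yield_ideal = np.sum(np.ones_like(x)) * dx
correction = yield_ideal / yield_real
```

Because both effects only reduce the detected yield relative to surface conditions, the correction factor is always greater than one in this sketch, and it grows with incident-particle energy loss and X-ray attenuation.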
Boolean Factor Analysis by Attractor Neural Network
Czech Academy of Sciences Publication Activity Database
Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.
2007-01-01
Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007
Correction factor for hair analysis by PIXE
International Nuclear Information System (INIS)
Montenegro, E.C.; Baptista, G.B.; Castro Faria, L.V. de; Paschoa, A.S.
1979-06-01
The application of the Particle Induced X-ray Emission (PIXE) technique to analyse quantitatively the elemental composition of hair specimens brings about some difficulties in the interpretation of the data. The present paper proposes a correction factor to account for the effects of energy loss of the incident particle with penetration depth, and X-ray self-absorption when a particular geometrical distribution of elements in hair is assumed for calculational purposes. The correction factor has been applied to the analysis of hair contents Zn, Cu and Ca as a function of the energy of the incident particle. (Author)
Nominal Performance Biosphere Dose Conversion Factor Analysis
Energy Technology Data Exchange (ETDEWEB)
M.A. Wasiolek
2003-07-25
This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports (BSC 2003 [DIRS 160964]; BSC 2003 [DIRS 160965]; BSC 2003 [DIRS 160976]; BSC 2003 [DIRS 161239]; BSC 2003 [DIRS 161241]) contain detailed description of the model input parameters. This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs and conversion factors for the TSPA. The BDCFs will be used in performance assessment for calculating annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity in groundwater and the annual dose from beta- and photon-emitting radionuclides.
Gong, Maozhen
Selecting an appropriate prior distribution is a fundamental issue in Bayesian Statistics. In this dissertation, under the framework provided by Berger and Bernardo, I derive the reference priors for several models which include: Analysis of Variance (ANOVA)/Analysis of Covariance (ANCOVA) models with a categorical variable under common ordering constraints, the conditionally autoregressive (CAR) models and the simultaneous autoregressive (SAR) models with a spatial autoregression parameter rho considered. The performances of reference priors for ANOVA/ANCOVA models are evaluated by simulation studies with comparisons to Jeffreys' prior and Least Squares Estimation (LSE). The priors are then illustrated in a Bayesian model of the "Risk of Type 2 Diabetes in New Mexico" data, where the relationship between the type 2 diabetes risk (through Hemoglobin A1c) and different smoking levels is investigated. In both simulation studies and real data set modeling, the reference priors that incorporate internal order information show good performances and can be used as default priors. The reference priors for the CAR and SAR models are also illustrated in the "1999 SAT State Average Verbal Scores" data with a comparison to a Uniform prior distribution. Due to the complexity of the reference priors for both CAR and SAR models, only a portion (12 states in the Midwest) of the original data set is considered. The reference priors can give a different marginal posterior distribution compared to a Uniform prior, which provides an alternative for prior specifications for areal data in Spatial statistics.
Nominal Performance Biosphere Dose Conversion Factor Analysis
Energy Technology Data Exchange (ETDEWEB)
M.A. Wasiolek
2005-04-28
This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed description of the model input parameters, their development, and the relationship between the parameters and specific features, events, and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis'' (Figure 1-1).
Nominal Performance Biosphere Dose Conversion Factor Analysis
International Nuclear Information System (INIS)
M.A. Wasiolek
2005-01-01
This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed description of the model input parameters, their development, and the relationship between the parameters and specific features, events, and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis'' (Figure 1-1). The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario, as well as conversion factors for evaluating compliance with the groundwater protection standards.
Constrained superfields in supergravity
Energy Technology Data Exchange (ETDEWEB)
Dall’Agata, Gianguido; Farakos, Fotis [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)
2016-02-16
We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.
Minimal constrained supergravity
Energy Technology Data Exchange (ETDEWEB)
Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dall' Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)
2017-01-10
We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.
Minimal constrained supergravity
Directory of Open Access Journals (Sweden)
N. Cribiori
2017-01-01
We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.
Minimal constrained supergravity
International Nuclear Information System (INIS)
Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.
2017-01-01
We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.
International Nuclear Information System (INIS)
Sartandel, S.J.; Jha, S.K.; Puranik, V.D.
2012-01-01
In this study, an accurate and faster gamma spectrometry method for measuring low-level activity concentrations of 137Cs using an in situ pre-concentration technique on a copper ferrocyanide cartridge was standardized. Due to the unavailability of a reference standard in the copper ferrocyanide matrix, efficiency calibration curves were plotted using RGU and RGTh reference standards. To harmonize the difference in density between standard and sample, the required density correction factors for photopeak efficiency were generated. The in situ pre-concentration technique followed by gamma-ray spectrometry was applied for activity determination in surface seawater from eight locations in the coastal marine environment of the Arabian Sea. The mean activity concentration of 137Cs ranged between 0.71 and 0.91 Bq/m3. Higher activity concentrations were observed at the location at latitude 21.6 deg N, longitude 69.57 deg E, as compared to the location at latitude 16.98 deg N, longitude 73.25 deg E. The observed concentrations were found to be in the range of data reported in the Asia-Pacific Marine radioactive database (ASPARMARD). The results will fill gaps in the existing database, and the generated data will be useful for monitoring fresh input of anthropogenic radionuclides into the coastal marine environment for post-Fukushima environmental assessment. (author)
Directory of Open Access Journals (Sweden)
Tyler W. H. Backman
2018-01-01
Determination of internal metabolic fluxes is crucial for fundamental and applied biology because they map how carbon and electrons flow through metabolism to enable cell function. 13C Metabolic Flux Analysis (13C MFA) and Two-Scale 13C Metabolic Flux Analysis (2S-13C MFA) are two techniques used to determine such fluxes. Both operate on the simplifying approximation that metabolic flux from peripheral metabolism into central “core” carbon metabolism is minimal, and can be omitted when modeling isotopic labeling in core metabolism. The validity of this “two-scale” or “bow tie” approximation is supported both by the ability to accurately model experimental isotopic labeling data, and by experimentally verified metabolic engineering predictions using these methods. However, the boundaries of core metabolism that satisfy this approximation can vary across species, and across cell culture conditions. Here, we present a set of algorithms that (1) systematically calculate flux bounds for any specified “core” of a genome-scale model so as to satisfy the bow tie approximation and (2) automatically identify an updated set of core reactions that can satisfy this approximation more efficiently. First, we leverage linear programming to simultaneously identify the lowest fluxes from peripheral metabolism into core metabolism compatible with the observed growth rate and extracellular metabolite exchange fluxes. Second, we use Simulated Annealing to identify an updated set of core reactions that allow for a minimum of fluxes into core metabolism to satisfy these experimental constraints. Together, these methods accelerate and automate the identification of a biologically reasonable set of core reactions for use with 13C MFA or 2S-13C MFA, as well as provide for a substantially lower set of flux bounds for fluxes into the core as compared with previous methods.
We provide an open source Python implementation of these algorithms at https://github.com/JBEI/limitfluxtocore.
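The linear-programming step the abstract describes (minimizing flux into the core subject to steady state and measured exchange fluxes) can be illustrated on a toy network. The stoichiometry, bounds, and measured fluxes below are invented and unrelated to the actual limitfluxtocore code:

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux-balance problem. Reactions: r1 uptake->A (core metabolite),
# r2 uptake->P (peripheral), r3 P->A (peripheral flux into the core),
# r4 A->biomass (measured growth). All numbers are invented.
c = [0.0, 0.0, 1.0, 0.0]                 # objective: minimize r3, the influx into core
A_eq = [[1.0, 0.0, 1.0, -1.0],           # steady state for A: r1 + r3 - r4 = 0
        [0.0, 1.0, -1.0, 0.0]]           # steady state for P: r2 - r3 = 0
b_eq = [0.0, 0.0]
bounds = [(0.0, 10.0),                   # r1: direct uptake capacity capped at 10
          (0.0, None), (0.0, None),
          (12.0, 12.0)]                  # r4: measured growth demand pinned at 12
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
```

Here the growth demand of 12 exceeds the direct uptake cap of 10, so at least 2 flux units must enter the core through the peripheral route, and the LP finds exactly that lower bound.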
Confirmatory factor analysis using Microsoft Excel.
Miles, Jeremy N V
2005-11-01
This article presents a method for using Microsoft (MS) Excel for confirmatory factor analysis (CFA). CFA is often seen as an impenetrable technique, and thus, when it is taught, there is frequently little explanation of the mechanisms or underlying calculations. The aim of this article is to demonstrate that this is not the case; it is relatively straightforward to produce a spreadsheet in MS Excel that can carry out simple CFA. It is possible, with few or no programming skills, to effectively program a CFA analysis and, thus, to gain insight into the workings of the procedure.
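As a cross-check on the claim that the underlying calculation is straightforward, the same one-factor CFA fit can be sketched outside a spreadsheet by minimizing a discrepancy between the observed and model-implied covariance matrices. The loadings and uniquenesses below are invented illustrative values, and an unweighted least-squares fit function stands in for whatever discrepancy the article uses:

```python
import numpy as np
from scipy.optimize import minimize

# One-factor model: Sigma(theta) = lambda lambda' + diag(psi). Invented values.
lam_true = np.array([0.8, 0.7, 0.6])
psi_true = np.array([0.36, 0.51, 0.64])
S = np.outer(lam_true, lam_true) + np.diag(psi_true)  # "observed" covariance

def discrepancy(theta):
    lam, psi = theta[:3], theta[3:]
    Sigma = np.outer(lam, lam) + np.diag(psi)
    return np.sum((S - Sigma) ** 2)  # unweighted least-squares fit function

res = minimize(discrepancy, np.full(6, 0.5))  # BFGS with numerical gradients
lam_hat = np.abs(res.x[:3])  # the sign of the factor is arbitrary
```

This is exactly the loop a spreadsheet CFA performs with Solver: propose parameters, build the implied covariance matrix, and minimize the misfit.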
A kernel version of spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2009-01-01
Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general; Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel… version of PCA handles nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we apply kernel versions of PCA and maximum autocorrelation factor (MAF) analysis…
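A minimal kernel PCA along these lines (Gaussian kernel as the implicit mapping, feature-space centering, then a linear eigenanalysis) might look as follows; the kernel width and the toy two-ring data set are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data on two concentric rings: a nonlinearity plain PCA cannot unfold.
t = rng.uniform(0.0, 2.0 * np.pi, 100)
radius = rng.choice([1.0, 3.0], 100)
X = radius[:, None] * np.column_stack([np.cos(t), np.sin(t)])

# Gaussian (RBF) kernel matrix: the implicit map into feature space.
sq = np.sum(X ** 2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * X @ X.T) / 2.0)

# Center the kernel in feature space, then solve the linear eigenproblem there.
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
eigval, eigvec = np.linalg.eigh(J @ K @ J)
scores = eigvec[:, -1] * np.sqrt(max(eigval[-1], 0.0))  # first kernel PC scores
```

The linear algebra is ordinary PCA; the nonlinearity enters only through the kernel matrix, which is the structure the paper exploits for its MAF variant as well.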
Hard exclusive meson production to constrain GPDs
Energy Technology Data Exchange (ETDEWEB)
Wolbeek, Johannes ter; Fischer, Horst; Gorzellik, Matthias; Gross, Arne; Joerg, Philipp; Koenigsmann, Kay; Malm, Pasquale; Regali, Christopher; Schmidt, Katharina; Sirtl, Stefan; Szameitat, Tobias [Physikalisches Institut, Albert-Ludwigs-Universitaet Freiburg, Freiburg im Breisgau (Germany); Collaboration: COMPASS Collaboration
2014-07-01
The concept of Generalized Parton Distributions (GPDs) combines the two-dimensional spatial information, given by form factors, with the longitudinal momentum information from the PDFs. Thus, GPDs provide a three-dimensional 'tomography' of the nucleon. Furthermore, according to Ji's sum rule, the GPDs H and E enable access to the total angular momenta of quarks, antiquarks and gluons. While H can be approached using electroproduction cross section, hard exclusive meson production off a transversely polarized target can help to constrain the GPD E. At the COMPASS experiment at CERN, two periods of data taking were performed in 2007 and 2010, using a longitudinally polarized 160 GeV/c muon beam and a transversely polarized NH{sub 3} target. This talk introduces the data analysis of the process μ + p → μ' + p' + V, and recent results are presented.
International Nuclear Information System (INIS)
Vasil, Geoffrey M.; Lecoanet, Daniel; Brown, Benjamin P.; Zweibel, Ellen G.; Wood, Toby S.
2013-01-01
The speed of sound greatly exceeds typical flow velocities in many stellar and planetary interiors. To follow the slow evolution of subsonic motions, various sound-proof models attempt to remove fast acoustic waves while retaining stratified convection and buoyancy dynamics. In astrophysics, anelastic models typically receive the most attention in the class of sound-filtered stratified models. Generally, anelastic models remain valid in nearly adiabatically stratified regions like stellar convection zones, but may break down in strongly sub-adiabatic, stably stratified layers common in stellar radiative zones. However, studying stellar rotation, circulation, and dynamos requires understanding the complex coupling between convection and radiative zones, and this requires robust equations valid in both regimes. Here we extend the analysis of equation sets begun in Brown et al., which studied anelastic models, to two types of pseudo-incompressible models. This class of models has received attention in atmospheric applications, and more recently in studies of white-dwarf supernova progenitors. We demonstrate that one model conserves energy but the other does not. We use Lagrangian variational methods to extend the energy conserving model to a general equation of state, and dub the resulting equation set the generalized pseudo-incompressible (GPI) model. We show that the GPI equations suitably capture low-frequency phenomena in both convection and radiative zones in stars and other stratified systems, and we provide recommendations for converting low-Mach number codes to this equation set
Ueki, Kenta; Iwamori, Hikaru
2017-10-01
In this study, with a view to understanding the structure of high-dimensional geochemical data and discussing the chemical processes at work in the evolution of arc magmas, we employed principal component analysis (PCA) to evaluate the compositional variations of volcanic rocks from the Sengan volcanic cluster of the Northeastern Japan Arc. We analyzed the trace element compositions of various arc volcanic rocks, sampled from 17 different volcanoes in a volcanic cluster. The PCA results demonstrated that the first three principal components accounted for 86% of the geochemical variation in the magma of the Sengan region. Based on the relationships between the principal components and the major elements, the mass-balance relationships with respect to the contributions of minerals, the composition of plagioclase phenocrysts, the geothermal gradient, and the seismic velocity structure in the crust, the first, second, and third principal components appear to represent magma mixing, crystallization of olivine/pyroxene, and crystallization of plagioclase, respectively. These represented 59%, 20%, and 6%, respectively, of the variance in the entire compositional range, indicating that magma mixing accounted for the largest variance in the geochemical variation of the arc magma. Our results indicated that crustal processes dominate the geochemical variation of magma in the Sengan volcanic cluster.
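As an illustration of the kind of variance decomposition reported above, here is a minimal PCA sketch via SVD on synthetic standardized data. All numbers are invented; this is not the Sengan data set, and the single synthetic trend merely plays the role of the dominant magma-mixing component.

```python
import numpy as np

# Hypothetical stand-in for a samples-by-elements concentration table:
# one dominant mixing trend plus measurement noise.
rng = np.random.default_rng(0)
trend = rng.normal(size=(40, 1)) @ rng.normal(size=(1, 6))
data = trend + 0.1 * rng.normal(size=(40, 6))

# Standardize columns, then obtain principal components via SVD.
X = (data - data.mean(axis=0)) / data.std(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component

print(explained.round(3))         # the first component dominates
```

In the study above, the analogous computation attributes 59%, 20%, and 6% of the variance to the first three components.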
DISRUPTIVE EVENT BIOSPHERE DOSE CONVERSION FACTOR ANALYSIS
International Nuclear Information System (INIS)
M.A. Wasiolek
2005-01-01
This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The Biosphere Model Report (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis'' (Figure 1-1). The objective of this analysis was to develop the BDCFs for the volcanic
Analysis of mineral phases in coal utilizing factor analysis
International Nuclear Information System (INIS)
Roscoe, B.A.; Hopke, P.K.
1982-01-01
The mineral phase inclusions of coal are discussed. The contribution of these to a coal sample is determined utilizing several techniques. Neutron activation analysis in conjunction with coal washability studies has produced some information on the general trends of elemental variation in the mineral phases. These results have been enhanced by the use of various statistical techniques. Target transformation factor analysis (TTFA) is specifically discussed and shown to be able to produce elemental profiles of the mineral phases in coal. A data set consisting of physically fractionated coal samples was generated. These samples were analyzed by neutron activation analysis and their elemental concentrations examined using TTFA. Information concerning the mineral phases in coal can thus be acquired from factor analysis even with limited data. Additional data may permit the resolution of additional mineral phases as well as refinement of those already identified.
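A toy sketch of the target-transformation idea: a candidate elemental profile is projected onto the factor subspace spanned by the data, and a small residual suggests the profile is a plausible phase. The two "phase" profiles and all numbers below are invented for illustration, not taken from the coal data.

```python
import numpy as np

# Invented elemental profiles for two mineral phases (4 elements each).
phase_a = np.array([10.0, 0.5, 8.0, 0.1])
phase_b = np.array([0.2, 6.0, 1.0, 5.0])

rng = np.random.default_rng(1)
w = rng.uniform(0, 1, size=(30, 2))                     # mixing weights
X = w @ np.vstack([phase_a, phase_b]) + 0.01 * rng.normal(size=(30, 4))

# Two retained factors from the (uncentered) SVD of the data matrix.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
V2 = Vt[:2].T

def target_residual(t):
    """Distance between a normalized target profile and its projection
    onto the factor subspace (~0 if the profile fits the data)."""
    t = t / np.linalg.norm(t)
    return float(np.linalg.norm(V2 @ (V2.T @ t) - t))

print(target_residual(phase_a))        # small: a resolvable phase
print(target_residual(np.ones(4)))     # larger: not a real phase
```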
A Beginners Guide to Factor Analysis: Focusing on Exploratory Factor Analysis
Directory of Open Access Journals (Sweden)
An Gie Yong
2013-10-01
The following paper discusses exploratory factor analysis and gives an overview of the statistical technique and how it is used in various research designs and applications. A basic outline of how the technique works and its criteria, including its main assumptions, is discussed, as well as when it should be used. Mathematical theories are explored to enlighten students on how exploratory factor analysis works, an example of how to run an exploratory factor analysis on SPSS is given, and finally a section on how to write up the results is provided. This will allow readers to develop a better understanding of when to employ factor analysis and how to interpret the tables and graphs in the output.
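One of the standard retention criteria covered in such tutorials, Kaiser's eigenvalue-greater-than-one rule, can be sketched in a few lines. The data and loadings below are invented for illustration only.

```python
import numpy as np

# Illustrative data: 6 observed variables driven by 2 latent factors.
rng = np.random.default_rng(2)
F = rng.normal(size=(200, 2))
loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],
                     [0.0, 0.8], [0.1, 0.7], [0.0, 0.9]])
X = F @ loadings.T + 0.5 * rng.normal(size=(200, 6))

R = np.corrcoef(X, rowvar=False)        # correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
n_retain = int(np.sum(eigvals > 1.0))   # Kaiser's rule
print(eigvals.round(2))
print(n_retain)                          # recovers the 2 latent factors
```

In practice the rule is usually cross-checked against a scree plot or parallel analysis, since it can over- or under-extract.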
Nominal Performance Biosphere Dose Conversion Factor Analysis
Energy Technology Data Exchange (ETDEWEB)
M. Wasiolek
2004-09-08
This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed descriptions of the model input parameters, their development, and the relationship between the parameters and specific features, events, and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario for the three climate states considered in the TSPA-LA as well as conversion factors for evaluating compliance with the groundwater protection standard. The BDCFs will be used in performance assessment for calculating all-pathway annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle
Determining the Number of Factors in P-Technique Factor Analysis
Lo, Lawrence L.; Molenaar, Peter C. M.; Rovine, Michael
2017-01-01
Determining the number of factors is a critical first step in exploratory factor analysis. Although various criteria and methods for determining the number of factors have been evaluated in the usual between-subjects R-technique factor analysis, there is still the question of how these methods perform in within-subjects P-technique factor analysis. A…
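One of the retention criteria commonly evaluated in this literature, Horn's parallel analysis, compares sample eigenvalues against those of random data of the same shape. A compact sketch on invented two-factor data (not the study's own simulations):

```python
import numpy as np

def parallel_analysis(X, n_sims=50, seed=0):
    """Retain factors whose sample eigenvalues exceed the mean
    eigenvalues of same-shaped random normal data (Horn, 1965)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sims = np.zeros(p)
    for _ in range(n_sims):
        Z = rng.normal(size=(n, p))
        sims += np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    sims /= n_sims
    return int(np.sum(obs > sims))

# Invented two-factor check data.
rng = np.random.default_rng(3)
F = rng.normal(size=(300, 2))
L = np.array([[0.9, 0], [0.8, 0], [0.7, 0], [0, 0.9], [0, 0.8], [0, 0.7]])
X = F @ L.T + 0.4 * rng.normal(size=(300, 6))
print(parallel_analysis(X))   # recovers the 2 simulated factors
```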
Disruptive Event Biosphere Dose Conversion Factor Analysis
Energy Technology Data Exchange (ETDEWEB)
M. A. Wasiolek
2003-07-21
This analysis report, ''Disruptive Event Biosphere Dose Conversion Factor Analysis'', is one of the technical reports containing documentation of the ERMYN (Environmental Radiation Model for Yucca Mountain Nevada) biosphere model for the geologic repository at Yucca Mountain, its input parameters, and the application of the model to perform the dose assessment for the repository. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of the two reports that develop biosphere dose conversion factors (BDCFs), which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the conceptual model as well as the mathematical model and lists its input parameters. Model input parameters are developed and described in detail in five analysis reports (BSC 2003 [DIRS 160964], BSC 2003 [DIRS 160965], BSC 2003 [DIRS 160976], BSC 2003 [DIRS 161239], and BSC 2003 [DIRS 161241]). The objective of this analysis was to develop the BDCFs for the volcanic ash exposure scenario and the dose factors (DFs) for calculating inhalation doses during volcanic eruption (eruption phase of the volcanic event). The volcanic ash exposure scenario is hereafter referred to as the volcanic ash scenario. For the volcanic ash scenario, the mode of radionuclide release into the biosphere is a volcanic eruption through the repository with the resulting entrainment of contaminated waste in the tephra and the subsequent atmospheric transport and dispersion of contaminated material in
Disruptive Event Biosphere Dose Conversion Factor Analysis
International Nuclear Information System (INIS)
M. A. Wasiolek
2003-01-01
This analysis report, ''Disruptive Event Biosphere Dose Conversion Factor Analysis'', is one of the technical reports containing documentation of the ERMYN (Environmental Radiation Model for Yucca Mountain Nevada) biosphere model for the geologic repository at Yucca Mountain, its input parameters, and the application of the model to perform the dose assessment for the repository. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of the two reports that develop biosphere dose conversion factors (BDCFs), which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the conceptual model as well as the mathematical model and lists its input parameters. Model input parameters are developed and described in detail in five analysis reports (BSC 2003 [DIRS 160964], BSC 2003 [DIRS 160965], BSC 2003 [DIRS 160976], BSC 2003 [DIRS 161239], and BSC 2003 [DIRS 161241]). The objective of this analysis was to develop the BDCFs for the volcanic ash exposure scenario and the dose factors (DFs) for calculating inhalation doses during volcanic eruption (eruption phase of the volcanic event). The volcanic ash exposure scenario is hereafter referred to as the volcanic ash scenario. For the volcanic ash scenario, the mode of radionuclide release into the biosphere is a volcanic eruption through the repository with the resulting entrainment of contaminated waste in the tephra and the subsequent atmospheric transport and dispersion of contaminated material in the biosphere. The biosphere process
Exploratory Bi-Factor Analysis: The Oblique Case
Jennrich, Robert I.; Bentler, Peter M.
2012-01-01
Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford ("Psychometrika" 47:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler ("Psychometrika" 76:537-549, 2011) introduced an exploratory form of bi-factor…
Exploratory factor analysis in Rehabilitation Psychology: a content analysis.
Roberson, Richard B; Elliott, Timothy R; Chang, Jessica E; Hill, Jessica N
2014-11-01
Our objective was to examine the use and quality of exploratory factor analysis (EFA) in articles published in Rehabilitation Psychology. Trained raters examined 66 separate exploratory factor analyses in 47 articles published between 1999 and April 2014. The raters recorded the aim of the EFAs, the distributional statistics, sample size, factor retention method(s), extraction and rotation method(s), and whether the pattern coefficients, structure coefficients, and the matrix of association were reported. The primary use of the EFAs was scale development, but the most widely used extraction and rotation methods were principal component analysis with varimax rotation. When determining how many factors to retain, multiple methods (e.g., scree plot, parallel analysis) were used most often. Many articles did not report enough information to allow for the duplication of their results. EFA relies on authors' choices (e.g., factor retention rules, extraction and rotation methods), and few articles adhered to all of the best practices. The current findings are compared to other empirical investigations into the use of EFA in published research. Recommendations for improving EFA reporting practices in rehabilitation psychology research are provided.
Exploratory Bi-factor Analysis: The Oblique Case
Jennrich, Robert L.; Bentler, Peter M.
2011-01-01
Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford (1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler (2011) introduced an exploratory form of bi-factor analysis that does not require one to provide an explicit bi-factor structure a priori. They use exploratory factor analysis and a bi-factor rotation criterion designed to produce a rotated loading mat...
Constrained Vapor Bubble Experiment
Gokhale, Shripad; Plawsky, Joel; Wayner, Peter C., Jr.; Zheng, Ling; Wang, Ying-Xi
2002-11-01
Microgravity experiments on the Constrained Vapor Bubble Heat Exchanger, CVB, are being developed for the International Space Station. In particular, we present results of a precursory experimental and theoretical study of the vertical Constrained Vapor Bubble in the Earth's environment. A novel non-isothermal experimental setup was designed and built to study the transport processes in an ethanol/quartz vertical CVB system. Temperature profiles were measured using an in situ PC (personal computer)-based LabView data acquisition system via thermocouples. Film thickness profiles were measured using interferometry. A theoretical model was developed to predict the curvature profile of the stable film in the evaporator. The concept of the total amount of evaporation, which can be obtained directly by integrating the experimental temperature profile, was introduced. Experimentally measured curvature profiles are in good agreement with modeling results. For microgravity conditions, an analytical expression, which reveals an inherent relation between temperature and curvature profiles, was derived.
Constrained noninformative priors
International Nuclear Information System (INIS)
Atwood, C.L.
1994-10-01
The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but with diffusion reflecting great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given.
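For a Poisson event rate, this construction is often approximated within the gamma family: keep the Jeffreys-like shape parameter of 1/2 and choose the rate so that the prior mean matches the specified value. The gamma-family form and all numbers below are illustrative assumptions, not taken from the report.

```python
# Constrained noninformative prior sketch for a Poisson rate (assumed
# gamma-family approximation: shape fixed at the Jeffreys value 1/2,
# rate chosen so that shape/rate equals the specified prior mean).
def constrained_noninformative_poisson(prior_mean):
    shape = 0.5
    rate = shape / prior_mean
    return shape, rate

shape, rate = constrained_noninformative_poisson(prior_mean=2.0)
print(shape, rate)        # gamma(0.5, 0.25): diffuse, with mean 2.0

# Conjugate update with k observed events in exposure time t:
k, t = 3, 1.0
post_mean = (shape + k) / (rate + t)
print(post_mean)          # 2.8
```

The small shape parameter keeps the prior diffuse (large relative variance) while the mean constraint is satisfied exactly.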
Daentzer, Dorothea; Welke, Bastian; Hurschler, Christof; Husmann, Nathalie; Jansen, Christina; Flamme, Christian Heinrich; Richter, Berna Ida
2015-03-24
As an alternative technique to arthrodesis of the cervical spine, total disc replacement (TDR) has increasingly been used with the aim of restoring the physiological function of the treated and adjacent motion segments. The purpose of this experimental study was to analyze the kinematics of the target level as well as of the adjacent segments, and to measure the pressures in the proximal and distal discs after arthrodesis as well as after arthroplasty with two different semi-constrained types of prosthesis. Twelve cadaveric ovine cervical spines underwent polysegmental (C2-5) multidirectional flexibility testing with a sensor-guided industrial serial robot. Additionally, pressures were recorded in the proximal and distal discs. The following three conditions were tested: (1) intact specimen, (2) single-level arthrodesis C3/4, (3) single-level TDR C3/4 using the Discover® in the first six specimens and the activ® C in the other six cadavers. Statistical analysis was performed for the total range of motion (ROM), the intervertebral ROM (iROM) and the intradiscal pressures (IDP) to compare the three different conditions as well as the two disc prostheses with each other. The relative iROM in the target level was always lowered after fusion in the three directions of motion. In almost all cases, the relative iROM of the adjacent segments was higher compared to the physiologic condition. After arthroplasty, we found increased relative iROM in the treated level in comparison to the intact state in almost all cases, with relative iROM in the adjacent segments observed to be lower in almost all situations. The IDP in both adjacent discs always increased in flexion and extension after arthrodesis. In all but five cases, the IDP in each of the adjacent levels was decreased below the values of the intact specimens after TDR. Overall, in none of the analyzed parameters were there statistically significant differences between the two types of prostheses.
Disruptive Event Biosphere Dose Conversion Factor Analysis
Energy Technology Data Exchange (ETDEWEB)
M. Wasiolek
2004-09-08
This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2004 [DIRS 169671]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis''. The objective of this
Scalable group level probabilistic sparse factor analysis
DEFF Research Database (Denmark)
Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard
2017-01-01
Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component pruning using automatic relevance determination (ARD), and subject-specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...
Disruptive Event Biosphere Dose Conversion Factor Analysis
International Nuclear Information System (INIS)
M. Wasiolek
2004-01-01
This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2004 [DIRS 169671]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis''. The objective of this analysis was to develop the BDCFs for the volcanic ash
Kernel parameter dependence in spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2010-01-01
kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence on the kernel width. The 2,097 samples, each covering on average 5 km2, are analyzed chemically for the content of 41 elements.
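A minimal kernel PCA sketch showing the role of the kernel width, using a Gaussian kernel on synthetic ring-shaped data (not the Greenland geochemistry; all values invented):

```python
import numpy as np

# Synthetic data on a noisy ring: a nonlinear structure that linear PCA
# cannot summarize well, but kernel PCA can.
rng = np.random.default_rng(4)
theta = rng.uniform(0, 2 * np.pi, 100)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(100, 2))

def rbf_kernel(X, sigma):
    """Gaussian (RBF) kernel matrix with width sigma."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

n = len(X)
H = np.eye(n) - np.ones((n, n)) / n      # centering in feature space

for sigma in (0.1, 0.5, 2.0):            # the width the analysis varies
    Kc = H @ rbf_kernel(X, sigma) @ H
    eigvals = np.sort(np.linalg.eigvalsh(Kc))[::-1]
    print(sigma, (eigvals[:2] / eigvals.sum()).round(3))
```

The leading-eigenvalue fractions change substantially with the width, which is exactly the dependence the paper examines.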
Schambeau, C.; Fernández, Y.; Samarasinha, N.; Mueller, B.; Woodney, L.; Lisse, C.; Kelley, M.; Meech, K.
2014-07-01
Introduction: 29P/Schwassmann-Wachmann 1 (SW1) is a unique comet (and Centaur) with an almost circular orbit just outside the orbit of Jupiter. This orbit results in SW1 receiving a nearly constant insolation, thus giving a simpler environment in which to study thermal properties and behaviors of this comet's nucleus. Such knowledge is crucial for improving our understanding of coma morphology, nuclear thermal evolution, and nuclear structure. To this end, our overarching goal is to develop a thermophysical model of SW1's nucleus that makes use of realistic physical and structural properties as inputs. This model will help to explain the highly variable gas- and dust-production rates of this comet; SW1 is well known for its frequent but stochastic outbursts of mass loss [1,2,3]. Here we will report new constraints on the effective radius, beaming parameter, spin state, and location of active regions on the nucleus of SW1. Results: The analysis completed so far consists of a re-analysis of Spitzer Space Telescope thermal-IR images of SW1 from UT 2003 November 21 and 24, when SW1 was observed outside of outburst. The images are from Spitzer's IRAC 5.8-μm and 8.0-μm bands and MIPS 24.0-μm and 70-μm bands. This analysis is similar to that of Stansberry et al. [4, 5], but with data products generated from the latest Spitzer pipeline. Also, analysis of the 5.8-μm image had not been reported before. Coma removal techniques (e.g., Fernández et al. [6]) were applied to each image letting us measure the nuclear point-source contribution to each image. The measured flux densities for each band were fit with a Near Earth Asteroid Thermal Model (NEATM, [7]) and resulted in values for the effective radius of SW1's nucleus, constraints on the thermal inertia, and an IR beaming-parameter value. Current efforts have shifted to constraining the spin properties of SW1's nucleus and surface areas of activity through use of an existing Monte Carlo model [8, 9] to reproduce
Morag, N.; Haviv, I.; Katzir, Y.
2013-12-01
The Troodos Massif of Cyprus, rising to nearly 2000 meters above sea level, encompasses one of the world's classic ophiolites. Following its formation at a seafloor spreading center in Late Cretaceous times, this slice of the NeoTethyan oceanic lithosphere was uplifted and eventually exposed on mountain tops during the Neogene. The final uplift and exhumation of the Troodos was previously assigned to Pleistocene age by observations in the circum-Troodos sedimentary strata. However, quantitative thermochronological and geomorphological data from the Massif itself were not available. Here we use apatite (U-Th)/He low-temperature thermochronology complemented by zircon (U-Th)/He and apatite fission track data, and combined with geomorphic analysis to constrain the exhumation and uplift history of the Troodos ophiolite. Apatite (U-Th)/He ages vary with depth from ~ 22 Ma at the top of the Gabbro sequence to ~ 6 Ma at the bottom of the sequence. The deepest sample from a Gabbro pegmatitic dyke intruding the ultramafic sequence yielded an age of ~ 3 Ma. Thermal modeling of apatite (U-Th)/He and fission track data delineates Plio - Pleistocene initiation of rapid uplift and exhumation of the Troodos ophiolite. The estimated cumulative exhumation since its initiation is 2-3 km. No evidence was found for significant uplift of the central Troodos area prior to that time. The geomorphic analysis delineates a bull's-eye zone at the center of the Troodos Massif, where local relief and channel steepness index are highest. The boundaries of this zone roughly correspond with the Mt. Olympus mantle outcrop and suggest recent, differential uplift of this zone relative to its surroundings. The most likely mechanism, which could drive such a focused bull's-eye uplift pattern is hydration of ultramafic rocks (serpentinization) leading to a decrease in rock density and subsequent diapiric uplift of the serpentinized lithospheric mantle.
Dispersive analysis of the pion transition form factor
Hoferichter, M.; Kubis, B.; Leupold, S.; Niecknig, F.; Schneider, S. P.
2014-11-01
We analyze the pion transition form factor using dispersion theory. We calculate the singly-virtual form factor in the time-like region based on data for the cross section, generalizing previous studies on decays and scattering, and verify our result by comparing to data. We perform the analytic continuation to the space-like region, predicting the poorly-constrained space-like transition form factor below , and extract the slope of the form factor at vanishing momentum transfer . We derive the dispersive formalism necessary for the extension of these results to the doubly-virtual case, as required for the pion-pole contribution to hadronic light-by-light scattering in the anomalous magnetic moment of the muon.
DEFF Research Database (Denmark)
Yiu, Man Lung; Karras, Panagiotis; Mamoulis, Nikos
2008-01-01
We introduce a novel spatial join operator, the ring-constrained join (RCJ). Given two sets P and Q of spatial points, the result of RCJ consists of pairs (p, q) (where p ∈ P, q ∈ Q) satisfying an intuitive geometric constraint: the smallest circle enclosing p and q contains no other points in P or Q. This new operation has important applications in decision support, e.g., placing recycling stations at fair locations between restaurants and residential complexes. Clearly, RCJ is defined based on a geometric constraint but not on distances between points. Thus, our operation is fundamentally different...
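The geometric predicate is easy to state in code: for two points p and q, the smallest circle enclosing both is the circle with the segment pq as diameter. A brute-force sketch (quadratic in the input, unlike the paper's algorithms; coordinates are toy values):

```python
import math
from itertools import product

def rcj(P, Q):
    """Naive ring-constrained join: keep pairs (p, q) whose smallest
    enclosing circle (diameter pq) contains no other point of P or Q."""
    others = P + Q
    result = []
    for p, q in product(P, Q):
        cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
        r = math.dist(p, q) / 2
        if all(o in (p, q) or math.dist(o, (cx, cy)) >= r
               for o in others):
            result.append((p, q))
    return result

P = [(0.0, 0.0), (4.0, 0.0)]
Q = [(1.0, 0.0), (9.0, 0.0)]
print(rcj(P, Q))   # (0,0)-(9,0) is excluded: (4,0) lies inside its circle
```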
Constraining the 7Be(p,γ)8B S-factor with the new precise 7Be solar neutrino flux from Borexino
Takács, M. P.; Bemmerer, D.; Junghans, A. R.; Zuber, K.
2018-02-01
Among the solar fusion reactions, the rate of the 7Be(p,γ)8B reaction is one of the most difficult to determine. In a number of previous experiments, its astrophysical S-factor has been measured at E = 0.1-2.5 MeV centre-of-mass energy. However, no experimental data are available below 0.1 MeV. Thus, an extrapolation to solar energies is necessary, resulting in significant uncertainty for the extrapolated S-factor. On the other hand, the measured solar neutrino fluxes are now very precise. Therefore, the problem of the S-factor determination is turned around here: using the measured 7Be and 8B neutrino fluxes and the Standard Solar Model, the 7Be(p,γ)8B astrophysical S-factor is determined at the solar Gamow peak. In addition, the 3He(α,γ)7Be S-factor is redetermined with a similar method.
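The inversion rests on the (approximately) linear scaling of the predicted 8B neutrino flux with the S-factor input to the solar model; schematically, with placeholder numbers that are not the paper's values:

```python
# Illustrative rescaling only: the 8B neutrino flux predicted by the
# Standard Solar Model scales (to first order) linearly with S17, so a
# measured-to-predicted flux ratio rescales the S-factor. All numbers
# below are hypothetical placeholders, not results from the paper.
s17_ref = 20.8          # eV b, assumed reference S-factor fed to the SSM
flux_predicted = 5.46   # 1e6 cm^-2 s^-1, hypothetical SSM prediction
flux_measured = 5.16    # 1e6 cm^-2 s^-1, hypothetical measurement

s17_inferred = s17_ref * (flux_measured / flux_predicted)
print(round(s17_inferred, 2))
```

The actual analysis additionally propagates solar-model and flux uncertainties, which the one-line rescaling above ignores.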
ANALYSIS OF THE EXTERNAL FACTORS OF INFLUENCE ON INNOVATION ACTIVITY OF AN INDUSTRIAL ENTERPRISE
Directory of Open Access Journals (Sweden)
I. A. Salikov
2014-01-01
For the successful functioning and development of an enterprise, it is necessary to exert a deeper and more dynamic influence on the parameters and objects of its environment, primarily by increasing its innovation activity. The innovative activity of enterprises is influenced by many factors. They can be classified into factors of direct influence (the microenvironment) and factors of indirect influence (the macroenvironment). Factors of direct influence affect the pace and scale of development of the enterprise and its effectiveness, because the whole spectrum of these factors acts as a limiter. Macro-level factors create the general conditions of the enterprise's existence in the external environment. To analyse these factors, an SNW-analysis approach was used. As a result of the analysis, micro- and macro-level factors were classified as stimulating, neutral, and constraining, and the degree of influence of these factors on the innovative activity of the enterprise was studied. The rating of factors hindering the development of innovation activity of industrial enterprises in Russia is reviewed; the factors that hinder the development of innovative activity are identified, and directions for overcoming them are justified. It should be noted that the distinction between enabling and constraining factors is rather thin and conditional: factors initially restraining innovation may, at a certain point, be transformed into a stimulus for its development. Accounting for these factors, creating the necessary conditions, and introducing innovations in various aspects of the functioning of industrial enterprises will allow them to secure competitive benefits and sustainable development in a rapidly changing external environment.
Energy Technology Data Exchange (ETDEWEB)
Hangsterfer, A.; Driscoll, N.; Kastner, M. [Scripps Inst. of Oceanography, La Jolla, CA (United States). Geosciences Research Division
2008-07-01
Methane hydrates can form within the gas hydrate stability zone (GHSZ) in seabeds. The Gulf of Mexico (GOM) contains an underlying petroleum system and deeply buried, yet dynamic salt deposits. Salt tectonics and fluid expulsion upward through the sediment column result in the formation of fractures, through which high salinity brines migrate into the GHSZ, destabilizing gas hydrates. Thermogenic and biogenic hydrocarbons also migrate to the seafloor along the GOM's northern slope, originating from the thermal and biogenic degradation of organic matter. Gas hydrate occurrence can be controlled either by primary permeability, forming in coarse-grained sediment layers, or by secondary permeability, forming in areas where hydrofracture and faulting generate conduits through which hydrocarbon-saturated fluids flow. This paper presented a study that attempted to determine the relationship between grain-size, permeability, and gas hydrate distribution. Grain-size analyses were performed on cores taken from Keathley Canyon and Atwater Valley in the GOM, on sections of cores that both contained and lacked gas hydrate. Using thermal anomalies as proxies for the occurrence of methane hydrate within the cores, samples of sediment were taken and the grain-size distributions were measured to see if there was a correlation between gas hydrate distribution and grain-size. The paper described the methods, including determination of hydrate occurrence and core analysis. It was concluded that gas hydrate occurrence in Keathley Canyon and Atwater Valley was constrained by secondary permeability and was structurally controlled by hydrofractures and faulting that acted as conduits through which methane-rich fluids flowed. 11 refs., 2 tabs., 5 figs.
Tang, Shuaiqi
Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on 3 March 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate Q1 and Q2 in analysis/reanalysis products, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid-scale upward transport of moist static energy in the lower troposphere. With the uncertainties from large-scale forcing data and observation specified, we compare SCM results and observations and find that models have large biases on cloud properties which could not be fully explained by the uncertainty from the large-scale forcing
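The core of a constrained variational analysis of this kind is a minimum-variance adjustment of background estimates so that an integral constraint (such as a precipitation or column budget) is satisfied exactly. The following is an illustrative toy sketch of that linear-constraint step, not the ARM 1DCVA/3DCVA code; all matrices and values are invented:

```python
import numpy as np

def constrained_adjustment(b, B, H, y):
    """Minimum-variance adjustment of a background state b (error covariance B)
    so that the linear constraints H x = y hold exactly.
    Solves: min (x-b)^T B^{-1} (x-b)  subject to  H x = y."""
    HBHt = H @ B @ H.T
    gain = B @ H.T @ np.linalg.solve(HBHt, y - H @ b)
    return b + gain

# Toy example: three column-integrated tendencies adjusted so that their
# sum matches an observed budget total (e.g. a precipitation constraint).
b = np.array([1.0, 2.0, 3.0])          # background estimates
B = np.diag([0.5, 1.0, 2.0])           # background error covariance
H = np.array([[1.0, 1.0, 1.0]])        # constraint operator: sum of terms
y = np.array([7.0])                    # observed budget total

x = constrained_adjustment(b, B, H, y)
print(x, H @ x)                        # constraint is satisfied exactly
```

Note how the correction is distributed in proportion to the background error variances: the least certain term (variance 2.0) absorbs most of the budget residual.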
Sharp spatially constrained inversion
DEFF Research Database (Denmark)
Vignoli, Giulio; Fiandaca, Gianluca; Christiansen, Anders Vest
2013-01-01
We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted...... by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes...... inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user....
Time Series Factor Analysis with an Application to Measuring Money
Gilbert, Paul D.; Meijer, Erik
2005-01-01
Time series factor analysis (TSFA) and its associated statistical theory is developed. Unlike dynamic factor analysis (DFA), TSFA obviates the need for explicitly modeling the process dynamics of the underlying phenomena. It also differs from standard factor analysis (FA) in important respects: the
Housing price forecastability: A factor analysis
DEFF Research Database (Denmark)
Bork, Lasse; Møller, Stig Vinther
of the model stays high at longer horizons. The estimated factors are strongly statistically significant according to a bootstrap resampling method which takes into account that the factors are estimated regressors. The simple three-factor model also contains substantial out-of-sample predictive power...
Erikson Psychosocial Stage Inventory: A Factor Analysis
Gray, Mary McPhail; And Others
1986-01-01
The 72-item Erikson Psychosocial Stage Inventory (EPSI) was factor analyzed for a group of 534 university freshmen and sophomore students. Seven factors emerged, which were labeled Initiative, Industry, Identity, Friendship, Dating, Goal Clarity, and Self-Confidence. Items representing Erikson's factors Trust and Autonomy were dispersed across…
International Nuclear Information System (INIS)
Yap, J.T.; Chen, C.T.; Cooper, M.
1995-01-01
The authors have previously developed a knowledge-based method of factor analysis to analyze dynamic nuclear medicine image sequences. In this paper, the authors analyze dynamic PET cerebral glucose metabolism and neuroreceptor binding studies. These methods have shown the ability to reduce the dimensionality of the data, enhance the image quality of the sequence, and generate meaningful functional images and their corresponding physiological time functions. The new information produced by the factor analysis has now been used to improve the estimation of various physiological parameters. A principal component analysis (PCA) is first performed to identify statistically significant temporal variations and remove the uncorrelated variations (noise) due to Poisson counting statistics. The statistically significant principal components are then used to reconstruct a noise-reduced image sequence as well as provide an initial solution for the factor analysis. Prior knowledge such as the compartmental models or the requirement of positivity and simple structure can be used to constrain the analysis. These constraints are used to rotate the factors to the most physically and physiologically realistic solution. The final result is a small number of time functions (factors) representing the underlying physiological processes and their associated weighting images representing the spatial localization of these functions. Estimation of physiological parameters can then be performed using the noise-reduced image sequence generated from the statistically significant PCs and/or the final factor images and time functions. These results are compared to the parameter estimation using standard methods and the original raw image sequences. Graphical analysis was performed at the pixel level to generate comparable parametric images of the slope and intercept (influx constant and distribution volume)
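The first stage described above — PCA to retain the statistically significant temporal components and reconstruct a noise-reduced image sequence — can be sketched on synthetic data. This is a generic illustration, not the authors' code; the two time functions, dimensions, and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "dynamic sequence": 200 pixels observed over 30 time frames,
# generated from two underlying time functions (factors) plus noise.
t = np.linspace(0.0, 1.0, 30)
factors = np.vstack([np.exp(-3.0 * t), 1.0 - np.exp(-5.0 * t)])  # (2, 30)
weights = rng.uniform(0.0, 10.0, size=(200, 2))                  # (200, 2)
clean = weights @ factors
noisy = clean + rng.normal(0.0, 0.3, size=clean.shape)

# PCA via SVD of the mean-centered data; keep the statistically
# significant components (here known to be 2) and reconstruct.
mean = noisy.mean(axis=0)
U, s, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
k = 2
denoised = mean + U[:, :k] * s[:k] @ Vt[:k]

# The rank-k reconstruction is closer to the clean sequence than the raw data.
err_raw = np.linalg.norm(noisy - clean)
err_den = np.linalg.norm(denoised - clean)
print(err_den < err_raw)  # True
```

In the paper's workflow the retained components would additionally serve as the starting point for a constrained factor rotation toward physiologically realistic time functions.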
Burgess, P. M.; Steel, R. J.
2016-12-01
Decoding a history of Earth's surface dynamics from strata requires robust quantitative understanding of supply and accommodation controls. The concept of stratigraphic solution sets has proven useful in this decoding, but application and development of this approach has so far been surprisingly limited. Stratal control volumes, areas and trajectories are new approaches defined here, building on previous ideas about stratigraphic solution sets, to help analyse and understand the sedimentary record of Earth surface dynamics. They may have particular application in reconciling results from outcrop and subsurface analysis with results from analogue and numerical experiments. Stratal control volumes are sets of points in a three-dimensional volume, with axes of subsidence, sediment supply and eustatic rates of change, populated with probabilities derived from analysis of subsidence, supply and eustasy timeseries (Figure 1). These empirical probabilities indicate the likelihood of occurrence of any particular combination of control rates defined by any point in the volume. The stratal control volume can then be analysed to determine which parts of the volume represent relative sea-level fall and rise, where in the volume particular stacking patterns will occur, and how probable those stacking patterns are. For outcrop and subsurface analysis, using a stratal control area with eustasy and subsidence combined on a relative sea-level axis allows similar analysis, and may be preferable. A stratal control trajectory is a history of supply and accommodation creation rates, interpreted from outcrop or subsurface data, or observed in analogue and numerical experiments, and plotted as a series of linked points forming a trajectory through the stratal control volume (Figure 1) or area. Three examples are presented, one from outcrop and two theoretical. Much work remains to be done to build a properly representative database of stratal controls, but careful comparison of stratal
Nominal Performance Biosphere Dose Conversion Factor Analysis
International Nuclear Information System (INIS)
Wasiolek, M.
2000-01-01
The purpose of this report was to document the process leading to development of the Biosphere Dose Conversion Factors (BDCFs) for the postclosure nominal performance of the potential repository at Yucca Mountain. BDCF calculations concerned twenty-four radionuclides. This selection included sixteen radionuclides that may be significant nominal performance dose contributors during the compliance period of up to 10,000 years, five additional radionuclides of importance for up to 1 million years postclosure, and three relatively short-lived radionuclides important for the human intrusion scenario. Consideration of radionuclide buildup in soil caused by previous irrigation with contaminated groundwater was taken into account in the BDCF development. The effect of climate evolution, from the current arid conditions to a wetter and cooler climate, on the BDCF values was evaluated. The analysis included consideration of different exposure pathways' contributions to the BDCFs. Calculations of nominal performance BDCFs used the GENII-S computer code in a series of probabilistic realizations to propagate the uncertainties of input parameters into the output. BDCFs for the nominal performance, when combined with the concentrations of radionuclides in groundwater, allow calculation of potential radiation doses to the receptor of interest. Calculated estimates of radionuclide concentration in groundwater result from the saturated zone modeling. The integration of the biosphere modeling results (BDCFs) with the outcomes of the other component models is accomplished in the Total System Performance Assessment (TSPA) to calculate doses to the receptor of interest from radionuclides postulated to be released to the environment from the potential repository at Yucca Mountain.
Disruptive Event Biosphere Dose Conversion Factor Analysis
Energy Technology Data Exchange (ETDEWEB)
M. Wasiolek
2000-12-28
The purpose of this report was to document the process leading to, and the results of, development of radionuclide-, exposure scenario-, and ash thickness-specific Biosphere Dose Conversion Factors (BDCFs) for the postulated postclosure extrusive igneous event (volcanic eruption) at Yucca Mountain. BDCF calculations were done for seventeen radionuclides. The selection of radionuclides included those that may be significant dose contributors during the compliance period of up to 10,000 years, as well as radionuclides of importance for up to 1 million years postclosure. The approach documented in this report takes into account human exposure during three different phases at the time of, and after, volcanic eruption. Calculations of disruptive event BDCFs used the GENII-S computer code in a series of probabilistic realizations to propagate the uncertainties of input parameters into the output. The pathway analysis included consideration of different exposure pathways' contributions to the BDCFs. BDCFs for volcanic eruption, when combined with the concentration of radioactivity deposited by eruption on the soil surface, allow calculation of potential radiation doses to the receptor of interest. Calculation of radioactivity deposition is outside the scope of this report and so is the transport of contaminated ash from the volcano to the location of the receptor. The integration of the biosphere modeling results (BDCFs) with the outcomes of the other component models is accomplished in the Total System Performance Assessment (TSPA), in which doses are calculated to the receptor of interest from radionuclides postulated to be released to the environment from the potential repository at Yucca Mountain.
Energy Technology Data Exchange (ETDEWEB)
Verde, Licia; Jimenez, Raul [Institute of Cosmos Sciences, University of Barcelona, IEEC-UB, Martí Franquès, 1, E08028 Barcelona (Spain); Bellini, Emilio [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH (United Kingdom); Pigozzo, Cassio [Instituto de Física, Universidade Federal da Bahia, Salvador, BA (Brazil); Heavens, Alan F., E-mail: liciaverde@icc.ub.edu, E-mail: emilio.bellini@physics.ox.ac.uk, E-mail: cpigozzo@ufba.br, E-mail: a.heavens@imperial.ac.uk, E-mail: raul.jimenez@icc.ub.edu [Imperial Centre for Inference and Cosmology (ICIC), Imperial College, Blackett Laboratory, Prince Consort Road, London SW7 2AZ (United Kingdom)
2017-04-01
We investigate our knowledge of early universe cosmology by exploring how much additional energy density can be placed in different components beyond those in the ΛCDM model. To do this we use a method to separate early- and late-universe information enclosed in observational data, thus markedly reducing the model-dependency of the conclusions. We find that the 95% credibility regions for extra energy components of the early universe at recombination are: non-accelerating additional fluid density parameter Ω_MR < 0.006 and extra radiation parameterised as extra effective neutrino species 2.3 < N_eff < 3.2 when imposing flatness. Our constraints thus show that even when analyzing the data in this largely model-independent way, the possibility of hiding extra energy components beyond ΛCDM in the early universe is seriously constrained by current observations. We also find that the standard ruler, the sound horizon at radiation drag, can be well determined in a way that does not depend on late-time Universe assumptions, but depends strongly on early-time physics and in particular on additional components that behave like radiation. We find that the standard ruler length determined in this way is r_s = 147.4 ± 0.7 Mpc if the radiation and neutrino components are standard, but the uncertainty increases by an order of magnitude when non-standard dark radiation components are allowed, to r_s = 150 ± 5 Mpc.
Reflected stochastic differential equation models for constrained animal movement
Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.
2017-01-01
Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumatopias jubatus) in southeast Alaska.
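The reflection construction can be illustrated with a minimal Euler scheme for a one-dimensional Ornstein-Uhlenbeck process reflected at two barriers. This is a toy simulation sketch of a reflected SDE, not the authors' latent-path data-augmentation inference; all parameter values are invented:

```python
import numpy as np

def reflect(x, lo, hi):
    """Fold a proposed position back into [lo, hi] by reflection at the barriers."""
    width = hi - lo
    x = np.mod(x - lo, 2.0 * width)
    return lo + np.where(x > width, 2.0 * width - x, x)

def simulate_reflected_ou(x0, mu, theta, sigma, lo, hi, dt, n, seed=0):
    """Euler scheme for an Ornstein-Uhlenbeck process reflected at lo and hi."""
    rng = np.random.default_rng(seed)
    path = np.empty(n + 1)
    path[0] = x0
    for i in range(n):
        drift = theta * (mu - path[i])                       # mean-reverting drift
        step = path[i] + drift * dt + sigma * np.sqrt(dt) * rng.normal()
        path[i + 1] = reflect(step, lo, hi)                  # enforce the barriers
    return path

path = simulate_reflected_ou(x0=0.0, mu=0.0, theta=0.5, sigma=2.0,
                             lo=-1.0, hi=1.0, dt=0.01, n=5000)
print(path.min() >= -1.0 and path.max() <= 1.0)  # True: path stays in the domain
```

In a movement-ecology setting the interval would be replaced by a spatial domain bounded by shoreline or cliffs, with reflection applied at the domain boundary.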
Liang, En-Wei; Yi, Shuang-Xi; Zhang, Jin; Lü, Hou-Jun; Zhang, Bin-Bin; Zhang, Bing
2010-12-01
The onset of gamma-ray burst (GRB) afterglow is characterized by a smooth bump in the early afterglow light curve as the GRB fireball is decelerated by the circumburst medium. We extensively search for GRBs with such an onset feature in their optical and X-ray light curves from the literature and from the catalog established with the Swift/XRT. Twenty optically selected GRBs and 12 X-ray-selected GRBs are obtained, among which 17 optically selected and 2 X-ray-selected GRBs have redshift measurements. We fit these light curves with a smooth broken power law and measure the width (w), rising timescale (t_r), and decaying timescale (t_d) at full width at half-maximum. Strong mutual correlations among these timescales and with the peak time (t_p) are found. The ratio t_r/t_d is almost universal among bursts, but the ratio t_r/t_p varies from 0.3 to ~1. The optical peak luminosity in the R band (L_R,p) is anti-correlated with t_p and w in the burst frame, indicating a dimmer and broader bump peaking at a later time. The isotropic prompt gamma-ray energy (E_γ,iso) is also tightly correlated with L_R,p and t_p in the burst frame. Assuming that the bumps signal the deceleration of the GRB fireballs in a constant density medium, we calculate the initial Lorentz factor (Γ_0) and the deceleration radius (R_dec) of the GRBs with redshift measurements. The derived Γ_0 is typically a few hundred, and the deceleration radius is R_dec ~ 2 × 10^17 cm. More intriguingly, a tight correlation between Γ_0 and E_γ,iso is found, namely Γ_0 ≈ 182(E_γ,iso/10^52 erg)^0.25. This correlation also applies to the small sample of GRBs which show the signature of the afterglow onset in their X-ray afterglow, and to two bursts (GRBs 990123 and 080319B) whose early optical emission is dominated by a reverse shock. The lower limits of Γ_0 derived from a sample of optical afterglow light curves showing a decaying feature from the beginning of the observation are also generally consistent with such
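The quoted correlation Γ_0 ≈ 182 (E_γ,iso/10^52 erg)^0.25 lends itself to a one-line estimator; the normalization and exponent below are taken directly from the abstract, and the example burst energies are invented:

```python
def lorentz_factor(E_gamma_iso_erg):
    """Initial Lorentz factor from the quoted correlation
    Gamma_0 ~= 182 * (E_gamma,iso / 1e52 erg)**0.25."""
    return 182.0 * (E_gamma_iso_erg / 1e52) ** 0.25

# A burst with E_gamma,iso = 1e52 erg gives Gamma_0 = 182; because of the
# shallow 1/4 power, a 16x more energetic burst only doubles Gamma_0.
print(lorentz_factor(1e52), lorentz_factor(1.6e53))  # 182.0, ~364.0
```

The weak quarter-power dependence is why the derived Γ_0 values cluster at a few hundred despite burst energies spanning orders of magnitude.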
Constraining neutrinoless double beta decay
International Nuclear Information System (INIS)
Dorame, L.; Meloni, D.; Morisi, S.; Peinado, E.; Valle, J.W.F.
2012-01-01
A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.
A Bayesian Nonparametric Approach to Factor Analysis
DEFF Research Database (Denmark)
Piatek, Rémi; Papaspiliopoulos, Omiros
2018-01-01
This paper introduces a new approach for the inference of non-Gaussian factor models based on Bayesian nonparametric methods. It relaxes the usual normality assumption on the latent factors, widely used in practice, which is too restrictive in many settings. Our approach, on the contrary, does no...
Classification analysis of organization factors related to system safety
International Nuclear Information System (INIS)
Liu Huizhen; Zhang Li; Zhang Yuling; Guan Shihua
2009-01-01
This paper analyzes the different types of organization factors which influence the system safety. The organization factor can be divided into the interior organization factor and exterior organization factor. The latter includes the factors of political, economical, technical, law, social culture and geographical, and the relationships among different interest groups. The former includes organization culture, communication, decision, training, process, supervision and management and organization structure. This paper focuses on the description of the organization factors. The classification analysis of the organization factors is the early work of quantitative analysis. (authors)
Hyperbolicity and constrained evolution in linearized gravity
International Nuclear Information System (INIS)
Matzner, Richard A.
2005-01-01
Solving the 4-d Einstein equations as evolution in time requires solving equations of two types: the four elliptic initial data (constraint) equations, followed by the six second order evolution equations. Analytically the constraint equations remain solved under the action of the evolution, and one approach is to simply monitor them (unconstrained evolution). Since computational solution of differential equations introduces almost inevitable errors, it is clearly 'more correct' to introduce a scheme which actively maintains the constraints by solution (constrained evolution). This has shown promise in computational settings, but the analysis of the resulting mixed elliptic hyperbolic method has not been completely carried out. We present such an analysis for one method of constrained evolution, applied to a simple vacuum system, linearized gravitational waves. We begin with a study of the hyperbolicity of the unconstrained Einstein equations. (Because the study of hyperbolicity deals only with the highest derivative order in the equations, linearization loses no essential details.) We then give explicit analytical construction of the effect of initial data setting and constrained evolution for linearized gravitational waves. While this is clearly a toy model with regard to constrained evolution, certain interesting features are found which have relevance to the full nonlinear Einstein equations
Using BMDP and SPSS for a Q factor analysis.
Tanner, B A; Koning, S M
1980-12-01
While Euclidean distances and Q factor analysis may sometimes be preferred to correlation coefficients and cluster analysis for developing a typology, commercially available software does not always facilitate their use. Commands are provided for using BMDP and SPSS in a Q factor analysis with Euclidean distances.
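The distance-based Q-mode idea — factoring an association matrix between cases rather than between variables — can be sketched in Python rather than with the BMDP/SPSS command files the paper supplies. The double-centering step below is the classical-scaling route to factoring squared Euclidean distances; the data and dimensions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Profiles for 6 cases measured on 4 variables (rows = cases, as in Q-mode).
X = rng.normal(size=(6, 4))

# Squared Euclidean distances between every pair of cases.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)

# Classical double-centering turns the distances into a case-by-case
# cross-product matrix, which is then eigendecomposed (Q-mode factoring).
n = sq.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ sq @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

# "Q factor loadings": coordinates of the cases on the leading factors;
# cases with similar profiles load together, suggesting a typology.
k = 2
loadings = vecs[:, :k] * np.sqrt(np.maximum(vals[:k], 0.0))
print(loadings.shape)  # (6, 2)
```

Clustering or inspecting these case loadings plays the role that grouping persons by factor plays in the typology-building application the paper describes.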
EXPLORATORY FACTOR ANALYSIS (EFA) IN CONSUMER BEHAVIOR AND MARKETING RESEARCH
Directory of Open Access Journals (Sweden)
Marcos Pascual Soler
2012-06-01
Exploratory Factor Analysis (EFA) is one of the most widely used statistical procedures in social research. The main objective of this work is to describe the most common practices used by researchers in the consumer behavior and marketing area. Through a literature review methodology, the practices of EFA in five consumer behavior and marketing journals (2000-2010) were analyzed. Then, the choices made by the researchers concerning factor model, retention criteria, rotation, factor interpretation and other issues relevant to factor analysis were analyzed. The results suggest that researchers routinely conduct analyses using questionable methods. Suggestions for improving the use of factor analysis and the reporting of results are presented, and a checklist (Exploratory Factor Analysis Checklist, EFAC) is provided to help editors, reviewers, and authors improve the reporting of exploratory factor analysis.
Factor analysis of serogroups botanica and aurisina of Leptospira biflexa.
Cinco, M
1977-11-01
Factor analysis was performed on serovars of the Botanica and Aurisina serogroups of Leptospira biflexa. The results show the arrangement of the main serovar- and serogroup-specific factors, as well as the antigens shared with serovars of heterologous serogroups.
Human factors analysis of incident/accident report
International Nuclear Information System (INIS)
Kuroda, Isao
1992-01-01
Human factors analysis of accidents/incidents presents difficulties of not only a technical but also a psychosocial nature. This report introduces experiments with the 'variation diagram' method, which can be extended to operational and managerial factors. (author)
Nonparametric factor analysis of time series
Rodríguez-Poo, Juan M.; Linton, Oliver Bruce
1998-01-01
We introduce a nonparametric smoothing procedure for nonparametric factor analysis of multivariate time series. The asymptotic properties of the proposed procedures are derived. We present an application based on the residuals from the Fair macromodel.
Analysis of success factors in advertising
Fedorchak, Oleksiy; Kedebecz, Kristina
2017-01-01
The essence of the factors behind the success of advertising campaigns is investigated. The stages of conducting advertising campaigns and of evaluating their effectiveness are determined, and the goals and objectives of advertising campaigns are defined.
Holographic analysis of diffraction structure factors
International Nuclear Information System (INIS)
Marchesini, S.; Bucher, J.J.; Shuh, D.K.; Fabris, L.; Press, M.J.; West, M.W.; Hussain, Z.; Mannella, N.; Fadley, C.S.; Van Hove, M.A.; Stolte, W.C.
2002-01-01
We combine the theory of inside-source/inside-detector x-ray fluorescence holography and Kossel lines/x-ray standing waves in the kinematic approximation to directly obtain the phases of the diffraction structure factors. The influence of Kossel lines and standing waves on holography is also discussed. We obtain a partial phase determination from experimental data, obtaining the sign of the real part of the structure factor for several reciprocal lattice vectors of a vanadium crystal.
Identification of noise in linear data sets by factor analysis
International Nuclear Information System (INIS)
Roscoe, B.A.; Hopke, Ph.K.
1982-01-01
A technique which has the ability to identify bad data points after the data have been generated is classical factor analysis. The ability of classical factor analysis to identify two different types of data errors makes it ideally suited for scanning large data sets. Since the results yielded by factor analysis indicate correlations between parameters, one must know something about the nature of the data set and the analytical techniques used to obtain it to confidently isolate errors. (author)
Exploring Technostress: Results of a Large Sample Factor Analysis
Jonušauskas, Steponas; Raišienė, Agota Giedrė
2016-01-01
With reference to the results of a large sample factor analysis, the article aims to propose a frame for examining technostress in a population. A survey and principal component analysis of a sample of 1013 individuals who use ICT in their everyday work were implemented in the research. Thirteen factors combine 68 questions and explain 59.13 per cent of the dispersion of answers. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents’ an...
Constrained evolution in numerical relativity
Anderson, Matthew William
The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.
Analysis of Increased Information Technology Outsourcing Factors
Directory of Open Access Journals (Sweden)
Brcar Franc
2013-01-01
The study explores the field of IT outsourcing. The narrow field of research is to build a model of IT outsourcing based on influential factors. The purpose of this research is to determine the factors influencing the expansion of IT outsourcing. A survey was conducted with 141 large-sized Slovenian companies. Data were statistically analyzed using binary logistic regression. The final model contains five factors: (1) management's support; (2) knowledge of IT outsourcing; (3) improvement of efficiency and effectiveness; (4) quality improvement of IT services; and (5) innovation improvement of IT. Managers can immediately use the results of this research in their decision-making. Increased performance of each individual organization is to the benefit of the entire society. The examination of IT outsourcing with the methods used is the first such research in Slovenia.
Warranty claim analysis considering human factors
International Nuclear Information System (INIS)
Wu Shaomin
2011-01-01
Warranty claims are not always due to product failures. They can also be caused by two types of human factors. On the one hand, consumers might claim warranty due to misuse and/or failures caused by various human factors. Such claims might account for more than 10% of all reported claims. On the other hand, consumers might not be bothered to claim warranty for failed items that are still under warranty, or they may claim warranty after they have experienced several intermittent failures. These two types of human factors can affect warranty claim costs. However, research in this area has received rather little attention. In this paper, we propose three models to estimate the expected warranty cost when the two types of human factors are included. We consider two types of failures: intermittent and fatal failures, which might result in different claim patterns. Consumers might report claims after a fatal failure has occurred, and upon intermittent failures they might report claims after a number of failures have occurred. Numerical examples are given to validate the results derived.
Chiral analysis of baryon form factors
Energy Technology Data Exchange (ETDEWEB)
Gail, T.A.
2007-11-08
This work presents an extensive theoretical investigation of the structure of the nucleon within the standard model of elementary particle physics. In particular, the long range contributions to a number of form factors parametrizing the interactions of the nucleon with an electromagnetic probe are calculated. The theoretical framework for those calculations is chiral perturbation theory, the exact low energy limit of Quantum Chromodynamics, which describes such long range contributions in terms of a pion cloud. In this theory, a nonrelativistic leading one-loop-order calculation of the form factors parametrizing the vector transition of a nucleon to its lowest lying resonance, the Δ, a covariant calculation of the isovector and isoscalar vector form factors of the nucleon at next-to-leading one-loop order, and a covariant calculation of the isoscalar and isovector generalized vector form factors of the nucleon at leading one-loop order are performed. In order to perform consistent loop calculations in the covariant formulation of chiral perturbation theory, an appropriate renormalization scheme is defined in this work. All theoretical predictions are compared to phenomenology and results from lattice QCD simulations. These comparisons allow for a determination of the low energy constants of the theory. Furthermore, the possibility of chiral extrapolation, i.e. the extrapolation of lattice data from simulations at large pion masses down to the small physical pion mass, is studied in detail. Statistical as well as systematic uncertainties are estimated for all results throughout this work. (orig.)
Assessing Heterogeneity for Factor Analysis Model with Continuous and Ordinal Outcomes
Directory of Open Access Journals (Sweden)
Ye-Mao Xia
2016-01-01
Factor analysis models with continuous and ordinal responses are a useful tool for assessing relations between latent variables and mixed observed responses. These models have been successfully applied to many different fields, including the behavioral, educational, and social-psychological sciences. However, within the Bayesian analysis framework, most developments are constrained within parametric families, in which particular distributions are specified for the parameters of interest. This leads to difficulty in dealing with outliers and/or distribution deviations. In this paper, we propose a Bayesian semiparametric modeling approach for the factor analysis model with continuous and ordinal variables. A truncated stick-breaking prior is used to model the distributions of the intercept and/or covariance structural parameters. Bayesian posterior analysis is carried out through a simulation-based method, with a blocked Gibbs sampler implemented to draw observations from the complicated posterior. For model selection, the logarithm of the pseudomarginal likelihood is developed to compare the competing models. Empirical results are presented to illustrate the application of the methodology.
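The truncated stick-breaking prior at the heart of this semiparametric approach is easy to sketch. The snippet below is an illustrative construction of the mixture weights only; the concentration parameter and truncation level are hypothetical choices, and the full blocked Gibbs sampler is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking_weights(alpha, K, rng):
    """Truncated stick-breaking construction of mixture weights:
    v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k}(1 - v_j),
    with the last stick forced to 1 so the K weights sum to one."""
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0  # truncation: assign all remaining mass to component K
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

w = stick_breaking_weights(alpha=2.0, K=20, rng=rng)
```

Larger `alpha` spreads mass over more components; small `alpha` concentrates it on the first few sticks.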
Regression analysis of nuclear plant capacity factors
International Nuclear Information System (INIS)
Stocks, K.J.; Faulkner, J.I.
1980-07-01
Operating data on all commercial nuclear power plants of the PWR, HWR, BWR and GCR types in the Western World are analysed statistically to determine whether the explanatory variables size, year of operation, vintage and reactor supplier are significant in accounting for the variation in capacity factor. The results are compared with a number of previous studies which analysed only United States reactors. The possibility of specification errors affecting the results is also examined. Although, in general, the variables considered are statistically significant, they explain only a small portion of the variation in the capacity factor. The equations thus obtained should certainly not be used to predict the lifetime performance of future large reactors.
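A regression of this kind can be sketched as an ordinary least squares fit with dummy-coded reactor types. The data below are synthetic and purely illustrative; the variable names, coefficients, and reactor-type coding are assumptions, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic illustration (not the paper's data): capacity factor regressed on
# plant size, years of operation, and reactor-type dummies (PWR as baseline).
n = 200
size = rng.uniform(400, 1300, n)           # MWe
years_op = rng.integers(1, 20, n)          # years in operation
rtype = rng.integers(0, 3, n)              # 0=PWR, 1=BWR, 2=HWR
dummies = np.eye(3)[rtype][:, 1:]          # drop baseline column
cf = (60 + 0.005 * size + 0.4 * years_op
      + dummies @ np.array([-3.0, 2.0]) + rng.normal(0, 5, n))

X = np.column_stack([np.ones(n), size, years_op, dummies])
beta, *_ = np.linalg.lstsq(X, cf, rcond=None)
r2 = 1 - np.sum((cf - X @ beta) ** 2) / np.sum((cf - cf.mean()) ** 2)
```

Even with the explanatory variables statistically significant, `r2` stays well below 1 here, mirroring the paper's finding that such variables explain only a small portion of the variation.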
An Empirical Analysis of Job Satisfaction Factors.
1987-09-01
have acknowledged the importance of factors which make the Air Force attractive to its members or, conversely, make other employees consider... Maslow's need hierarchy theory attempts to show that man has five basic categories of needs: physiological, safety, belongingness, esteem, and self... attained until lower-level basic needs are attained. This implies a sort of growth process where optimal job environments for given employees are
Lightweight cryptography for constrained devices
DEFF Research Database (Denmark)
Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco
2014-01-01
Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource-constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags, where cost and energy constraints drastically limit the solution complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices.
Wavelet library for constrained devices
Ehlers, Johan Hendrik; Jassim, Sabah A.
2007-04-01
The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a certain fast-wavelet-transform (FWT) implementation and several wavelet filters more suitable for constrained devices. Such constraints are typically found on mobile (cell) phones or personal digital assistants (PDAs). These constraints can be a combination of limited memory, slow floating point operations (compared to integer operations, most often as a result of no hardware support) and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capturing sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing/analysis tasks on mobile phones and PDAs. We demonstrate that HeatWave is suitable for realtime applications, with fine control and range to suit transform demands, and we present experimental results to substantiate these claims. Finally, since this library is intended to be of real use, we considered several well-known and common embedded operating system platform differences, such as a lack of common routines or functions, stack limitations, etc. This makes HeatWave suitable for a range of applications and research projects.
Self-constrained inversion of potential fields
Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.
2013-11-01
We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.
A Factor Analysis of the BSRI and the PAQ.
Edwards, Teresa A.; And Others
Factor analysis of the Bem Sex Role Inventory (BSRI) and the Personality Attributes Questionnaire (PAQ) was undertaken to study the independence of the masculine and feminine scales within each instrument. Both instruments were administered to undergraduate education majors. Analysis of primary first and second order factors of the BSRI indicated…
Analysis and optimization of the TWINKLE factoring device
Lenstra, A.K.; Shamir, A.; Preneel, B.
2000-01-01
We describe an enhanced version of the TWINKLE factoring device and analyse to what extent it can be expected to speed up the sieving step of the Quadratic Sieve and Number Field Sieve factoring algorithms. The bottom line of our analysis is that the TWINKLE-assisted factorization of 768-bit
Hierarchical Factoring Based On Image Analysis And Orthoblique Rotations.
Stankov, L
1979-07-01
The procedure for hierarchical factoring suggested by Schmid and Leiman (1957) is applied within the framework of image analysis and orthoblique rotational procedures. It is shown that this approach necessarily leads to correlated higher order factors. Also, one can obtain a smaller number of factors than produced by typical hierarchical procedures.
KINETIC CONSEQUENCES OF CONSTRAINING RUNNING BEHAVIOR
Directory of Open Access Journals (Sweden)
John A. Mercer
2005-06-01
It is known that impact forces increase with running velocity as well as when stride length increases. Since stride length naturally changes with changes in submaximal running velocity, it was not clear which factor, running velocity or stride length, played a critical role in determining impact characteristics. The aim of the study was to investigate whether or not stride length influences the relationship between running velocity and impact characteristics. Eight volunteers (mass = 72.4 ± 8.9 kg; height = 1.7 ± 0.1 m; age = 25 ± 3.4 years) completed two running conditions: preferred stride length (PSL) and stride length constrained at 2.5 m (SL2.5). During each condition, participants ran at a variety of speeds with the intent that the range of speeds would be similar between conditions. During PSL, participants were given no instructions regarding stride length. During SL2.5, participants were required to strike targets placed on the floor that resulted in a stride length of 2.5 m. Ground reaction forces were recorded (1080 Hz) as well as leg and head accelerations (uni-axial accelerometers). Impact force and impact attenuation (calculated as the ratio of head and leg impact accelerations) were recorded for each running trial. Scatter plots were generated plotting each parameter against running velocity. Lines of best fit were calculated, with the slopes recorded for analysis. The slopes were compared between conditions using paired t-tests. Data from two subjects were dropped from analysis since the velocity ranges were not similar between conditions, resulting in the analysis of six subjects. The slope of the impact force vs. velocity relationship was different between conditions (PSL: 0.178 ± 0.16 BW/m·s⁻¹; SL2.5: -0.003 ± 0.14 BW/m·s⁻¹; p < 0.05). The slope of the impact attenuation vs. velocity relationship was different between conditions (PSL: 5.12 ± 2.88 %/m·s⁻¹; SL2.5: 1.39 ± 1.51 %/m·s⁻¹; p < 0.05). Stride length was an important factor
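The slope-based comparison described above can be sketched as follows; the velocities and forces here are fabricated for illustration, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

def impact_slope(velocity, impact_force):
    """Slope of the line of best fit of impact force against running
    velocity, as used to compare the two stride-length conditions."""
    slope, intercept = np.polyfit(velocity, impact_force, 1)
    return slope

# Hypothetical data for one runner: impact force (body weights, BW)
# rising with velocity under preferred stride length ...
v = np.linspace(2.5, 5.5, 8)                       # m/s
force_psl = 2.0 + 0.18 * v + rng.normal(0, 0.02, v.size)
# ... but roughly flat when stride length is fixed at 2.5 m.
force_sl25 = 2.6 + 0.0 * v + rng.normal(0, 0.02, v.size)

slope_psl = impact_slope(v, force_psl)
slope_sl25 = impact_slope(v, force_sl25)
```

Collecting one such slope per subject per condition and comparing them with a paired t-test reproduces the shape of the analysis above.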
Modification and analysis of engineering hot spot factor of HFETR
International Nuclear Information System (INIS)
Hu Yuechun; Deng Caiyu; Li Haitao; Xu Taozhong; Mo Zhengyu
2014-01-01
This paper presents the modification and analysis of the engineering hot spot factors of HFETR. The new factors are applied in the fuel temperature analysis and in estimating the safety allowable operating power of HFETR. The results show that the maximum cladding temperature of the fuel is lower when the new factors are used, and the safety allowable operating power of HFETR is higher, thus improving the economic efficiency of HFETR. (authors)
A replication of a factor analysis of motivations for trapping
Schroeder, Susan; Fulton, David C.
2015-01-01
Using a 2013 sample of Minnesota trappers, we employed confirmatory factor analysis to replicate an exploratory factor analysis of trapping motivations conducted by Daigle, Muth, Zwick, and Glass (1998). We employed the same 25 items used by Daigle et al. and tested the same five-factor structure using a recent sample of Minnesota trappers. We also compared motivations in our sample to those reported by Daigle et al.
Factor analysis improves the selection of prescribing indicators
DEFF Research Database (Denmark)
Rasmussen, Hanne Marie Skyggedal; Søndergaard, Jens; Sokolowski, Ineta
2006-01-01
OBJECTIVE: To test a method for improving the selection of indicators of general practitioners' prescribing. METHODS: We conducted a prescription database study including all 180 general practices in the County of Funen, Denmark, approximately 472,000 inhabitants. Principal factor analysis was used … appropriate and inappropriate prescribing, as revealed by the correlation of the indicators in the first factor. CONCLUSION: Correlation and factor analysis is a feasible method that assists the selection of indicators and gives better insight into prescribing patterns.
International Nuclear Information System (INIS)
Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng
2014-01-01
A quantitative local computed tomography method combined with data-constrained modelling has been developed. The method can distinctly improve the spatial resolution and the composition resolution in a sample larger than the field of view, for quantitative characterization of three-dimensional distributions of material compositions and voids. Quantifying three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales, and where such compositions have similar X-ray absorption properties, such as in coal. Consequently, obtaining detailed information within sub-regions of a multi-length-scale sample by conventional approaches may not provide the resolution and level of detail one might desire. Herein, an approach for quantitative high-definition determination of material compositions from X-ray local computed tomography combined with a data-constrained modelling method is proposed. The approach is capable of dramatically improving the spatial resolution, enabling finer details to be revealed within a region of interest of a sample larger than the field of view than is possible with conventional techniques. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach. The optimal experimental parameters are pre-analyzed. The quantitative results demonstrated that the approach can reveal significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and understanding the transformation of the minerals during coal processing. The method is generic and can be applied for three-dimensional compositional characterization of other materials.
Human factor analysis and preventive countermeasures in nuclear power plant
International Nuclear Information System (INIS)
Li Ye
2010-01-01
Based on human error analysis theory and the characteristics of maintenance in a nuclear power plant, the human factors of maintenance in an NPP are divided into three different areas: human, technology, and organization. Individual factors include psychological factors, physiological characteristics, health status, level of knowledge and interpersonal skills; technical factors include technology, equipment, tools, working order, etc.; organizational factors include management, information exchange, education, working environment, team building and leadership, etc. The analysis found that organizational factors can directly or indirectly affect the behavior of staff and the technical factors, and are the most basic human error factors. On this basis, countermeasures for reducing human error in the nuclear power plant are proposed. (authors)
ANALYSIS OF RISK FACTORS ECTOPIC PREGNANCY
Directory of Open Access Journals (Sweden)
Budi Santoso
2017-04-01
Introduction: Ectopic pregnancy is a pregnancy with extrauterine implantation. This situation is a gynecologic emergency that contributes to maternal mortality. Therefore, early recognition, based on identification of the causes of ectopic pregnancy risk factors, is needed. Methods: The design was descriptive observational. The samples were pregnant women who had ectopic pregnancy at the Maternity Room, Emergency Unit, Dr. Soetomo Hospital, Surabaya, from 1 July 2008 to 1 July 2010. The sampling technique was total sampling using medical records. Result: Patients with ectopic pregnancy were 99 individuals out of 2090 pregnant women who sought treatment in Dr. Soetomo Hospital. However, only 29 patients were accompanied with traceable risk factors. Discussion: Most ectopic pregnancies were in the age group of 26-30 years, comprising 32 patients (32.32%), followed by 25 patients in the age group of 31-35 years (25.25%), 18 patients in the age group 21-25 years (18.18%), 17 patients in the age group 36-40 years (17.17%), 4 patients in the age group 41 years and older (4.04%), and the least in the age group of 16-20 years with 3 patients (3.03%). A total of 12 patients with ectopic pregnancy (41.38%) had a history of abortion, and 6 patients (20.69%) each were in the groups of ectopic pregnancy patients who used family planning and those with a history of surgery. There were 2 patients (6.90%) in the group of ectopic pregnancy patients with both a history of surgery and a history of abortion. The incidence rate of ectopic pregnancy was 4.73%, mostly in the second gravidity (34.34%), whereas the nulliparous had the highest prevalence of 39.39%. Acquired risk factors were: history of operations 10.34%, family planning 20.69%, history of abortion 41.38%, history of abortion and operation 6.90%, and family planning with history of abortion 20.69%.
Investigating product development strategy in beverage industry using factor analysis
Directory of Open Access Journals (Sweden)
Naser Azad
2013-03-01
Selecting a product development strategy that is associated with the company's current service or product innovation, based on customers' needs and a changing environment, plays an important role in increasing demand, market share, sales and profits. Therefore, it is important to extract the effective variables associated with product development to improve the performance measurement of firms. This paper investigates important factors influencing product development strategies using factor analysis. The proposed model investigates 36 factors and, using factor analysis, we extract the six most influential factors: information sharing, intelligence information, exposure strategy, differentiation, research and development strategy and market survey. The first strategy, partnership, includes the sub-factors of product development partnership, partnership with foreign firms, customers' perception of competitors' products, customer involvement in product development, inter-agency coordination, a customer-oriented approach to innovation and transmission of product development change, where inter-agency coordination has been considered the most important factor. Internal strengths are the most influential factors impacting the second strategy, intelligence information. The third strategy, introducing strategy, includes four sub-criteria, and consumer buying behavior is the most influential factor. Differentiation is the next important factor, with five components, where knowledge and expertise in product innovation is the most important one. Research and development strategy has four sub-criteria, where reducing the product development cycle is the most influential factor. Finally, market survey strategy is the last important factor, with three sub-criteria, where finding new markets plays the most important role.
Housing price forecastability: A factor analysis
DEFF Research Database (Denmark)
Møller, Stig Vinther; Bork, Lasse
2017-01-01
We examine U.S. housing price forecastability using principal component analysis (PCA), partial least squares (PLS), and sparse PLS (SPLS). We incorporate information from a large panel of 128 economic time series and show that macroeconomic fundamentals have strong predictive power for future movements in housing prices. We find that (S)PLS models systematically dominate PCA models. (S)PLS models also generate significant out-of-sample predictive power over and above the predictive power contained by the price-rent ratio, autoregressive benchmarks, and regression models based on small datasets.
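The PCA step of such factor-based forecasting can be sketched as follows. The panel is simulated, and the target series and coefficient are hypothetical stand-ins, not housing-price data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch: extract common factors from a standardized panel of many
# series, then run a one-step-ahead predictive regression on them.
T, N, k = 240, 128, 3                      # months, series, factors kept
panel = rng.normal(size=(T, N))
panel += rng.normal(size=(T, 1)) @ rng.normal(size=(1, N))  # one common factor

Z = (panel - panel.mean(0)) / panel.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
factors = U[:, :k] * s[:k]                 # first k principal components

# Illustrative target whose next-period value loads on the first factor.
y_next = 0.5 * factors[:-1, 0] + rng.normal(0, 0.5, T - 1)
X = np.column_stack([np.ones(T - 1), factors[:-1]])
beta, *_ = np.linalg.lstsq(X, y_next, rcond=None)
```

PLS and sparse PLS would instead extract components by maximizing covariance with the target rather than variance of the panel, which is the source of the dominance the paper reports.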
Factoring handedness data: I. Item analysis.
Messinger, H B; Messinger, M I
1995-12-01
Recently in this journal Peters and Murphy challenged the validity of factor analyses done on bimodal handedness data, suggesting instead that right- and left-handers be studied separately. But bimodality may be avoidable if attention is paid to Oldfield's questionnaire format and instructions for the subjects. Two characteristics appear crucial: a two-column LEFT-RIGHT format for the body of the instrument and what we call Oldfield's Admonition: not to indicate strong preference for a handedness item, such as writing, unless "... the preference is so strong that you would never try to use the other hand unless absolutely forced to...". Attaining unimodality of an item distribution would seem to overcome the objections of Peters and Murphy. In a 1984 survey in Boston we used Oldfield's ten-item questionnaire exactly as published. This produced unimodal item distributions. With reflection of the five-point item scale and a logarithmic transformation, we achieved a degree of normalization for the items. Two surveys elsewhere, based on Oldfield's 20-item list but with changes in the questionnaire format and the instructions, yielded markedly different item distributions with peaks at each extreme and sometimes in the middle as well.
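A minimal sketch of the reflection-plus-logarithm normalization described above, assuming a 1-5 item scale with 5 as strong right-hand preference; the exact transform is not specified in the abstract beyond "reflection and a logarithmic transformation", so the `log1p` choice here is an assumption:

```python
import numpy as np

def normalize_item(score, scale_max=5):
    """Reflect the five-point preference scale, then apply log(1 + x)
    to pull in the long right-handed tail of the distribution."""
    reflected = scale_max + 1 - np.asarray(score, dtype=float)
    return np.log1p(reflected)

scores = np.array([5, 5, 4, 5, 3, 5])   # a strongly right-handed subject
transformed = normalize_item(scores)
```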
Constrained multi-degree reduction with respect to Jacobi norms
Ait-Haddou, Rachid; Barton, Michael
2015-01-01
We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L2-norm. This result generalizes many previous findings in the field of polynomial degree reduction. A solution method for the constrained multi-degree reduction with respect to the Jacobi L2-norm is presented.
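The flavor of the result can be illustrated numerically. The sketch below performs a generic weighted least-squares degree reduction on sampled polynomial values under a Jacobi-type weight; it is not the paper's closed-form factored-Hahn-weight solution, and the function and its parameters are illustrative assumptions:

```python
import numpy as np

def degree_reduce(coeffs, m, a=0.0, b=0.0, n_samples=400):
    """Approximate a polynomial (numpy coefficient order, highest first)
    by a degree-m polynomial in the weighted L2 sense on [0, 1], using
    the Jacobi-type weight (1 - t)^a * t^b."""
    t = np.linspace(1e-3, 1 - 1e-3, n_samples)
    w = (1 - t) ** a * t ** b                  # Jacobi-type weight
    y = np.polyval(coeffs, t)
    V = np.vander(t, m + 1)                    # degree-m monomial basis
    sw = np.sqrt(w)
    sol, *_ = np.linalg.lstsq(V * sw[:, None], y * sw, rcond=None)
    return sol

# Reducing a cubic that is numerically almost quadratic recovers the quadratic.
cubic = np.array([1e-9, 2.0, -3.0, 1.0])       # ~ 2t^2 - 3t + 1
quad = degree_reduce(cubic, m=2)
```

The paper's contribution is that, for Bézier coefficients, this minimization has a closed-form answer via a weighted fit with factored Hahn weights rather than a numerical solve.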
Constraining the mass of the Local Group
Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan
2017-03-01
The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter that cannot be directly observed. To this end, the posterior distributions of the mass of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the Λ cold dark matter model that is used to set up the simulations, and an LG model that encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted on to the Cosmicflows-2 database of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity of M31. It is found that (a) different v_tan choices affect the peak mass values by up to a factor of 2, and change mass ratios of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions compared with the random ones; (c) LG mass estimates are found to be smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range of (0.6-0.8) × 10^12 M⊙; whereas (e) M_M31 is found to vary between (1.0-2.0) × 10^12 M⊙, with a strong dependence on the v_tan values used.
Cosmicflows Constrained Local UniversE Simulations
Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo
2016-01-01
This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box, where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h⁻¹ Mpc scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s⁻¹, i.e. the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h⁻¹ Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h⁻¹ Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.
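The cell-to-cell scatter statistic used above can be sketched directly; the fields below are synthetic stand-ins for the simulated and observation-reconstructed velocity fields, with assumed grid size and velocity dispersions:

```python
import numpy as np

rng = np.random.default_rng(4)

def cell_to_cell_scatter(field_a, field_b):
    """One-sigma scatter of a cell-to-cell comparison between two
    gridded fields, the statistic used to quantify cosmic variance."""
    return np.std(field_a - field_b)

# Hypothetical smoothed velocity fields on a common grid: a constrained
# realization tracks the reference field, a random realization does not.
reference = rng.normal(0, 300, size=(32, 32, 32))          # km/s
constrained = reference + rng.normal(0, 100, size=reference.shape)
random_run = rng.normal(0, 300, size=reference.shape)

scatter_constrained = cell_to_cell_scatter(constrained, reference)
scatter_random = cell_to_cell_scatter(random_run, reference)
```

As in the paper, the constrained realization shows a scatter several times smaller than the random one against the same reference field.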
Exploring Technostress: Results of a Large Sample Factor Analysis
Directory of Open Access Journals (Sweden)
Steponas Jonušauskas
2016-06-01
With reference to the results of a large sample factor analysis, the article aims to propose a frame for examining technostress in a population. A survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work were implemented in the research. Thirteen factors combine 68 questions and explain 59.13 per cent of the answer dispersion. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents' answers, revealing technostress causes and consequences as well as technostress prevalence in the population in a statistically validated pattern. The key elements of technostress identified by the factor analysis can serve for the construction of technostress measurement scales in further research.
Economic Analysis of Factors Affecting Technical Efficiency of ...
African Journals Online (AJOL)
Economic Analysis of Factors Affecting Technical Efficiency of Smallholders ... socio-economic characteristics which influence technical efficiency in maize production. ... Ministry of Agriculture and livestock, records, books, reports and internet.
Text mining factor analysis (TFA) in green tea patent data
Rahmawati, Sela; Suprijadi, Jadi; Zulhanif
2017-03-01
Factor analysis has become one of the most widely used multivariate statistical procedures in applied research endeavors across a multitude of domains. There are two main types of analyses based on factor analysis: Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA). Both EFA and CFA aim to model relationships among a group of observed indicators and a latent variable, but they differ fundamentally in the a priori restrictions placed on the factor model. This method is applied to patent data in the green tea technology sector to determine the development of green tea technology in the world. Patent analysis is useful in identifying future technological trends in a specific field of technology. The patent database is obtained from the European Patent Organization (EPO). In this paper, the CFA model is applied to nominal data obtained from the presence-absence matrix; the CFA for nominal data is based on the tetrachoric correlation matrix. Meanwhile, the EFA model is applied to titles from the dominant technology sector, which are first pre-processed using text mining analysis.
Sustainable Manufacturing Practices in Malaysian Automotive Industry: Confirmatory Factor Analysis
Habidin, Nurul Fadly; Zubir, Anis Fadzlin Mohd; Fuz, Nursyazwani Mohd; Latip, Nor Azrin Md; Azman, Mohamed Nor Azhari
2015-01-01
Sustainable manufacturing practices (SMPs) have received enormous attention in current years as an effective solution to support the continuous growth and expansion of the automotive manufacturing industry. This reported study was conducted to examine confirmatory factor analysis for SMP such as manufacturing process, supply chain management, social responsibility, and environmental management based on automotive manufacturing industry. The results of confirmatory factor analysis show that fo...
An Analysis of Construction Accident Factors Based on Bayesian Network
Yunsheng Zhao; Jinyong Pei
2013-01-01
In this study, we have an analysis of construction accident factors based on bayesian network. Firstly, accidents cases are analyzed to build Fault Tree method, which is available to find all the factors causing the accidents, then qualitatively and quantitatively analyzes the factors with Bayesian network method, finally determines the safety management program to guide the safety operations. The results of this study show that bad condition of geological environment has the largest posterio...
The Recoverability of P-Technique Factor Analysis
Molenaar, Peter C. M.; Nesselroade, John R.
2009-01-01
It seems that just when we are about to lay P-technique factor analysis finally to rest as obsolete because of newer, more sophisticated multivariate time-series models using latent variables--dynamic factor models--it rears its head to inform us that an obituary may be premature. We present the results of some simulations demonstrating that even…
Likelihood-based Dynamic Factor Analysis for Measurement and Forecasting
Jungbacker, B.M.J.P.; Koopman, S.J.
2015-01-01
We present new results for the likelihood-based analysis of the dynamic factor model. The latent factors are modelled by linear dynamic stochastic processes. The idiosyncratic disturbance series are specified as autoregressive processes with mutually correlated innovations. The new results lead to
International Nuclear Information System (INIS)
Hirota, Kazuyoshi; Ikuno, Yoshiyasu; Nishikimi, Toshio
1986-01-01
Factor analysis was applied to multigated cardiac pool scintigraphy to evaluate its ability to detect left ventricular wall motion abnormalities in 35 patients with old myocardial infarction (MI), and in 12 control cases with normal left ventriculography. All cases were also evaluated by conventional Fourier analysis. In most cases with normal left ventriculography, the ventricular and atrial factors were extracted by factor analysis. In cases with MI, the third factor was obtained in the left ventricle corresponding to wall motion abnormality. Each case was scored according to the coincidence of findings of ventriculography and those of factor analysis or Fourier analysis. Scores were recorded for three items; the existence, location, and degree of asynergy. In cases of MI, the detection rate of asynergy was 94 % by factor analysis, 83 % by Fourier analysis, and the agreement in respect to location was 71 % and 66 %, respectively. Factor analysis had higher scores than Fourier analysis, but this was not significant. The interobserver error of factor analysis was less than that of Fourier analysis. Factor analysis can display locations and dynamic motion curves of asynergy, and it is regarded as a useful method for detecting and evaluating left ventricular wall motion abnormalities. (author)
Constraining walking and custodial technicolor
DEFF Research Database (Denmark)
Foadi, Roshan; Frandsen, Mads Toudal; Sannino, Francesco
2008-01-01
We show how to constrain the physical spectrum of walking technicolor models via precision measurements and modified Weinberg sum rules. We also study models possessing a custodial symmetry for the S parameter at the effective Lagrangian level-custodial technicolor-and argue that these models...
Analysis of related risk factors for pancreatic fistula after pancreaticoduodenectomy
Directory of Open Access Journals (Sweden)
Qi-Song Yu
2016-08-01
Objective: To explore the related risk factors for pancreatic fistula after pancreaticoduodenectomy in order to provide theoretical evidence for effectively preventing the occurrence of pancreatic fistula. Methods: A total of 100 patients who were admitted to our hospital from January 2012 to January 2015 and had undergone pancreaticoduodenectomy were included in the study. The related risk factors for developing pancreatic fistula were collected for single-factor and logistic multi-factor analysis. Results: Among the included patients, 16 had pancreatic fistula, and the total occurrence rate was 16% (16/100). The single-factor analysis showed that upper abdominal operation history, preoperative bilirubin, pancreatic texture, pancreatic duct diameter, intraoperative amount of bleeding, postoperative hemoglobin, and application of somatostatin after operation were risk factors for developing pancreatic fistula (P<0.05). The multi-factor analysis showed that upper abdominal operation history, soft pancreatic texture, small pancreatic duct diameter, and low postoperative hemoglobin were independent risk factors for developing pancreatic fistula (OR=4.162, 6.104, 5.613, 4.034; P<0.05). Conclusions: The occurrence of pancreatic fistula after pancreaticoduodenectomy is closely associated with upper abdominal operation history, soft pancreatic texture, small pancreatic duct diameter, and low postoperative hemoglobin; therefore, effective measures should be taken to reduce the occurrence of pancreatic fistula according to the patients' own conditions.
Environmental Performance in Countries Worldwide: Determinant Factors and Multivariate Analysis
Directory of Open Access Journals (Sweden)
Isabel Gallego-Alvarez
2014-11-01
Full Text Available The aim of this study is to analyze the environmental performance of countries and the variables that can influence it. At the same time, we performed a multivariate analysis using the HJ-biplot, an exploratory method that looks for hidden patterns in the data, obtained from the usual singular value decomposition (SVD) of the data matrix, to contextualize the countries grouped by geographical areas and the variables relating to the environmental indicators included in the environmental performance index. The sample used comprises 149 countries from different geographic areas. The findings obtained from the empirical analysis emphasize that socioeconomic factors, such as economic wealth and education, as well as institutional factors represented by the style of public administration, in particular control of corruption, are determinant factors of environmental performance in the countries analyzed. In contrast, no effect on environmental performance was found for factors relating to the internal characteristics of a country or political factors.
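The SVD step behind a biplot of this kind can be sketched in a few lines. This is a minimal sketch with a random placeholder matrix, not the 149-country dataset; the choice of row markers U·S and column markers V·S follows the HJ-biplot convention of representing both rows and columns at high quality:

```python
# Minimal biplot-style decomposition via SVD of a column-centered matrix.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 4))            # rows: countries, cols: indicators
Xc = X - X.mean(axis=0)                 # center each indicator
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                                    # retain two dimensions for plotting
row_markers = U[:, :k] * s[:k]           # country coordinates
col_markers = Vt[:k].T * s[:k]           # indicator coordinates
explained = s[:k] ** 2 / (s ** 2).sum()  # share of inertia per axis
print(row_markers.shape, col_markers.shape, explained.round(3))
```

Plotting the row and column markers on the same axes then reveals which indicator clusters drive which country groupings.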
Confirmatory factor analysis applied to the Force Concept Inventory
Eaton, Philip; Willoughby, Shannon D.
2018-06-01
In 1995, Huffman and Heller used exploratory factor analysis to draw into question the factors of the Force Concept Inventory (FCI). Since then, several papers have been published examining the factors of the FCI on larger sets of student responses, and understandable factors were extracted as a result. However, none of these proposed factor models have been verified not to be unique to their original sample through the use of independent sets of data. This paper seeks to confirm the factor models proposed by Scott et al. in 2012 and Hestenes et al. in 1992, as well as another expert model proposed within this study, through the use of confirmatory factor analysis (CFA) and a sample of 20 822 postinstruction student responses to the FCI. Upon application of CFA using the full sample, all three models were found to fit the data with acceptable global fit statistics. However, when CFA was performed using these models on smaller sample sizes, the models proposed by Scott et al. and Eaton and Willoughby were found to be far more stable than the model proposed by Hestenes et al. The goodness of fit of these models to the data suggests that the FCI can be scored on factors that are not unique to a single class. These scores could then be used to comment on how instruction methods affect the performance of students along a single factor, and more in-depth analyses of curriculum changes may be possible as a result.
Confirmatory factor analysis of the female sexual function index.
Opperman, Emily A; Benson, Lindsay E; Milhausen, Robin R
2013-01-01
The Female Sexual Functioning Index (Rosen et al., 2000) was designed to assess the key dimensions of female sexual functioning using six domains: desire, arousal, lubrication, orgasm, satisfaction, and pain. A full-scale score was proposed to represent women's overall sexual function. The fifth revision of the Diagnostic and Statistical Manual (DSM) is currently underway and includes a proposal to combine desire and arousal problems. The objective of this article was to evaluate and compare four models of the Female Sexual Functioning Index: (a) a single-factor model, (b) the six-factor model, (c) a second-order factor model, and (d) a five-factor model combining the desire and arousal subscales. Cross-sectional and observational data from 85 women were used to conduct a confirmatory factor analysis on the Female Sexual Functioning Index. Local and global goodness-of-fit measures, the chi-square test of differences, squared multiple correlations, and regression weights were used. The single-factor model fit was not acceptable. The original six-factor model was confirmed, and good model fit was found for the second-order and five-factor models. Delta chi-square tests of differences supported best fit for the six-factor model, validating usage of the six domains. However, when revisions are made to the DSM-5, the Female Sexual Functioning Index can adapt to reflect these changes and remain a valid assessment tool for women's sexual functioning, as the five-factor structure was also supported.
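The delta chi-square test used above to compare nested models can be sketched briefly. The fit statistics below are invented for illustration, not values from the article; the test compares the chi-square difference of two nested models against a chi-square distribution with the difference in degrees of freedom:

```python
# Hedged sketch of a chi-square difference test between nested CFA models
# (e.g., a more constrained five-factor model vs. a six-factor model).
from scipy.stats import chi2

chi2_five, df_five = 310.0, 265   # more constrained model (hypothetical)
chi2_six, df_six = 289.0, 260    # less constrained model (hypothetical)

delta_chi2 = chi2_five - chi2_six
delta_df = df_five - df_six
p_value = chi2.sf(delta_chi2, delta_df)
# A significant p favors the less constrained model
print(f"delta chi2 = {delta_chi2:.1f}, df = {delta_df}, p = {p_value:.4f}")
```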
Trends in PDE constrained optimization
Benner, Peter; Engell, Sebastian; Griewank, Andreas; Harbrecht, Helmut; Hinze, Michael; Rannacher, Rolf; Ulbrich, Stefan
2014-01-01
Optimization problems subject to constraints governed by partial differential equations (PDEs) are among the most challenging problems in the context of industrial, economic, and medical applications. Almost the entire range of problems in this field of research was studied and further explored as part of the Deutsche Forschungsgemeinschaft (DFG) priority program 1253 on “Optimization with Partial Differential Equations” from 2006 to 2013. The investigations were motivated by the fascinating potential applications and challenging mathematical problems that arise in the field of PDE constrained optimization. New analytic and algorithmic paradigms have been developed, implemented and validated in the context of real-world applications. In this special volume, contributions from more than fifteen German universities combine the results of this interdisciplinary program with a focus on applied mathematics. The book is divided into five sections on “Constrained Optimization, Identification and Control”...
Clinicopathological Analysis of Factors Related to Colorectal Tumor Perforation
Medina-Arana, Vicente; Martínez-Riera, Antonio; Delgado-Plasencia, Luciano; Rodríguez-González, Diana; Bravo-Gutiérrez, Alberto; Álvarez-Argüelles, Hugo; Alarcó-Hernández, Antonio; Salido-Ruiz, Eduardo; Fernández-Peralta, Antonia M.; González-Aguilera, Juan J.
2015-01-01
Abstract Colorectal tumor perforation is a life-threatening complication of this disease. However, little is known about the anatomopathological factors or pathophysiologic mechanisms involved. Pathological and immunohistochemical analysis of factors related with tumoral neo-angiogenesis, which could influence tumor perforation are assessed in this study. A retrospective study of patients with perforated colon tumors (Group P) and T4a nonperforated (controls) was conducted between 2001 and 20...
Analysis of Key Factors Driving Japan’s Military Normalization
2017-09-01
no change to our policy of not giving in to terrorism.”40 Though the prime minister was democratically supported, Koizumi’s leadership style took... analysis of the key driving factors of Japan’s normalization. The areas of prime ministerial leadership, regional security threats, alliance issues, and...
Capital Cost Optimization for Prefabrication: A Factor Analysis Evaluation Model
Directory of Open Access Journals (Sweden)
Hong Xue
2018-01-01
Full Text Available High capital cost is a significant hindrance to the promotion of prefabrication. In order to optimize cost management and reduce capital cost, this study aims to explore the latent factors and a factor analysis evaluation model. Semi-structured interviews were conducted to explore potential variables, and then a questionnaire survey was employed to collect professionals’ views on their effects. After data collection, exploratory factor analysis was adopted to explore the latent factors. Seven latent factors were identified: “Management Index”, “Construction Dissipation Index”, “Productivity Index”, “Design Efficiency Index”, “Transport Dissipation Index”, “Material Increment Index”, and “Depreciation Amortization Index”. With these latent factors, a factor analysis evaluation model (FAEM), divided into a factor analysis model (FAM) and a comprehensive evaluation model (CEM), was established. The FAM was used to explore the effect of observed variables on the high capital cost of prefabrication, while the CEM was used to evaluate the comprehensive cost management level of prefabrication projects. Case studies were conducted to verify the models. The results revealed that collaborative management had a positive effect on the capital cost of prefabrication. Material increment costs and labor costs had significant impacts on production cost. This study demonstrated the potential of on-site management and standardization design to reduce capital cost. Hence, collaborative management is necessary for cost management of prefabrication. Innovation and detailed design are needed to improve cost performance. New forms of precast component factories can be explored to reduce transportation cost. Meanwhile, targeted strategies can be adopted for different prefabrication projects. The findings optimized the capital cost and improved the cost performance through providing an evaluation and optimization model, which helps managers to
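The latent-factor extraction step described above can be sketched with an off-the-shelf exploratory factor analysis. The survey items, factor count, and data here are synthetic placeholders; only the extract-and-inspect-loadings workflow mirrors the abstract:

```python
# Illustrative exploratory factor analysis on synthetic questionnaire data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_respondents, n_items, n_factors = 200, 12, 3
# Synthetic responses driven by a few latent factors plus noise
latent = rng.normal(size=(n_respondents, n_factors))
loadings = rng.normal(size=(n_factors, n_items))
X = latent @ loadings + 0.5 * rng.normal(size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(X)
# Items loading strongly on the same factor define and name that factor
print(np.round(fa.components_, 2))   # factor-by-item loading matrix
```

In practice, the number of factors to retain is chosen by eigenvalue or scree criteria before fitting, and each retained factor is named after its high-loading items, as in the seven indices above.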
Factor analysis of the contextual fine motor questionnaire in children.
Lin, Chin-Kai; Meng, Ling-Fu; Yu, Ya-Wen; Chen, Che-Kuo; Li, Kuan-Hua
2014-02-01
Most studies treat fine motor skills as one subscale in a developmental test; hence, further factor analysis of fine motor skills has not been conducted. In fact, fine motor has been treated as a multi-dimensional domain from both clinical and theoretical perspectives, and therefore knowing its factors would be valuable. The aim of this study is to analyze the internal consistency and factor validity of the Contextual Fine Motor Questionnaire (CFMQ). Based on ecological observation and the literature, the CFMQ was developed and includes 5 subscales: Pen Control, Tool Use During Handicraft Activities, the Use of Dining Utensils, Connecting and Separating during Dressing and Undressing, and Opening Containers. The main purpose of this study is to establish the factorial validity of the CFMQ through this factor analysis study. Among 1208 questionnaires, 904 were successfully completed. Data from the children's CFMQ submitted by primary care providers were analyzed, including 485 females (53.6%) and 419 males (46.4%) from grades 1 to 5, ranging in age from 82 to 167 months (M=113.9, SD=16.3). Cronbach's alpha was used to measure internal consistency, and exploratory factor analysis was applied to test the five-factor structure of the CFMQ. Results showed that the Cronbach's alpha coefficients of the 5 CFMQ subscales ranged from .77 to .92, and all item-total correlations with corresponding subscales were larger than .4 except for one item. The factor loading of almost all items classified to their factor was larger than .5, except for 3 items. There were five factors, explaining a total of 62.59% of the variance of the CFMQ. In conclusion, the remaining 24 items in the 5 subscales of the CFMQ had appropriate internal consistency, test-retest reliability, and construct validity. Copyright © 2013 Elsevier Ltd. All rights reserved.
Using Factor Analysis to Identify Topic Preferences Within MBA Courses
Directory of Open Access Journals (Sweden)
Earl Chrysler
2003-02-01
Full Text Available This study demonstrates the role of a principal components factor analysis in conducting a gap analysis as to the desired characteristics of business alumni. Typically, gap analyses merely compare the emphases that should be given to areas of inquiry with perceptions of actual emphases. As a result, the focus is upon depth of coverage. A neglected area in need of investigation is the breadth of topic dimensions and their differences between the normative (should offer) and the descriptive (actually offer). The implications of factor structures, as well as traditional gap analyses, are developed and discussed in the context of outcomes assessment.
Analysis of IFR driver fuel hot channel factors
International Nuclear Information System (INIS)
Ku, J.Y.; Chang, L.K.; Mohr, D.
1994-01-01
Thermal-hydraulic uncertainty factors for Integral Fast Reactor (IFR) driver fuels have been determined based primarily on the database obtained from the predecessor fuels used in the IFR prototype, Experimental Breeder Reactor II. The uncertainty factors were applied to the hot channel factors (HCFs) analyses to obtain separate overall HCFs for fuel and cladding for steady-state analyses. A "semistatistical horizontal method" was used in the HCFs analyses. The uncertainty factor of the fuel thermal conductivity dominates the effects considered in the HCFs analysis; the uncertainty in fuel thermal conductivity will be reduced as more data are obtained to expand the currently limited database for the IFR ternary metal fuel (U-20Pu-10Zr). A set of uncertainty factors to be used for transient analyses has also been derived.
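One common convention for combining hot channel subfactors, which a semistatistical method of this kind builds on, multiplies systematic (direct) subfactors and combines independent statistical subfactors by root-sum-square. The subfactor values below are invented for illustration, not the IFR database values, and the exact combination rule of the paper may differ:

```python
# Hedged sketch of a semistatistical hot-channel-factor combination:
# direct subfactors multiply; statistical subfactors combine root-sum-square.
import math

direct_factors = [1.02, 1.05]              # e.g., systematic modeling biases
statistical_factors = [1.10, 1.06, 1.04]   # e.g., 2-sigma uncertainties

direct = math.prod(direct_factors)
statistical = 1.0 + math.sqrt(sum((f - 1.0) ** 2 for f in statistical_factors))
overall_hcf = direct * statistical
print(round(overall_hcf, 4))
```

The root-sum-square term is why one dominant uncertainty, such as fuel thermal conductivity here, controls the overall HCF.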
Interactive analysis of human error factors in NPP operation events
International Nuclear Information System (INIS)
Zhang Li; Zou Yanhua; Huang Weigang
2010-01-01
Interactions of human error factors in NPP operation events are introduced. 645 WANO operation event reports from 1999 to 2008 were analyzed, among which 432 were found to be related to human errors. After classifying these errors by root causes or causal factors and applying SPSS for correlation analysis, we concluded: (1) Personnel work practices are restricted by many factors; forming good personnel work practices is systematic work which needs support in many aspects. (2) Verbal communications, personnel work practices, man-machine interface, and written procedures and documents play great roles. They are four interacting factors which often come in a bundle; if improvements need to be made to one of them, synchronous measures are also necessary for the others. (3) Management direction and decision process, which are related to management, have a significant interaction with personnel factors. (authors)
Confirmatory Factor Analysis of the Procrastination Assessment Scale for Students
Directory of Open Access Journals (Sweden)
Ronald D. Yockey
2015-10-01
Full Text Available The relative fit of one- and two-factor models of the Procrastination Assessment Scale for Students (PASS) was investigated using confirmatory factor analysis on an ethnically diverse sample of 345 participants. The results indicated that although the two-factor model provided a better fit to the data than the one-factor model, neither model provided optimal fit. However, a two-factor model which accounted for common item theme pairs used by Solomon and Rothblum in the creation of the scale provided good fit to the data. In addition, a significant difference by ethnicity was found on the fear of failure subscale of the PASS, with Whites having significantly lower scores than Asian Americans or Latino/as. Implications of the results are discussed and recommendations made for future work with the scale.
Two Expectation-Maximization Algorithms for Boolean Factor Analysis
Czech Academy of Sciences Publication Activity Database
Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.
2014-01-01
Roč. 130, 23 April (2014), s. 83-97 ISSN 0925-2312 R&D Projects: GA ČR GAP202/10/0262 Grant - others:GA MŠk(CZ) ED1.1.00/02.0070; GA MŠk(CZ) EE.2.3.20.0073 Program:ED Institutional research plan: CEZ:AV0Z10300504 Keywords : Boolean Factor analysis * Binary Matrix factorization * Neural networks * Binary data model * Dimension reduction * Bars problem Subject RIV: IN - Informatics, Computer Science Impact factor: 2.083, year: 2014
Workplace Innovation: Exploratory and Confirmatory Factor Analysis for Construct Validation
Directory of Open Access Journals (Sweden)
Wipulanusat Warit
2017-06-01
Full Text Available Workplace innovation enables the development and improvement of products, processes, and services, leading simultaneously to improvement in organisational performance. This study has the purpose of examining the factor structure of workplace innovation. Survey data, extracted from the 2014 APS employee census, comprising 3,125 engineering professionals in the Commonwealth of Australia's departments, were analysed using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). EFA returned a two-factor structure explaining 69.1% of the variance of the construct. CFA revealed that the two-factor structure was a validated model (GFI = 0.98, AGFI = 0.95, RMSEA = 0.08, RMR = 0.02, IFI = 0.98, NFI = 0.98, CFI = 0.98, and TLI = 0.96). Both factors showed good reliability of the scale (individual creativity: α = 0.83, CR = 0.86, and AVE = 0.62; team innovation: α = 0.82, CR = 0.88, and AVE = 0.61). These results confirm that the two factors extracted for characterising workplace innovation are individual creativity and team innovation.
Ranking insurance firms using AHP and Factor Analysis
Directory of Open Access Journals (Sweden)
Mohammad Khodaei Valahzaghard
2013-03-01
Full Text Available The insurance industry constitutes a significant part of the economy, and it is important to learn more about the capabilities of the different firms active in this industry. In this paper, we present an empirical study to rank insurance firms using the analytical hierarchy process as well as factor analysis. The study considers four criteria: capital adequacy, quality of earnings, quality of cash flow, and quality of assets. The results of the implementation of factor analysis (FA) have been verified using Kaiser-Meyer-Olkin (KMO = 0.573) and Bartlett's Chi-Square (443.267, P-value = 0.000) tests. According to the results of FA, the first factor, capital adequacy, represents 21.557% of total variance, and the second factor, quality of income, represents 20.958% of total variance. In addition, the third factor, quality of cash flow, represents 19.417% of total variance, and the last factor, quality of assets, represents 18.641% of total variance. The study has also used the analytical hierarchy process (AHP) to rank insurance firms. The results of our survey indicate that capital adequacy (0.559) is the most important factor, followed by quality of income (0.235), quality of cash flow (0.144), and quality of assets (0.061). The results of AHP are consistent with the results of FA, which somewhat validates the overall study.
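Criterion weights of the kind reported above (0.559, 0.235, 0.144, 0.061) are typically obtained in AHP from the principal eigenvector of a pairwise comparison matrix. The comparison values below are invented placeholders; only the four criterion names follow the abstract:

```python
# Hedged sketch of AHP criterion weighting via the principal eigenvector.
import numpy as np

# A[i, j] = how much more important criterion i is than criterion j
A = np.array([
    [1.0, 3.0, 4.0, 6.0],
    [1/3, 1.0, 2.0, 4.0],
    [1/4, 1/2, 1.0, 3.0],
    [1/6, 1/4, 1/3, 1.0],
])
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()               # normalize to sum to 1
print(dict(zip(["capital", "earnings", "cash_flow", "assets"],
               weights.round(3))))
```

A consistency ratio check on A normally accompanies this step before the weights are used for ranking.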
A factor analysis to find critical success factors in retail brand
Directory of Open Access Journals (Sweden)
Naser Azad
2013-03-01
Full Text Available The present exploratory study aims to find the critical components of a retail brand among some retail stores. The study seeks to build a brand name at the retail level and looks to find the important factors affecting it. Customer behavior is largely influenced when the first retail customer experience is formed. These factors have direct impacts on customer experience and satisfaction in the retail industry. The proposed study performs an empirical investigation on two well-known retail stores located in the city of Tehran, Iran. Using a sample of 265 regular customers, the study uses factor analysis and extracts four main factors (related brand, product benefits, customer welfare strategy, and corporate profits) from the existing 31 factors in the literature.
Worry About Caregiving Performance: A Confirmatory Factor Analysis
Directory of Open Access Journals (Sweden)
Ruijie Li
2018-03-01
Full Text Available Recent studies on the Zarit Burden Interview (ZBI) support the existence of a unique factor, worry about caregiving performance (WaP), beyond role and personal strain. Our current study aims to confirm the existence of WaP within the multidimensionality of the ZBI and to determine whether predictors of WaP differ from those of role and personal strain. We performed confirmatory factor analysis (CFA) on 466 caregiver-patient dyads to compare one-factor (total score), two-factor (role/personal strain), three-factor (role/personal strain and WaP), and four-factor models (role strain split into two factors). We conducted linear regression analyses to explore the relationships between different ZBI factors and socio-demographic and disease characteristics, and investigated the stage-dependent differences between WaP and role and personal strain by dyadic relationship. The four-factor structure that incorporated WaP and split role strain into two factors yielded the best fit. Linear regression analyses reveal that the variables that significantly predict WaP (adult child caregiver and Neuropsychiatric Inventory Questionnaire (NPI-Q) severity) differ from those that predict role/personal strain (adult child caregiver, instrumental activities of daily living, and NPI-Q distress). Unlike the other factors, WaP was significantly endorsed in early cognitive impairment. Among spouses, WaP remained low across Clinical Dementia Rating (CDR) stages until a sharp rise at CDR 3; adult child and sibling caregivers experience a gradual rise throughout the stages. Our results affirm the existence of WaP as a unique factor. Future research should explore the potential of WaP as a possible intervention target to improve self-efficacy in the milder stages of burden.
Cancer risk factors in Korean news media: a content analysis.
Kye, Su Yeon; Kwon, Jeong Hyun; Kim, Yong-Chan; Shim, Minsun; Kim, Jee Hyun; Cho, Hyunsoon; Jung, Kyu Won; Park, Keeho
2015-01-01
Little is known about the news coverage of cancer risk factors in Korea. This study aimed to examine how the news media encompasses a wide array of content regarding cancer risk factors and related cancer sites, and investigate whether news coverage of cancer risk factors is congruent with the actual prevalence of the disease. A content analysis was conducted on 1,138 news stories covered during a 5-year period between 2008 and 2012. The news stories were selected from nationally representative media in Korea. Information was collected about cancer risk factors and cancer sites. Of various cancer risk factors, occupational and environmental exposures appeared most frequently in the news. Breast cancer was mentioned the most in relation to cancer sites. Breast, cervical, prostate, and skin cancer were overrepresented in the media in comparison to incidence and mortality cases, whereas lung, thyroid, liver, and stomach cancer were underrepresented. To our knowledge, this research is the first investigation dealing with news coverage about cancer risk factors in Korea. The study findings show occupational and environmental exposures are emphasized more than personal lifestyle factors; further, more prevalent cancers in developed countries have greater media coverage, not reflecting the realities of the disease. The findings may help health journalists and other health storytellers to develop effective ways to communicate cancer risk factors.
Landslides geotechnical analysis. Qualitative assessment by valuation factors
Cuanalo Oscar, Sc D.; Oliva Aldo, Sc D.; Polanco Gabriel, M. E.
2012-04-01
In general, a landslide can cause a disaster when a number of factors combine: an extreme event related to a geological phenomenon, vulnerable elements exposed in a specific geographic area, and a probability of loss and damage, evaluated in terms of lives and economic assets, over a certain period of time. This paper presents the qualitative evaluation of slope stability through Valuation Factors, obtained from the characterization of the conditioning and triggering factors that influence instability: for the former, morphology and topography, geology, soil mechanics, hydrogeology, and vegetation; for the latter, rain, earthquakes, erosion and scour, and human activity; and ultimately the factors on which the stability analysis depends, and their ranges of influence, which greatly facilitate the selection of the construction processes best suited to improve the behavior of a slope or hillside. The Valuation Factors are a set of parameters for assessing the influence of the conditioning and triggering factors on the stability of slopes and hillsides. The characteristics of each factor must be properly categorized to capture their effect on behavior; one way to do this is by assigning a weighted value range indicating the effect on the stability of a slope. It is proposed to use Valuation Factors with weighted values between 0 and 1 (selected arbitrarily, but with common sense and logic): the first corresponds to no or minimal effect on stability (no effect or very little influence) and the second to the greatest impact on it (a significant influence). Intermediate effects are evaluated with intermediate values.
Salivary SPECT and factor analysis in Sjoegren's syndrome
International Nuclear Information System (INIS)
Nakamura, T.; Oshiumi, Y.; Yonetsu, K.; Muranaka, T.; Sakai, K.; Kanda, S.; National Fukuoka Central Hospital
1991-01-01
Salivary SPECT and factor analysis in Sjoegren's syndrome were performed in 17 patients and 6 volunteers as controls. The ability of SPECT to detect small differences in the level of uptake can be used to separate glands from background even when uptake is reduced, as in patients with Sjoegren's syndrome. In the control and probable Sjoegren's syndrome groups, the uptake ratio of the submandibular gland to parotid gland on salivary SPECT (S/P ratio) was less than 1.0. However, in the definite Sjoegren's syndrome group, the ratio was more than 1.0. Moreover, the ratio in all patients with sialectasia, which is characteristic of Sjoegren's syndrome, was more than 1.0. Salivary factor analysis of normal parotid glands showed slowly increasing patterns of uptake, and normal submandibular glands had rapidly increasing patterns of uptake. However, in the definite Sjoegren's syndrome group, the factor analysis patterns were altered, with slowly increasing patterns dominating in both the parotid and submandibular glands. These results suggest that the S/P ratio in salivary SPECT and salivary factor analysis provide additional radiologic criteria for diagnosing Sjoegren's syndrome. (orig.)
Genomewide analysis of TCP transcription factor gene family in ...
Indian Academy of Sciences (India)
Journal of Genetics, Volume 93, Issue 3. Teosinte branched1/cycloidea/proliferating cell factor1 (TCP) proteins are a large family of transcriptional regulators in angiosperms. They are ... To the best of our knowledge, this is the first study of a genomewide analysis of the apple TCP gene family.
Liquidity indicator for the Croatian economy – Factor analysis approach
Directory of Open Access Journals (Sweden)
Mirjana Čižmešija
2014-12-01
Full Text Available Croatian business surveys (BS) are conducted in the manufacturing industry, retail trade, and construction sector. In all of these sectors, managers' assessments of liquidity are measured. The aim of the paper was to form a new composite liquidity indicator by including business survey liquidity measures from all three economic sectors of the Croatian economy mentioned above. In calculating the leading indicator, a factor analysis approach was used; this kind of indicator does not exist in Croatia or in any other European economy. Furthermore, the issue of Croatian companies' illiquidity is highly neglected in the literature. The empirical analysis consists of two parts. In the first part, the new liquidity indicator was formed using factor analysis: one factor, representing the new liquidity indicator (LI), was extracted out of the three liquidity variables in the three economic sectors. In the second part, econometric models were applied in order to investigate the forecasting properties of the new business survey liquidity indicator when predicting the direction of changes in Croatian industrial production. The quarterly data used in the research covered the period from January 2000 to April 2013. Based on the econometric analysis, it can be concluded that the LI is a leading indicator of Croatia's industrial production, with better forecasting properties than the standard liquidity indicators (formed in the manufacturing industry).
Modular Open-Source Software for Item Factor Analysis
Pritikin, Joshua N.; Hunter, Micheal D.; Boker, Steven M.
2015-01-01
This article introduces an item factor analysis (IFA) module for "OpenMx," a free, open-source, and modular statistical modeling package that runs within the R programming environment on GNU/Linux, Mac OS X, and Microsoft Windows. The IFA module offers a novel model specification language that is well suited to programmatic generation…
A Confirmatory Factor Analysis of Reilly's Role Overload Scale
Thiagarajan, Palaniappan; Chakrabarty, Subhra; Taylor, Ronald D.
2006-01-01
In 1982, Reilly developed a 13-item scale to measure role overload. This scale has been widely used, but most studies did not assess the unidimensionality of the scale. Given the significance of unidimensionality in scale development, the current study reports a confirmatory factor analysis of the 13-item scale in two samples. Based on the…
48 CFR 1615.404-70 - Profit analysis factors.
2010-10-01
... CONTRACTING BY NEGOTIATION Contract Pricing 1615.404-70 Profit analysis factors. (a) OPM contracting officers... managerial expertise and effort. Evidence of effective contract performance will receive a plus weight, and... indifference to cost control will generally result in a negative weight. (2) Contract cost risk. In assessing...
A methodology to incorporate organizational factors into human reliability analysis
International Nuclear Information System (INIS)
Li Pengcheng; Chen Guohua; Zhang Li; Xiao Dongsheng
2010-01-01
A new holistic methodology for Human Reliability Analysis (HRA) is proposed to model the effects of organizational factors on human reliability. First, a conceptual framework is built, which is used to analyze the causal relationships between organizational factors and human reliability. Then, the inference model for HRA is built by combining the conceptual framework with Bayesian networks, which is used to execute the causal inference and diagnostic inference of human reliability. Finally, a case example is presented to demonstrate the application of the proposed methodology. The results show that the proposed methodology of combining the conceptual model with Bayesian networks can not only easily model the causal relationships between organizational factors and human reliability but also, in a given context, quantitatively measure human operational reliability and identify the most likely root causes of human error and their prioritization. (authors)
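The two inference directions mentioned above can be illustrated with the smallest possible Bayesian network: one organizational node influencing one human-error node. All probabilities below are invented for illustration, not values from the paper's case example:

```python
# Minimal two-node Bayesian-network sketch of causal vs. diagnostic inference:
# an organizational factor O influences the probability of a human error E.
p_o = 0.2                 # P(poor organizational condition)
p_e_given_o = 0.30        # P(error | poor condition)
p_e_given_not_o = 0.05    # P(error | good condition)

# Causal (predictive) inference: marginal probability of a human error
p_e = p_e_given_o * p_o + p_e_given_not_o * (1 - p_o)

# Diagnostic inference: given an error occurred, was the organization poor?
p_o_given_e = p_e_given_o * p_o / p_e
print(round(p_e, 3), round(p_o_given_e, 3))
```

With many organizational nodes, the same Bayes-rule update, run over the full network, is what ranks the most likely root causes of an observed error.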
Identifying influential factors of business process performance using dependency analysis
Wetzstein, Branimir; Leitner, Philipp; Rosenberg, Florian; Dustdar, Schahram; Leymann, Frank
2011-02-01
We present a comprehensive framework for identifying influential factors of business process performance. In particular, our approach combines monitoring of process events and Quality of Service (QoS) measurements with dependency analysis to effectively identify influential factors. The framework uses data mining techniques to construct tree structures to represent dependencies of a key performance indicator (KPI) on process and QoS metrics. These dependency trees allow business analysts to determine how process KPIs depend on lower-level process metrics and QoS characteristics of the IT infrastructure. The structure of the dependencies enables a drill-down analysis of single factors of influence to gain a deeper knowledge why certain KPI targets are not met.
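The dependency-tree idea above can be sketched with a standard decision tree: fit the tree to explain a KPI from lower-level process and QoS metrics, then read the splits as factors of influence. The metric names and data below are invented placeholders, not the framework's actual implementation:

```python
# Sketch: a decision tree exposing which process/QoS metrics drive a KPI.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(3)
n = 500
service_latency = rng.uniform(10, 200, n)   # ms, QoS metric
queue_length = rng.integers(0, 20, n)       # process metric
retries = rng.integers(0, 5, n)             # process metric
# Synthetic KPI dominated by latency, with a queue-congestion jump
kpi = 2.0 * service_latency + 30.0 * (queue_length > 10) + rng.normal(0, 5, n)

X = np.column_stack([service_latency, queue_length, retries])
tree = DecisionTreeRegressor(max_depth=3).fit(X, kpi)
print(export_text(tree, feature_names=["latency", "queue", "retries"]))
```

Drilling down a branch of the printed tree corresponds to the drill-down analysis of single influence factors described above.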
Nested Sampling with Constrained Hamiltonian Monte Carlo
Betancourt, M. J.
2010-01-01
Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.
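The reflective dynamics at the heart of the method can be sketched in a toy setting: a flat prior with a hard spherical likelihood boundary (both illustrative assumptions, not the paper's implementation). Whenever a step would violate the constraint, the momentum is reflected about the boundary normal.

```python
# Minimal sketch of constrained HMC for nested sampling: free motion
# under a flat prior, momentum reflected off the hard boundary |x| < r
# (a Gaussian likelihood contour). Step sizes are arbitrary choices.
import math
import random

def constrained_hmc_step(x, r, n_steps=20, eps=0.1):
    p = [random.gauss(0, 1), random.gauss(0, 1)]
    x = list(x)
    for _ in range(n_steps):
        x_new = [xi + eps * pi for xi, pi in zip(x, p)]
        if math.hypot(*x_new) >= r:
            # Reflect momentum about the outward normal n = x/|x|.
            nrm = math.hypot(*x)
            n = [xi / nrm for xi in x]
            dot = sum(pi * ni for pi, ni in zip(p, n))
            p = [pi - 2 * dot * ni for pi, ni in zip(p, n)]
        else:
            x = x_new
    return x

random.seed(0)
x = [0.1, 0.0]
samples = []
for _ in range(500):
    x = constrained_hmc_step(x, r=1.0)
    samples.append(x)
inside = all(math.hypot(*s) < 1.0 for s in samples)
print(inside)  # every sample respects the hard likelihood constraint
```

In a real nested-sampling run the boundary is the current likelihood level L*, which shrinks as points are replaced.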
Zubir, S. N. A.; Thiruchelvam, S.; Mustapha, K. N. M.; Che Muda, Z.; Ghazali, A.; Hakimie, H.
2017-12-01
For the past few years, natural disaster has been the subject of debate in disaster management especially in flood disaster. Each year, natural disaster results in significant loss of life, destruction of homes and public infrastructure, and economic hardship. Hence, an effective and efficient flood disaster management would assure non-futile efforts for life saving. The aim of this article is to examine the relationship between approach, decision maker, influence factor, result, and ethic to decision making for flood disaster management in Malaysia. The key elements of decision making in the disaster management were studied based on the literature. Questionnaire surveys were administered among lead agencies at East Coast of Malaysia in the state of Kelantan and Pahang. A total of 307 valid responses had been obtained for further analysis. Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) were carried out to analyse the measurement model involved in the study. The CFA for second-order reflective and first-order reflective measurement model indicates that approach, decision maker, influence factor, result, and ethic have a significant and direct effect on decision making during disaster. The results from this study showed that decision- making during disaster is an important element for disaster management to necessitate a successful collaborative decision making. The measurement model is accepted to proceed with further analysis known as Structural Equation Modeling (SEM) and can be assessed for the future research.
A human factor analysis of a radiotherapy accident
International Nuclear Information System (INIS)
Thellier, S.
2009-01-01
Since September 2005, I.R.S.N. studies activities of radiotherapy treatment from the angle of the human and organizational factors to improve the reliability of treatment in radiotherapy. Experienced in nuclear industry incidents analysis, I.R.S.N. analysed and diffused in March 2008, for the first time in France, the detailed study of a radiotherapy accident from the angle of the human and organizational factors. The method used for analysis is based on interviews and documents kept by the hospital. This analysis aimed at identifying the causes of the difference recorded between the dose prescribed by the radiotherapist and the dose effectively received by the patient. Neither verbal nor written communication (intra-service meetings and protocols of treatment) allowed information to be transmitted correctly in order to permit radiographers to adjust the irradiation zones correctly. This analysis highlighted the fact that during the preparation and the carrying out of the treatment, various factors led planned controls to not be performed. Finally, this analysis highlighted the fact that unsolved areas persist in the report over this accident. This is due to a lack of traceability of a certain number of key actions. The article concluded that there must be improvement in three areas: cooperation between the practitioners, control of the actions and traceability of the actions. (author)
Constrained minimization in C++ environment
International Nuclear Information System (INIS)
Dymov, S.N.; Kurbatov, V.S.; Silin, I.N.; Yashchenko, S.V.
1998-01-01
Based on ideas proposed by one of the authors (I.N. Silin), suitable software was developed for constrained data fitting. Constraints may be of arbitrary type: equalities and inequalities. The simplest possible approach was used: the widely known program FUMILI was reimplemented in C++. Constraints in the form of inequalities φ(θᵢ) ≥ a were taken into account by converting them into equalities φ(θᵢ) = t together with simple inequalities of the type t ≥ a. The equalities were taken into account by means of quadratic penalty functions. The software was tested on model data of the ANKE setup (COSY accelerator, Forschungszentrum Juelich, Germany)
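The slack-variable construction can be illustrated on a toy 1-D problem. Eliminating the slack t analytically turns the quadratic penalty on the equality φ(θ) = t into a penalty on the violation max(0, a − φ(θ)). The objective, bound and solver below are illustrative assumptions, not FUMILI's implementation.

```python
# Minimal sketch of the quadratic-penalty idea: minimize (theta - 3)^2
# subject to phi(theta) = theta >= 4, with an increasing penalty weight mu.
def objective(theta, mu, a=4.0):
    violation = max(0.0, a - theta)       # residual of the slack equality
    return (theta - 3.0) ** 2 + mu * violation ** 2

def minimize_1d(f, lo=-10.0, hi=10.0, iters=60):
    """Golden-section search; enough for this smooth, unimodal 1-D toy."""
    ratio = (5 ** 0.5 - 1) / 2
    a_, b_ = lo, hi
    for _ in range(iters):
        c = b_ - ratio * (b_ - a_)
        d = a_ + ratio * (b_ - a_)
        if f(c) < f(d):
            b_ = d
        else:
            a_ = c
    return (a_ + b_) / 2

# Increasing mu drives the minimizer toward the constrained optimum theta* = 4.
for mu in (1.0, 10.0, 1000.0):
    theta = minimize_1d(lambda t: objective(t, mu))
    print(mu, round(theta, 3))
```

The penalized minimizer is (3 + 4μ)/(1 + μ), so it approaches 4 only as μ grows; this is the usual trade-off of pure penalty methods.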
Coherent states in constrained systems
International Nuclear Information System (INIS)
Nakamura, M.; Kojima, K.
2001-01-01
When quantizing constrained systems, quantum corrections often arise from the non-commutativity in the re-ordering of constraint operators in products of operators. For bosonic second-class constraints, moreover, the quantum corrections caused by the uncertainty principle should also be taken into account. In order to treat these corrections simultaneously, an alternative projection technique for operators is proposed by introducing the available minimal-uncertainty states of the constraint operators. Using this projection technique together with the projection operator method (POM), these two kinds of quantum corrections are investigated
Lewis, Josiah B.; Floss, Christine; Gyngard, Frank
2018-01-01
Meteoritic nanodiamonds carry noble gases with anomalies in their stable isotopes that have drawn attention to their potentially presolar origin. Measurements of 12C/13C isotope ratios of presolar nanodiamonds are essential to understanding their origins, but bulk studies do not show notable deviations from the solar system 12C/13C ratio. We implemented a technique using secondary ion mass spectrometry with maximized spatial resolution to measure carbon isotopes in the smallest clusters of nanodiamonds possible. We measured C and Si from clusters containing as few as 1000 nanodiamonds, the smallest clusters of nanodiamonds measured to date by traditional mass spectrometry. This allowed us to investigate many possible complex compositions of the nanodiamonds, both through direct methods and statistical analysis of the distributions of observed isotopic ratios. Analysis of the breadth of distributions of carbon isotopic ratios for a number of ∼1000-nanodiamond aggregates indicates that the 12C/13C ratio may be drawn from multiple Gaussian distributions about different isotopic ratios, which implies the presence of presolar material. The mean isotopic ratio is consistent with the solar system value, so presolar components are required to be either low in concentration, or to have a mean ratio close to that of the solar system. Supernovae are likely candidates for the source of such a presolar component, although asymptotic giant branch stars are not excluded. A few aggregates show deviations from the mean 12C/13C ratio large enough to be borderline detections of enrichments in 13C. These could be caused by the presence of a small population of nanodiamonds formed from sources that produce extremely 13C-rich material, such as J-stars, novae, born-again asymptotic giant branch stars, or supernovae. Of these possible sources, only supernovae would account for the anomalous noble gases carried in the nanodiamonds.
EMPLOYMENT LEVEL ANALYSIS FROM THE DETERMINANT FACTORS PERSPECTIVE
Directory of Open Access Journals (Sweden)
Elena Diana ŞERB
2016-02-01
Neglecting the human factor in the labor market causes losses for society, since every activity initiated within it has human intervention as both its starting and its finishing point. The starting point of the article is the projections made by the European Commission in the 2015 Ageing Report, with its underlying assumptions and projections, together with the projections of the 2015 United Nations report. Among the resulting conclusions is that, for the first time, Romania's average ageing in 2015 exceeded the values recorded in the EU to date, and this is reflected in the employment level (active ageing population). The hypothesis behind the article is that the evolution of the population and of migration has repercussions on employment. Structured in three parts, the state of knowledge, the analysis of employment indicators, and information about the intensity and direction of the link between a number of factors and the employment level, this article aims to establish the determinants of employment through research focused on the analysis of secondary sources, complemented by a regression model. The most important lesson learned from this research is that the labor market involves a variety of factors of greater or lesser influence, and that the labor market in turn influences other factors.
Arabidopsis transcription factors: genome-wide comparative analysis among eukaryotes.
Riechmann, J L; Heard, J; Martin, G; Reuber, L; Jiang, C; Keddie, J; Adam, L; Pineda, O; Ratcliffe, O J; Samaha, R R; Creelman, R; Pilgrim, M; Broun, P; Zhang, J Z; Ghandehari, D; Sherman, B K; Yu, G
2000-12-15
The completion of the Arabidopsis thaliana genome sequence allows a comparative analysis of transcriptional regulators across the three eukaryotic kingdoms. Arabidopsis dedicates over 5% of its genome to code for more than 1500 transcription factors, about 45% of which are from families specific to plants. Arabidopsis transcription factors that belong to families common to all eukaryotes do not share significant similarity with those of the other kingdoms beyond the conserved DNA binding domains, many of which have been arranged in combinations specific to each lineage. The genome-wide comparison reveals the evolutionary generation of diversity in the regulation of transcription.
Directory of Open Access Journals (Sweden)
Enrico Sciubba
2011-06-01
In this paper, the entropy generation minimization (EGM) method is applied to an industrial heat transfer problem: the forced convective cooling of a LED-based spotlight. The design specification calls for eighteen diodes arranged on a circular copper plate of 35 mm diameter. Every diode dissipates 3 W, and the maximum allowed temperature of the plate is 80 °C. The cooling relies on the forced convection driven by a jet of air impinging on the plate. An initial complex plate-fin geometry is presented and analyzed with a commercial CFD code that computes the entropy generation rate. A pseudo-optimization process is carried out via a successive series of design modifications based on a careful analysis of the entropy generation maps. One advantage of the EGM method is that the rationale behind each step of the design process can be justified on a physical basis. It is found that the best performance is attained when the fins are periodically spaced in the radial direction.
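The quantity the EGM method minimizes can be illustrated with the textbook expression for the entropy generated by transferring heat Q across a finite temperature difference, S_gen = Q(1/T_cold − 1/T_hot). The heat load and plate limit mirror the design specification; the cooling-air temperature is an assumption.

```python
# Minimal sketch of the entropy generation rate for the spotlight's heat
# load; a better heat sink lowers the plate temperature and hence S_gen.
Q = 18 * 3.0             # W dissipated by the eighteen 3 W diodes
T_plate = 80 + 273.15    # K, maximum allowed plate temperature
T_air = 25 + 273.15      # K, assumed cooling-air temperature

s_gen = Q * (1 / T_air - 1 / T_plate)   # W/K
print(round(s_gen, 4))
```

A CFD-based EGM study computes the local version of this quantity (thermal and viscous contributions) over the whole flow field, then reshapes the fins to reduce its integral.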
Factors Affecting Green Residential Building Development: Social Network Analysis
Directory of Open Access Journals (Sweden)
Xiaodong Yang
2018-05-01
Green residential buildings (GRBs) are one of the effective practices for energy saving and emission reduction in the construction industry. However, many real estate developers in China are reluctant to develop GRBs because of the factors affecting green residential building development (GRBD). In order to promote the sustainable development of GRBs in China, this paper, from the perspective of real estate developers, identifies the influential and critical factors affecting GRBD using social network analysis (SNA). Firstly, 14 factors affecting GRBD are selected from 64 preliminary factors across three main elements, and the framework is established. Secondly, the relationships between the 14 factors are analyzed by SNA. Finally, four critical factors for GRBD are identified by the social network centrality test: the local economic development level; development strategy and innovation orientation; the developer's acknowledgement of and positioning for GRBD; and experience and ability in GRBD. The findings illustrate the key issues that affect the development of GRBs, and provide references for policy making by the government and for strategy formulation by real estate developers.
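The centrality screening used to flag critical factors can be sketched with degree centrality on a toy influence network. The factor names echo the abstract, but the links are purely illustrative assumptions, not the paper's survey data.

```python
# Minimal sketch of an SNA centrality test: normalized degree centrality
# on an assumed network of "influences" links among a few factors.
factors = ["economy", "strategy", "acknowledgement", "experience", "policy"]
edges = [
    ("economy", "strategy"), ("economy", "acknowledgement"),
    ("strategy", "experience"), ("acknowledgement", "experience"),
    ("policy", "economy"), ("economy", "experience"),
]

def degree_centrality(nodes, links):
    deg = {n: 0 for n in nodes}
    for a, b in links:
        deg[a] += 1
        deg[b] += 1
    # Normalize by the maximum possible degree (n - 1).
    return {n: d / (len(nodes) - 1) for n, d in deg.items()}

cent = degree_centrality(factors, edges)
critical = max(cent, key=cent.get)
print(critical, round(cent[critical], 2))  # the best-connected factor
```

In the paper the same idea is applied to the full 14-factor network, and factors passing the centrality test are declared critical.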
Profile and Risk Factor Analysis of Unintentional Injuries in Children.
Bhamkar, Rahul; Seth, Bageshree; Setia, Maninder Singh
2016-10-01
To study the profile of, and the various risk factors associated with, unintentional injuries in children. The study is a cross-sectional analysis of data collected from 351 children presenting with unintentional injury to a tertiary care hospital in Navi Mumbai, India. Data were collected on variables based on the Haddon Phase-Factor Matrix: host, environment and agent factors. Proportions for categorical variables across groups were compared using the Chi-square test or Fisher's exact test, and a logistic regression model was used to evaluate the factors. Falls (36 %) were the most common injuries, followed by bites (23 %). The majority of the children were school-going (38 %), followed by preschool children (29 %); forty-seven percent were from the lower socioeconomic class. The commonest place of injury was the home (48 %) and the commonest time was the evening (49 %). Though there was a male predominance in injuries, the difference across gender was not significant (p = 0.15). Poisonings were significantly more common in infants and toddlers and in the rural population, and the risk of bites differed significantly between the rural and urban populations. The profile of injuries varies widely with variations in agent, host and environmental factors. Socio-environmental and economic conditions and the infant-toddler age group are predisposing risk factors for bites and poisoning. Although rural areas and the lower socioeconomic class are more vulnerable to serious types of injuries, they still lack essential basic medical care.
Exploratory Factor Analysis With Small Samples and Missing Data.
McNeish, Daniel
2017-01-01
Exploratory factor analysis (EFA) is an extremely popular method for determining the underlying factor structure of a set of variables. Due to its exploratory nature, EFA is notorious for being conducted with small sample sizes, and recent reviews of psychological research have reported that between 40% and 60% of applied studies have 200 or fewer observations. Recent methodological studies have addressed small sample size requirements for EFA models; however, these studies have only considered complete data, which are the exception rather than the rule in psychology. Furthermore, the extant literature on missing data techniques with small samples is scant, and nearly all existing studies focus on topics that are not of primary interest to EFA models. Therefore, this article presents a simulation to assess the performance of various missing data techniques for EFA models with both small samples and missing data. Results show that deletion methods do not extract the proper number of factors and estimate the factor loadings with severe bias, even when data are missing completely at random. Predictive mean matching is the best method overall when considering extraction of the correct number of factors and estimation of factor loadings without bias, although 2-stage estimation was a close second.
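The kind of bias the simulation measures can be sketched on synthetic one-factor data: naive handling of MCAR missingness attenuates the correlations the factor model is built on. This sketch uses mean imputation as the naive method (not the predictive mean matching the article recommends); loadings and missingness rates are assumptions.

```python
# Minimal sketch: MCAR missingness plus mean imputation shrinks the
# leading eigenvalue of the correlation matrix, i.e. biases the factor
# loadings toward zero.
import numpy as np

rng = np.random.default_rng(0)
n, loading = 2000, 0.8
f = rng.standard_normal(n)
X = loading * f[:, None] + np.sqrt(1 - loading**2) * rng.standard_normal((n, 4))

mask = rng.random(X.shape) < 0.3          # 30% missing completely at random
X_miss = np.where(mask, np.nan, X)

# Mean imputation: fill each column with its observed mean.
col_means = np.nanmean(X_miss, axis=0)
X_imp = np.where(mask, col_means, X_miss)

def first_eigenvalue(data):
    return np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[-1]

print(first_eigenvalue(X), first_eigenvalue(X_imp))
# The imputed data's leading eigenvalue shrinks, illustrating the
# attenuation that better methods such as PMM are meant to avoid.
```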
Qualitative analysis of factors leading to clinical incidents.
Smith, Matthew D; Birch, Julian D; Renshaw, Mark; Ottewill, Melanie
2013-01-01
The purpose of this paper is to evaluate the common themes leading or contributing to clinical incidents in a UK teaching hospital. A root-cause analysis was conducted on patient safety incidents. Commonly occurring root causes and contributing factors were collected and correlated with incident timing and severity. In total, 65 root-cause analyses were reviewed, highlighting 202 factors implicated in the clinical incidents, from which 69 categories were identified. The 14 most commonly occurring causes (encountered in four or more incidents) were examined as either key-root or contributory causes. Incident timing was also analysed; common factors were encountered more frequently out of hours, occurring as contributory rather than key-root causes. In total, 14 commonly occurring factors were identified towards which interventions could be directed to prevent many clinical incidents. From these, an "Organisational Safety Checklist" was developed to involve departmental-level clinicians in monitoring practice. This study demonstrates that comprehensively investigating incidents highlights common factors that can be addressed at a local level. Resilience against clinical incidents is low during out-of-hours periods, where factors such as lower staffing levels and poor service provision allow problems to escalate into clinical incidents. This adds to the literature regarding out-of-hours care provision and should prove useful to those organising hospital services at departmental and management levels.
Timms, Nick; Nemchin, Alexander; Grange, Marion; Reddy, Steve; Pidgeon, Bob; Geisler, Thorsten; Meyer, Chuck
2009-01-01
The evolution of the early Moon was dominated by two processes: (i) crystallization of the Lunar Magma Ocean (LMO) and differentiation of the potassium-rare earth element-phosphorus-rich residual magma reservoir referred to as KREEP, and (ii) an intense meteorite bombardment referred to as the lunar cataclysm. The exact timing of these processes is disputed, and resolution relies on the collection and interpretation of precise age data. This study examines the microstructure and geochronology of zircon from lunar impact breccias collected during the Apollo 17 mission. A large zircon clast within lunar breccia 72215,195 shows sector zoning in optical microscopy, cathodoluminescence (CL) imaging and Raman mapping, indicating that it was a relict fragment of a much larger magmatic grain. Sensitive high resolution ion microprobe (SHRIMP) U-Pb analysis of the zircon shows that U and Th concentrations correlate with sector zoning, with the darkest CL domains corresponding to high U and Th (approx. 150 and approx. 100 ppm, respectively), and the brightest CL sectors containing approx. 30-50 ppm U and approx. 10-20 ppm Th. This indicates that variations in optical, CL and Raman properties correspond to differential accumulation of alpha-radiation damage in each sector. Electron backscatter diffraction (EBSD) mapping shows that the quality of electron backscatter patterns (band contrast) varies with sector zoning, with the poorest quality patterns obtained from high-U and Th, dark-CL zones. EBSD mapping also reveals a deformation microstructure that is cryptic in optical, CL and Raman imaging. Two orthogonal sets of straight, discrete and gradational low-angle boundaries accommodate approx. 12° of misorientation across the grain. The deformation bands are parallel to the crystallographic {a}-planes of the zircon, have misorientation axes parallel to the c-axis, and are geometrically consistent with formation by dislocation creep associated with {010} slip. The deformation bands are unlike curved
Tensor-Dictionary Learning with Deep Kruskal-Factor Analysis
Energy Technology Data Exchange (ETDEWEB)
Stevens, Andrew J.; Pu, Yunchen; Sun, Yannan; Spell, Gregory; Carin, Lawrence
2017-04-20
We introduce new dictionary learning methods for tensor-variate data of any order. We represent each data item as a sum of Kruskal-decomposed dictionary atoms within the framework of beta-process factor analysis (BPFA). Our model is nonparametric and can infer the tensor rank of each dictionary atom. This Kruskal-Factor Analysis (KFA) is a natural generalization of BPFA. We also extend KFA to a deep convolutional setting and develop online learning methods. We test our approach on image processing and classification tasks, achieving state-of-the-art results for 2D and 3D inpainting and for Caltech 101. The experiments also show that atom rank impacts both overcompleteness and sparsity.
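A Kruskal (CP) dictionary atom is just a sum of outer products of factor vectors, one triple per rank component. The shapes and rank below are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of a rank-R Kruskal atom for third-order data:
# atom[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
import numpy as np

rng = np.random.default_rng(3)
I, J, K, R = 4, 5, 6, 2
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

atom = np.einsum("ir,jr,kr->ijk", A, B, C)
print(atom.shape)  # (4, 5, 6): a rank-2 third-order dictionary atom
```

In KFA each data tensor is modeled as a weighted sum of such atoms, with the rank R of each atom inferred nonparametrically rather than fixed as here.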
Human factors and fuzzy set theory for safety analysis
International Nuclear Information System (INIS)
Nishiwaki, Y.
1987-01-01
Human reliability and performance are affected by many factors: medical, physiological, psychological, etc. The uncertainty involved in human factors may not necessarily be probabilistic, but fuzzy. Therefore, it is important to develop a theory by which both the non-probabilistic uncertainties, or fuzziness, of human factors and the probabilistic properties of machines can be treated consistently. In reality, randomness and fuzziness are sometimes mixed. From the mathematical point of view, probabilistic measures may be considered a special case of fuzzy measures. Fuzzy set theory therefore seems to be an effective tool for analysing man-machine systems. The concept of 'failure possibility', based on fuzzy sets, is suggested as an approach to safety analysis and fault diagnosis of large complex systems. Fuzzy measures and fuzzy integrals are introduced and their possible applications are discussed. (author)
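The contrast with probability can be sketched with the basic possibility calculus: disjunction is max rather than a probabilistic sum, and conjunction is min rather than a product. The membership values below are illustrative assumptions.

```python
# Minimal sketch of "failure possibility": max/min calculus on fuzzy
# membership grades instead of sum/product on probabilities.
def possibility(cause_memberships):
    """Possibility of failure from any one cause: max over causes."""
    return max(cause_memberships.values())

def joint_possibility(a, b):
    """Possibility that both fuzzy conditions hold: min, not a product."""
    return min(a, b)

causes = {"fatigue": 0.7, "miscommunication": 0.4, "equipment": 0.2}
print(possibility(causes))          # → 0.7
print(joint_possibility(0.7, 0.4))  # → 0.4
```

Note that a product rule would give 0.28 for the conjunction; the min rule reflects that fuzziness, unlike randomness, does not compound multiplicatively.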
Sea level rise and the geoid: factor analysis approach
Directory of Open Access Journals (Sweden)
Alexey Sadovski
2013-08-01
Sea levels are rising around the world, and this is a particular concern along most of the coasts of the United States. A 1989 EPA report shows that sea levels rose 5-6 inches more than the global average along the Mid-Atlantic and Gulf Coasts in the last century. The main reason for this is coastal land subsidence, so the change is better characterized as relative sea level rise than as global sea level rise. Thus, instead of studying sea level rise globally, this paper describes a statistical approach using factor analysis of regional rates of sea level change. Unlike physical models and semi-empirical models, which attempt to estimate how much and how fast sea levels are changing, this methodology allows a discussion of the factor(s) that statistically affect sea level rates of change, and seeks patterns to explain spatial correlations.
Memory systems, processes, and tasks: taxonomic clarification via factor analysis.
Bruss, Peter J; Mitchell, David B
2009-01-01
The nature of various memory systems was examined using factor analysis. We reanalyzed data from 11 memory tasks previously reported in Mitchell and Bruss (2003). Four well-defined factors emerged, closely resembling episodic and semantic memory and conceptual and perceptual implicit memory, in line with both memory systems and transfer-appropriate processing accounts. To explore taxonomic issues, we ran separate analyses on the implicit tasks. Using a cross-format manipulation (pictures vs. words), we identified 3 prototypical tasks. Word fragment completion and picture fragment identification tasks were "factor pure," tapping perceptual processes uniquely. Category exemplar generation revealed its conceptual nature, yielding both cross-format priming and a picture superiority effect. In contrast, word stem completion and picture naming were more complex, revealing attributes of both processes.
ANALYSIS OF FACTORS AFFECTING ECONOMIC GROWTH
Directory of Open Access Journals (Sweden)
Suparna Wijaya
2017-03-01
High and sustainable economic growth is a primary condition for the sustainability of a country's economic development, and also a measure of the success of its economy. The factors tested in this study are economic and non-economic factors affecting economic development. The study aims to explain the factors that influence the Indonesian macroeconomy, using a linear regression modelling approach. The results show that Tax Amnesty, the exchange rate, inflation, and the interest rate jointly account for 77.6% of economic growth, while the remaining 22.4% is the influence of other variables not observed in this study. Keywords: tax amnesty, exchange rates, inflation, SBI and economic growth
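The 77.6% figure is the R² of a multiple regression. A minimal sketch of the same computation on synthetic stand-in data (not the Indonesian series used in the article):

```python
# Minimal sketch: OLS of growth on four macro regressors and the
# resulting R^2 (share of variance explained jointly).
import numpy as np

rng = np.random.default_rng(1)
n = 120
X = rng.standard_normal((n, 4))      # stand-ins for amnesty, FX, inflation, rate
beta = np.array([0.5, -0.3, -0.4, 0.2])
y = X @ beta + 0.4 * rng.standard_normal(n)

A = np.column_stack([np.ones(n), X])  # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1 - resid.var() / y.var()
print(round(r2, 3))  # share of growth variance explained jointly
```

The unexplained share (1 − R²) plays the role of the article's 22.4% attributed to unobserved variables.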
Risk factor analysis of equine strongyle resistance to anthelmintics
Directory of Open Access Journals (Sweden)
G. Sallé
2017-12-01
Intestinal strongyles are the most problematic endoparasites of equids as a result of their wide distribution and the spread of resistant isolates throughout the world. While abundant literature can be found on the extent of anthelmintic resistance across continents, empirical knowledge about associated risk factors is missing. This study brought together results from anthelmintic efficacy testing and risk factor analysis to provide evidence-based guidelines in the field. It involved 688 horses from 39 French horse farms and riding schools, both to estimate the Faecal Egg Count Reduction (FECR) after anthelmintic treatment and to interview farm and riding school managers about their practices. Risk factors associated with reduced anthelmintic efficacy in equine strongyles were estimated across drugs using a marginal modelling approach. Results demonstrated ivermectin efficacy (96.3% ± 14.5% FECR), the inefficacy of fenbendazole (42.8% ± 33.4% FECR) and an intermediate profile for pyrantel (90.3% ± 19.6% FECR). Risk factor analysis provided support for FEC-based treatment regimens combined with individual anthelmintic dosage and the enforcement of tighter biosecurity around horse introduction; the combination of these measures resulted in a decreased risk of drug resistance (relative risk of 0.57, p = 0.02). Premises falling under this typology also relied more on their veterinarians, suggesting practitioners play an important role in the sustainability of anthelmintic usage. Similarly, the drug resistance risk was halved in premises with frequent pasture rotation and with a stocking rate below five horses/ha (relative risk of 0.53, p < 0.01). This is the first empirical risk factor analysis for anthelmintic resistance in equids. Our findings should guide the implementation of more sustainable strongyle management in the field. Keywords: Horse, Nematode, Anthelmintic resistance, Strongyle, Cyathostomin
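The FECR statistic the study estimates per farm is simple to compute: the percentage reduction in mean eggs per gram (EPG) from pre- to post-treatment. The counts below are illustrative assumptions chosen to mimic the reported efficacy contrast.

```python
# Minimal sketch of the Faecal Egg Count Reduction statistic.
def fecr(pre_counts, post_counts):
    """FECR (%) = 100 * (1 - mean post-treatment EPG / mean pre-treatment EPG)."""
    pre = sum(pre_counts) / len(pre_counts)
    post = sum(post_counts) / len(post_counts)
    return 100.0 * (1.0 - post / pre)

ivermectin = fecr([850, 1200, 400, 950], [10, 40, 0, 25])
fenbendazole = fecr([850, 1200, 400, 950], [500, 700, 250, 500])
print(round(ivermectin, 1), round(fenbendazole, 1))  # → 97.8 42.6
```

A drug is usually declared effective when the FECR stays above a threshold (commonly in the mid-90s for ivermectin-class drugs), which is why the fenbendazole figure in the abstract signals resistance.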
Human Modeling for Ground Processing Human Factors Engineering Analysis
Stambolian, Damon B.; Lawrence, Brad A.; Stelges, Katrine S.; Steady, Marie-Jeanne O.; Ridgwell, Lora C.; Mills, Robert E.; Henderson, Gena; Tran, Donald; Barth, Tim
2011-01-01
There have been many advancements and accomplishments over the last few years in the use of human modeling for human factors engineering analysis in spacecraft design. The key methods used are motion capture and computer-generated human models. The focus of this paper is to explain the human modeling currently used at Kennedy Space Center (KSC), and to describe the future plans for human modeling for upcoming spacecraft designs.
Absorption correction factor in X-ray fluorescent quantitative analysis
International Nuclear Information System (INIS)
Pimjun, S.
1994-01-01
An experiment on the absorption correction factor in X-ray fluorescence quantitative analysis was carried out. Standard samples were prepared from mixtures of Fe2O3 and tapioca flour at various concentrations of Fe2O3, ranging from 5% to 25%. Unknown samples were kaolin containing 3.5% to 50% Fe2O3. The kaolin samples were diluted with tapioca flour in order to reduce the absorption of FeKα and make them easier to prepare. Pressed samples of 0.150 /cm² and 2.76 cm in diameter were used in the experiment. The absorption correction factor is related to the total mass absorption coefficient (χ), which varies with sample composition. In a known sample, χ can be calculated conveniently from the formula; in an unknown sample, χ can be determined by the emission-transmission method. It was found that the relationship between the corrected FeKα intensity and the Fe2O3 content of these samples was linear. This result indicates that the correction factor can be used to improve the accuracy of the X-ray intensity; it is therefore essential in the quantitative analysis of elements present in any sample by the X-ray fluorescence technique
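The composition dependence of χ follows the standard mixture rule: the total mass absorption coefficient is the mass-weighted sum of the components' coefficients. The numerical (μ/ρ) values below are illustrative assumptions, not measured data.

```python
# Minimal sketch: chi = sum_i w_i * (mu/rho)_i for a two-component
# mixture, showing how chi grows with the Fe2O3 weight fraction.
def total_mass_attenuation(weight_fractions, mu_rho):
    """chi in cm^2/g, from weight fractions and per-component (mu/rho)."""
    assert abs(sum(weight_fractions.values()) - 1.0) < 1e-9
    return sum(w * mu_rho[c] for c, w in weight_fractions.items())

mu_rho = {"Fe2O3": 60.0, "flour": 8.0}   # assumed values at the FeKa energy
for fe_frac in (0.05, 0.15, 0.25):
    chi = total_mass_attenuation({"Fe2O3": fe_frac, "flour": 1 - fe_frac}, mu_rho)
    print(fe_frac, chi)
```

It is this composition dependence that makes the correction factor necessary: a raw FeKα intensity from a strongly absorbing sample understates the Fe2O3 content unless divided out.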
Seismic analysis response factors and design margins of piping systems
International Nuclear Information System (INIS)
Shieh, L.C.; Tsai, N.C.; Yang, M.S.; Wong, W.L.
1985-01-01
The objective of the simplified methods project of the Seismic Safety Margins Research Program is to develop a simplified seismic risk methodology for general use. The goal is to reduce seismic PRA costs to roughly 60 man-months over a 6 to 8 month period, without compromising the quality of the product. To achieve this goal, it is necessary to simplify the calculational procedure for the seismic response; the response factor approach serves this purpose. The response factor relates the median-level response to the design data. Through a literature survey, we identified the various seismic analysis methods adopted in the U.S. nuclear industry for piping systems. A series of seismic response calculations was performed, and the response factors and their variabilities for each method of analysis were computed. A sensitivity study of the effects of piping damping, the in-structure response spectra enveloping method, and the analysis method was conducted. In addition, design margins, which relate the best-estimate response to the design data, are also presented
Exploratory Analysis of the Factors Affecting Consumer Choice in E-Commerce: Conjoint Analysis
Directory of Open Access Journals (Sweden)
Elena Mazurova
2017-05-01
According to previous studies of online consumer behaviour, three factors are the most influential on purchasing behaviour: brand, colour and the position of the product on the screen. However, the simultaneous influence of these three factors on the consumer decision-making process has not been investigated previously. In this work we aim to carry out a comprehensive study of the influence of these three factors. In order to answer our main research questions, we conducted an experiment with 96 different combinations of the three attributes; using statistical methods such as conjoint analysis, t-tests and Kendall analysis, we identified that the most influential factor in the online consumer decision-making process is brand, the second most important attribute is colour, estimated to be half as important as brand, and the least important attribute is the position on the screen. Additionally, we identified the main differences between consumers' stated and revealed preferences regarding these three attributes.
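The part-worth estimation behind conjoint analysis can be sketched as a dummy-coded regression of ratings on attribute levels, with relative importance computed from the utility ranges. The profiles, "true" utilities and noise level are assumptions chosen to mimic the stated ranking, not the experiment's data.

```python
# Minimal sketch of conjoint analysis: part-worth utilities for brand,
# colour and screen position via OLS on dummy-coded profiles.
import itertools
import numpy as np

profiles = list(itertools.product([0, 1], repeat=3))  # (brand, colour, position)
true_w = np.array([2.0, 1.0, 0.3])                    # brand > colour > position

rng = np.random.default_rng(2)
X = np.array([p for p in profiles for _ in range(30)], dtype=float)  # 30 raters
y = X @ true_w + 0.2 * rng.standard_normal(len(X))

A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
part_worths = dict(zip(["brand", "colour", "position"], coef[1:]))

# Relative importance = each attribute's utility range over the total
# (for binary attributes the range equals the |part-worth|).
total = sum(abs(v) for v in part_worths.values())
importance = {k: abs(v) / total for k, v in part_worths.items()}
print({k: round(v, 2) for k, v in importance.items()})
```

With these assumed utilities the recovered importances reproduce the abstract's pattern: colour about half as important as brand, position a distant third.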
Dispersion-theoretical analysis of the nucleon electromagnetic form factors
Energy Technology Data Exchange (ETDEWEB)
Belushkin, M.
2007-09-29
The structure of the proton and the neutron is of fundamental importance for the study of strong interaction dynamics over a wide range of momentum transfers. The nucleon form factors encode information on the internal structure of the nucleon as probed by the electromagnetic interaction, and, to a certain extent, reflect the charge and magnetisation distributions within the proton and the neutron. In this thesis we report on our investigation of the electromagnetic form factors of the proton and the neutron with dispersion relation techniques, including known experimental input on the ππ, KK̄ and ρπ continua and perturbative QCD constraints. We include new experimental data on the pion form factor and the nucleon form factors in our simultaneous analysis of all four form factors in both the space- and timelike regions for all momentum transfers, and perform Monte Carlo sampling in order to obtain theoretical uncertainty bands. Finally, we discuss the implications of our results for the pion cloud of the nucleon, the nucleon radii and the Okubo-Zweig-Iizuka rule, and present our results of a model-independent approach to estimating two-photon effects in elastic electron-proton scattering. (orig.)
Formal language constrained path problems
Energy Technology Data Exchange (ETDEWEB)
Barrett, C.; Jacob, R.; Marathe, M.
1997-07-08
In many path-finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by applications such as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context-free language is solvable efficiently in polynomial time; when the mode choice is specified as a regular language, they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth-bounded graphs, they show that (i) the problem of finding a regular-language-constrained simple path between a source and a destination is solvable in polynomial time and (ii) the extension to finding context-free-language-constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial-time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
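The polynomial-time result in (1) for regular languages rests on running a shortest-path search over the product of the graph with an automaton for the language. A minimal sketch of that construction (the graph, labels, weights, and DFA here are invented for illustration):

```python
import heapq

def constrained_shortest_path(graph, start, goal, dfa_delta, dfa_start, accepting):
    """Shortest path whose edge-label sequence is accepted by a DFA.

    graph: {u: [(v, label, weight), ...]}.  Runs Dijkstra on the product
    of the graph with the DFA -- the standard construction for
    regular-language-constrained shortest paths."""
    dist = {(start, dfa_start): 0.0}
    heap = [(0.0, start, dfa_start)]
    while heap:
        d, u, q = heapq.heappop(heap)
        if d > dist.get((u, q), float("inf")):
            continue
        if u == goal and q in accepting:
            return d                            # first accepted pop is optimal
        for v, label, w in graph.get(u, ()):
            q2 = dfa_delta.get((q, label))
            if q2 is None:                      # label not allowed in this state
                continue
            nd = d + w
            if nd < dist.get((v, q2), float("inf")):
                dist[(v, q2)] = nd
                heapq.heappush(heap, (nd, v, q2))
    return None

# Hypothetical intermodal example: the mode sequence must match road+ rail+
# (drive first, then take trains).  DFA states: 0 start, 1 on road, 2 on rail.
delta = {(0, "road"): 1, (1, "road"): 1, (1, "rail"): 2, (2, "rail"): 2}
graph = {
    "A": [("B", "road", 1.0), ("B", "rail", 0.1)],
    "B": [("C", "rail", 1.0), ("C", "road", 0.1)],
}
print(constrained_shortest_path(graph, "A", "C", delta, 0, {2}))
```

Note that the unconstrained cheapest A-to-C route (rail then road, cost 0.2) is rejected because its label sequence is not in the language; the constrained optimum is road then rail, cost 2.0.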
Phasor analysis of binary diffraction gratings with different fill factors
International Nuclear Information System (INIS)
Martínez, Antonio; Sánchez-López, María del Mar; Moreno, Ignacio
2007-01-01
In this work, we present a simple analysis of binary diffraction gratings with different slit widths relative to the grating period. The analysis is based on a simple phasor technique directly derived from the Huygens principle. By introducing a slit phasor and a grating phasor, the intensity of the diffracted orders and the grating's resolving power can be easily obtained without applying the usual Fourier transform operations required for these calculations. The proposed phasor technique is mathematically equivalent to the Fourier transform calculation of the diffraction order amplitude, and it can be useful to explain binary diffraction gratings in a simple manner in introductory physics courses. This theoretical analysis is illustrated with experimental results using a liquid crystal device to display diffraction gratings with different fill factors.
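The claimed equivalence between the phasor sum and the Fourier coefficient can be checked numerically. In this sketch the Huygens phasor sum over the open part of one period is compared with the analytic Fourier result |f sinc(mf)|², where f is the fill factor and m the order (the discretisation count is an arbitrary choice):

```python
import numpy as np

def order_intensity_phasor(m, fill, n_phasors=2000):
    """Intensity of diffraction order m of a binary amplitude grating,
    obtained by summing Huygens phasors across the slit (open fraction
    `fill` of one period, period normalised to 1)."""
    x = (np.arange(n_phasors) + 0.5) / n_phasors * fill   # points inside the slit
    phasors = np.exp(-2j * np.pi * m * x) * (fill / n_phasors)
    return abs(phasors.sum()) ** 2

def order_intensity_fourier(m, fill):
    """Same quantity from the Fourier coefficient: |fill * sinc(m*fill)|^2."""
    return (fill * np.sinc(m * fill)) ** 2          # np.sinc(x) = sin(pi x)/(pi x)

fill = 0.25                                         # slit width / grating period
for m in range(4):
    print(m, order_intensity_phasor(m, fill), order_intensity_fourier(m, fill))
```

For a fill factor of 1/2 the even orders vanish, which the phasor sum reproduces directly: opposite halves of the slit phasor chain cancel.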
Biosphere Dose Conversion Factor Importance and Sensitivity Analysis
International Nuclear Information System (INIS)
M. Wasiolek
2004-01-01
This report presents importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model, the biosphere dose conversion factors (BDCFs), for the groundwater and volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty.
Zhang, Shanyong; Yang, Lili; Peng, Chuangang; Wu, Minfei
2018-02-01
The aim of the present study was to investigate the risk factors for postoperative recurrence of spinal tumors by logistic regression analysis and analysis of prognostic factors. In total, 77 male and 48 female patients with spinal tumor were selected in our hospital from January 2010 to December 2015 and divided into the benign (n=76) and malignant (n=49) groups. All the patients underwent microsurgical resection of spinal tumors and were reviewed regularly 3 months after the operation. The McCormick grading system was used to evaluate postoperative spinal cord function. Data were subjected to statistical analysis. Of the 125 cases, 63 showed improvement after the operation, 50 were stable, and deterioration was found in 12. The improvement rate of patients with cervical spine tumor, at 56.3%, was the highest. Fifty-two cases of sensory disturbance, 34 cases of pain, 30 cases of inability to exercise, 26 cases of ataxia, and 12 cases of sphincter disorders were found after the operation. Seventy-two cases (57.6%) underwent total resection, 18 (14.4%) received subtotal resection, 23 (18.4%) received partial resection, and 12 (9.6%) were treated only with biopsy/decompression. Postoperative recurrence was found in 57 cases (45.6%). The mean recurrence time was 27.49±6.09 months in the malignant group and 40.62±4.34 months in the benign group; the difference was statistically significant. Logistic regression analysis of total resection-related factors showed that total resection should be the preferred treatment for patients with benign tumors, thoracic and lumbosacral tumors, and lower McCormick grade, as well as patients without syringomyelia and intramedullary tumors. Logistic regression analysis of recurrence-related factors revealed that the recurrence rate was relatively higher in patients with malignant, cervical, thoracic and lumbosacral, and intramedullary tumors, and higher McCormick grade.
Constrained least squares regularization in PET
International Nuclear Information System (INIS)
Choudhury, K.R.; O'Sullivan, F.O.
1996-01-01
Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood methods, at a fraction of the computational effort.
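A simple way to see the negative artifacts that constrained least squares suppresses is a one-dimensional deconvolution toy problem. This sketch is not the authors' fast algorithm; it uses an off-the-shelf nonnegativity-constrained solver (scipy.optimize.nnls) on invented data purely to illustrate the effect of the constraint:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Hypothetical deconvolution: blur a nonnegative spike train, then recover it.
n = 60
truth = np.zeros(n)
truth[[10, 30, 45]] = [3.0, 1.5, 2.0]

t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)   # Gaussian blur matrix
b = A @ truth + rng.normal(0.0, 0.01, n)

x_uncon, *_ = np.linalg.lstsq(A, b, rcond=None)   # plain LS: prone to negative artifacts
x_nn, _ = nnls(A, b)                              # constrained LS: x >= 0 enforced

print("min of unconstrained estimate:", x_uncon.min())   # typically < 0 here
print("min of constrained estimate:  ", x_nn.min())      # always >= 0
```

The constrained estimate fits the data comparably well while excluding, by construction, the negative background values that plain least squares produces.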
Human factors review for Severe Accident Sequence Analysis (SASA)
International Nuclear Information System (INIS)
Krois, P.A.; Haas, P.M.; Manning, J.J.; Bovell, C.R.
1984-01-01
The paper will discuss work being conducted during this human factors review, including: (1) support of the Severe Accident Sequence Analysis (SASA) Program based on an assessment of operator actions, and (2) development of a descriptive model of operator severe accident management. Research by SASA analysts on the Browns Ferry Unit One (BF1) anticipated transient without scram (ATWS) was supported through a concurrent assessment of operator performance to demonstrate contributions to SASA analyses from human factors data and methods. A descriptive model was developed, called the Function Oriented Accident Management (FOAM) model, which serves as a structure for bridging human factors, operations, and engineering expertise and is useful for identifying needs/deficiencies in the area of accident management. The assessment of human factors issues related to ATWS required extensive coordination with SASA analysts. The analysis was consolidated primarily to six operator actions identified in the Emergency Procedure Guidelines (EPGs) as being the most critical to the accident sequence. These actions were assessed through simulator exercises, qualitative reviews, and quantitative human reliability analyses. The FOAM descriptive model assumes as a starting point that multiple operator/system failures exceed the scope of procedures and necessitate a knowledge-based emergency response by the operators. The FOAM model provides a functionally-oriented structure for assembling human factors, operations, and engineering data and expertise into operator guidance for unconventional emergency responses to mitigate severe accident progression and avoid/minimize core degradation. Operators must also respond to potential radiological release beyond plant protective barriers. Research needs in accident management and potential uses of the FOAM model are described. 11 references, 1 figure.
Analysis on risk factors for post-stroke emotional incontinence
Directory of Open Access Journals (Sweden)
Xiao-chun ZHANG
2018-01-01
Objective: To investigate the occurrence rate and related risk factors for post-stroke emotional incontinence (PSEI). Methods: The clinical data [sex, age, body mass index (BMI), education, marital status, medical history (hypertension, heart disease, diabetes, hyperlipemia, smoking and drinking) and family history of stroke] of 162 stroke patients were recorded. Serum homocysteine (Hcy) level was examined. Head CT and/or MRI were used to determine stroke subtype, site of lesion and number of lesions. The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-V, Chinese version) and the Hamilton Depression Rating Scale-17 Items (HAMD-17) were used to evaluate the degree of depression, and the House diagnostic standard was used to diagnose PSEI. Univariate and multivariate backward logistic regression analysis was used to screen related risk factors for PSEI, and Spearman rank correlation analysis was used to assess the correlation between PSEI and post-stroke depression (PSD). Results: Among the 162 stroke patients, 12 were diagnosed with PSEI (7.41%). The ratio of age < 60 years in the PSEI group was significantly higher than in the non-PSEI group (P = 0.045), while the ratio of smoking was significantly lower (P = 0.036). Univariate and multivariate backward logistic regression analysis showed age < 60 years was an independent risk factor for PSEI (OR = 4.000, 95% CI: 1.149-13.924; P = 0.029). Ten of the 12 PSEI patients also had PSD; the co-morbidity rate of PSEI and PSD was 83.33%. Spearman rank correlation analysis showed PSEI was positively related to PSD (rs = 0.305, P = 0.000). Conclusions: PSEI is a common affective disorder in stroke patients, occurring more readily in patients under 60 years of age. DOI: 10.3969/j.issn.1672-6731.2017.12.010
Should we still believe in constrained supersymmetry?
International Nuclear Information System (INIS)
Balazs, Csaba; Buckley, Andy; Carter, Daniel; Farmer, Benjamin; White, Martin
2013-01-01
We calculate partial Bayes factors to quantify how the feasibility of the constrained minimal supersymmetric standard model (CMSSM) has changed in the light of a series of observations. This is done in the Bayesian spirit where probability reflects a degree of belief in a proposition and Bayes' theorem tells us how to update it after acquiring new information. Our experimental baseline is the approximate knowledge that was available before LEP, and our comparison model is the Standard Model with a simple dark matter candidate. To quantify the amount by which experiments have altered our relative belief in the CMSSM since the baseline data we compute the partial Bayes factors that arise from learning in sequence the LEP Higgs constraints, the XENON100 dark matter constraints, the 2011 LHC supersymmetry search results, and the early 2012 LHC Higgs search results. We find that LEP and the LHC strongly shatter our trust in the CMSSM (with M_0 and M_1/2 below 2 TeV), reducing its posterior odds by approximately two orders of magnitude. This reduction is largely due to substantial Occam factors induced by the LEP and LHC Higgs searches. (orig.)
MOOC Success Factors: Proposal of an Analysis Framework
Directory of Open Access Journals (Sweden)
Margarida M. Marques
2017-10-01
Aim/Purpose: From an idea of lifelong-learning-for-all to a phenomenon affecting higher education, Massive Open Online Courses (MOOCs) can be the next step to a truly universal education. Indeed, MOOC enrolment rates can be astoundingly high; still, their completion rates are frequently disappointingly low. Nevertheless, as courses, the participants' enrolment and learning within the MOOCs must be considered when assessing their success. In this paper, the authors' aim is to reflect on what makes a MOOC successful and to propose an analysis framework of MOOC success factors. Background: A literature review was conducted to identify reported MOOC success factors and to propose an analysis framework. Methodology: This literature-based framework was tested against data from a specific MOOC and refined, within a qualitative interpretivist methodology. The data were collected from the 'As alterações climáticas nos média escolares - Clima@EduMedia' course, which was developed by the project Clima@EduMedia and submitted to content analysis. This MOOC aimed to support science and school media teachers in the use of media to teach climate change. Contribution: By proposing a MOOC success factors framework the authors attempt to help fill a literature gap concerning the criteria for considering a specific MOOC successful. Findings: This work's major finding is a literature-based and empirically-refined MOOC success factors analysis framework. Recommendations for Practitioners: The proposed framework is also a set of best practices relevant to MOOC developers, particularly when targeting teachers as potential participants. Recommendation for Researchers: This work's relevance is also based on its contribution to increasing empirical research on MOOCs. Impact on Society: By providing a proposal of a framework on factors to make a MOOC successful, the authors hope to contribute to the quality of MOOCs. Future Research: Future
Bayes factor design analysis: Planning for compelling evidence.
Schönbrodt, Felix D; Wagenmakers, Eric-Jan
2018-02-01
A sizeable literature exists on the use of frequentist power analysis in the null-hypothesis significance testing (NHST) paradigm to facilitate the design of informative experiments. In contrast, there is almost no literature that discusses the design of experiments when Bayes factors (BFs) are used as a measure of evidence. Here we explore Bayes Factor Design Analysis (BFDA) as a useful tool to design studies for maximum efficiency and informativeness. We elaborate on three possible BF designs: (a) a fixed-n design; (b) an open-ended Sequential Bayes Factor (SBF) design, where researchers can test after each participant and can stop data collection whenever there is strong evidence for either the null or the alternative hypothesis; and (c) a modified SBF design that defines a maximal sample size at which data collection is stopped regardless of the current state of evidence. We demonstrate how the properties of each design (i.e., expected strength of evidence, expected sample size, expected probability of misleading evidence, expected probability of weak evidence) can be evaluated using Monte Carlo simulations and equip researchers with the necessary information to compute their own Bayesian design analyses.
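The Monte Carlo evaluation of an SBF design can be sketched for the simplest case of two point hypotheses, where the Bayes factor is just an accumulated likelihood ratio. The effect size, stopping threshold, and maximum n below are arbitrary choices for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def sbf_run(true_mu, mu1=0.5, threshold=10.0, n_max=200):
    """One Sequential Bayes Factor run for two point hypotheses
    H0: mu = 0 vs H1: mu = mu1 (sigma = 1 known), where the BF is the
    accumulated likelihood ratio.  Returns (log BF, sample size)."""
    log_bf, n = 0.0, 0
    while n < n_max:
        x = rng.normal(true_mu, 1.0)
        n += 1
        # log of N(x; mu1, 1) / N(x; 0, 1)
        log_bf += -0.5 * (x - mu1) ** 2 + 0.5 * x ** 2
        if abs(log_bf) >= np.log(threshold):    # stop at BF >= 10 or <= 1/10
            break
    return log_bf, n

# Monte Carlo evaluation of the design's properties when H1 is true.
runs = [sbf_run(true_mu=0.5) for _ in range(2000)]
n_avg = np.mean([n for _, n in runs])
p_misleading = np.mean([lb <= -np.log(10.0) for lb, _ in runs])
print(f"expected sample size: {n_avg:.1f}")
print(f"P(misleading evidence for H0): {p_misleading:.3f}")
```

Repeating the simulation with `true_mu=0.0` gives the complementary error rate, and varying `n_max` reproduces the trade-off between designs (b) and (c).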
Inference algorithms and learning theory for Bayesian sparse factor analysis
International Nuclear Information System (INIS)
Rattray, Magnus; Sharp, Kevin; Stegle, Oliver; Winn, John
2009-01-01
Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.
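The slab-and-spike prior mentioned above can be illustrated by the generative side of the model; the MCMC/VB/EP inference compared in the paper is the hard part and is not attempted here. Dimensions, sparsity level, and noise scale are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Dimensions: g observed variables (e.g. genes), k latent factors, n samples.
g, k, n = 50, 3, 200
sparsity = 0.8                                # spike probability: most loadings are zero

# Slab-and-spike prior on the loading matrix W: with probability `sparsity`
# a loading comes from the spike (a point mass at zero), otherwise from
# the slab (a broad Gaussian).
spike = rng.random((g, k)) < sparsity
W = np.where(spike, 0.0, rng.normal(0.0, 1.0, (g, k)))

Z = rng.normal(0.0, 1.0, (k, n))              # latent factor activations
X = W @ Z + rng.normal(0.0, 0.1, (g, n))      # observations = loadings x factors + noise

frac_zero = (W == 0).mean()
print(f"fraction of exactly-zero loadings: {frac_zero:.2f}")
```

Inference then amounts to recovering which loadings sit in the spike (the sparse regulatory structure) and which in the slab, given only X.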
Statistical Analysis Of The Conditioning Factors Of Urban Electric Consumption
International Nuclear Information System (INIS)
Segura D'Rouville, Juan Joel; Suárez Carreño, Franyelit María
2017-01-01
This research work presents an analysis of the most important factors conditioning urban residential electricity consumption, showing the quantitative parameters that condition it. This sector was chosen for analysis because disaggregated information exists on the main social and technological factors that determine its behaviour and growth, with the objective of developing policies for the management of electricity consumption. Electrical demand, considered as the sum of the power of all the equipment in use at each instant of a full day, is related to electricity consumption, which is simply the power demanded by a given consumer multiplied by the time during which that demand is maintained. This report proposes the design of a probabilistic model for predicting electricity consumption, taking into account the most influential social and technological factors. The statistical processing of the database was done with the Stat Graphics software, version 4.1, chosen for its didactic support in performing the calculations and associated methods. Finally, a correlation analysis of the variables was performed to classify the determinants and thus estimate the consumption of the dwellings. (author)
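The kind of regression model the report describes can be sketched on simulated household data. The factors, coefficients, and sample are invented for illustration, and plain least squares stands in for the full probabilistic model:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical household data: floor area (m^2), number of appliances,
# occupants -- stand-ins for the social/technological conditioning factors.
n = 300
area = rng.uniform(40, 160, n)
appliances = rng.integers(3, 15, n).astype(float)
occupants = rng.integers(1, 6, n).astype(float)

# Assumed "true" consumption model, used only to generate the simulation.
kwh = 50 + 2.0 * area + 30.0 * appliances + 80.0 * occupants + rng.normal(0, 60, n)

# OLS recovers each factor's marginal effect on monthly consumption.
X = np.column_stack([np.ones(n), area, appliances, occupants])
beta, *_ = np.linalg.lstsq(X, kwh, rcond=None)
print("estimated effects [const, area, appliances, occupants]:", beta.round(1))
```

The fitted coefficients quantify each conditioning factor's contribution, which is the information such a model feeds into demand-management policy.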
Analysis of risk factors of pulmonary embolism in diabetic patients
International Nuclear Information System (INIS)
Xie Changhui; Ma Zhihai; Zhu Lin; Chi Lianxiang
2012-01-01
Objective: To study the related risk factors in diabetic patients with pulmonary embolism (PE). Methods: 58 diabetic cases underwent lower-limb 99mTc-MAA vein imaging (and/or ultrasonography) and pulmonary perfusion imaging. The related laboratory data [fasting blood glucose (FBG), blood cholesterol, blood long-chain triglycerides (LCT)] and clinical information [age, disease course, chest symptoms (chest pain and shortness of breath), lower-limb symptoms (swelling, varicose veins and diabetic foot) and acute complications (diabetic ketoacidosis and hyperosmolar non-ketotic diabetic coma)] were collected simultaneously. SPSS was used for the χ²-test and logistic regression analysis. Results: (1) 28 patients (48.3%) were shown by 99mTc-MAA imaging to have lower-limb deep vein thrombosis (DVT), and 10 cases (17.2%) to have PE. The PE ratio of patients with DVT (32.1%) was higher than that of patients without DVT (3.3%) (χ² = 6.53, P < 0.05). (2) Some factors differed significantly between groups (χ² ≥ 4.23, P < 0.05) and others did not (χ² ≤ 2.76, P > 0.05), respectively. (3) Multivariate analysis indicated that the related risk factors for PE included chest symptoms (score = 13.316, P = 0.000) and lower-limb symptoms (score = 7.780, P = 0.005); no significant difference was found for the other factors (score ≤ 2.494, P > 0.114). Conclusion: Severe DM with chest symptoms, lower-limb symptoms and/or DVT must be controlled as early as possible by all kinds of treatment; this will decrease the PE complication rate. (authors)
Contextual risk factors for low birth weight: a multilevel analysis.
Directory of Open Access Journals (Sweden)
Gbenga A Kayode
Full Text Available Low birth weight (LBW remains to be a leading cause of neonatal death and a major contributor to infant and under-five mortality. Its prevalence has not declined in the last decade in sub-Saharan Africa (SSA and Asia. Some individual level factors have been identified as risk factors for LBW but knowledge is limited on contextual risk factors for LBW especially in SSA.Contextual risk factors for LBW in Ghana were identified by performing multivariable multilevel logistic regression analysis of 6,900 mothers dwelling in 412 communities that participated in the 2003 and 2008 Demographic and Health Surveys in Ghana.Contextual-level factors were significantly associated with LBW: Being a rural dweller increased the likelihood of having a LBW infant by 43% (OR 1.43; 95% CI 1.01-2.01; P-value <0.05 while living in poverty-concentrated communities increased the risk of having a LBW infant twofold (OR 2.16; 95% CI 1.29-3.61; P-value <0.01. In neighbourhoods with a high coverage of safe water supply the odds of having a LBW infant reduced by 28% (OR 0.74; 95% CI 0.57-0.96; P-value <0.05.This study showed contextual risk factors to have independent effects on the prevalence of LBW infants. Being a rural dweller, living in a community with a high concentration of poverty and a low coverage of safe water supply were found to increase the prevalence of LBW infants. Implementing appropriate community-based intervention programmes will likely reduce the occurrence of LBW infants.
International Nuclear Information System (INIS)
Sundberg, Gunnel
2001-01-01
A deregulation of the electricity market in Europe will result in increased competition among the power-producing companies. They will therefore carefully estimate the financial risk in an investment in new power-producing capability. One part of the risk assessment is to perform a sensitivity analysis. This paper presents a sensitivity analysis using factorial design, resulting in an assessment of the most important technical and economical factors affecting an investment in a gas turbine combined cycle and a steam cycle fired by wood chips. The study is performed using a simulation model that optimises the operation of existing power plants and potential new investments to fulfil the desired heat demand. The local utility system analysed is a Swedish district heating system with a heat demand of 655 GWh per year. The conclusion is that to understand which of the technical and economical factors affect the investment, it is not sufficient to investigate only the parameters of the studied plant; the parameters related to the competing plants must also be investigated. Both the individual effects of the factors and the effects of their interactions should be investigated. For the energy system studied, the price of natural gas, the price of wood chips and the investment cost have the major influence on the profitability of the investment. (Author)
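The factorial-design sensitivity analysis can be illustrated with a two-level full factorial in the three factors the study found most influential. The response values below are invented; only the factor names come from the abstract:

```python
import numpy as np

# 2^3 full factorial in coded units (-1/+1): gas price, wood-chip price,
# investment cost.  Responses (hypothetical net present values) are made up.
levels = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
npv = np.array([120, 80, 150, 95, 90, 55, 130, 70], dtype=float)

def effect(contrast):
    """Average change in the response as the contrast goes from -1 to +1."""
    return (contrast * npv).sum() / (len(npv) / 2)

main = {name: effect(levels[:, i]) for i, name in enumerate(["gas", "wood", "invest"])}
interaction_gw = effect(levels[:, 0] * levels[:, 1])    # gas x wood interaction
print(main, "gas x wood interaction:", interaction_gw)
```

The same contrast formula yields both main effects and interaction effects, which is exactly why the paper stresses examining interactions between the studied plant and its competitors rather than one factor at a time.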
Factors influencing societal response of nanotechnology: an expert stakeholder analysis
Energy Technology Data Exchange (ETDEWEB)
Gupta, Nidhi, E-mail: nidhi.gupta@wur.nl; Fischer, Arnout R. H., E-mail: arnout.fischer@wur.nl; Lans, Ivo A. van der, E-mail: Ivo.vanderLans@wur.nl [Wageningen University, Marketing and Consumer Behaviour Group (Netherlands); Frewer, Lynn J., E-mail: lynn.frewer@newcastle.ac.uk [Newcastle University, School of Agriculture, Food and Rural Development (United Kingdom)
2012-05-15
Nanotechnology can be described as an emerging technology and, as has been the case with other emerging technologies such as genetic modification, different socio-psychological factors will potentially influence societal responses to its development and application. These factors will play an important role in how nanotechnology is developed and commercialised. This article aims to identify expert opinion on factors influencing societal response to applications of nanotechnology. Structured interviews with experts on nanotechnology from North West Europe were conducted using repertory grid methodology in conjunction with generalized Procrustes analysis to examine the psychological constructs underlying societal uptake of 15 key applications of nanotechnology drawn from different areas (e.g. medicine, agriculture and environment, chemical, food, military, sports, and cosmetics). Based on expert judgement, the main factors influencing societal response to different applications of nanotechnology will be the extent to which applications are perceived to be beneficial, useful, and necessary, and how 'real' and physically close to the end-user these applications are perceived to be by the public.
Exploring leadership styles for innovation: an exploratory factor analysis
Directory of Open Access Journals (Sweden)
Wipulanusat Warit
2017-03-01
Leadership plays a vital role in building the processes, structures, and climate for an organisation to become innovative and in motivating team expectations toward innovations. This study explores the leadership styles that engineers regard as significant for innovation in the public sector. Exploratory factor analysis (EFA) was conducted to identify the principal leadership styles influencing innovation in the Australian Public Service (APS), using survey data extracted from the 2014 APS employee census comprising 3,125 engineering professionals in Commonwealth of Australia departments. EFA returned a two-factor structure explaining 77.6% of the variance of the leadership-for-innovation construct. In this study, the results from the EFA provided a clear estimation of the factor structure of the measures for leadership for innovation. The two factors extracted were transformational leadership and consideration leadership. In transformational leadership, a leader values organisational objectives, inspires subordinates to perform, and motivates followers beyond expected levels of work standards. Consideration leadership refers to the degree to which a leader shows concern and expressions of support for subordinates, takes care of their welfare, treats members as equals, and displays warmth and approachability. These findings highlight the role of leadership as the most critical predictor when considering the degree to which subordinates strive for creativity and innovation. Both transformational and consideration leadership styles are recommended to be incorporated into management training and development programs. This study also recommends that Commonwealth departments recruit supervisors who have both of these leadership styles before implementing innovative projects.
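An EFA of this kind can be sketched on simulated survey items with a planted two-factor structure. Item count, loadings, and sample size are invented, and scikit-learn's FactorAnalysis with varimax rotation stands in for the study's EFA procedure:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(9)

# Simulated survey: six items driven by two latent styles (three items each),
# loosely mimicking transformational vs consideration leadership.
n = 500
f = rng.normal(size=(n, 2))                       # latent factor scores
loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],
                     [0.0, 0.8], [0.1, 0.7], [0.0, 0.9]])
items = f @ loadings.T + rng.normal(0.0, 0.3, (n, 6))

# EFA with varimax rotation recovers the two-factor structure.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print(np.round(fa.components_, 2))                # each row: one factor's loadings
```

Each recovered factor loads heavily on one item triple and near zero on the other, the pattern that lets an analyst name the factors, as done in the study.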
Factorial analysis of the cost of preparing oil
Energy Technology Data Exchange (ETDEWEB)
Avdeyeva, L A; Kudoyarov, G Sh; Shmatova, M F
1979-01-01
Methods of mathematical statistics (chiefly correlation and regression analysis) are used to study the factors that determine the cost of preparing oil, taking the mutual influence of the factors into account. Five a priori justified factors were selected for inclusion in the mathematical model: the water content of the extracted oil (%); the specific consumption of demulsifiers; the volume of oil preparation; the quality of oil preparation (salt content); and the degree of utilisation of installed capacity (%). To construct an economic-mathematical model of the cost of the technical preparation (SPP) of the oil, all the production unions of the Ministry of the Oil Industry were divided into two comparable groups: the first comprised unions in which the oil SPP was lower than the branch average, and the second, unions in which the SPP was higher than the branch-wide average. Using regression coefficients, partial elasticity coefficients, and fluctuation indicators, the basic factors having the greatest influence on the formation of the oil SPP level were identified separately for the first and second groups of unions.
Path analysis of risk factors leading to premature birth.
Fields, S J; Livshits, G; Sirotta, L; Merlob, P
1996-01-01
The present study tested whether various sociodemographic, anthropometric, behavioral, and medical/physiological factors act directly or indirectly on the risk of prematurity, using path analysis on a sample of Israeli births. The path model shows that medical complications, primarily toxemia, chorioamnionitis, and a previous low birth weight delivery, act directly and significantly on the risk of prematurity, as do low maternal pregnancy weight gain and ethnicity. Other medical complications, including chronic hypertension, preeclampsia, and placental abruption, although significantly correlated with prematurity, act on it indirectly through toxemia. The model further shows that the commonly accepted sociodemographic, anthropometric, and behavioral risk factors act by modifying the development of the medical complications that lead to prematurity, rather than having a direct effect on premature delivery. © 1996 Wiley-Liss, Inc.
Pareto analysis of critical factors affecting technical institution evaluation
Directory of Open Access Journals (Sweden)
Victor Gambhir
2012-08-01
With the change of education policy in 1991, more and more technical institutions have been set up in India. Some of these institutions provide quality education, but others merely concentrate on quantity. Prospective students and their parents are consequently uncertain about how to select the best institute for higher education. Various agencies, including the print media, publish rankings of these institutions every year, but their results are controversial and often biased. In this paper, the authors identify the critical factors for technical institution evaluation from a literature survey. A Pareto analysis is then performed to gauge the relative weight of these critical factors in evaluation. This will not only help stakeholders take the right decisions but will also help the management of institutions in benchmarking, by identifying the most important critical areas in which to improve the existing system, which will in turn benefit the Indian economy.
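A Pareto analysis of this kind ranks factors by contribution and keeps the "vital few" that together account for roughly 80% of the total. A minimal sketch (the factor names and scores below are hypothetical, not taken from the paper):

```python
from typing import List, Tuple

def pareto_vital_few(factors: List[Tuple[str, float]], cutoff: float = 80.0):
    """Return the factors that together account for `cutoff` percent of
    the total score, sorted by decreasing contribution (80/20 rule)."""
    ranked = sorted(factors, key=lambda kv: kv[1], reverse=True)
    total = sum(value for _, value in ranked)
    vital, cumulative = [], 0.0
    for name, value in ranked:
        cumulative += 100.0 * value / total
        vital.append(name)
        if cumulative >= cutoff:
            break
    return vital

# Hypothetical criticality scores for institution-evaluation factors.
scores = [("faculty quality", 45), ("placements", 25), ("infrastructure", 12),
          ("research output", 8), ("fees", 6), ("location", 4)]
vital_few = pareto_vital_few(scores)
```

With these illustrative scores, the top three factors cross the 80% line, so an evaluator would concentrate improvement effort there first.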
Multivariate factor analysis of Girgentana goat milk composition
Directory of Open Access Journals (Sweden)
Pietro Giaccone
2010-01-01
The interpretation of the several variables that contribute to defining milk quality is difficult due to the high degree of correlation among them. In this case, one of the best methods of statistical processing is factor analysis, which belongs to the multivariate group of techniques; this statistical approach was therefore employed in our study. A total of 1485 individual goat milk samples from 117 Girgentana goats were collected fortnightly from January to July and analysed for physical and chemical composition and clotting properties. Milk pH and titratable acidity were within the normal range for fresh goat milk. Morning milk yield was 704 ± 323 g, with fat and protein percentages of 3.93 ± 1.23% and 3.48 ± 0.38%, respectively. The milk urea content was 43.70 ± 8.28 mg/dl. The clotting ability of Girgentana milk was quite good, with a renneting time (r) of 16.96 ± 3.08 minutes, a rate of curd formation of 2.01 ± 1.63 minutes and a curd firmness (a30) of 25.08 ± 7.67 millimetres. Factor analysis was performed by applying orthogonal axis rotation (VARIMAX); the analysis grouped the milk components into three latent or common factors. The first, which explained 51.2% of the total covariance, was defined as "slow milks", because it was linked to r and pH. The second latent factor, which explained 36.2% of the total covariance, was defined as "milk yield", because it is positively correlated with the morning milk yield and the urea content, and negatively correlated with the fat percentage. The third latent factor, which explained 12.6% of the total covariance, was defined as "curd firmness", because it is linked to protein percentage, a30 and titratable acidity. With the aim of evaluating the influence of environmental effects (stage of kidding, parity and type of kidding), factor scores were analysed with a mixed linear model. Results showed significant effects of the season of
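The VARIMAX rotation used above can be sketched with the standard SVD-based iteration, which rotates a loading matrix to maximise the variance of the squared loadings while preserving each variable's communality (the loading matrix below is illustrative, not the goat-milk solution):

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-8):
    """Orthogonal varimax rotation of a p x k loading matrix
    (Kaiser's criterion, standard SVD-based iteration)."""
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lam = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lam ** 3 - (gamma / p) * Lam @ np.diag((Lam ** 2).sum(axis=0))))
        R = u @ vt                       # best orthogonal rotation this step
        d_new = s.sum()
        if d_new < d * (1 + tol):        # criterion stopped improving
            break
        d = d_new
    return L @ R

# Illustrative unrotated loadings for 4 variables on 2 factors.
A = np.array([[0.7, 0.5], [0.6, 0.6], [0.5, -0.6], [0.6, -0.5]])
rotated = varimax(A)
```

Because the rotation matrix is orthogonal, the row sums of squared loadings (communalities) are unchanged; only the pattern of loadings becomes simpler to interpret.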
Phenotypic factor analysis of psychopathology reveals a new body-related transdiagnostic factor.
Pezzoli, Patrizia; Antfolk, Jan; Santtila, Pekka
2017-01-01
Comorbidity challenges the notion of mental disorders as discrete categories. An increasing body of literature shows that symptoms cut across traditional diagnostic boundaries and interact in shaping the latent structure of psychopathology. Using exploratory and confirmatory factor analysis, we reveal the latent sources of covariation among nine measures of psychopathological functioning in a population-based sample of 13,024 Finnish twins and their siblings. By implementing unidimensional, multidimensional, second-order, and bifactor models, we illustrate the relationships between observed variables and specific and general latent factors. We also provide the first investigation to date of the measurement invariance of the bifactor model of psychopathology across gender and age groups. Our main result is the identification of a distinct "Body" factor, alongside the previously identified Internalizing and Externalizing factors. We also report relevant cross-disorder associations, especially between body-related psychopathology and trait anger, as well as substantial sex and age differences in observed and latent means. The findings expand the meta-structure of psychopathology, with implications for empirical and clinical practice, and demonstrate shared mechanisms underlying attitudes towards nutrition, self-image, sexuality and anger, with gender- and age-specific features.
Improving Your Exploratory Factor Analysis for Ordinal Data: A Demonstration Using FACTOR
Directory of Open Access Journals (Sweden)
James Baglin
2014-06-01
Exploratory factor analysis (EFA) methods are used extensively in the field of assessment and evaluation. Due to EFA's widespread use, common methods and practices have come under close scrutiny. A substantial body of literature highlights problems with many of the methods and practices used in EFA, and, in response, many guidelines have been proposed with the aim of improving application. Unfortunately, implementing recommended EFA practices has been restricted by the range of options available in commercial statistical packages and, perhaps, by an absence of clear, practical 'how-to' demonstrations. Consequently, this article describes the application of methods recommended to get the most out of an EFA. The article focuses on the common situation of analysing ordinal data as derived from Likert-type scales. These methods are demonstrated using the free, stand-alone, easy-to-use and powerful EFA package FACTOR (http://psico.fcep.urv.es/utilitats/factor/; Lorenzo-Seva & Ferrando, 2006). The demonstration applies the recommended techniques to an accompanying dataset based on the Big 5 personality test, and the outcomes obtained by the EFA using the recommended procedures through FACTOR are compared with the default techniques currently available in SPSS.
Spinal appearance questionnaire: factor analysis, scoring, reliability, and validity testing.
Carreon, Leah Y; Sanders, James O; Polly, David W; Sucato, Daniel J; Parent, Stefan; Roy-Beaudry, Marjolaine; Hopkins, Jeffrey; McClung, Anna; Bratcher, Kelly R; Diamond, Beverly E
2011-08-15
Cross-sectional. This study presents the factor analysis of the Spinal Appearance Questionnaire (SAQ) and its psychometric properties. Although the SAQ has been administered to a large sample of patients with adolescent idiopathic scoliosis (AIS) treated surgically, its psychometric properties have not been fully evaluated. This study presents the factor analysis and scoring of the SAQ and evaluates its psychometric properties. The SAQ and the Scoliosis Research Society-22 (SRS-22) were administered to AIS patients who were being observed, braced, or scheduled for surgery. Standard demographic data and radiographic measures, including Lenke type and curve magnitude, were also collected. Of the 1802 patients, 83% were female, with a mean age of 14.8 years and a mean initial Cobb angle of 55.8° (range, 0°-123°). Of the 32 items of the SAQ, 15 loaded on two factors with consistent and significant correlations across all Lenke types: an Appearance factor (items 1-10) and an Expectations factor (items 12-15). Responses are summed, giving a range of 5 to 50 for the Appearance domain and 5 to 20 for the Expectations domain. Cronbach's α was 0.88 for both domains and for the Total score, with test-retest reliability of 0.81 for Appearance and 0.91 for Expectations. Correlations with major curve magnitude were higher for the SAQ Appearance and SAQ Total scores than for the SRS Appearance and SRS Total scores. The SAQ and SRS-22 scores were statistically significantly different in patients who were scheduled for surgery compared with those who were observed or braced. The SAQ is a valid measure of self-image in patients with AIS, with greater correlation to curve magnitude than the SRS Appearance and Total scores, and it discriminates between patients who require surgery and those who do not.
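The Cronbach's α values reported above follow directly from the item variances and the variance of the summed score. A minimal sketch on synthetic item responses (the sample and items are invented, not the SAQ data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

# Synthetic scale: 10 noisy items all measuring one underlying trait.
rng = np.random.default_rng(7)
trait = rng.normal(size=(500, 1))
responses = trait + 0.5 * rng.normal(size=(500, 10))
alpha = cronbach_alpha(responses)
```

Highly inter-correlated items push α toward 1; in the degenerate case of identical items, α equals exactly 1.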
Physics Metacognition Inventory Part II: Confirmatory factor analysis and Rasch analysis
Taasoobshirazi, Gita; Bailey, MarLynn; Farley, John
2015-11-01
The Physics Metacognition Inventory was developed to measure physics students' metacognition for problem solving. In one of our earlier studies, an exploratory factor analysis provided evidence of preliminary construct validity, revealing six components of students' metacognition when solving physics problems including knowledge of cognition, planning, monitoring, evaluation, debugging, and information management. The college students' scores on the inventory were found to be reliable and related to students' physics motivation and physics grade. However, the results of the exploratory factor analysis indicated that the questionnaire could be revised to improve its construct validity. The goal of this study was to revise the questionnaire and establish its construct validity through a confirmatory factor analysis. In addition, a Rasch analysis was applied to the data to better understand the psychometric properties of the inventory and to further evaluate the construct validity. Results indicated that the final, revised inventory is a valid, reliable, and efficient tool for assessing student metacognition for physics problem solving.
Hoseinzade, Zohre; Mokhtari, Ahmad Reza
2017-10-01
Large numbers of variables are measured to explain different phenomena, and factor analysis is widely used both to reduce the dimension of such datasets and to highlight underlying factors hidden in a complex system. As geochemical studies benefit from multivariate assays, application of this method is widespread in geochemistry. However, the conventional protocols for implementing factor analysis have some drawbacks in spite of their advantages. In the present study, a geochemical dataset of 804 soil samples, collected from a mining area in central Iran in a search for MVT-type Pb-Zn deposits, was used to compare approaches. Routine factor analysis, sequential factor analysis, and staged factor analysis were applied to the dataset, after opening the closed compositional data with the additive logratio (alr) transformation, in order to extract the mineralization factor. A comparison between these methods indicated that sequential factor analysis reveals the MVT paragenesis elements in surface samples most clearly, with nearly 50% of the variation carried by F1. In addition, staged factor analysis gave acceptable results while being straightforward to apply: it detects the mineralization-related elements and assigns them larger factor loadings, so that the mineralization is expressed more distinctly.
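The additive logratio (alr) "opening" applied before factor analysis maps closed compositional rows (parts summing to a constant) into unconstrained real coordinates. A minimal sketch, assuming the last part is used as the divisor (the concentrations below are hypothetical, not the Iranian soil data):

```python
import numpy as np

def alr(composition, divisor=-1):
    """Additive logratio transform of compositional data (rows sum to a
    constant): log of each remaining part over the chosen divisor part."""
    x = np.asarray(composition, dtype=float)
    ratios = np.delete(x, divisor, axis=1) / x[:, [divisor]]
    return np.log(ratios)

def alr_inverse(coords):
    """Map alr coordinates back to compositions summing to 1
    (assumes the divisor was the last part, matching divisor=-1)."""
    expanded = np.hstack([np.exp(coords), np.ones((coords.shape[0], 1))])
    return expanded / expanded.sum(axis=1, keepdims=True)

# Hypothetical element proportions, closed so each sample sums to 1.
samples = np.array([[0.60, 0.25, 0.10, 0.05],
                    [0.50, 0.30, 0.15, 0.05]])
coords = alr(samples)            # 2 x 3 unconstrained coordinates
recovered = alr_inverse(coords)  # round-trips back to the compositions
```

Factor analysis is then run on `coords` rather than on the raw closed data, avoiding the spurious correlations induced by the constant-sum constraint.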
A Retrospective Analysis of Factors Affecting Early Stoma Complications.
Koc, Umit; Karaman, Kerem; Gomceli, Ismail; Dalgic, Tahsin; Ozer, Ilter; Ulas, Murat; Ercan, Metin; Bostanci, Erdal; Akoglu, Musa
2017-01-01
Despite advances in surgical techniques and products for stoma care, stoma-related complications are still common. A retrospective analysis was performed of the medical records of 462 consecutive patients (295 [63.9%] female, 167 [36.1%] male; mean age 55.5 ± 15.1 years; mean body mass index [BMI] 25.1 ± 5.2) who had undergone stoma creation at the Gastroenterological Surgery Clinic of Turkiye Yuksek İhtisas Teaching and Research Hospital between January 2008 and December 2012, to examine the incidence of early (ie, within 30 days after surgery) stoma complications and identify potential risk factors. Variables abstracted included gender, age, and BMI; existence of malignant disease; comorbidities (diabetes mellitus, hypertension, coronary artery disease, chronic respiratory disease); use of neoadjuvant chemoradiotherapy; permanent or temporary stoma; type of stoma (loop/end stoma); stoma localization; and the use of preoperative marking of the stoma site. Data were entered and analyzed using statistical software. Descriptive statistics, chi-squared, and Mann-Whitney U tests were used to describe and analyze all variables, and logistic regression analysis was used to determine independent risk factors for stoma complications. Ostomy-related complications developed in 131 patients (28.4%). Of these, superficial mucocutaneous separation was the most frequent complication (90 patients, 19.5%), followed by stoma retraction (15 patients, 3.2%). In univariate analysis, malignant disease (P = .025), creation of a colostomy (P = .002), and left lower quadrant stoma location were significantly associated with the development of a stoma complication. Only stoma location was an independent risk factor for the development of a stoma complication (P = .044). The rate of stoma complications was not significantly different between patients who underwent nonemergent surgery (30% in patients preoperatively sited versus 28.4% not sited) and patients who underwent emergency surgery (27.1%). Early stoma complication rates were higher
Replica Analysis for Portfolio Optimization with Single-Factor Model
Shinzato, Takashi
2017-06-01
In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model and compare the findings obtained from our proposed methods with correlated return rates with those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures for minimizing the investment risk from operations research.
Meta-analysis of the predictive factors of postpartum fatigue.
Badr, Hanan A; Zauszniewski, Jaclene A
2017-08-01
Nearly 64% of new mothers are affected by fatigue during the postpartum period, making it the most common problem a woman faces as she adapts to motherhood. Postpartum fatigue can lead to serious negative effects on the mother's health and the newborn's development and can interfere with mother-infant interaction. The aim of this meta-analysis was to identify predictive factors of postpartum fatigue and to document the magnitude of their effects using effect sizes. We used two search engines, PubMed and Google Scholar, to identify studies that met three inclusion criteria: (a) the article was written in English, (b) the article studied the predictive factors of postpartum fatigue, and (c) the article included information about the validity and reliability of the instruments used in the research. Nine articles met these inclusion criteria. The direction and strength of the correlation coefficients between predictive factors and postpartum fatigue were examined across the studies to determine their effect sizes. Measurement of predictor variables occurred from 3 days to 6 months postpartum. Correlations reported between predictive factors and postpartum fatigue were as follows: small effect sizes (r = 0.10 to 0.29) for education level, age, postpartum hemorrhage, infection, and child care difficulties; medium effect sizes (r = 0.30 to 0.49) for physiological illness, low ferritin level, low hemoglobin level, sleeping problems, stress and anxiety, and breastfeeding problems; and a large effect size (r ≥ 0.50) for depression. Postpartum fatigue is a common condition that can lead to serious health problems for a new mother and her newborn. Therefore, increased knowledge of the factors that influence the onset of postpartum fatigue is needed for early identification of new mothers who may be at risk, so that appropriate treatments, interventions, information, and support can be initiated to prevent or minimize it. Copyright © 2017 Elsevier
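The small/medium/large bands used in this meta-analysis can be encoded directly as threshold checks on the absolute correlation (the predictor names and r values below are illustrative placeholders, not the pooled estimates):

```python
def effect_size_band(r):
    """Classify a correlation coefficient using the bands in the
    meta-analysis: small 0.10-0.29, medium 0.30-0.49, large 0.50+."""
    r = abs(r)  # direction does not affect magnitude
    if r >= 0.50:
        return "large"
    if r >= 0.30:
        return "medium"
    if r >= 0.10:
        return "small"
    return "negligible"

# Hypothetical predictor-fatigue correlations for illustration only.
predictors = {"education level": 0.15, "low hemoglobin": -0.35, "depression": 0.62}
bands = {name: effect_size_band(r) for name, r in predictors.items()}
```

Taking the absolute value first matters: a protective factor with r = -0.35 is still a medium-sized effect.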
Information technology portfolio in supply chain management using factor analysis
Directory of Open Access Journals (Sweden)
Ahmad Jaafarnejad
2013-11-01
The adoption of information technology (IT) along with supply chain management (SCM) has become a necessity for most businesses, since it enhances supply chain (SC) performance and helps companies achieve organizational competitiveness. IT systems capture and analyze information and enable management to make decisions with a global scope across the entire SC. This paper reviews the existing literature on IT in SCM and identifies pertinent criteria. Using principal component analysis (PCA), a form of factor analysis (FA), a number of related criteria are divided into smaller groups. Finally, SC managers can develop an IT portfolio in SCM using mean values of the few extracted components on the relevance-emergency matrix. A numerical example is provided to explain the details of the proposed method.
Analysis of Factors Affecting Inflation in Indonesia: an Islamic Perspective
Directory of Open Access Journals (Sweden)
Elis Ratna Wulan
2015-04-01
This study aims to determine the factors affecting inflation. The research is descriptive and quantitative in nature. The data used are reported exchange rates, interest rates, money supply and inflation during 2008-2012, analyzed using multiple linear regression. The results show that over 2008-2012 (1) the rate of inflation had a negative trend, (2) the interest rate had a negative trend, (3) the money supply had a positive trend, and (4) the exchange rate had a positive trend. The multiple linear regression analysis further shows that the interest rate, the money supply and the rupiah exchange rate have a significant effect on the rate of inflation.
Clinical usefulness of physiological components obtained by factor analysis
International Nuclear Information System (INIS)
Ohtake, Eiji; Murata, Hajime; Matsuda, Hirofumi; Yokoyama, Masao; Toyama, Hinako; Satoh, Tomohiko.
1989-01-01
The clinical usefulness of physiological components obtained by factor analysis was assessed in 99mTc-DTPA renography. Using definite physiological components, other dynamic data could be analyzed. In this paper, dynamic renal function after ESWL (extracorporeal shock wave lithotripsy) treatment was examined using physiological components from the kidney before ESWL and/or a normal kidney. Changes in renal function could easily be evaluated by this method. The usefulness of this new analysis using physiological components is summarized as follows: 1) a change in dynamic function can be assessed quantitatively as a change in the contribution ratio; 2) a change in a diseased condition can be evaluated morphologically as a change in the functional image. (author)
Gideon P. de Bruin
2000-01-01
The scores of 700 Afrikaans-speaking university students on the Comrey Personality Scales and the 16 Personality Factor Questionnaire were subjected to an inter-battery factor analysis. This technique uses only the correlations between two sets of variables and reveals only the factors that they have in common. Three of the Big Five personality factors were revealed, namely Extroversion, Neuroticism and Conscientiousness. However, the Conscientiousness factor contained a relatively strong uns...
Parallel factor analysis PARAFAC of process affected water
Energy Technology Data Exchange (ETDEWEB)
Ewanchuk, A.M.; Ulrich, A.C.; Sego, D. [Alberta Univ., Edmonton, AB (Canada). Dept. of Civil and Environmental Engineering; Alostaz, M. [Thurber Engineering Ltd., Calgary, AB (Canada)
2010-07-01
A parallel factor analysis (PARAFAC) of oil sands process-affected water was presented. Naphthenic acids (NA) are traditionally described as monobasic carboxylic acids, but research has indicated that oil sands NA do not fit classical definitions of NA. Oil sands organic acids have toxic and corrosive properties. When analyzed by fluorescence spectroscopy, oil sands process-affected water displays a characteristic peak at 290 nm excitation and approximately 346 nm emission. In this study, PARAFAC was used to decompose process-affected water multi-way data into components representing analytes, chemical compounds, and groups of compounds. Water samples from various oil sands operations were analyzed to obtain excitation-emission matrices (EEMs). The EEMs were then arranged into a large matrix, in decreasing order of process-affected water content, for PARAFAC. The data were decomposed into 5 components. A comparison with commercially prepared NA samples suggested that oil sands NA is fundamentally different. Further research is needed to determine what each of the 5 components represents. tabs., figs.
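PARAFAC decomposes a multi-way array, here conceptually sample × excitation × emission, into a sum of rank-one components. A minimal alternating-least-squares sketch on a synthetic rank-1 tensor (not real EEM data; production work would typically use a dedicated package such as TensorLy):

```python
import numpy as np

def kr(U, V):
    """Khatri-Rao (column-wise Kronecker) product."""
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

def unfold(X, mode):
    """Mode-n unfolding consistent with C-ordered reshaping."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def parafac(X, rank, n_iter=100, seed=0):
    """CP/PARAFAC of a 3-way array via alternating least squares.
    Returns factor matrices A, B, C with X ~ sum_r a_r outer b_r outer c_r."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((dim, rank)) for dim in (I, J, K))
    for _ in range(n_iter):
        A = unfold(X, 0) @ kr(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ kr(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ kr(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

def reconstruct(A, B, C):
    return (A @ kr(B, C).T).reshape(A.shape[0], B.shape[0], C.shape[0])

# Synthetic rank-1 "EEM-like" tensor: 6 samples x 20 excitation x 25 emission.
rng = np.random.default_rng(1)
a, b, c = rng.random(6), rng.random(20), rng.random(25)
X = np.einsum("i,j,k->ijk", a, b, c)
A, B, C = parafac(X, rank=1)
rel_err = np.linalg.norm(X - reconstruct(A, B, C)) / np.linalg.norm(X)
```

For a noiseless rank-1 tensor the ALS updates recover the factors essentially exactly; with real EEM stacks one would fit several ranks and use diagnostics (e.g. core consistency) to choose the number of components.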
Confirmatory Factor Analysis of the ISB - Burnout Syndrome Inventory
Directory of Open Access Journals (Sweden)
Ana Maria T. Benevides-Pereira
2017-05-01
Aim: Burnout is a dysfunctional reaction to chronic occupational stress. The present study analyses the psychometric qualities of the Burnout Syndrome Inventory (ISB) through confirmatory factor analysis (CFA). Method: Empirical study in a multi-centre, multi-occupational sample (n = 701) using the ISB. Part I assesses antecedent factors: Positive Organizational Conditions (PC) and Negative Organizational Conditions (NC). Part II assesses the syndrome itself: Emotional Exhaustion (EE), Dehumanization (DE), Emotional Distancing (ED) and Personal Accomplishment (PA). Results: The highest means occurred in the positive scales PC (M = 23.29, SD = 5.89) and PA (M = 14.84, SD = 4.71). Negative conditions showed the greatest variability (SD = 6.03). Reliability indexes were reasonable, ranging from .77 for DE to .91 for PA. The CFA yielded RMSEA = .057 and CFI = .90, with all scale regressions showing significant values (β = .73 to β = .92). Conclusion: The ISB proved a plausible instrument for evaluating burnout. The two parts maintained the initial model and confirmed the theoretical presupposition. The instrument gives a more comprehensive picture of the labour context, and either part may be used separately according to the needs and aims of the assessor.
Confirmatory factor analysis of the Competitive State Anxiety Inventory-2.
Lane, A M; Sewell, D F; Terry, P C; Bartram, D; Nesti, M S
1999-06-01
The aim of this study was to evaluate the factor structure of the Competitive State Anxiety Inventory-2 (CSAI-2) using confirmatory factor analysis. Volunteer participants (n = 1213) completed the CSAI-2 approximately 1 h before competition and the data were analysed in two samples. The hypothesized model showed poor fit indices in both samples independently (Robust Comparative Fit Index: sample A = 0.82, sample B = 0.84) and simultaneously (Comparative Fit Index = 0.83), suggesting that the factor structure proposed by Martens et al. is flawed. Our findings suggest that a limitation of the Cognitive Anxiety scale derives from phrasing items around the word 'concerned' rather than 'worried'. We suggest that being concerned about an impending performance does not necessarily mean that an athlete is experiencing negative thoughts, but that the athlete is acknowledging the importance and difficulty of the challenge and is attempting to mobilize resources to cope. The present results question the use of the CSAI-2 as a valid measure of competitive state anxiety.
Analysis of transfer reactions: determination of spectroscopic factors
Energy Technology Data Exchange (ETDEWEB)
Keeley, N. [CEA Saclay, Dept. d'Astrophysique, de Physique des Particules, de Physique Nucleaire et de l'Instrumentation Associee (DSM/DAPNIA/SPhN), 91 Gif-sur-Yvette (France); The Andrzej Sołtan Institute for Nuclear Studies, Dept. of Nuclear Reactions, Warsaw (Poland)]
2007-07-01
An overview of the most popular models used for the analysis of direct reaction data is given, concentrating on practical aspects. The following four models, in order of increasing sophistication, are briefly described: the distorted wave Born approximation (DWBA), the adiabatic model, the coupled channels Born approximation, and the coupled reaction channels method. As a concrete example, the 12C(d,p)13C reaction at an incident deuteron energy of 30 MeV is analysed with progressively more physically sophisticated models. The effect of the choice of reaction model on the spectroscopic information extracted from the data is investigated, and other sources of uncertainty in the derived spectroscopic factors are discussed. We show that the choice of reaction model can significantly influence the nuclear structure information, particularly the spectroscopic factors or amplitudes but occasionally also the spin-parity, that we wish to extract from direct reaction data. We also demonstrate that the DWBA can fail to give a satisfactory description of transfer data, but that when the tenets of the theory are fulfilled DWBA can work very well and will yield the same results as the most sophisticated models. The use of global rather than fitted optical potentials can also lead to important differences in the extracted spectroscopic factors.
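In such analyses, the spectroscopic factor is conventionally extracted as the normalisation required to bring the calculated transfer cross section into agreement with experiment, sketched here in the standard convention:

```latex
% Spectroscopic factor extracted as the normalization of the
% calculated (e.g. DWBA) transfer cross section to the measured one
\left(\frac{d\sigma}{d\Omega}\right)_{\mathrm{exp}}
  = C^{2}S\,\left(\frac{d\sigma}{d\Omega}\right)_{\mathrm{calc}}
\qquad\Longrightarrow\qquad
C^{2}S=\frac{(d\sigma/d\Omega)_{\mathrm{exp}}}
            {(d\sigma/d\Omega)_{\mathrm{calc}}}
```

Because the extracted C²S is a ratio against the model prediction, any model dependence in the calculated cross section, including the choice of optical potentials, propagates directly into the spectroscopic factor.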
Worldwide analysis of marine oil spill cleanup cost factors
International Nuclear Information System (INIS)
Etkin, D.S.
2000-01-01
The many factors that influence oil spill response costs were discussed, with particular emphasis on how spill responses differ around the world because of differing cultural values, socio-economic factors and labor costs. This paper presented an analysis of marine oil spill cleanup costs based on country, proximity to shoreline, spill size, oil type, degree of shoreline oiling and cleanup methodology. The objective was to determine how each factor impacts per-unit cleanup costs. Near-shore and in-port spills were found to be 4-5 times more expensive to clean up than offshore spills. Responses to spills of heavy fuels also cost 10 times more than responses to lighter crudes and diesel. On a per-unit basis, responses to spills under 30 tonnes are 10 times more costly than responses to spills of 300 tonnes. A newly developed modelling technique that can be applied to different types of marine spills was described. It is based on updated cost data acquired from case studies of more than 300 spills in 40 countries. The model produces a per-unit cleanup cost estimate by taking into consideration oil type, location, spill size, cleanup methodology, and shoreline oiling. It was concluded that actual spill costs depend entirely on the particular circumstances of the spill. 13 refs., 10 tabs., 3 figs
Container Throughput Forecasting Using Dynamic Factor Analysis and ARIMAX Model
Directory of Open Access Journals (Sweden)
Marko Intihar
2017-11-01
The paper examines the impact of integrating macroeconomic indicators on the accuracy of a container throughput time-series forecasting model. For this purpose, dynamic factor analysis and an AutoRegressive Integrated Moving-Average model with eXogenous inputs (ARIMAX) are used. Both methodologies are integrated into a novel four-stage heuristic procedure. Firstly, dynamic factors are extracted from external macroeconomic indicators influencing the observed throughput. Secondly, a family of ARIMAX models of different orders is generated based on the derived factors. In the third stage, diagnostic and goodness-of-fit testing is applied, covering statistical criteria such as fit performance, information criteria, and parsimony. Finally, the best model is heuristically selected and tested on real data from the Port of Koper. The results show that incorporating macroeconomic indicators into the forecasting model yields more accurate future throughput forecasts. The model is also used to produce forecasts for the next four years, indicating more oscillatory behaviour in 2018-2020; hence, care must be taken concerning any larger investment decisions initiated by management. It is believed that the proposed model could usefully reinforce the existing forecasting module in the observed port.
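The information-criterion stage of such a procedure, comparing candidate models with and without the exogenous indicators, can be sketched with ordinary least squares and AIC (the series below is synthetic, standing in for throughput plus one macroeconomic indicator; it is not the Port of Koper data):

```python
import numpy as np

def ols_aic(y, X):
    """AIC of a least-squares fit y ~ X under a Gaussian likelihood
    (k = number of coefficients plus the noise variance)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = len(y), X.shape[1] + 1
    return n * np.log(resid @ resid / n) + 2 * k

# Synthetic series: AR(1) dynamics plus a genuine exogenous effect.
rng = np.random.default_rng(3)
n = 300
exog = rng.normal(size=n)                  # stand-in macroeconomic indicator
eps = 0.3 * rng.normal(size=n)
throughput = np.empty(n)
throughput[0] = 0.0
for t in range(1, n):
    throughput[t] = 0.6 * throughput[t - 1] + 0.8 * exog[t] + eps[t]

y, lag = throughput[1:], throughput[:-1]
const = np.ones(n - 1)
aic_ar = ols_aic(y, np.column_stack([const, lag]))             # no indicator
aic_arx = ols_aic(y, np.column_stack([const, lag, exog[1:]]))  # with indicator
```

Because the exogenous variable truly drives the series here, the model including it attains a much lower AIC, which is exactly the kind of evidence the paper's third stage uses to prefer ARIMAX specifications over indicator-free ones.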
A Factor Analysis of The Social Interest Index--Revised.
Zarski, John J.; And Others
1983-01-01
Factor analyzed the Social Interest Index-Revised (SII-R), which measures levels of social interest attained in each of four life task areas. Four factors (N=308) were defined, i.e., a self-significance factor, a love factor, a friendship factor, and a work factor. Results support the empirical validity of the scale. (Author/PAS)
Analysis of nasopharyngeal carcinoma risk factors with Bayesian networks.
Aussem, Alex; de Morais, Sérgio Rodrigues; Corbex, Marilys
2012-01-01
We propose a new graphical framework for extracting the relevant dietary, social and environmental risk factors that are associated with an increased risk of nasopharyngeal carcinoma (NPC), based on a case-control epidemiologic study comprising 1289 subjects and 150 risk factors. This framework builds on the use of Bayesian networks (BNs) for representing statistical dependencies between the random variables. We discuss a novel constraint-based procedure, called Hybrid Parents and Children (HPC), that recursively builds a local graph including all the relevant features statistically associated with NPC, without having to find the whole BN first. The local graph is afterwards directed by the domain expert according to his knowledge. It provides a statistical profile of the recruited population, and meanwhile helps identify the risk factors associated with NPC. Extensive experiments on synthetic data sampled from known BNs show that HPC outperforms state-of-the-art algorithms from the recent literature. From a biological perspective, the present study confirms that chemical products, pesticides and domestic fume intake from incomplete combustion of coal and wood are significantly associated with NPC risk. These results suggest that industrial workers are often exposed to noxious chemicals and poisonous substances used in the course of manufacturing. This study also supports previous findings that the consumption of a number of preserved food items, such as house-made proteins and sheep fat, is a major risk factor for NPC. BNs are valuable data mining tools for the analysis of epidemiologic data. They can explicitly combine both expert knowledge from the field and information inferred from the data. These techniques therefore merit consideration as valuable alternatives to traditional multivariate regression techniques in epidemiologic studies. Copyright © 2011 Elsevier B.V. All rights reserved.
Order-constrained linear optimization.
Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P
2017-11-01
Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
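The two-stage idea, first maximize ordinal fit by Kendall's τ, then apply least squares among the τ-maximizing solutions, can be sketched with a toy grid search over weight directions. This is an illustration of the principle only, not the authors' OCLO algorithm; the data, grid resolution and tie-breaking are all simplifying assumptions.

```python
import numpy as np
from itertools import combinations

def kendall_tau(a, b):
    """Kendall rank correlation (tau-a): concordant minus discordant
    pairs, divided by the total number of pairs."""
    n = len(a)
    s = 0
    for i, j in combinations(range(n), 2):
        s += np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
    return 2.0 * s / (n * (n - 1))

rng = np.random.default_rng(1)
n = 60
X = rng.normal(size=(n, 2))
y = X @ [2.0, 1.0] + rng.normal(scale=0.5, size=n)
y[:3] += 15.0                                  # a few extreme scores

# Stage 1 (ordinal fit): scan weight directions for maximal Kendall tau
# between predictions and outcomes; only the direction matters for tau.
angles = np.linspace(0, np.pi, 180, endpoint=False)
taus = [kendall_tau(X @ [np.cos(t), np.sin(t)], y) for t in angles]
best = max(taus)
candidates = [a for a, t in zip(angles, taus) if t >= best - 1e-12]

# Stage 2 (metric fit): among tau-maximizing directions, keep the one
# whose least-squares rescaling minimizes the sum of squared errors.
def sse(theta):
    d = X @ [np.cos(theta), np.sin(theta)]
    A = np.column_stack([np.ones(n), d])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return r @ r

theta_star = min(candidates, key=sse)
w = np.array([np.cos(theta_star), np.sin(theta_star)])
print(np.round(w, 2))
```

Because τ is invariant to monotone rescaling of the predictions, many directions can tie on ordinal fit; the least-squares criterion then acts only as a tie-breaker, which is the sense in which the metric fit is conditional on the ordinal fit.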
The scientific use of factor analysis in behavioral and life sciences
National Research Council Canada - National Science Library
Cattell, Raymond Bernard
1978-01-01
...; the choice of procedures in experimentation; factor interpretation; the relationship of factor analysis to broadened psychometric concepts such as scaling, validity, and reliability, and to higher- strata models...
Šebestová, Jarmila
2007-01-01
Small and medium-sized entrepreneurship is often considered a phenomenon of our times. Why have so many authors dedicated their work to this field? The main reason is that SMEs influence the life of society and contribute to the economic development of the region in which they establish their business. The same holds in the Moravian-Silesian region, where the factor analysis was applied. VRIO and Porter's analyses were used to interpret the research findings clearly.
Investigation and analysis of aircrew ametropia and related factors
Directory of Open Access Journals (Sweden)
Li-Juan Zheng
2014-10-01
AIM: To investigate the refractive distribution and analyse risk factors for aircrew ametropia. METHODS: 49 cases of ametropia among 1031 aircrew examined from May 2013 to May 2014 were reviewed. Refraction composition, age, aircraft type, position and flight time, together with the subjective assessments of the aircrew, were analysed and compared. RESULTS: Of the 49 cases, 43 (88%) were myopia and 6 (12%) were hypermetropia. Detection rates were higher in aircrew over 50 years of age and in those with more than 3000 flight hours, and lower in aircrew with marked subjective symptoms, in fighter aircrew and in those with good eye-use habits. CONCLUSION: The incidence of myopia is higher in aircrew over 50 years of age and with long flight times than in fighter pilots and aircrew with good eye-use habits. Attention should be paid to the increasing late-onset myopia of aviators, and to their eye-use habits, work intensity and time spent using their eyes.
[Rhabdomyolysis in a Bipolar Adolescent. Analysis of Associated Factors].
Restrepo, Diana; Montoya, Pablo; Giraldo, Laura; Gaviria, Génesis; Mejía, Catalina
2015-01-01
To describe a case of rhabdomyolysis associated with the use of quetiapine and lamotrigine in an adolescent treated for bipolar disorder. Description of the clinical case, analysis of the associated factors and a non-systematic review of the relevant literature. An 18-year-old male with bipolar disorder, treated pharmacologically with quetiapine and lamotrigine, presented with rhabdomyolysis after two weeks of physical activity. Quetiapine and exercise have been associated with rhabdomyolysis. The mechanism mediating this association has not been established, although neuromuscular dysfunction and an increase in sarcomere permeability have been described. This clinical case allowed the complex interaction between antipsychotic agents and increased physical activity to be observed in an adolescent psychiatric patient, as well as the appearance of a potentially lethal medical complication. Copyright © 2014 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.
A Retrospective Analysis of Neonatal Encephalocele Predisposing Factors and Outcomes.
Yucetas, Seyho Cem; Uçler, Necati
2017-01-01
This study evaluates the predisposing factors and outcomes of surgical management of encephaloceles at our institution. A retrospective analysis of 32 occipital encephaloceles managed operatively at the Neurosurgery Department Clinics of the Faculty of Medicine, Adıyaman University, was performed between 2011 and 2015. Among the study population, 19 mothers had been exposed to TORCH infections (toxoplasma, rubella, cytomegalovirus, herpes simplex virus), 18 were in consanguineous marriages, and 3 had regular prenatal screening. Associated congenital anomalies were common. Eight infants required reoperation, and 9 died during follow-up. The study identified key areas for prevention. Knowledge of the intracranial and associated anomalies can guide management. © 2016 S. Karger AG, Basel.
Theory of sampling: four critical success factors before analysis.
Wagner, Claas; Esbensen, Kim H
2015-01-01
Food and feed materials characterization, risk assessment, and safety evaluations can only be ensured if QC measures are based on valid analytical data, stemming from representative samples. The Theory of Sampling (TOS) is the only comprehensive theoretical framework that fully defines all requirements to ensure sampling correctness and representativity, and to provide the guiding principles for sampling in practice. TOS also defines the concept of material heterogeneity and its impact on the sampling process, including the effects from all potential sampling errors. TOS's primary task is to eliminate bias-generating errors and to minimize sampling variability. Quantitative measures are provided to characterize material heterogeneity, on which an optimal sampling strategy should be based. Four critical success factors preceding analysis to ensure a representative sampling process are presented here.
Dispersive analysis of the scalar form factor of the nucleon
Hoferichter, M.; Ditsche, C.; Kubis, B.; Meißner, U.-G.
2012-06-01
Based on the recently proposed Roy-Steiner equations for pion-nucleon ($\pi N$) scattering [1], we derive a system of coupled integral equations for the $\pi\pi \to \bar{N}N$ and $\bar{K}K \to \bar{N}N$ S-waves. These equations take the form of a two-channel Muskhelishvili-Omnès problem, whose solution in the presence of a finite matching point is discussed. We use these results to update the dispersive analysis of the scalar form factor of the nucleon, fully including $\bar{K}K$ intermediate states. In particular, we determine the correction $\Delta_\sigma = \sigma(2M_\pi^2) - \sigma_{\pi N}$, which is needed for the extraction of the pion-nucleon $\sigma$ term from $\pi N$ scattering, as a function of the pion-nucleon subthreshold parameters and the $\pi N$ coupling constant.
Bayesian analysis of factors associated with fibromyalgia syndrome subjects
Jayawardana, Veroni; Mondal, Sumona; Russek, Leslie
2015-01-01
Factors contributing to movement-related fear were assessed by Russek et al. (2014) for subjects with fibromyalgia (FM), based on data collected in a national internet survey of community-based individuals. The study focused on the variables Activities-Specific Balance Confidence scale (ABC), Primary Care Post-Traumatic Stress Disorder screen (PC-PTSD), Tampa Scale of Kinesiophobia (TSK), a Joint Hypermobility Syndrome screen (JHS), Vertigo Symptom Scale (VSS-SF), Obsessive-Compulsive Personality Disorder (OCPD), pain, work status and physical activity derived from the Revised Fibromyalgia Impact Questionnaire (FIQR). The study presented in this paper revisits the same data with a Bayesian analysis in which appropriate priors were introduced for the variables selected in Russek's paper.
Analysis of factors affecting the effect of stope leaching
International Nuclear Information System (INIS)
Xie Wangnan; Dong Chunming
2014-01-01
An industrial test and industrial trial production of stope leaching were carried out at the Taoshan orefield of the Dabu deposit. The results showed obvious differences in leaching rate and leaching time: compared with the industrial trial production, the industrial test achieved a higher leaching rate and a shorter leaching time. The analysis indicated that the blasting method and the liquid distribution were the main factors affecting leaching rate and leaching time. We therefore put forward the following suggestions: use the technique of deep-hole slicing tight-face blasting to reduce the yield of lump ores, adopt effective liquid-distribution methods so that the lixiviant infiltrates the whole ore heap, and introduce bacterial leaching. (authors)
Confirmatory Factor Analysis Alternative: Free, Accessible CBID Software.
Bott, Marjorie; Karanevich, Alex G; Garrard, Lili; Price, Larry R; Mudaranthakam, Dinesh Pal; Gajewski, Byron
2018-02-01
New software that performs Classical and Bayesian Instrument Development (CBID) is reported that seamlessly integrates expert (content validity) and participant data (construct validity) to produce entire reliability estimates with smaller sample requirements. The free CBID software can be accessed through a website and used by clinical investigators in new instrument development. Demonstrations are presented of the three approaches using the CBID software: (a) traditional confirmatory factor analysis (CFA), (b) Bayesian CFA using flat uninformative prior, and (c) Bayesian CFA using content expert data (informative prior). Outcomes of usability testing demonstrate the need to make the user-friendly, free CBID software available to interdisciplinary researchers. CBID has the potential to be a new and expeditious method for instrument development, adding to our current measurement toolbox. This allows for the development of new instruments for measuring determinants of health in smaller diverse populations or populations of rare diseases.
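The way an informative expert prior shrinks a small-sample estimate can be sketched with a one-parameter conjugate normal-normal update. This is a deliberately simplified stand-in for the full Bayesian CFA performed by CBID, not its actual computation; all numbers below are hypothetical.

```python
# Expert content-validity ratings supply an informative prior for a factor
# loading (mean, sd); participant data supply a likelihood estimate.
# All values are illustrative assumptions.
prior_mean, prior_sd = 0.70, 0.10     # from content experts (informative prior)
data_mean, data_se = 0.55, 0.15       # loading estimate from a small sample

# Conjugate normal-normal update (known-variance approximation):
# posterior precision is the sum of prior and data precisions, and the
# posterior mean is the precision-weighted average of the two estimates.
w_prior = 1 / prior_sd**2
w_data = 1 / data_se**2
post_mean = (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
post_sd = (w_prior + w_data) ** -0.5

print(round(post_mean, 3), round(post_sd, 3))
```

The posterior standard deviation is smaller than either input's, which is the mechanism by which an informative prior reduces the sample size needed for a given precision; a flat prior (approach b in the abstract) corresponds to `w_prior` near zero.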
Measuring coalition functioning: refining constructs through factor analysis.
Brown, Louis D; Feinberg, Mark E; Greenberg, Mark T
2012-08-01
Internal and external coalition functioning is an important predictor of coalition success that has been linked to perceived coalition effectiveness, coalition goal achievement, coalition ability to support evidence-based programs, and coalition sustainability. Understanding which aspects of coalition functioning best predict coalition success requires the development of valid measures of empirically unique coalition functioning constructs. The goal of the present study is to examine and refine the psychometric properties of coalition functioning constructs in the following six domains: leadership, interpersonal relationships, task focus, participation benefits/costs, sustainability planning, and community support. The authors used factor analysis to identify problematic items in the original measure and then piloted new items and scales to create a more robust, psychometrically sound, multidimensional measure of coalition functioning. Scales displayed good construct validity through correlations with other measures. The discussion considers the strengths and weaknesses of the refined instrument.
Nonlinear Chance Constrained Problems: Optimality Conditions, Regularization and Solvers
Czech Academy of Sciences Publication Activity Database
Adam, Lukáš; Branda, Martin
2016-01-01
Roč. 170, č. 2 (2016), s. 419-436 ISSN 0022-3239 R&D Projects: GA ČR GA15-00735S Institutional support: RVO:67985556 Keywords : Chance constrained programming * Optimality conditions * Regularization * Algorithms * Free MATLAB codes Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.289, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/adam-0460909.pdf
How market environment may constrain global franchising in emerging markets
Baena Graciá, Verónica
2011-01-01
Although emerging markets are some of the fastest growing economies in the world and represent countries that are experiencing a substantial economic transformation, little is known about the factors influencing country selection for expansion in those markets. In an attempt to enhance the knowledge that managers and scholars have on franchising expansion, the present study examines how market conditions may constrain international diffusion of franchising in emerging markets. They are: i) ge...
Postpartum Depression in Women: A Risk Factor Analysis.
Zaidi, Farheen; Nigam, Aruna; Anjum, Ruby; Agarwalla, Rashmi
2017-08-01
Postpartum Depression (PPD) is a known entity affecting not only the woman but the whole family. It affects women harshly and chronically owing to their increased stress sensitivity, maladaptive coping strategies and multiple social roles in the community. To estimate the risk factors commonly associated with PPD among women attending a tertiary hospital in New Delhi, India, a longitudinal study was conducted at the antenatal clinic over a period of one year. In total, 260 women were screened at >36 weeks of gestation, of whom 149 postnatal women completed the questionnaire for PPD at six weeks after delivery. Informed consent, demographic data and obstetric details were obtained from each participant before screening commenced. Various risk factors and their associations were assessed by odds ratios; to identify the most important confounding variables, logistic regression analysis was used. PPD is a common mental health problem among postnatal women, being found in 12.75% (19 of 149) of subjects at six weeks after delivery. Moreover, it showed significant associations with young maternal age (p=0.040), birth of a female child (p=0.015), previous stressful life events (p=0.003), and low self-esteem and feelings of loneliness (p=0.007). This study provides important information regarding the risk factors associated with the development of PPD in this region of India. Female sex of the newborn and younger maternal age play an important role in the development of PPD.
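The odds-ratio screening used in studies like this can be sketched for a single risk factor from a 2x2 table. The counts below are hypothetical, not taken from the study, and the confidence interval uses Woolf's method on the log odds ratio.

```python
import math

# Hypothetical 2x2 table for one risk factor (illustrative counts only):
#                 PPD+   PPD-
# young age        12     40
# older age         7     90
a, b, c, d = 12, 40, 7, 90

# Odds ratio: (odds of PPD among exposed) / (odds among unexposed)
odds_ratio = (a * d) / (b * c)

# Woolf's method: 95% CI computed on the log odds-ratio scale
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(round(odds_ratio, 2), round(ci_low, 2), round(ci_high, 2))
```

A CI excluding 1 marks the factor as a candidate; the study then enters such candidates into a logistic regression to adjust for confounding among them.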
Analysis of vector boson production within TMD factorization
Energy Technology Data Exchange (ETDEWEB)
Scimemi, Ignazio [Universidad Complutense de Madrid, Departamento de Fisica Teorica, Madrid (Spain); Vladimirov, Alexey [Universitaet Regensburg, Institut fuer Theoretische Physik, Regensburg (Germany)
2018-02-15
We present a comprehensive analysis and extraction of the unpolarized transverse momentum dependent (TMD) parton distribution functions, which are fundamental constituents of the TMD factorization theorem. We provide a general review of the theory of TMD distributions, and present a new scheme of scale fixation. This scheme, called the ζ-prescription, allows one to minimize the impact of perturbative logarithms over a large range of scales and does not generate undesired power corrections. Within the ζ-prescription we consistently include the perturbatively calculable parts up to next-to-next-to-leading order (NNLO), and perform a global fit of Drell-Yan and Z-boson production, which includes the data of the E288, Tevatron and LHC experiments. The non-perturbative part of the TMDs is explored by checking a variety of models. We support the obtained results by a study of theoretical uncertainties, perturbative convergence, and a dedicated study of the range of applicability of the TMD factorization theorem. The considered non-perturbative models present significant differences in fitting behavior, which allow us to clearly disfavor most of them. The numerical evaluations are provided by the arTeMiDe code, which is introduced in this work and can be used for current and future TMD phenomenology. (orig.)
Structural and functional analysis of coral Hypoxia Inducible Factor.
Zoccola, Didier; Morain, Jonas; Pagès, Gilles; Caminiti-Segonds, Natacha; Giuliano, Sandy; Tambutté, Sylvie; Allemand, Denis
2017-01-01
Tissues of symbiotic Cnidarians are exposed to wide, rapid and daily variations of oxygen concentration. Indeed, during daytime, intracellular O2 concentration increases due to symbiont photosynthesis, while during night, respiration of both host cells and symbionts leads to intra-tissue hypoxia. The Hypoxia Inducible Factor 1 (HIF-1) is a heterodimeric transcription factor used for maintenance of oxygen homeostasis and adaptation to hypoxia. Here, we carried out a mechanistic study of the response to variations of O2 concentrations of the coral model Stylophora pistillata. In silico analysis showed that homologs of HIF-1 α (SpiHIF-1α) and HIF-1β (SpiHIF-1β) exist in coral. A specific SpiHIF-1 DNA binding on mammalian Hypoxia Response Element (HRE) sequences was shown in extracts from coral exposed to dark conditions. Then, we cloned the coral HIF-1α and β genes and determined their expression and transcriptional activity. Although HIF-1α has an incomplete Oxygen-dependent Degradation Domain (ODD) relative to its human homolog, its protein level is increased under hypoxia when tested in mammalian cells. Moreover, co-transfection of SpiHIF-1α and β in mammalian cells stimulated an artificial promoter containing HRE only in hypoxic conditions. This study shows the strong conservation of molecular mechanisms involved in adaptation to O2 concentration between Cnidarians and Mammals whose ancestors diverged about 1,200-1,500 million years ago.
International Nuclear Information System (INIS)
Chen Chengzhong; Lin Zhenshan
2008-01-01
Scientific analysis of energy consumption and its influencing factors is of great importance for energy strategy and policy planning. The energy consumption of China over 1953-2006 is estimated by applying the energy ecological footprint (EEF) method, and the fluctuation periods of the annual growth rate of China's per capita EEF (EEF_cpc) are analyzed with the empirical mode decomposition (EMD) method in this paper. EEF intensity is analyzed to depict energy efficiency in China. The main timescales of the 37 factors that affect the annual growth rate of EEF_cpc are also discussed based on EMD and factor analysis methods. Results show three obvious undulation cycles of the annual growth rate of EEF_cpc, of 4.6, 14.4 and 34.2 years, over the last 53 years. The findings from the common synthesized factors at the IMF1, IMF2 and IMF3 timescales of the 37 factors suggest that China's energy policy-makers should attach more importance to stabilizing economic growth, optimizing industrial structure, regulating domestic petroleum exploitation and improving transportation efficiency.
Ye, Yusen; Gao, Lin; Zhang, Shihua
2017-01-01
Transcription factors (TFs) play a key role in the transcriptional regulation of genes and in the determination of cellular identity through combinatorial interactions. However, current understanding of combinatorial regulation is deficient owing to the lack of experimental data from the same cellular environment and the extensive presence of data noise. Here, we adopt a Bayesian CANDECOMP/PARAFAC (CP) factorization approach (BCPF) to integrate multiple datasets in a network paradigm for determining precise TF interaction landscapes. In our first application, we apply BCPF to integrate three networks built from diverse datasets of multiple cell lines from ENCODE to predict a global and precise TF interaction network. This network yields 38 novel TF interactions with distinct biological functions. In our second application, we apply BCPF to seven cell-type TF regulatory networks and predict seven cell-lineage TF interaction networks. By further exploring their dynamics and modularity, we find that cell lineage-specific hub TFs participate in cell type- or lineage-specific regulation by interacting with non-specific TFs. Furthermore, we illustrate the biological function of hub TFs, taking those of the cancer and blood lineages as examples. Taken together, our integrative analysis reveals a more precise and extensive description of human TF combinatorial interactions. PMID:29033978
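The non-Bayesian core of CP/PARAFAC factorization can be sketched as an alternating-least-squares decomposition of a small synthetic three-way tensor. BCPF additionally places priors on the factor matrices, which is omitted here; the tensor, its dimensions and its rank are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Build a small exact rank-2 three-way tensor, a synthetic stand-in for a
# "TF x TF x cell type" data array (dimensions are illustrative).
R, I, J, K = 2, 8, 8, 4
A, B, C = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

def unfold(X, mode):
    """Mode-n matricization: mode axis to the front, rest flattened."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product, matching the unfolding order."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

# Plain CP decomposition via alternating least squares: update each factor
# matrix in turn with the other two held fixed.
Ah, Bh, Ch = (rng.random((n, R)) for n in (I, J, K))
for _ in range(200):
    Ah = unfold(T, 0) @ np.linalg.pinv(khatri_rao(Bh, Ch).T)
    Bh = unfold(T, 1) @ np.linalg.pinv(khatri_rao(Ah, Ch).T)
    Ch = unfold(T, 2) @ np.linalg.pinv(khatri_rao(Ah, Bh).T)

T_hat = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
print(round(float(rel_err), 6))
```

In the paper's setting each tensor slice is a TF-TF network from one data source or cell type, and the recovered factors summarize interactions shared across sources; the Bayesian treatment mainly adds automatic rank selection and noise robustness.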
Analysis of Factors Associated With Rhytidectomy Malpractice Litigation Cases.
Kandinov, Aron; Mutchnick, Sean; Nangia, Vaibhuv; Svider, Peter F; Zuliani, Giancarlo F; Shkoukani, Mahdi A; Carron, Michael A
2017-07-01
This study investigates the financial burden of medical malpractice litigation associated with rhytidectomies, as well as factors that contribute to litigation and poor defendant outcomes, which can help guide physician practices. To comprehensively evaluate rhytidectomy malpractice litigation, jury verdict and settlement reports related to rhytidectomy malpractice litigation were obtained using the Westlaw Next database. Searching "medical malpractice" in conjunction with several terms for rhytidectomy, to account for the various names associated with the procedure, yielded 155 court cases. Duplicate and nonrelevant cases were removed, and 89 cases were included in the analysis and reviewed for outcomes, defendant specialty, payments, and other allegations raised in proceedings. Data were collected from November 21, 2015, to December 25, 2015. Data analysis took place from December 25, 2015, to January 20, 2016. A total of 89 cases met our inclusion criteria. Most plaintiffs were female (81 of 88 with known sex [92%]), and patient age ranged from 40 to 76 years (median age, 56 years). Fifty-three cases (60%) were resolved in the defendant's favor, while the remaining 36 cases (40%) were resolved with either a settlement or a plaintiff verdict payment. The mean payment was $1.4 million. A greater proportion of cases involving plastic surgeon defendants were resolved with payment compared with cases involving defendants with ear, nose, and throat specialty (15 [36%] vs 4 [24%]). The most common allegations raised in litigation were intraoperative negligence (61 [69%]), poor cosmesis or disfigurement (57 [64%]), inadequate informed consent (30 [34%]), additional procedures required (14 [16%]), postoperative negligence (12 [14%]), and facial nerve injury (10 [11%]). Six cases (7%) involved alleged negligence surrounding a "lifestyle-lift" procedure, which tightens or oversews the superficial muscular aponeurosis system layer. In this study, although most cases of
Modeling the microstructural evolution during constrained sintering
DEFF Research Database (Denmark)
Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.
A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response to the stress field, as well as the FE calculation of the stress field from the microstructural evolution, is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number...
Weightlifter Lumbar Physiology Health Influence Factor Analysis of Sports Medicine.
Zhang, Xiangyang
2015-01-01
Chinese women's weightlifting has long been at an advanced world level, which suggests that Chinese coaches and athletes have much successful experience in weightlifting training. Weightlifting is nevertheless a high-risk sport for the lumbar spine: some promising young athletes have had to retire because of lumbar trauma, wasting both national investment and the athletes' own toil. This article analyses the training situation of weightlifting athletes from the perspective of sports medicine and puts forward suggestions aimed at avoiding lumbar injury and safeguarding athletes' health. A survey of 50 professional women weightlifters found that 82% suffered from symptoms of lumbar disease, mainly caused by three factors: lumbar strain, excessive training intensity and motion errors. From the angle of sports medicine, and combined with the structural characteristics of the human skeleton, a structural mechanics analysis of the athletes' lumbar region was carried out to identify the two technical movements that load the lumbar spine most heavily, and to study and standardize these movements so as to minimize lumbar loading and contribute to the health of athletes' lumbar spines.
Efficiency limit factor analysis for the Francis-99 hydraulic turbine
Zeng, Y.; Zhang, L. X.; Guo, J. P.; Guo, Y. K.; Pan, Q. L.; Qian, J.
2017-01-01
Energy loss in a hydraulic turbine is the most direct factor affecting its efficiency. Based on the theory of internal energy loss in hydraulic turbines, and combining the measurement data of Francis-99, this paper calculates the characteristic parameters of internal energy loss and establishes a calculation model of hydraulic turbine power. Taking the start-up test conditions given by Francis-99 as a case, the characteristics and transformation law of the internal energy of the turbine during transients are investigated. Further analysis of mechanical friction suggests that the main component of the mechanical friction loss is the rotational friction between the rotating runner and the surrounding water, defined here as the internal mechanical friction loss; a rough method for calculating it is given. Our purpose is to explore methods of increasing the energy-conversion efficiency of the water flow by analysing the energy losses within the turbine.
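The efficiency bookkeeping described above, available hydraulic power minus individual loss components, can be sketched as follows. All numbers are illustrative assumptions, not Francis-99 measurements, and the loss fractions are hypothetical.

```python
# Hydraulic-turbine efficiency bookkeeping (illustrative numbers):
# available hydraulic power is rho * g * Q * H, and each loss term
# reduces the power delivered at the shaft.
rho, g = 1000.0, 9.81        # water density [kg/m^3], gravity [m/s^2]
Q, H = 0.2, 12.0             # flow rate [m^3/s], net head [m]

P_hydraulic = rho * g * Q * H          # about 23.5 kW available

losses = {                             # assumed loss fractions of P_hydraulic
    "volumetric (leakage)": 0.01,
    "hydraulic (friction, incidence)": 0.06,
    "inner mechanical friction": 0.03,  # runner-water rotational friction
}
P_shaft = P_hydraulic * (1 - sum(losses.values()))
efficiency = P_shaft / P_hydraulic

print(round(P_hydraulic), round(P_shaft), round(efficiency, 3))
```

Separating the loss terms this way is what lets a transient analysis attribute efficiency changes during start-up to a specific mechanism, such as the inner mechanical friction loss the paper defines.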
Periodic tests: a human factors analysis of documentary aspects
International Nuclear Information System (INIS)
Perinet, Romuald; Rousseau, Jean-Marie
2007-01-01
conclusions of this analysis are presented in this paper. The analysis carried out by the IRSN showed that the complexity of the design and implementation process of periodic tests is due to the diversity of the organizations and participants, the number and heterogeneity of the documents, and the technical and regulatory complexity of operation. In this context, defects related to the quality of the national reference document updates and to the conditions of their delivery were at the origin of difficulties in CNPEs. These difficulties concern the integration of the updates by the participants, the overall vision of the rules to be respected, and the management of the workload needed to deal with these tasks. The analysis showed that CNPEs made efforts to produce reliable, station-specific updates, but improvements could still be made concerning the organization, the communication and the ergonomics of the operating ranges. More generally, from a human and organizational factors point of view, such an analysis goes beyond the search for responsibility for the dysfunctions and allows for a more objective explanation of the encountered difficulties (inapplicable rules, delivery delays, etc.). It also leads to a consolidated needs analysis in order to improve the global process. On the basis of a preliminary analysis, EDF has identified a plan for improvement. EDF has decided to deal with the improvement of the process within the framework of a thorough study. (authors)
Directory of Open Access Journals (Sweden)
Bruce Weaver
2014-09-01
Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data, and the shortcomings of these methods are well known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation-maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. Unfortunately, MVA has no /MATRIX subcommand and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
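The EM-covariance idea above is not specific to SPSS. As a rough illustration (Python with NumPy; all function names are my own, and this is a minimal sketch rather than the macros the abstract describes), the code below estimates EM means and covariances for incomplete multivariate-normal data; the resulting correlation matrix could then be fed to any factor-analysis routine that accepts one.

```python
import numpy as np

def em_mvn(X, n_iter=50):
    """EM estimates of the mean vector and covariance matrix of a
    multivariate-normal sample with missing values (assumes MAR and
    that no row is entirely missing)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    mu = np.nanmean(X, axis=0)                      # initial estimates
    cov = np.diag(np.nanvar(X, axis=0))
    for _ in range(n_iter):
        X_imp = X.copy()
        extra = np.zeros((p, p))                    # accumulated conditional covariances
        for i in range(n):
            miss = np.isnan(X[i])
            if not miss.any():
                continue
            obs = ~miss
            S_oo = cov[np.ix_(obs, obs)]
            S_mo = cov[np.ix_(miss, obs)]
            reg = S_mo @ np.linalg.inv(S_oo)        # regression of missing on observed
            X_imp[i, miss] = mu[miss] + reg @ (X[i, obs] - mu[obs])
            extra[np.ix_(miss, miss)] += cov[np.ix_(miss, miss)] - reg @ S_mo.T
        mu = X_imp.mean(axis=0)                     # M-step
        centred = X_imp - mu
        cov = (centred.T @ centred + extra) / n
    return mu, cov

def cov_to_corr(cov):
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)
```

The EM correlation matrix from `cov_to_corr(cov)` plays the role that the EM matrix dataset plays for FACTOR and RELIABILITY in the abstract.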
International Nuclear Information System (INIS)
Samper, J.; Carrera, J.; Bajos, C.; Astudillo, J.; Santiago, J.L.
1999-01-01
Hydrogeochemical activities have been a key factor in the verification and constraining of the groundwater flow models developed for the safety assessment of the FUA uranium mill tailings restoration and the Cabril L/ILW disposal facility. The lessons learned at both sites will be applied to groundwater transport modelling in the current PA exercises (ENRESA 2000). The groundwater flow model of the Cabril site represents a low-permeability fractured medium and was built using the TRANSIN code series developed by UPC-ENRESA. The hydrogeochemical data obtained from systematic yearly sampling and analysis campaigns were successfully applied to distinguish between local and regional flow and between young and old groundwater. The salinity content, mainly the chloride anion content, was the most critical hydrogeochemical datum for constraining the groundwater flow model. (author)
International Nuclear Information System (INIS)
Tsartsalis, Stergios; Moulin-Sallanon, Marcelle; Dumas, Noé; Tournier, Benjamin B.; Ghezzi, Catherine; Charnay, Yves; Ginovart, Nathalie; Millet, Philippe
2014-01-01
Purpose: In vivo imaging of GABA_A receptors is essential for the comprehension of psychiatric disorders in which the GABAergic system is implicated. Small-animal SPECT provides a modality for in vivo imaging of the GABAergic system in rodents using [123I]Iomazenil, an antagonist of the GABA_A receptor. The goal of this work is to describe and evaluate different quantitative reference tissue methods that enable reliable binding potential (BP) estimations in the rat brain to be obtained. Methods: Five male Sprague-Dawley rats were used for [123I]Iomazenil brain SPECT scans. Binding parameters were obtained with a one-tissue compartment model (1TC), a constrained two-tissue compartment model (2TC_c), the two-step Simplified Reference Tissue Model (SRTM2), Logan graphical analysis and analysis of delayed-activity images. In addition, we employed factor analysis (FA) to deal with noise in the data. Results: BP_ND obtained with SRTM2, Logan graphical analysis and delayed-activity analysis was highly correlated with BP_F values obtained with 2TC_c (r = 0.954 and 0.945 respectively, p < …) … 2TC_c and SRTM2 in raw and FA-denoised images (r = 0.961 and 0.909 respectively, p < …) … BP_ND values from raw images while scans of only 70 min are sufficient from FA-denoised images. These images are also associated with significantly lower standard errors of 2TC_c and SRTM2 BP values. Conclusion: Reference tissue methods such as SRTM2 and Logan graphical analysis can provide equally reliable BP_ND values from rat brain [123I]Iomazenil SPECT. Acquisitions, however, can be much less time-consuming either with analysis of delayed activity obtained from a 20-minute scan 50 min after tracer injection or with FA-denoising of images
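Logan graphical analysis, one of the reference-tissue methods named above, can be sketched generically (this is not the authors' implementation; the kinetic constants in the test are illustrative, and the simplified form below omits the k2' correction term, which is acceptable once the reference-to-target ratio is roughly constant):

```python
import numpy as np

def logan_ref(ct, cr, t, t_star):
    """Simplified reference-tissue Logan plot: regress the running
    integral of the target time-activity curve on that of the reference
    curve (both normalized by the instantaneous target activity).
    The late-time slope approximates DVR, and BP_ND = DVR - 1."""
    int_ct = np.array([np.trapz(ct[: i + 1], t[: i + 1]) for i in range(len(t))])
    int_cr = np.array([np.trapz(cr[: i + 1], t[: i + 1]) for i in range(len(t))])
    keep = t >= t_star                       # use only the linear late-time portion
    slope, _ = np.polyfit(int_cr[keep] / ct[keep], int_ct[keep] / ct[keep], 1)
    return slope                             # DVR estimate
```

With one-tissue-compartment curves driven by a common mono-exponential input and a target-to-reference volume-of-distribution ratio of 2, the late-time slope converges to 2.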
Benligiray, Serdar; Onay, Ahmet
2017-01-01
The objective of this study is to explore business courses performance factors with a focus on accounting and finance. Course score interrelations are assumed to represent interpretable constructs of these factors. Factor analysis is proposed to identify the constructs that explain the correlations. Factor analysis results identify three…
Factor Analysis of Drawings: Application to College Student Models of the Greenhouse Effect
Libarkin, Julie C.; Thomas, Stephen R.; Ording, Gabriel
2015-01-01
Exploratory factor analysis was used to identify models underlying drawings of the greenhouse effect made by over 200 entering university freshmen. Initial content analysis allowed deconstruction of drawings into salient features, with grouping of these features via factor analysis. A resulting 4-factor solution explains 62% of the data variance,…
Asymptotic Likelihood Distribution for Correlated & Constrained Systems
Agarwal, Ujjwal
2016-01-01
This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio when the total number of parameters is h, two of which are constrained and correlated.
Constrained bidirectional propagation and stroke segmentation
Energy Technology Data Exchange (ETDEWEB)
Mori, S; Gillespie, W; Suen, C Y
1983-03-01
A new method for decomposing a complex figure into its constituent strokes is described. This method, based on constrained bidirectional propagation, is suitable for parallel processing. Examples of its application to the segmentation of Chinese characters are presented. 9 references.
Mathematical Modeling of Constrained Hamiltonian Systems
Schaft, A.J. van der; Maschke, B.M.
1995-01-01
Network modelling of unconstrained energy conserving physical systems leads to an intrinsic generalized Hamiltonian formulation of the dynamics. Constrained energy conserving physical systems are directly modelled as implicit Hamiltonian systems with regard to a generalized Dirac structure on the
Regional Responses to Constrained Water Availability
Cui, Y.; Calvin, K. V.; Hejazi, M. I.; Clarke, L.; Kim, S. H.; Patel, P.
2017-12-01
There have been many concerns about water as a constraint on agricultural production, electricity generation, and many other human activities in the coming decades. Nevertheless, how different countries/economies would respond to such constraints has not been explored. Here, we examine the response mechanisms to binding water availability constraints at the water basin level and across a wide range of socioeconomic, climate and energy technology scenarios. Specifically, we look at the change in water withdrawals between the energy, land-use and other sectors within an integrated framework, using the Global Change Assessment Model (GCAM), which endogenizes water use and allocation decisions based on costs. We find that, when water is taken into account as part of the production decision-making, countries/basins generally fall into three categories, depending on the change in water withdrawals and the re-allocation of water between sectors. First, water is not a constraining factor for most of the basins. Second, advancements in water-saving technologies for electricity generation cooling systems are sufficient to reduce water withdrawals enough to meet binding water availability constraints, as in China and the EU-15. Third, water saving in the electricity sector alone is not sufficient and cannot compensate for the lowered water availability in the binding case; for example, many basins in Pakistan, the Middle East and India have to reduce irrigated water withdrawals substantially, either by switching to rain-fed agriculture or by reducing production. The dominant response strategy for individual countries/basins is quite robust across the range of alternative scenarios that we test. The relative size of water withdrawals in the energy and agriculture sectors is one of the most important factors affecting the dominant mechanism.
Factors influencing crime rates: an econometric analysis approach
Bothos, John M. A.; Thomopoulos, Stelios C. A.
2016-05-01
The scope of the present study is to research the dynamics that determine the commission of crimes in US society. Our study is part of a model we are developing to understand urban crime dynamics and to enhance citizens' "perception of security" in large urban environments. The main targets of our research are to highlight the dependence of crime rates on certain social and economic factors and on basic elements of state anticrime policies. In conducting our research, we use as guides previous relevant studies on crime dependence, performed with similar quantitative analyses in mind, regarding the dependence of crime on certain social and economic factors using statistics and econometric modelling. Our first approach consists of conceptual state-space dynamic cross-sectional econometric models that incorporate a feedback loop describing crime as a feedback process. In order to define the model variables dynamically, we use statistical analysis of crime records and of records on social and economic conditions and policing characteristics (such as police force size and policing results, i.e. crime arrests) to determine their influence, as independent variables, on crime, the dependent variable of our model. The econometric models we apply in this first approach are an exponential log-linear model and a logit model. In a second approach, we study the evolution of violent crime through time in the US, independently as an autonomous social phenomenon, using autoregressive and moving-average time-series econometric models. Our findings show that there are certain social and economic characteristics that affect the formation of crime rates in the US, either positively or negatively. Furthermore, the results of our time-series econometric modelling show that violent crime, viewed solely and independently as a social phenomenon, correlates with previous years' crime rates and depends on the social and economic conditions of previous years.
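The exponential log-linear specification mentioned above can be sketched on synthetic data (the regressors and coefficients here are illustrative, not the study's estimates): taking logs turns the exponential model into a linear one, so ordinary least squares recovers the semi-elasticities.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Illustrative regressors (not the study's actual data)
unemployment = rng.uniform(3.0, 12.0, n)       # percent
police_per_capita = rng.uniform(1.0, 5.0, n)   # officers per 1,000 residents

# Exponential (log-linear) model: ln(crime) = b0 + b1*unemp + b2*police + e
b_true = np.array([2.0, 0.08, -0.15])          # assumed true coefficients
log_crime = (b_true[0]
             + b_true[1] * unemployment
             + b_true[2] * police_per_capita
             + rng.normal(0.0, 0.1, n))

# OLS on the log scale recovers the semi-elasticities b1, b2
X = np.column_stack([np.ones(n), unemployment, police_per_capita])
b_hat, *_ = np.linalg.lstsq(X, log_crime, rcond=None)
```

Here `b_hat[1]` is the estimated percentage change in crime per one-point change in unemployment, the kind of dependence the study quantifies.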
Comprehensive Behavioral Analysis of Activating Transcription Factor 5-Deficient Mice
Directory of Open Access Journals (Sweden)
Mariko Umemura
2017-07-01
Activating transcription factor 5 (ATF5) is a member of the CREB/ATF family of basic leucine zipper transcription factors. We previously reported that ATF5-deficient (ATF5-/-) mice demonstrated abnormal olfactory bulb development due to impaired interneuron supply. Furthermore, ATF5-/- mice were less aggressive than ATF5+/+ mice. Although ATF5 is widely expressed in the brain and is involved in the regulation of the proliferation and development of neurons, the physiological role of ATF5 in the higher brain remains unknown. Our objective was to investigate the physiological role of ATF5 in the higher brain. We performed a comprehensive behavioral analysis using ATF5-/- mice and wild-type littermates. ATF5-/- mice exhibited abnormal locomotor activity in the open field test. They also exhibited abnormal anxiety-like behavior in the light/dark transition test and the open field test. Furthermore, ATF5-/- mice displayed reduced social interaction in Crawley's social interaction test and increased pain sensitivity in the hot plate test compared with wild type. Finally, behavioral flexibility was reduced in the T-maze test in ATF5-/- mice compared with wild type. In addition, we demonstrated that ATF5-/- mice display disturbances of monoamine neurotransmitter levels in several brain regions. These results indicate that ATF5 deficiency elicits abnormal behaviors and disturbances of monoamine neurotransmitter levels in the brain. The behavioral abnormalities of ATF5-/- mice may be due to the disturbance of monoamine levels. Taken together, these findings suggest that ATF5-/- mice may be a unique animal model of some psychiatric disorders.
Explaining evolution via constrained persistent perfect phylogeny
2014-01-01
Background The perfect phylogeny is an often used model in phylogenetics since it provides an efficient basic procedure for representing the evolution of genomic binary characters in several frameworks, such as haplotype inference. The model, which is conceptually the simplest, is based on the infinite sites assumption, that is, no character can mutate more than once in the whole tree. A main open problem regarding the model is finding generalizations that retain the computational tractability of the original model but are more flexible in modeling biological data when the infinite sites assumption is violated because of, e.g., back mutations. A special case of back mutations that has been considered in the study of the evolution of protein domains (where a domain is acquired and then lost) is persistence: a character is allowed to return to the ancestral state. In this model characters can be gained and lost at most once. In this paper we consider the computational problem of explaining binary data by the Persistent Perfect Phylogeny model (referred to as PPP) and for this purpose we investigate the problem of reconstructing an evolution where some constraints are imposed on the paths of the tree. Results We define a natural generalization of the PPP problem obtained by requiring that for some pairs (character, species), neither the species nor any of its ancestors can have the character. In other words, some characters cannot be persistent for some species. This new problem is called Constrained PPP (CPPP). Based on a graph formulation of the CPPP problem, we are able to provide a polynomial time solution for the CPPP problem for matrices whose conflict graph has no edges. Using this result, we develop a parameterized algorithm for solving the CPPP problem where the parameter is the number of characters. Conclusions A preliminary experimental analysis shows that the constrained persistent perfect phylogeny model allows to
On the origin of constrained superfields
Energy Technology Data Exchange (ETDEWEB)
Dall’Agata, G. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy); Dudas, E. [Centre de Physique Théorique, École Polytechnique, CNRS, Université Paris-Saclay,F-91128 Palaiseau (France); Farakos, F. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)
2016-05-06
In this work we analyze constrained superfields in supersymmetry and supergravity. We propose a constraint that, in combination with the constrained goldstino multiplet, consistently removes any selected component from a generic superfield. We also describe its origin, providing the operators whose equations of motion lead to the decoupling of such components. We illustrate our proposal by means of various examples and show how known constraints can be reproduced by our method.
Common Factor Analysis Versus Principal Component Analysis: Choice for Symptom Cluster Research
Directory of Open Access Journals (Sweden)
Hee-Ju Kim, PhD, RN
2008-03-01
Conclusion: If the study purpose is to explain correlations among variables and to examine the structure of the data (as is usual in symptom cluster research), CFA provides a more accurate result. If the purpose of a study is to summarize data with a smaller number of variables, PCA is the choice. PCA can also be used as an initial step in CFA because it provides information regarding the maximum number and nature of factors. In using factor analysis for symptom cluster research, several issues need to be considered, including subjectivity of solution, sample size, symptom selection, and level of measurement.
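The CFA-versus-PCA contrast drawn in this conclusion can be illustrated on synthetic one-factor data (a sketch assuming scikit-learn is available; the loadings are invented): common factor analysis models the unique variance of each item separately, whereas PCA's first component summarizes total variance, unique variance included.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis, PCA

rng = np.random.default_rng(3)
loadings = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])   # one common factor
n = 2000
factor = rng.normal(size=(n, 1))
unique = rng.normal(size=(n, 6)) * np.sqrt(1.0 - loadings**2)
X = factor @ loadings[None, :] + unique               # unit-variance items

# Common factor analysis separates shared from unique variance ...
fa = FactorAnalysis(n_components=1, random_state=0).fit(X)
est = fa.components_.ravel()
est = est if est.sum() > 0 else -est                  # resolve the sign ambiguity

# ... whereas PCA's first component also absorbs unique variance
pca = PCA(n_components=1).fit(X)
```

With enough cases, `est` approximates the true loadings while `fa.noise_variance_` picks up the item-specific variances that PCA folds into its component.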
Ghan, Steven; Wang, Minghuai; Zhang, Shipeng; Ferrachat, Sylvaine; Gettelman, Andrew; Griesfeller, Jan; Kipling, Zak; Lohmann, Ulrike; Morrison, Hugh; Neubauer, David; Partridge, Daniel G; Stier, Philip; Takemura, Toshihiko; Wang, Hailong; Zhang, Kai
2016-05-24
A large number of processes are involved in the chain from emissions of aerosol precursor gases and primary particles to impacts on cloud radiative forcing. Those processes are manifest in a number of relationships that can be expressed as factors dlnX/dlnY driving aerosol effects on cloud radiative forcing. These factors include the relationships between cloud condensation nuclei (CCN) concentration and emissions, droplet number and CCN concentration, cloud fraction and droplet number, cloud optical depth and droplet number, and cloud radiative forcing and cloud optical depth. The relationship between cloud optical depth and droplet number can be further decomposed into the sum of two terms involving the relationship of droplet effective radius and cloud liquid water path with droplet number. These relationships can be constrained using observations of recent spatial and temporal variability of these quantities. However, we are most interested in the radiative forcing since the preindustrial era. Because few relevant measurements are available from that era, relationships from recent variability have been assumed to be applicable to the preindustrial to present-day change. Our analysis of Aerosol Comparisons between Observations and Models (AeroCom) model simulations suggests that estimates of relationships from recent variability are poor constraints on relationships from anthropogenic change for some terms, with even the sign of some relationships differing in many regions. Proxies connecting recent spatial/temporal variability to anthropogenic change, or sustained measurements in regions where emissions have changed, are needed to constrain estimates of anthropogenic aerosol impacts on cloud radiative forcing.
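The basic operation behind constraining a factor dlnY/dlnX from recent variability can be sketched as a log-log regression (the variables and numbers below are illustrative, not AeroCom values):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative "recent variability" of CCN and droplet number (log scale)
ln_ccn = rng.normal(5.0, 0.5, 400)
true_factor = 0.7                                   # assumed dlnNd/dlnCCN
ln_nd = 1.0 + true_factor * ln_ccn + rng.normal(0.0, 0.1, 400)

# The factor dlnNd/dlnCCN is the slope of the log-log regression.
# The paper's caveat: a slope fit on recent variability need not equal
# the slope of the preindustrial-to-present-day change.
dln_nd_dln_ccn = np.polyfit(ln_ccn, ln_nd, 1)[0]
```

The regression itself is trivial; the paper's point is about which samples go into it, since slopes estimated from recent variability can differ from anthropogenic-change slopes even in sign.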
Modeling constrained sintering of bi-layered tubular structures
DEFF Research Database (Denmark)
Tadesse Molla, Tesfaye; Kothanda Ramachandran, Dhavanesan; Ni, De Wei
2015-01-01
Constrained sintering of tubular bi-layered structures is being used in the development of various technologies. Densification mismatch between the layers making the tubular bi-layer can generate stresses, which may create processing defects. An analytical model is presented to describe the densi...... and thermo-mechanical analysis. Results from the analytical model are found to agree well with finite element simulations as well as measurements from sintering experiment....
International Nuclear Information System (INIS)
Takagawa, Kenichi; Iida, Hiroyasu
2011-01-01
Imperfect maintenance planning was frequently identified in domestic nuclear power plants. To prevent such an event, we analyzed causal factors in maintenance planning stages and showed the directionality of countermeasures in this study. There is a pragmatic limit in finding the causal factors from the items based on report descriptions. Therefore, the idea of the systemic accident model, which is used to monitor the performance variability in normal circumstances, is taken as a new concept instead of investigating negative factors. As an actual method for analyzing usual activities, cognitive task analysis (CTA) was applied. Persons who experienced various maintenance activities at one electric power company were interviewed about sources related to decision making during maintenance planning, and then usual factors affecting planning were extracted as performance variability factors. The tendency of domestic events was analyzed using the classification item of those factors, and the directionality of countermeasures was shown. The following are critical for preventing imperfect maintenance planning: the persons in charge should fully understand the situation of the equipment for which they are responsible in the work planning and maintenance evaluation stages, and they should definitely understand, for example, the maintenance bases of that equipment. (author)
Mittag, Kathleen Cage
Most researchers using factor analysis extract factors from a matrix of Pearson product-moment correlation coefficients. A method is presented for extracting factors in a non-parametric way, by extracting factors from a matrix of Spearman rho (rank correlation) coefficients. It is possible to factor analyze a matrix of association such that…
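The rank-based variant described above can be sketched in a few lines (assuming SciPy is available; this is a simple eigenvector extraction from the Spearman matrix, without the communality iteration a full principal-axis factoring would add):

```python
import numpy as np
from scipy.stats import spearmanr

def spearman_factor_loadings(X, n_factors):
    """Extract factor loadings from a Spearman (rank) correlation matrix
    via its leading eigenvectors. Requires at least 3 columns so that
    spearmanr returns a full correlation matrix."""
    rho, _ = spearmanr(X)                    # p x p rank correlations
    vals, vecs = np.linalg.eigh(rho)
    top = np.argsort(vals)[::-1][:n_factors] # largest eigenvalues first
    return vecs[:, top] * np.sqrt(vals[top])
```

Because Spearman rho depends only on ranks, monotone transformations of individual variables leave the loadings unchanged, which is the appeal of the non-parametric approach.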
Safety analysis factors for environmental restoration and decontamination and decommissioning
International Nuclear Information System (INIS)
Ellingson, D.R.
1993-04-01
Environmental restoration (ER) and facility decontamination/decommissioning (D&D) operations can be grouped into two general categories. "Nonstationary cleanup" or simply "cleanup" activities are those where the operation must relocate to the site of new contaminated material at the completion of each task (i.e., the operation moves to the contaminated material). "Stationary production" or simply "production" activities are those where the contaminated material is moved to a centralized location (i.e., the contaminated material is moved to the operation) for analysis, sorting, treatment, storage, and disposal. This paper addresses the issue of nonstationary cleanup design. The following are the specific assigned action items: Collect and compile a list of special safety-related ER/D&D design factors, especially ones that don't follow DOE Order 6430.1A requirements. Develop proposal of what makes sense to recommend to designers; especially consider recommendations for short-term projects. Present proposal at the January meeting. To achieve the action items, applicable US Department of Energy (DOE) design requirements, and cleanup operations and differences from production activities are reviewed and summarized; basic safety requirements influencing design are summarized; and finally, approaches, considerations, and methods for safe, cost-effective design of cleanup activities are discussed
A Sensitivity Analysis Approach to Identify Key Environmental Performance Factors
Directory of Open Access Journals (Sweden)
Xi Yu
2014-01-01
Life cycle assessment (LCA) has been widely used in the design phase over the last two decades to reduce a product's environmental impacts across the whole product life cycle (PLC). Traditional LCA is restricted to assessing the environmental impacts of a product, and its results cannot reflect the effects of changes within the life cycle. To improve the quality of ecodesign, there is a growing need for an approach that can relate changes in design parameters to a product's environmental impacts. A sensitivity analysis approach based on LCA and ecodesign is proposed in this paper. The key environmental performance factors that have a significant influence on the product's environmental impacts can be identified by analyzing the relationship between environmental impacts and design parameters. Users without much environmental knowledge can use this approach to determine which design parameter should be considered first when (re)designing a product. A printed circuit board (PCB) case study is conducted; eight design parameters are chosen to be analyzed by our approach. The results show that the carbon dioxide emissions during PCB manufacture are highly sensitive to the area of the PCB panel.
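A generic one-at-a-time normalized sensitivity, the kind of screening such an approach performs, can be sketched as follows (the impact model and parameter values are hypothetical, not the paper's LCA data):

```python
def relative_sensitivity(model, params, delta=0.01):
    """One-at-a-time normalized sensitivity: the relative change in the
    model output per relative change in each parameter, (dY/Y)/(dp/p).
    `model` maps a parameter dict to a scalar impact score."""
    base = model(params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params, **{name: value * (1.0 + delta)})
        sens[name] = ((model(perturbed) - base) / base) / delta
    return sens

# Hypothetical linear impact model: CO2 impact driven by panel area and mass
impact = lambda p: 2.0 * p["panel_area"] + 0.5 * p["mass"]
sens = relative_sensitivity(impact, {"panel_area": 10.0, "mass": 4.0})
```

The parameter with the largest normalized sensitivity (here `panel_area`, mirroring the paper's PCB-panel-area finding) is the one a designer should consider first.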
Factor Analysis for Finding Invariant Neural Descriptors of Human Emotions
Directory of Open Access Journals (Sweden)
Vitor Pereira
2018-01-01
A major challenge in decoding human emotions from electroencephalogram (EEG) data is finding representations that are invariant to inter- and intrasubject differences. Most previous studies have focused on building an individual discrimination model for every subject (subject-dependent models). Building subject-independent models is a harder problem due to the high data variability between different subjects and between different experiments with the same subject. This paper explores, for the first time, factor analysis as an efficient technique to extract temporal and spatial EEG features suitable for building brain-computer interfaces for decoding human emotions across various subjects. Our findings show that early waves (a temporal window of 200–400 ms after the stimulus onset) carry more information about the valence of the emotion. Also, spatial locations of features with a stronger impact on the emotional valence occur in the parietal and occipital regions of the brain. All discrimination models (NN, SVM, kNN, and RF) demonstrate better discrimination rates for the positive valence. These results match closely the experimental psychology hypothesis that, during early periods after stimulus presentation, the brain response to images with highly positive valence is stronger.
Application of factor analysis to the explosive detection
International Nuclear Information System (INIS)
Park, Yong Joon; Song, Byung Chul; Im, Hee Jung; Kim, Won Ho; Cho, Jung Hwan
2005-01-01
The detection of explosive devices hidden in airline baggage is a significant problem, particularly in view of the development of modern plastic explosives, which can be formed into various innocent-appearing shapes and are sufficiently powerful that small quantities can destroy an aircraft in flight. In addition, a major difficulty arises from the long detection time required by explosive detection systems based on thermal neutron interrogation, which involves exposing baggage to slow neutrons with energies on the order of 0.025 eV. The elemental compositions of explosives can be determined by the Neutron Induced Prompt gamma Spectroscopy (NIPS) system, which has been installed at the Korea Atomic Energy Research Institute as a tool for the detection of explosives in passenger baggage. In this work, factor analysis has been applied to the NIPS system to increase the signal-to-noise ratio of the prompt gamma spectrum for the detection of explosives hidden in passenger baggage, especially for noisy prompt gamma spectra obtained with short measurement times
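The signal-to-noise improvement that factor analysis brings to short-measurement spectra can be illustrated with a generic truncated-SVD reconstruction over replicate spectra (a sketch only: the peak shapes, channel counts and noise level below are synthetic, not NIPS data):

```python
import numpy as np

def factor_denoise(spectra, n_factors):
    """Reconstruct a stack of spectra (rows) from their leading factors:
    centre the data, keep the top singular components, add the mean back.
    Noise outside the retained factor subspace is discarded."""
    mean = spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
    return mean + (U[:, :n_factors] * s[:n_factors]) @ Vt[:n_factors]
```

When the spectra are mixtures of a few underlying components plus channel noise, keeping only as many factors as components removes most of the noise while preserving the peaks.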
Factor Analysis of the Coopersmith Self-Esteem Inventory
Güloğlu, Berna; Aydın, Gül
2001-01-01
This study investigated the factor structure of the Turkish version of the Coopersmith Self-Esteem Inventory. The results showed that the inventory had a highly complex 21-factor structure. However, of the empirically found 21 factors, only 10 seemed theoretically meaningful. The results were discussed in comparison to the findings obtained from studies carried out with the original version of the Coopersmith Self-Esteem Inventory.
Zhou, Yan; Wang, Pei; Wang, Xianlong; Zhu, Ji; Song, Peter X-K
2017-01-01
The multivariate regression model is a useful tool to explore complex associations between two kinds of molecular markers, which enables the understanding of the biological pathways underlying disease etiology. For a set of correlated response variables, accounting for such dependency can increase statistical power. Motivated by integrative genomic data analyses, we propose a new methodology, the sparse multivariate factor analysis regression model (smFARM), in which correlations of response variables are assumed to follow a factor analysis model with latent factors. This proposed method not only allows us to address the challenge that the number of association parameters is larger than the sample size, but also to adjust for unobserved genetic and/or nongenetic factors that potentially conceal the underlying response-predictor associations. The proposed smFARM is implemented by the EM algorithm and the blockwise coordinate descent algorithm. The proposed methodology is evaluated and compared to the existing methods through extensive simulation studies. Our results show that accounting for latent factors through the proposed smFARM can improve sensitivity of signal detection and accuracy of sparse association map estimation. We illustrate smFARM by two integrative genomics analysis examples, a breast cancer dataset and an ovarian cancer dataset, to assess the relationship between DNA copy numbers and gene expression arrays to understand genetic regulatory patterns relevant to the disease. We identify two trans-hub regions: one in cytoband 17q12, whose amplification influences the RNA expression levels of important breast cancer genes, and the other in cytoband 9q21.32-33, which is associated with chemoresistance in ovarian cancer. © 2016 WILEY PERIODICALS, INC.
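smFARM itself is not in standard libraries, but the "shared sparsity across correlated responses" part of the idea can be sketched with scikit-learn's MultiTaskLasso as a hedged stand-in (the latent-factor adjustment of the error structure is not included, and the data below are synthetic):

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(6)
n, p, q = 400, 10, 2          # samples, predictors (e.g. copy numbers), responses

X = rng.normal(size=(n, p))
B = np.zeros((p, q))
B[0] = [1.0, -1.0]            # only predictors 0 and 1 truly matter
B[1] = [0.8, 0.8]
Y = X @ B + rng.normal(0.0, 0.3, (n, q))

# The l2/l1 penalty selects whole predictors across all responses at once,
# exploiting the correlation structure of the multivariate response.
model = MultiTaskLasso(alpha=0.2).fit(X, Y)
support = np.flatnonzero(np.any(model.coef_ != 0.0, axis=0))
```

In the genomics setting, `support` would correspond to the copy-number loci retained in the sparse association map.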
Dynamic Factor Analysis of Nonstationary Multivariate Time Series.
Molenaar, Peter C. M.; And Others
1992-01-01
The dynamic factor model proposed by P. C. Molenaar (1985) is exhibited, and a dynamic nonstationary factor model (DNFM) is constructed with latent factor series that have time-varying mean functions. The use of a DNFM is illustrated using data from a television viewing habits study. (SLD)
Gabel, Charles P.; Cuesta-Vargas, Antonio I.; Barr, Sebastian; Winkeljohn Black, Stephanie; Osborne, Jason W.; Melloh, Markus
2016-01-01
Purpose The neck disability index (NDI) as a 10-item patient reported outcome (PRO) measure is the most commonly used whiplash associated disorders (WAD) assessment tool. However, statistical rigor and factor structure are not definitive. To date, confirmatory factor analysis (CFA) has not examined whether the factor structure generalizes across different groups (e.g., WAD versus non-WAD). This study aimed to determine the psychometric properties of the NDI in these population groups.
Directory of Open Access Journals (Sweden)
Marimuthu SELVAKUMAR
2015-11-01
This paper attempts to study the relationship between financial consumer protection and customer satisfaction using factor analysis and discriminant analysis. The main objectives of the study are to analyze financial consumer protection in commercial banks, to examine the customer satisfaction of commercial banks, and to identify the factors of financial consumer protection that lead to customer satisfaction. Much research has been carried out on financial consumer protection and financial literacy, but identifying the factors that drive financial consumer protection, and the relationship between financial consumer protection and customer satisfaction, is very important, particularly for banks seeking to improve their quality and increase customer satisfaction. This study is therefore carried out with the aim of identifying the factors of financial consumer protection and their influence on customer satisfaction. The study is both descriptive and analytical in nature and covers both primary and secondary data. The primary data were collected from customers of commercial banks using a pre-tested interview schedule, and the secondary data were collected from standard books, journals, magazines, websites and so on.
Risk analysis-based food safety policy: scientific factors versus socio-cultural factors
Rosa, P.; van Knapen, F.; Brom, F.W.A.
2008-01-01
The purpose of this article is to illustrate the importance of socio-cultural factors in risk management and the need to incorporate these factors in a standard, internationally recognized (WTO) framework. This was achieved by analysing the relevance of these factors in three cases.
Network based transcription factor analysis of regenerating axolotl limbs
Directory of Open Access Journals (Sweden)
Cameron Jo Ann
2011-03-01
Full Text Available Abstract Background Studies on amphibian limb regeneration began in the early 1700s, but we still do not completely understand the cellular and molecular events of this unique process. Understanding a complex biological process such as limb regeneration requires more than knowledge of the individual genes or proteins involved. Here we followed a systems biology approach in an effort to construct the networks and pathways of protein interactions involved in formation of the accumulation blastema in regenerating axolotl limbs. Results We used the human orthologs of proteins previously identified by our research team as bait to identify the transcription factor (TF) pathways and networks that regulate blastema formation in amputated axolotl limbs. The five most connected factors, c-Myc, SP1, HNF4A, ESR1 and p53, regulate ~50% of the proteins in our data. Among these, c-Myc and SP1 regulate 36.2% of the proteins. c-Myc was the most highly connected TF (71 targets). Network analysis showed that TGF-β1 and fibronectin (FN) lead to the activation of these TFs. We found that other TFs known to be involved in epigenetic reprogramming, such as Klf4, Oct4, and Lin28, are also connected to c-Myc and SP1. Conclusions Our study provides a systems biology approach to how different molecular entities interconnect with each other during the formation of an accumulation blastema in regenerating axolotl limbs. This approach provides an in silico methodology to identify proteins that are not detected by experimental methods such as proteomics but are potentially important to blastema formation. We found that the TFs, c-Myc and SP1 and their target genes could potentially play a central role in limb regeneration. Systems biology has the potential to map out numerous other pathways that are crucial to blastema formation in regeneration-competent limbs, to compare these to the pathways that characterize regeneration-deficient limbs and finally, to identify stem
An algorithm for mass matrix calculation of internally constrained molecular geometries.
Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz
2008-01-28
Dynamic models for molecular systems require the determination of corresponding mass matrix. For constrained geometries, these computations are often not trivial but need special considerations. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solution of the different possible cases depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The preexponential factor for this reaction is computed based on the harmonic model.
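The abstract above formulates the constrained mass matrix as an optimization problem; as a generic sketch only (not the authors' algorithm), one common way to obtain an effective mass matrix for internally constrained coordinates is to project the unconstrained mass matrix onto the null space of the constraint Jacobian. The function and toy system below are invented for illustration:

```python
import numpy as np

def constrained_mass_matrix(M, J, tol=1e-10):
    """Project a full mass matrix M onto the null space of the constraint
    Jacobian J (one row per linearized internal constraint), giving an
    effective mass matrix in the unconstrained directions."""
    # Orthonormal null-space basis Z of J from the SVD: each column of Z
    # satisfies J @ z = 0.
    _, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > tol))
    Z = Vt[rank:].T
    return Z.T @ M @ Z

# Toy example: two unit masses on a line constrained to move together
# (constraint x1 - x2 = const, Jacobian [1, -1]).
M = np.eye(2)
J = np.array([[1.0, -1.0]])
M_eff = constrained_mass_matrix(M, J)   # one remaining degree of freedom
```

The rank test on the singular values handles the different possible cases of constraint-matrix rank that the paper distinguishes analytically.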
An algorithm for mass matrix calculation of internally constrained molecular geometries
International Nuclear Information System (INIS)
Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz
2008-01-01
Dynamic models for molecular systems require the determination of corresponding mass matrix. For constrained geometries, these computations are often not trivial but need special considerations. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solution of the different possible cases depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The preexponential factor for this reaction is computed based on the harmonic model
Constraining nuclear PDFs with CMS
Chapon, Emilien
2017-01-01
Nuclear parton distribution functions (nPDFs) are essential to the understanding of proton-lead collisions. We will review several measurements from CMS that are particularly sensitive to nPDFs. W and Z bosons are medium-blind probes of the initial state of the collisions, and we will present measurements of their production cross sections in pPb collisions at 5.02 TeV, as well as asymmetries with an increased sensitivity to nPDFs. We will also report measurements of charmonium production, including the nuclear modification factor of J/$\psi$ and $\psi$(2S) in pPb collisions at 5.02 TeV, though other cold nuclear matter effects may also be at play in those processes. Finally, we will present measurements of the pseudorapidity distribution of dijets in pPb collisions at 5.02 TeV.
A hierarchical factor analysis of a safety culture survey.
Frazier, Christopher B; Ludwig, Timothy D; Whitaker, Brian; Roberts, D Steve
2013-06-01
Recent reviews of safety culture measures have revealed a host of potential factors that could make up a safety culture (Flin, Mearns, O'Connor, & Bryden, 2000; Guldenmund, 2000). However, there is still little consensus regarding what the core factors of safety culture are. The purpose of the current research was to determine the core factors, as well as the structure of those factors that make up a safety culture, and establish which factors add meaningful value by factor analyzing a widely used safety culture survey. A 92-item survey was constructed by subject matter experts and was administered to 25,574 workers across five multi-national organizations in five different industries. Exploratory and hierarchical confirmatory factor analyses were conducted revealing four second-order factors of a Safety Culture consisting of Management Concern, Personal Responsibility for Safety, Peer Support for Safety, and Safety Management Systems. Additionally, a total of 12 first-order factors were found: three on Management Concern, three on Personal Responsibility, two on Peer Support, and four on Safety Management Systems. The resulting safety culture model addresses gaps in the literature by identifying the core constructs which make up a safety culture. This clarification of the major factors emerging in the measurement of safety cultures should impact the industry through a more accurate description, measurement, and tracking of safety cultures to reduce loss due to injury. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.
Epigenetic clock analysis of diet, exercise, education, and lifestyle factors.
Quach, Austin; Levine, Morgan E; Tanaka, Toshiko; Lu, Ake T; Chen, Brian H; Ferrucci, Luigi; Ritz, Beate; Bandinelli, Stefania; Neuhouser, Marian L; Beasley, Jeannette M; Snetselaar, Linda; Wallace, Robert B; Tsao, Philip S; Absher, Devin; Assimes, Themistocles L; Stewart, James D; Li, Yun; Hou, Lifang; Baccarelli, Andrea A; Whitsel, Eric A; Horvath, Steve
2017-02-14
Behavioral and lifestyle factors have been shown to relate to a number of health-related outcomes, yet there is a need for studies that examine their relationship to molecular aging rates. Toward this end, we use recent epigenetic biomarkers of age that have previously been shown to predict all-cause mortality, chronic conditions, and age-related functional decline. We analyze cross-sectional data from 4,173 postmenopausal female participants from the Women's Health Initiative, as well as 402 male and female participants from the Italian cohort study, Invecchiare nel Chianti. Extrinsic epigenetic age acceleration (EEAA) exhibits significant associations with fish intake (p=0.02), moderate alcohol consumption (p=0.01), education (p=3x10^-5), BMI (p=0.01), and blood carotenoid levels (p=1x10^-5), an indicator of fruit and vegetable consumption, whereas intrinsic epigenetic age acceleration (IEAA) is associated with poultry intake (p=0.03) and BMI (p=0.05). Both EEAA and IEAA were also found to relate to indicators of metabolic syndrome, which appear to mediate their associations with BMI. Metformin, the first-line medication for the treatment of type 2 diabetes, does not delay epigenetic aging in this observational study. Finally, longitudinal data suggest that an increase in BMI is associated with an increase in both EEAA and IEAA. Overall, the epigenetic age analysis of blood confirms the conventional wisdom regarding the benefits of eating a high plant diet with lean meats, moderate alcohol consumption, physical activity, and education, as well as the health risks of obesity and metabolic syndrome.
Analysis Of Critical Factors Of Microfinance Institutions Of Pakistan
Directory of Open Access Journals (Sweden)
Ather Azim Khan
2010-12-01
Full Text Available This article is about the performance of microfinance institutions (MFIs) of Pakistan. The types of MFIs operating in Pakistan are discussed in detail, i.e. microfinance banks, Rural Support Programs and NGOs. Some other organizations are also involved in micro financing, but their share is very low. It is found that the Rural Support Programs (RSPs) are not exclusively involved in microfinance but hold a large share of microfinance funds. Micro loans are given for various purposes, including starting a new business. The real theme of microcredit is to give money to a poor person to start a small or micro business and increase the family income, but micro loans are often used for many other purposes, such as paying off another expensive loan, paying the medical expenses of the family breadwinner, marriages, construction, etc. In this research work the researcher has tried to analyze the performance of MFIs of Pakistan and to find out the factors which contribute to their effectiveness. Two approaches to microfinance, the institutionist approach and the welfarist approach, are discussed, and both are considered in analyzing the performance of the MFIs. Seventeen parameters are selected, many of them financial ratios, divided into four groups: sustainability, transparency, outreach and efficiency. Ratios and figures for each of these areas of the MFIs are taken as data, and analysis is performed to find out which ones contribute more or less. This research can be helpful for MFIs that want to improve their performance and check their areas of significance for further improvement and development, considering their approach to alleviating poverty in society.
Human Factors Considerations in New Nuclear Power Plants: Detailed Analysis.
Energy Technology Data Exchange (ETDEWEB)
O'Hara, J.; Higgins, J.; Brown, W.; Fink, R.
2008-02-14
This Nuclear Regulatory Commission (NRC) sponsored study has identified human-performance issues in new and advanced nuclear power plants. To identify the issues, current industry developments and trends were evaluated in the areas of reactor technology, instrumentation and control technology, human-system integration technology, and human factors engineering (HFE) methods and tools. The issues were organized into seven high-level HFE topic areas: Role of Personnel and Automation, Staffing and Training, Normal Operations Management, Disturbance and Emergency Management, Maintenance and Change Management, Plant Design and Construction, and HFE Methods and Tools. The issues were then prioritized into four categories using a 'Phenomena Identification and Ranking Table' methodology based on evaluations provided by 14 independent subject matter experts. The subject matter experts were knowledgeable in a variety of disciplines. Vendors, utilities, research organizations and regulators all participated. Twenty issues were categorized into the top priority category. This Brookhaven National Laboratory (BNL) technical report provides the detailed methodology, issue analysis, and results. A summary of the results of this study can be found in NUREG/CR-6947. The research performed for this project has identified a large number of human-performance issues for new control stations and new nuclear power plant designs. The information gathered in this project can serve as input to the development of a long-term strategy and plan for addressing human performance in these areas through regulatory research. Addressing human-performance issues will provide the technical basis from which regulatory review guidance can be developed to meet these challenges. The availability of this review guidance will help set clear expectations for how the NRC staff will evaluate new designs, reduce regulatory uncertainty, and provide a well-defined path to new nuclear power plant
Constraining supersymmetry with precision data
International Nuclear Information System (INIS)
Pierce, D.M.; Erler, J.
1997-01-01
We discuss the results of a global fit to precision data in supersymmetric models. We consider both gravity- and gauge-mediated models. As the superpartner spectrum becomes light, the global fit to the data typically results in larger values of χ². We indicate the regions of parameter space which are excluded by the data. We discuss the additional effect of the B(B→X_sγ) measurement. Our analysis excludes chargino masses below M_Z in the simplest gauge-mediated model with μ>0, with stronger constraints for larger values of tanβ. Copyright 1997 American Institute of Physics
Enablers and constrainers to participation
DEFF Research Database (Denmark)
Desjardins, Richard; Milana, Marcella
2007-01-01
This paper briefly reviews some of the evidence on participation patterns in Nordic countries and some of the defining parameters that may explain the observations. This is done in a comparative perspective by contrasting results from the 2003 Eurobarometer data between Nordic countries and a handful...... as to construct a tool for analyzing the targeting of adult learning policy, with regard to both its coverage and expected consequences. Our aim is to develop a means for a more in-depth analysis of the match-mismatch of public policy and persisting constraints to participation....
Students' motivation to study dentistry in Malaysia: an analysis using confirmatory factor analysis.
Musa, Muhd Firdaus Che; Bernabé, Eduardo; Gallagher, Jennifer E
2015-06-12
Malaysia has experienced a significant expansion of dental schools over the past decade. Research into students' motivation may inform recruitment and retention of the future dental workforce. The objectives of this study were to explore students' motivation to study dentistry and whether that motivation varied by students' and school characteristics. All 530 final-year students in 11 dental schools (6 public and 5 private) in Malaysia were invited to participate at the end of 2013. The self-administered questionnaire, developed at King's College London, collected information on students' motivation to study dentistry and demographic background. Responses on students' motivation were collected using five-point ordinal scales. Confirmatory factor analysis (CFA) was used to evaluate the underlying structure of students' motivation to study dentistry. Multivariate analysis of variance (MANOVA) was used to compare factor scores for overall motivation and sub-domains by students' and school characteristics. Three hundred and fifty-six final-year students in eight schools (all public and two private) participated in the survey, representing an 83% response rate for these schools and 67% of all final-year students nationally. The majority of participants were 24 years old (47%), female (70%), Malay (56%) and from middle-income families (41%) and public schools (78%). CFA supported a model with five first-order factors (professional job, healthcare and people, academic, careers advising and family and friends) which were linked to a single second-order factor representing overall students' motivation. Academic factors and healthcare and people had the highest standardized factor loadings (0.90 and 0.71, respectively), suggesting they were the main motivation to study dentistry. MANOVA showed that students from private schools had higher scores for healthcare and people than those in public schools whereas Malay students had lower scores for family and friends than those
[Analysis of risk factors associated with professional drivers’ work].
Czerwińska, Maja; Hołowko, Joanna; Stachowska, Ewa
Professional driving is an occupation associated with high health risks. The factors which increase the risk of developing lifestyle diseases are closely related to working conditions. The aim of this study was to analyse the risk factors which are associated with professional drivers' lifestyle. The material consisted of 23 articles from PubMed.gov. Risk factors related to drivers' work have a significant impact on their health.
Factors influencing societal response of nanotechnology : an expert stakeholder analysis
Gupta, N.; Fischer, A.R.H.; Lans, van der, I.A.; Frewer, L.J.
2012-01-01
Nanotechnology can be described as an emerging technology and, as has been the case with other emerging technologies such as genetic modification, different socio-psychological factors will potentially influence societal responses to its development and application. These factors will play an important role in how nanotechnology is developed and commercialised. This article aims to identify expert opinion on factors influencing societal response to applications of nanotechnology. Structured i...
Factors influencing societal response of nanotechnology: an expert stakeholder analysis
Gupta, Nidhi; Fischer, Arnout R. H.; van der Lans, Ivo A.; Frewer, Lynn J.
2012-01-01
Nanotechnology can be described as an emerging technology and, as has been the case with other emerging technologies such as genetic modification, different socio-psychological factors will potentially influence societal responses to its development and application. These factors will play an important role in how nanotechnology is developed and commercialised. This article aims to identify expert opinion on factors influencing societal response to applications of nanotechnology. Structured i...
Mature Basin Development Portfolio Management in a Resource Constrained Environment
International Nuclear Information System (INIS)
Mandhane, J. M.; Udo, S. D.
2002-01-01
The Nigerian petroleum industry is constantly faced with managing resource constraints stemming from capital and operating budgets, the availability of skilled manpower, the capacity of existing surface facilities, the size of well assets, the amount of soft and hard information, etcetera. Constrained capital forces the industry to rank subsurface resources and potential before preparing development scenarios. The availability of skilled manpower limits the scope of integrated reservoir studies. The level of information forces technical staff and management to find a low-risk development alternative in a limited time. The volume of oil, natural gas, water, or a combination of them may be constrained by the design limits of the existing facility or by an external OPEC quota, which requires high portfolio management skills. The first part of the paper statistically analyses the development portfolio of a mature basin for (a) subsurface resource volumes, (b) developed and undeveloped volumes, (c) sweating of wells, and (d) facility assets. The analysis presented conclusively demonstrates that the 80/20 rule is active in the statistical sample; 80/20 refers to 80% of the effect coming from 20% of the cause. The second part of the paper deals with how the 80/20 rule could be applied to manage a portfolio for a given set of constraints. Three application examples are discussed, and feedback on their implementation, resulting in focused resource management with handsome rewards, is documented. The statistical analysis and application examples from a mature basin form a way forward for development portfolio management in a resource-constrained environment
Dark matter scenarios in a constrained model with Dirac gauginos
Goodsell, Mark D.; Müller, Tobias; Porod, Werner; Staub, Florian
2015-01-01
We perform the first analysis of Dark Matter scenarios in a constrained model with Dirac gauginos. The model under investigation is the Constrained Minimal Dirac Gaugino Supersymmetric Standard Model (CMDGSSM), where the Majorana mass terms of gauginos vanish. However, $R$-symmetry is broken in the Higgs sector by an explicit and/or effective $B_\mu$-term. This causes a mass splitting between Dirac states in the fermion sector, and the neutralinos, which provide the dark matter candidate, become pseudo-Dirac states. We discuss two scenarios: the universal case with all scalar masses unified at the GUT scale, and the case with non-universal Higgs soft-terms. We identify different regions in the parameter space which fulfil all constraints from the dark matter abundance, the limits from SUSY and direct dark matter searches, and the Higgs mass. Most of these points can be tested with the next generation of direct dark matter detection experiments.
A configurational analysis of success factors in crowdfunding video campaigns
DEFF Research Database (Denmark)
Lomberg, Carina; Li-Ying, Jason; Alkærsig, Lars
Recent discussions of success factors in crowdfunding campaigns highlight a plenitude of diverse factors that stem from different, partly contradictory theories. We focus on campaign videos and assume there is more than one way of creating a successful crowdfunding video. We generate data of 1000 randomly...
The Self-Report Family Inventory: An Exploratory Factor Analysis
Goodrich, Kristopher M.; Selig, James P.; Trahan, Don P., Jr.
2012-01-01
Researchers explored the factor structure of the Self-Report Family Inventory with a sample of heterosexual parents who have a son or daughter who self-identifies as lesbian, gay, or bisexual. Results suggest that a two-factor solution is appropriate. Research and clinical implications are offered. (Contains 1 figure and 2 tables.)
Dimensions of assertiveness: factor analysis of five assertion inventories.
Henderson, M; Furnham, A
1983-09-01
Five self-report assertiveness inventories were factor analyzed. In each case two major factors emerged, accounting for approximately one-quarter to one-third of the variance. The findings emphasize the multidimensional nature of current measures of assertiveness and suggest the construction of a more systematic and psychometrically evaluated scale that would yield subscale scores assessing the separate dimensions of assertiveness.
Factors influencing societal response of nanotechnology : an expert stakeholder analysis
Gupta, N.; Fischer, A.R.H.; Lans, van der I.A.; Frewer, L.J.
2012-01-01
Nanotechnology can be described as an emerging technology and, as has been the case with other emerging technologies such as genetic modification, different socio-psychological factors will potentially influence societal responses to its development and application. These factors will play an
Towards weakly constrained double field theory
Directory of Open Access Journals (Sweden)
Kanghoon Lee
2016-08-01
Full Text Available We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using the strong constraint in double field theory. We show that the X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using the inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using the strong constraint. We then discuss the relation of our result to closed string field theory. Our construction suggests that there exists an effective field theory description for the massless sector of closed string field theory on a torus in an associative truncation.
Continuation of Sets of Constrained Orbit Segments
DEFF Research Database (Denmark)
Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki
Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory...... that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained...... orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem....
Groundwater availability as constrained by hydrogeology and environmental flows.
Watson, Katelyn A; Mayer, Alex S; Reeves, Howard W
2014-01-01
Groundwater pumping from aquifers in hydraulic connection with nearby streams has the potential to cause adverse impacts by decreasing flows to levels below those necessary to maintain aquatic ecosystems. The recent passage of the Great Lakes-St. Lawrence River Basin Water Resources Compact has brought attention to this issue in the Great Lakes region. In particular, the legislation requires the Great Lakes states to enact measures for limiting water withdrawals that can cause adverse ecosystem impacts. This study explores how both hydrogeologic and environmental flow limitations may constrain groundwater availability in the Great Lakes Basin. A methodology for calculating maximum allowable pumping rates is presented. Groundwater availability across the basin may be constrained by a combination of hydrogeologic yield and environmental flow limitations varying over both local and regional scales. The results are sensitive to factors such as pumping time, regional and local hydrogeology, streambed conductance, and streamflow depletion limits. Understanding how these restrictions constrain groundwater usage and which hydrogeologic characteristics and spatial variables have the most influence on potential streamflow depletions has important water resources policy and management implications. © 2013, National Ground Water Association.
International Nuclear Information System (INIS)
Nigran, K.S.; Barber, D.C.
1985-01-01
A method is proposed for automatic analysis of dynamic radionuclide studies using the mathematical technique of principal-components factor analysis. This method is considered as a possible alternative to the conventional manual regions-of-interest method widely used. The method emphasises the importance of introducing a priori information into the analysis about the physiology of at least one of the functional structures in a study. Information is added by using suitable mathematical models to describe the underlying physiological processes. A single physiological factor is extracted representing the particular dynamic structure of interest. Two spaces, 'study space S' and 'theory space T', are defined in forming the concept of an intersection of spaces. A one-dimensional intersection space is computed. An example from a dynamic 99mTc-DTPA kidney study is used to demonstrate the principle inherent in the method proposed. The method requires no correction for blood background activity, which is necessary when processing by the manual method. Careful isolation of the kidney by means of a region of interest is not required. The method is therefore less prone to operator influence and can be automated. (author)
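The core idea of principal-components factor extraction from a dynamic study can be illustrated on synthetic data. The sketch below (an assumed NumPy illustration, not the authors' intersection-of-spaces method) extracts a single dominant temporal factor from simulated time-activity curves via the SVD; the kinetic curve and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dynamic study: 50 time frames x 200 pixels. Each pixel's
# time-activity curve is a scaled copy of one underlying kinetic curve
# plus noise; a rank-1 model, the simplest case for factor extraction.
t = np.linspace(0.0, 1.0, 50)
true_curve = t * np.exp(-3.0 * t)              # uptake-then-washout shape
loadings = rng.uniform(0.5, 2.0, size=200)     # per-pixel scaling
data = np.outer(true_curve, loadings) + 0.01 * rng.normal(size=(50, 200))

# Principal-components extraction: the first left singular vector of the
# data matrix is the dominant temporal factor.
U, S, Vt = np.linalg.svd(data, full_matrices=False)
factor = U[:, 0]
if factor @ true_curve < 0:                    # resolve the arbitrary SVD sign
    factor = -factor

corr = np.corrcoef(factor, true_curve)[0, 1]   # agreement with the true curve
```

In the paper's setting, the a priori physiological model would further constrain which direction in this factor space is accepted as the physiological factor.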
Confirmatory Factor Analysis of the WISC-III with Child Psychiatric Inpatients.
Tupa, David J.; Wright, Margaret O'Dougherty; Fristad, Mary A.
1997-01-01
Factor models of the Wechsler Intelligence Scale for Children-Third Edition (WISC-III) for one, two, three, and four factors were tested using confirmatory factor analysis with a sample of 177 child psychiatric inpatients. The four-factor model proposed in the WISC-III manual provided the best fit to the data. (SLD)
Smoking among American adolescents: a risk and protective factor analysis.
Scal, Peter; Ireland, Marjorie; Borowsky, Iris Wagman
2003-04-01
Cigarette smoking remains a substantial threat to the current and future health of America's youth. The purpose of this study was to identify the risk and protective factors for cigarette smoking among US adolescents. Data from the National Longitudinal Study of Adolescent Health were used, comparing the responses of all non-smokers at Time 1 for their ability to predict the likelihood of smoking at Time 2, one year later. Data were stratified into four gender by grade group cohorts. Cross-cutting risk factors for smoking among all four cohorts were: using alcohol, marijuana, and other illicit drugs; violence involvement; having had sex; having friends who smoke; and learning problems. Having a higher grade point average and family connectedness were protective across all cohorts. Other gender and grade group specific risk and protective factors were identified. The estimated probability of initiating smoking decreased by 19.2% to 54.1% in both high- and low-risk situations as the number of protective factors present increased. Of the factors that predict or protect against smoking, some are influential across all gender and grade group cohorts studied, while others are specific to gender and developmental stage. Prevention efforts that target both the reduction of risk factors and the enhancement of protective factors at the individual, family, peer group and community levels are likely to reduce the likelihood of smoking initiation.
Evaluation of Parallel Analysis Methods for Determining the Number of Factors
Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.
2010-01-01
Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…
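As a hedged illustration of the PA-PCA variant compared in this study, Horn's parallel analysis retains components whose observed eigenvalues exceed those obtained from random data of the same shape. The function name, toy data, and mean-eigenvalue retention rule below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def parallel_analysis_pca(data, n_sim=200, seed=0):
    """Horn's parallel analysis (PA-PCA): retain components whose observed
    correlation-matrix eigenvalues exceed the mean eigenvalues obtained
    from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sims = np.empty((n_sim, p))
    for i in range(n_sim):
        rand = rng.normal(size=(n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False)))[::-1]
    # Mean-eigenvalue rule; the study's 95th-percentile criterion would use
    # np.percentile(sims, 95, axis=0) instead.
    ref = sims.mean(axis=0)
    return int(np.sum(obs > ref))

# Toy data: six observed variables driven by two underlying factors.
rng = np.random.default_rng(1)
f = rng.normal(size=(500, 2))
data = np.column_stack([f[:, 0]] * 3 + [f[:, 1]] * 3)
data = data + 0.5 * rng.normal(size=(500, 6))
n_factors = parallel_analysis_pca(data)
```

The PA-PAF variant differs only in the matrix whose eigenvalues are compared (a reduced correlation matrix with communality estimates on the diagonal).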
Prognostic factors in canine appendicular osteosarcoma - a meta-analysis.
Boerman, Ilse; Selvarajah, Gayathri T; Nielen, Mirjam; Kirpensteijn, Jolle
2012-05-15
Appendicular osteosarcoma is the most common malignant primary canine bone tumor. When treated by amputation or tumor removal alone, median survival times (MST) do not exceed 5 months, with the majority of dogs suffering from metastatic disease. This period can be extended with adequate local intervention and adjuvant chemotherapy, which has become common practice. Several prognostic factors have been reported in many different studies, e.g. age, breed, weight, sex, neuter status, location of tumor, serum alkaline phosphatase (SALP), bone alkaline phosphatase (BALP), infection, percentage of bone length affected, histological grade or histological subtype of tumor. Most of these factors are, however, only reported as confounding factors in larger studies. Insight into truly significant prognostic factors at the time of diagnosis may contribute to tailoring adjuvant therapy for individual dogs suffering from osteosarcoma. The objective of this study was to systematically review the prognostic factors that are described for canine appendicular osteosarcoma and validate their scientific importance. A literature review was performed on selected studies and eligible data were extracted. Meta-analyses were done for two of the three selected possible prognostic factors (SALP and location), looking at both survival time (ST) and disease-free interval (DFI). The third factor (age) was studied in a qualitative manner. Both an elevated SALP level and the (proximal) humerus as location of the primary tumor are significant negative prognostic factors for both ST and DFI in dogs with appendicular osteosarcoma. Increasing age was associated with shorter ST and DFI; however, this was not statistically significant because information on this factor was available in only a limited number of papers. Elevated SALP and proximal humeral location are significant negative prognosticators for canine osteosarcoma.
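Meta-analytic pooling of survival prognosticators such as SALP is typically done by inverse-variance weighting of log hazard ratios. The sketch below is a textbook fixed-effect pooling with invented study numbers, not data from this paper:

```python
import math

def pool_fixed_effect(hrs, ses):
    """Fixed-effect (inverse-variance) pooling of hazard ratios.
    hrs: per-study hazard ratios; ses: standard errors of log(HR)."""
    logs = [math.log(hr) for hr in hrs]
    weights = [1.0 / se ** 2 for se in ses]           # precision weights
    pooled_log = sum(w * x for w, x in zip(weights, logs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (math.exp(pooled_log - 1.96 * pooled_se),    # 95% confidence interval
          math.exp(pooled_log + 1.96 * pooled_se))
    return math.exp(pooled_log), ci

# Hypothetical per-study hazard ratios for elevated SALP (illustrative
# numbers only).
hr, ci = pool_fixed_effect([1.8, 2.1, 1.5], [0.25, 0.30, 0.40])
```

A pooled HR above 1 with a confidence interval excluding 1 would mark the factor as a significant negative prognosticator; random-effects pooling adds a between-study variance term to the weights.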
Analysis of The Factors Influencing the Private Cost of Teacher ...
African Journals Online (AJOL)
programme of study, level of study, place of students' residence and ownership status ... factor in the process of economic growth and development of nations. ... project/assignment, teaching practice, study tour/excursion, textbook, stationery, ...
Risk Factor Analysis for Oral Precancer among Slum Dwellers in ...
African Journals Online (AJOL)
risk factors for oral precancer, i.e., smoking/smokeless tobacco, chewing ... procedure was performed on a group of 10 subjects, who were ... clinical description of observed oral mucosal lesions was made ... use and effects of cessation.
Analysis of Factors Affecting Decisions to Participate and Levels of ...
African Journals Online (AJOL)
... among Heads of Households in Minituber Yam Marketing in Abia State, Nigeria. ... in negative effects of socio economic factors on market participation as well as ... These results called for public policy for increased gender access to good ...
Analysis of corrosive environmental factors of seabed sediment
Indian Academy of Sciences (India)
Unknown
Seabed sediment; corrosion; environmental factors. 1. Introduction. The corrosion ... plays an important role in the corrosion behaviour of steel in sediment. Figure 2b shows the change in oxidation-reduction potential, Eh, with distance from ...
Low-energy analysis of the nucleon electromagnetic form factors
International Nuclear Information System (INIS)
Kubis, Bastian; Meissner, Ulf-G.
2001-01-01
We analyze the electromagnetic form factors of the nucleon to fourth order in relativistic baryon chiral perturbation theory. We employ the recently proposed infrared regularization scheme and show that the convergence of the chiral expansion is improved as compared to the heavy-fermion approach. We also discuss the inclusion of vector mesons and obtain an accurate description of all four nucleon form factors for momentum transfer squared up to Q^2 ≅ 0.4 GeV^2.
A comparative analysis of foreign direct investment factors
Miškinis, Algirdas; Juozėnaitė, Ilma
2015-01-01
The paper identifies factors affecting the foreign direct investment (FDI) inflow. It analyzes the determinants of FDI in recent empirical evidence as well as determines differences among FDI factors in Greece, Ireland, and the Netherlands. The determinants being examined are the gross domestic product (GDP) per capita, exchange rate, unit labor costs, trade openness as well as inflation. The analyzed period is 1974–2012. Data were collected from the World Bank and the Organization for Econom...
An Analysis of the Factors Impacting Employee's Specific Investment
Institute of Scientific and Technical Information of China (English)
WU Ai-hua; GE Wen-lei
2008-01-01
The amount of specific investment from employees is limited, and the reasons for this under-investment by employees are analyzed in this paper. Based on the relationship between specific investment and employee demission, an empirical study was conducted focusing on the factors influencing employee turnover and specific investment. A theoretical model of the factors influencing employees' specific investment is given.
Quantitative risk analysis offshore-Human and organizational factors
International Nuclear Information System (INIS)
Espen Skogdalen, Jon; Vinnem, Jan Erik
2011-01-01
Quantitative Risk Analyses (QRAs) are one of the main tools for risk management within the Norwegian and UK oil and gas industry. The QRAs have been widely criticized for the limitations of the QRA models and for not including human and organizational factors (HOF factors). Norwegian and UK offshore legislation and guidelines require that HOF factors be included in the QRAs. A study of 15 QRAs shows that the factors are included to some extent, with large differences between the QRAs. The QRAs are categorized into four levels according to the findings. Level 1 QRAs do not describe or comment on the HOF factors at all. Relevant research projects have been conducted to fulfill the requirements of Level 3 analyses. At this level, there is a systematic collection of data related to HOF. The methods are systematic and documented, and the QRAs are adjusted. None of the QRAs fulfill the Level 4 requirements. Level 4 QRAs include the model and describe the HOF factors as well as explain how the results should be followed up in the overall risk management. Safety audits by regulatory authorities are probably necessary to point out the direction for QRA and speed up the development.
BENLIGIRAY, Serdar; ONAY, Ahmet
2017-01-01
The objective of this study is to explore business courses performance factors with a focus on accounting and finance. Course score interrelations are assumed to represent interpretable constructs of these factors. Factor analysis is proposed to identify the constructs that explain the correlations. Factor analysis results identify three sub-groups of business core courses. The first group is labeled as management-oriented courses. Accounting, finance and economics courses are separated in tw...
A factor analysis to find critical success factors in retail brand
Naser Azad; Seyed Foad Zarifi; Somayeh Hozouri
2013-01-01
The present exploratory study aims to find critical components of retail brand among some retail stores. The study seeks to build a brand name at the retail level and looks to find important factors affecting it. Customer behavior is largely influenced when the first retail customer experience is formed. These factors have direct impacts on customer experience and satisfaction in the retail industry. The proposed study performs an empirical investigation on two well-known retail stores located in cit...
Factoring local sequence composition in motif significance analysis.
Ng, Patrick; Keich, Uri
2008-01-01
We recently introduced a biologically realistic and reliable significance analysis of the output of a popular class of motif finders. In this paper we further improve our significance analysis by incorporating local base composition information. Relying on realistic biological data simulation, as well as on FDR analysis applied to real data, we show that our method is significantly better than the increasingly popular practice of using the normal approximation to estimate the significance of a finder's output. Finally we turn to leveraging our reliable significance analysis to improve the actual motif finding task. Specifically, endowing a variant of the Gibbs Sampler with our improved significance analysis we demonstrate that de novo finders can perform better than has been perceived. Significantly, our new variant outperforms all the finders reviewed in a recently published comprehensive analysis of the Harbison genome-wide binding location data. Interestingly, many of these finders incorporate additional information such as nucleosome positioning and the significance of binding data.
Analysis of the effect of meteorological factors on dewfall
International Nuclear Information System (INIS)
Xiao, Huijie; Meissner, Ralph; Seeger, Juliane; Rupp, Holger; Borg, Heinz; Zhang, Yuqing
2013-01-01
To get an insight into when dewfall will occur and how much to expect, we carried out extensive calculations with the energy balance equation for a crop surface to 1) identify the meteorological factors which determine dewfall, 2) establish the relationship between dewfall and each of them, and 3) analyse how these relationships are influenced by changes in these factors. The meteorological factors which determine dewfall were found to be air temperature (T_a), cloud cover (N), wind speed (u), soil heat flux (G), and relative humidity (h_r). Net radiation is also a relevant factor. We did not consider it explicitly, but indirectly through the effect of temperature on the night-time radiation balance. The temperature of the surface (T_s) where dew forms is also important. However, it is not a meteorological factor, but determined by the aforementioned parameters. All other conditions being equal, our study revealed that dewfall increases linearly with decreasing N or G, and with increasing h_r. The effect of T_a and u on dewfall is non-linear: dewfall initially increases with increasing T_a or u, and then decreases. All five meteorological factors can lead to variations in dewfall between 0 and 25 W m^-2 over the range of their values we studied. The magnitude of the variation due to one factor depends on the value of the others. Dewfall is highest at N = 0, G = 0, and h_r = 1. The T_a at which dewfall is highest depends on u and vice versa. The change in dewfall for a unit change in N, G or h_r is not affected by the value of N, G or h_r, but increases as T_a or u increase. The change in dewfall for a unit change in T_a or u depends on the value of the other four meteorological factors. - Highlights: • Process of dewfall is examined for a wide range of meteorological conditions. • Effect of meteorological factors on dewfall is individually elucidated. • Interaction between factors and their combined effect on dewfall is assessed. • Extensive
A Comparative Analysis of Ability of Mimicking Portfolios in Representing the Background Factors
Asgharian, Hossein
2004-01-01
Our aim is to give a comparative analysis of ability of different factor mimicking portfolios in representing the background factors. Our analysis contains a cross-sectional regression approach, a time-series regression approach and a portfolio approach for constructing factor mimicking portfolios. The focus of the analysis is the power of mimicking portfolios in the asset pricing models. We conclude that the time series regression approach, with the book-to-market sorted portfolios as the ba...
Cuesta-Vargas, Antonio; Solera Martinez, M; Rodriguez Moya, Alejandro; Perez, Y; Martinez Vizcaino, V
2011-01-01
Purpose: To use confirmatory factor analysis to test whether a four-factor model might explain the clustering of the components of physical fitness in adults with intellectual disabilities (FID). Relevance: Individuals with intellectual disabilities (ID) are significantly weaker than individuals without ID at all stages of life. These subjects might be particularly susceptible to loss of basic function because of poor physical fitness. Participants: We studied 267 adults with intellectual...
Directory of Open Access Journals (Sweden)
Zuzana Toufarová
2007-01-01
The paper analyses the buying behaviour of Czech households on the market for footwear and clothes. It focuses on factors influencing this behaviour, e.g. price, brand, quality, product attributes, habits, price reductions, advertisement, innovation and word-of-mouth. Primary data were obtained via a survey of 727 Czech households by staff of the Department of Marketing and Trade, Mendel University of Agriculture and Forestry Brno. The paper provides results of correlation analysis and factor analysis. When making purchase decisions, households identify attributes and parameters of clothes and footwear as the most important factor. Using factor analysis, the factors were reduced into four comprehensive groups.
Krekoten, Olena M; Dereziuk, Anatolii V; Ihnaschuk, Olena V; Holovchanska, Svitlana E
Issues related to labour potential, its state and problems have consistently been a focus of attention for the International Labour Organisation (ILO). Its respective analysis shows that labour potential problems remain unresolved in many countries of the world. According to the World Health Organisation (WHO), adverse working conditions are among the major factors of occupational disease development in Europe and the reason for disabilities of the economically active population during 2.5% of their lifetime. The aim of the present study is to identify and analyse major risk factors which have a bearing on people working in agriculture in the course of exercising their occupation, with account of forms of ownership of agricultural enterprises. Carried out was a cross-sectional study involving a sociological survey of 412 respondents - those working in agriculture - who made up the primary group and the control group. The study revealed 21 risk factors, 9 of which were work-related. A modified elementary cybernetic model of studying impact efficiency was developed with a view to carrying out a structural analysis of the sample group and choosing relevant methodological approaches. It has been established that harmful factors related to working environment and one's lifestyle are decisive in the agrarian sector, particularly for workers of privately owned businesses. For one out of three respondents, harmful working conditions manifested themselves as industrial noise (31.7±3.4), vibration (29.0±2.1), trunk bending and constrained working posture (36.6±3.4). The vast majority of agricultural workers (91.6±2.5) admitted they could not afford proper rest during their annual leave; male respondents abused alcohol (70.6±3.0) and smoking (41.4±2.0 per 100 workers). The research established the structure of risk factors, which is sequentially represented by the following groups: behavioral (smoking, drinking of alcohol, rest during annual leave, physical culture), working
On Tree-Constrained Matchings and Generalizations
S. Canzar (Stefan); K. Elbassioni; G.W. Klau (Gunnar); J. Mestre
2011-01-01
We consider the following \textsc{Tree-Constrained Bipartite Matching} problem: Given two rooted trees $T_1=(V_1,E_1)$, $T_2=(V_2,E_2)$ and a weight function $w: V_1\times V_2 \mapsto \mathbb{R}_+$, find a maximum weight matching $\mathcal{M}$ between nodes of the two trees, such that
Constrained systems described by Nambu mechanics
International Nuclear Information System (INIS)
Lassig, C.C.; Joshi, G.C.
1996-01-01
Using the framework of Nambu's generalised mechanics, we obtain a new description of constrained Hamiltonian dynamics, involving the introduction of another degree of freedom in phase space, and the necessity of defining the action integral on a world sheet. We also discuss the problem of quantizing Nambu mechanics. (authors). 5 refs
A Dynamic Programming Approach to Constrained Portfolios
DEFF Research Database (Denmark)
Kraft, Holger; Steffensen, Mogens
2013-01-01
This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, which is in contrast to the existing literature that applies...
A model for optimal constrained adaptive testing
van der Linden, Willem J.; Reese, Lynda M.
2001-01-01
A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
A model for optimal constrained adaptive testing
van der Linden, Willem J.; Reese, Lynda M.
1997-01-01
A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
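The selection rule this model builds on, i.e. choosing the item most informative at the current ability estimate subject to content constraints, can be illustrated with a simplified greedy sketch. Note that the model itself assembles a full "shadow" test at each step rather than picking greedily, and the 2PL item parameters and content bounds below are hypothetical:

```python
import math

def item_info(a, b, theta):
    """Fisher information of a 2PL item at ability level theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item(items, theta, counts, max_per_area):
    """Pick the most informative unused item whose content area
    has not yet reached its constraint."""
    best = None
    for it in items:
        if it["used"] or counts.get(it["area"], 0) >= max_per_area[it["area"]]:
            continue
        info = item_info(it["a"], it["b"], theta)
        if best is None or info > best[0]:
            best = (info, it)
    return best[1] if best else None

# Hypothetical item pool with discrimination a, difficulty b, and content area.
items = [
    {"a": 1.2, "b": 0.0, "area": "algebra", "used": False},
    {"a": 0.8, "b": 0.5, "area": "algebra", "used": False},
    {"a": 1.5, "b": -0.2, "area": "geometry", "used": False},
]
chosen = select_item(items, theta=0.0, counts={}, max_per_area={"algebra": 1, "geometry": 1})
```

Greedy selection like this can violate global constraints late in the test, which is exactly the problem the shadow-test approach in the abstract is designed to avoid.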
Neutron Powder Diffraction and Constrained Refinement
DEFF Research Database (Denmark)
Pawley, G. S.; Mackenzie, Gordon A.; Dietrich, O. W.
1977-01-01
The first use of a new program, EDINP, is reported. This program allows the constrained refinement of molecules in a crystal structure with neutron diffraction powder data. The structures of p-C6F4Br2 and p-C6F4I2 are determined by packing considerations and then refined with EDINP. Refinement is...
Terrestrial Sagnac delay constraining modified gravity models
Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.
2018-04-01
Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of accretion disk around constant Ricci curvature Kerr-f(R0) stellar sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap, when the beams re-unite. We obtain the exact time gap called Sagnac delay in both cases and expand it to show how the flat space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitude of corrections are of the order of residual uncertainties in the delay measurement, we derive the allowed intervals for Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong field accretion disk phenomenon.
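For orientation, the flat-space value that the paper's curvature, mass, and spin terms correct can be computed directly. This is the classical Sagnac delay Δt = 4AΩ/c² for counter-propagating beams around the equator, using standard Earth parameters, not the f(R)-corrected expression derived in the paper:

```python
import math

# Classical (flat-space) Sagnac delay for beams circling Earth's equator:
# dt = 4 * A * Omega / c^2, with enclosed area A = pi * R^2.
R = 6.371e6        # Earth's equatorial radius, m
Omega = 7.2921e-5  # Earth's rotation rate, rad/s
c = 2.9979e8       # speed of light, m/s

A = math.pi * R ** 2
dt = 4 * A * Omega / c ** 2   # seconds
```

The result is of order a few hundred nanoseconds; the paper's point is that residual uncertainties in measuring this delay bound the allowed Ricci-curvature corrections.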
Chance constrained uncertain classification via robust optimization
Ben-Tal, A.; Bhadra, S.; Bhattacharayya, C.; Saketha Nat, J.
2011-01-01
This paper studies the problem of constructing robust classifiers when the training is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately such a CCP turns out
Integrating job scheduling and constrained network routing
DEFF Research Database (Denmark)
Gamst, Mette
2010-01-01
This paper examines the NP-hard problem of scheduling jobs on resources such that the overall profit of executed jobs is maximized. Job demand must be sent through a constrained network to the resource before execution can begin. The problem has application in grid computing, where a number...
Neuroevolutionary Constrained Optimization for Content Creation
DEFF Research Database (Denmark)
Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian
2011-01-01
and thruster types and topologies) independently of game physics and steering strategies. According to the proposed framework, the designer picks a set of requirements for the spaceship that a constrained optimizer attempts to satisfy. The constraint satisfaction approach followed is based on neuroevolution ... and survival tasks and are also visually appealing.
Models of Flux Tubes from Constrained Relaxation
Indian Academy of Sciences (India)
tribpo
J. Astrophys. Astr. (2000) 21, 299–302. Models of Flux Tubes from Constrained Relaxation. A. Mangalam* & V. Krishan†, Indian Institute of Astrophysics, Koramangala, Bangalore 560 034, India. *e-mail: mangalam@iiap.ernet.in. †e-mail: vinod@iiap.ernet.in. Abstract. We study the relaxation of a compressible plasma to ...
[Analysis of influencing factors of snow hyperspectral polarized reflections].
Sun, Zhong-Qiu; Zhao, Yun-Sheng; Yan, Guo-Qian; Ning, Yan-Ling; Zhong, Gui-Xin
2010-02-01
Due to the need of snow monitoring and the impact of the global change on the snow, on the basis of the traditional research on snow, starting from the perspective of multi-angle polarized reflectance, we analyzed the influencing factors of snow from the incidence zenith angles, the detection zenith angles, the detection azimuth angles, polarized angles, the density of snow, the degree of pollution, and the background of the undersurface. It was found that these factors affected the spectral reflectance values of the snow, and the effect of some factors on the polarization hyperspectral reflectance observation is more evident than in the vertical observation. Among these influencing factors, the pollution of snow leads to an obvious change in the snow reflectance spectrum curve, while other factors have little effect on the shape of the snow reflectance spectrum curve and mainly impact the reflection ratio of the snow. Snow reflectance polarization information has not only important theoretical significance, but also wide application prospect, and provides new ideas and methods for the quantitative research on snow using the remote sensing technology.
Constraining primordial non-Gaussianity with cosmological weak lensing: shear and flexion
International Nuclear Information System (INIS)
Fedeli, C.; Bartelmann, M.; Moscardini, L.
2012-01-01
We examine the cosmological constraining power of future large-scale weak lensing surveys on the model of the ESA planned mission Euclid, with particular reference to primordial non-Gaussianity. Our analysis considers several different estimators of the projected matter power spectrum, based on both shear and flexion. We review the covariance and Fisher matrix for cosmic shear and evaluate those for cosmic flexion and for the cross-correlation between the two. The bounds provided by cosmic shear alone are looser than previously estimated, mainly due to the reduced sky coverage and background number density of sources for the latest Euclid specifications. New constraints for the local bispectrum shape, marginalized over σ_8, are at the level of Δf_NL ∼ 100, with the precise value depending on the exact multipole range that is considered in the analysis. We consider three additional bispectrum shapes, for which the cosmic shear constraints range from Δf_NL ∼ 340 (equilateral shape) up to Δf_NL ∼ 500 (orthogonal shape). Also, constraints on the level of non-Gaussianity and on the amplitude of the matter power spectrum σ_8 are almost perfectly anti-correlated, except for the orthogonal bispectrum shape for which they are correlated. The competitiveness of cosmic flexion constraints against cosmic shear ones depends by and large on the galaxy intrinsic flexion noise, that is still virtually unconstrained. Adopting the very high value that has been occasionally used in the literature results in the flexion contribution being basically negligible with respect to the shear one, and for realistic configurations the former does not improve significantly the constraining power of the latter. Since the shear shot noise is white, while the flexion one decreases with decreasing scale, by considering high enough multipoles the two contributions have to become comparable. Extending the analysis up to l_max = 20,000 cosmic flexion, while being still subdominant
Sacrococcygeal pilonidal disease: analysis of previously proposed risk factors
Directory of Open Access Journals (Sweden)
Ali Harlak
2010-01-01
PURPOSE: Sacrococcygeal pilonidal disease is a source of one of the most common surgical problems among young adults. While male gender, obesity, occupations requiring sitting, deep natal clefts, excessive body hair, poor body hygiene and excessive sweating are described as the main risk factors for this disease, most of these need to be verified with a clinical trial. The present study aimed to evaluate the value and effect of these factors on pilonidal disease. METHOD: Previously proposed main risk factors were evaluated in a prospective case-control study that included 587 patients with pilonidal disease and 2,780 healthy control patients. RESULTS: Stiffness of body hair, number of baths and time spent seated per day were the three most predictive risk factors. Adjusted odds ratios were 9.23, 6.33 and 4.03, respectively (p<0.001). With an adjusted odds ratio of 1.3 (p<0.001), body mass index was another risk factor. Family history was not statistically different between the groups and there was no specific occupation associated with the disease. CONCLUSIONS: Hairy people who sit down for more than six hours a day and who take a bath two or fewer times per week are at a 219-fold increased risk for sacrococcygeal pilonidal disease compared with those without these risk factors. People with a great deal of hair have a greater need to clean their intergluteal sulcus. People whose work requires sitting for long periods should choose more comfortable seats and should try to stand whenever possible.
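The adjusted odds ratios in the abstract come from multivariable modelling, but the underlying quantity is easy to illustrate on a 2x2 table. The counts below are invented for illustration and are not the study's data:

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Unadjusted odds ratio from a 2x2 table:
    (cases/controls odds among exposed) / (odds among unexposed)."""
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Hypothetical counts for the exposure "sits more than 6 h/day".
or_sitting = odds_ratio(
    exposed_cases=300, exposed_controls=900,
    unexposed_cases=287, unexposed_controls=1880,
)
```

An adjusted odds ratio, as reported in the study, would instead come from a logistic regression that controls for the other risk factors simultaneously.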
Bayesian linkage and segregation analysis: factoring the problem.
Matthysse, S
2000-01-01
Complex segregation analysis and linkage methods are mathematical techniques for the genetic dissection of complex diseases. They are used to delineate complex modes of familial transmission and to localize putative disease susceptibility loci to specific chromosomal locations. The computational problem of Bayesian linkage and segregation analysis is one of integration in high-dimensional spaces. In this paper, three available techniques for Bayesian linkage and segregation analysis are discussed: Markov Chain Monte Carlo (MCMC), importance sampling, and exact calculation. The contribution of each to the overall integration will be explicitly discussed.
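Of the three techniques discussed, importance sampling is the simplest to sketch. The toy example below is a 1-D stand-in for the high-dimensional posterior integrals in linkage analysis; the target and proposal densities are chosen purely for illustration:

```python
import math
import random

def snis_mean(target_logpdf, proposal_logpdf, draw, f, n=50000, seed=1):
    """Self-normalized importance sampling estimate of E_target[f(x)].
    Log-densities may be unnormalized; the normalization cancels in the ratio."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = draw(rng)
        w = math.exp(target_logpdf(x) - proposal_logpdf(x))  # importance weight
        num += w * f(x)
        den += w
    return num / den

# Target: N(1, 1); proposal: N(0, 2).  Estimate the target mean (true value 1).
est = snis_mean(
    target_logpdf=lambda x: -0.5 * (x - 1.0) ** 2,
    proposal_logpdf=lambda x: -0.5 * (x / 2.0) ** 2,
    draw=lambda rng: rng.gauss(0.0, 2.0),
    f=lambda x: x,
)
```

The method works well when the proposal covers the target's mass, which is exactly the design difficulty in high-dimensional genetic models; MCMC sidesteps it by sampling from the target directly.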
International Nuclear Information System (INIS)
Sin, Y. C.; Jung, Y. S.; Kim, K. H.; Kim, J. H.
2008-04-01
Main control rooms of nuclear power plants have been computerized and digitalized in new and modernized plants, as information and digital technologies make great progress and become mature. A survey on human factors engineering issues in advanced MCRs was carried out using a model-based approach and a literature-survey-based approach. The analysis of human error types and performance shaping factors covered three human errors. The results of the project can be used for task analysis, evaluation of human error probabilities, and analysis of performance shaping factors in HRA.
Analysis of K-factor for five-ply plywood
F.F. Wangarrd; George E. Woodson; William R. Wilcox
1973-01-01
K-factor has long been used as a modifier of section modulus in the calculation of bending moment for plywood beams. No comparable modification of moment of inertia is included in current recommendations for calculating plywood deflection at span-depth ratios of 30:1 or greater. Results of limited testing lend support to the authors' contention that, where "...
Factors Associated with Sexual Behavior among Adolescents: A Multivariate Analysis.
Harvey, S. Marie; Spigner, Clarence
1995-01-01
A self-administered survey examining multiple factors associated with engaging in sexual intercourse was completed by 1,026 high school students in a classroom setting. Findings suggest that effective interventions to address teenage pregnancy need to utilize a multifaceted approach to the prevention of high-risk behaviors. (JPS)
Analysis of factors influencing adoption of okra production ...
African Journals Online (AJOL)
Socio-economic factors influencing adoption of okra technology packages in Enugu State, Nigeria were studied and analyzed in 2012. Purposive and multistage random sampling techniques were used in selecting communities, villages and okra farmers. The sample size was 90 okra farmers (45 Awgu and 45 Aninri Okra ...
Analysis of Factors Responsible for Low Utilization of Mechanical ...
African Journals Online (AJOL)
The study is concerned with identifying the problems of low utilization of plant and equipment by the indigenous building construction firms in Nigeria. The methodology involved the use of a well structured questionnaire complemented with an oral interview. The results revealed that (15) factors were responsible for low ...
Analysis of factors affecting the technical efficiency of cocoa ...
African Journals Online (AJOL)
The study estimated the technical efficiency of cocoa producers and the socioeconomic factors influencing technical efficiency and identified the constraints to cocoa production. A multi-stage random sampling method was used to select 180 cocoa farmers who were interviewed for the study. Data on the inputs used and ...
Factors Affecting Online Groupwork Interest: A Multilevel Analysis
Du, Jianxia; Xu, Jianzhong; Fan, Xitao
2013-01-01
The purpose of the present study is to examine the personal and contextual factors that may affect students' online groupwork interest. Using the data obtained from graduate students in an online course, both student- and group-level predictors for online groupwork interest were analyzed within the framework of hierarchical linear modeling…
Statistical Analysis of the Factors Influencing the Recurrence of ...
African Journals Online (AJOL)
Objective To evaluate the risk factors influencing the recurrence of urinary bladder cancer, and to predict the probability of recurrence within two years after radical cystectomy. Patients and Methods Between 1986 and 1994, 857 patients were admitted at the Urology and Nephrology Center of Mansoura University, Egypt, ...
Analysis on Influence Factors of Adaptive Filter Acting on ANC
Zhang, Xiuqun; Zou, Liang; Ni, Guangkui; Wang, Xiaojun; Han, Tao; Zhao, Quanfu
The noise problem has become more and more serious in recent years. The adaptive filter theory applied in ANC [1] (active noise control) has also attracted more and more attention. In this article, the basic principle and algorithms of adaptive filter theory are examined, and the factors that affect its convergence rate and noise reduction are simulated.
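The adaptive filter most commonly used in ANC studies of this kind is the LMS algorithm. The following sketch cancels a correlated noise signal with an LMS filter; the filter length, step size, and 2-tap "unknown path" are arbitrary choices for the demonstration, not parameters from the article:

```python
import random

def lms_cancel(reference, primary, n_taps=8, mu=0.01):
    """LMS adaptive noise canceller: estimate the noise component of
    `primary` from the correlated `reference` input and subtract it."""
    w = [0.0] * n_taps
    out = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))              # noise estimate
        e = primary[n] - y                                    # canceller output
        w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
        out.append(e)
    return out

rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in range(4000)]
# Primary channel: the reference noise through an unknown 2-tap path.  With no
# desired signal present, a converged canceller drives the residual toward zero.
primary = [0.6 * noise[n] + (0.3 * noise[n - 1] if n > 0 else 0.0)
           for n in range(4000)]
residual = lms_cancel(noise, primary)
tail_power = sum(e * e for e in residual[-500:]) / 500.0
```

The step size mu controls the trade-off the article simulates: larger values converge faster but leave more residual noise, and too-large values make the filter diverge.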
A Content Analysis of Protective Factors within States' Antibullying Laws
Weaver, Lori M.; Brown, James R.; Weddle, Daniel B.; Aalsma, Matthew C.
2013-01-01
State lawmakers have responded to school bullying by crafting antibullying legislation. By July 2011, 47 states had enacted such laws, though they varied widely in content and scope. This study systematically evaluated each state's antibullying legislation by focusing on the inclusion of individual, parental, and systemic protective factors through…
Genetic analysis of cardiovascular risk factor clustering in spontaneous hypertension
Czech Academy of Sciences Publication Activity Database
Pravenec, Michal; Zídek, Václav; Landa, Vladimír; Kostka, Vlastimil; Musilová, Alena; Kazdová, L.; Fučíková, A.; Křenová, D.; Bílá, V.; Křen, Vladimír
2000-01-01
Vol. 46 (2000), pp. 233-240. ISSN 0015-5497. R&D Projects: GA MŠk LN00A079; GA ČR GA305/00/1646; GA ČR GA301/00/1636; GA MZd NB4904. Subject RIV: EB - Genetics; Molecular Biology. Impact factor: 0.667, year: 2000
The application of radiolysis and analysis of influencing factors
International Nuclear Information System (INIS)
Xie Fang; Ha Yiming; Wang Feng
2008-01-01
As a branch of radiation technology, radiolysis technology has been developing in recent years. Recent research on and applications of radiolysis are briefly reviewed, including radiolysis for reducing veterinary drug residues in food, for processing plant-source products, and for environmental management. The influencing factors, mechanisms, and radiolysis products are also reviewed. (authors)
Dispersive analysis of the pion transition form factor
Energy Technology Data Exchange (ETDEWEB)
Hoferichter, M. [Technische Universitaet Darmstadt, Institut fuer Kernphysik, Darmstadt (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, ExtreMe Matter Institute EMMI, Darmstadt (Germany); University of Bern, Albert Einstein Center for Fundamental Physics, Institute for Theoretical Physics, Bern (Switzerland); Kubis, B.; Niecknig, F.; Schneider, S.P. [Universitaet Bonn, Helmholtz-Institut fuer Strahlen- und Kernphysik (Theorie) and Bethe Center for Theoretical Physics, Bonn (Germany); Leupold, S. [Uppsala Universitet, Institutionen foer fysik och astronomi, Box 516, Uppsala (Sweden)
2014-11-15
We analyze the pion transition form factor using dispersion theory. We calculate the singly-virtual form factor in the time-like region based on data for the e⁺e⁻ → 3π cross section, generalizing previous studies on ω, φ → 3π decays and γπ → ππ scattering, and verify our result by comparing to e⁺e⁻ → π⁰γ data. We perform the analytic continuation to the space-like region, predicting the poorly constrained space-like transition form factor below 1 GeV, and extract the slope of the form factor at vanishing momentum transfer, a_π = (30.7 ± 0.6) × 10⁻³. We derive the dispersive formalism necessary for the extension of these results to the doubly-virtual case, as required for the pion-pole contribution to hadronic light-by-light scattering in the anomalous magnetic moment of the muon. (orig.)
Sexual Opinion Survey: An Exploratory Factor Analysis with Helping Professionals
Bloom, Zachary D.; Gutierrez, Daniel; Lambie, Glenn W.
2015-01-01
Counselors and marriage and family therapists work with individuals, couples, and families on issues related to sexuality. However, clinicians may be underserving their clients by "not" having adequate training and preparation to work with clients with these presenting issues. One mitigating factor in the treatment of sexual problems is…
Developing health status index using factor analysis | Mohamad ...
African Journals Online (AJOL)
This paper intends to develop a health status index among drug abuse prison inmates in Malaysia. A self-administered questionnaire was distributed to 1753 respondents. To calculate the health status index of drug abuse inmates, descriptive and factor analyses were applied. The data were based on 10 indicators of ...
Analysis of Factors Contributing to Leadership Skills Development ...
African Journals Online (AJOL)
These factors account for 63.8% of leadership skills development among the students. Based on the findings, the study concludes that academic institutions provide a good avenue for grooming future leaders. It was also recommended that similar research should be carried out in African countries for comparative purpose.
MULTIVARIATE ANALYSIS OF RISK FACTORS FOR PREMATURITY IN SOUTHERN BRAZIL
Directory of Open Access Journals (Sweden)
Willian Augusto de Melo
2014-05-01
This study assessed the risk factors associated with preterm birth through a cross-sectional study of 4,440 newborns. It examined the associations between maternal sociodemographic variables (age, marital status, education and occupation), obstetric variables (type of pregnancy and delivery, and number of prenatal visits) and neonatal variables (sex, race/color, birth weight and Apgar score). Data were analyzed by multivariate logistic regression. Among the 480 (10.8%) preterm newborns, the prevalent risk factors were type of pregnancy (OR=6.48), number of prenatal visits (OR=2.09), Apgar score at the first (OR=2.00) and fifth minute (OR=2.14) and birth weight (OR=31.8), indicating that these variables are directly associated with the occurrence of prematurity. The identification of risk factors should be the object of attention of health professionals and services, to support effective measures to promote health in the general population, especially for women of fertile age who meet criteria of gestational risk.
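The multivariate logistic regression technique named in this abstract can be sketched as follows. The data and coefficients are synthetic stand-ins, not the study's, and the odds ratios (OR = exp(β)) are recovered by plain gradient ascent rather than a statistics package.

```python
import numpy as np

# Synthetic stand-in for a multivariate logistic regression with two
# hypothetical binary risk factors; odds ratios are recovered as exp(beta).
rng = np.random.default_rng(42)
n = 5000
x1 = rng.integers(0, 2, n)            # hypothetical risk factor 1
x2 = rng.integers(0, 2, n)            # hypothetical risk factor 2
true_logit = -2.0 + 1.5 * x1 + 0.7 * x2
y = rng.random(n) < 1 / (1 + np.exp(-true_logit))   # simulated outcomes

X = np.column_stack([np.ones(n), x1, x2]).astype(float)
beta = np.zeros(3)
for _ in range(2000):                 # plain gradient ascent on the log-likelihood
    mu = 1 / (1 + np.exp(-X @ beta))
    beta += 0.001 * (X.T @ (y - mu))
odds_ratios = np.exp(beta[1:])        # adjusted ORs, near exp(1.5) and exp(0.7)
```

An OR above 1 means the factor is associated with increased odds of the outcome after adjusting for the other predictors, which is how the study's reported ORs (e.g. 6.48 for type of pregnancy) are read.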
Navigating New Horizons: An Analysis of Factors that Influence ...
African Journals Online (AJOL)
The purpose of this study was to assess the factors that influence computer literacy among university students. The study was primarily inspired by the realization that students acquire computer skills at varying levels and progress to use computers with varying proficiency despite the fact that they will be engaging in a ...
analysis of the factors influencing farmers' adoption of alley farming ...
African Journals Online (AJOL)
The variables in equation (2) are defined and explained in Table 1. ..... varieties in Guinea – Bissau. ... productivity of agricultural systems in the West African Savanna. ... for soil fertility improvement in South and Central Benin: Analysis of.
Sequential and Biomechanical Factors Constrain Timing and Motion in Tapping
Loehr, J.D.; Palmer, C.
2009-01-01
The authors examined how timing accuracy in tapping sequences is influenced by sequential effects of preceding finger movements and biomechanical interdependencies among fingers. Skilled pianists tapped sequences at 3 rates; in each sequence, a finger whose motion was more or less independent of
Constrained relationship agency as the risk factor for intimate ...
African Journals Online (AJOL)
gender inequality (UNDP, 2015; Southern Africa Regional .... Table 1: Weighted transactional sex consonance score items ..... gendered relationship norms and economic power dynamics from the ... may limit a woman's ability to exit a violent relationship, or .... validate and contextualize quantitative findings: A case study of.
Research: Factors that enable and constrain the internationalisation ...
African Journals Online (AJOL)
Higher education worldwide is currently shaped by globalisation and internationalisation, while African and South African (SA) higher-education institutions (HEIs) ... According to academics, there is no clear understanding or working definition of concepts and processes such as internationalisation and Africanisation as they ...
Saiti, Anna
2007-01-01
Job satisfaction is an important issue, but remains a complex one as it is difficult to measure. A wide range of factors such as the working environment, its manner of organisation, demography and individual circumstances, etc., can substantially affect the level of job satisfaction attained by individuals. Job satisfaction and the teaching…
Stuive, Ilse
2007-01-01
Confirmatory factor analysis (CFA) is a frequently used method when researchers have a specific hypothesis about the assignment of items to one or more subtests and want to investigate whether this assignment is also supported by the collected research data. The most commonly used…
Subramanian, S.; Geurden, I.; Figueiredo-Silva, A.C.; Nusantoro, S.; Kaushik, S.J.; Verreth, J.A.J.; Schrama, J.W.
2013-01-01
A reduction of food intake when confronted with diets deficient in essential amino acids is a common response of fish and other animals, but the underlying physiological factors are poorly understood. We hypothesize that the oxygen consumption of fish is a possible physiological factor constraining
Empirical Analysis on Factors Affecting User Behavior in Social Commerce
Directory of Open Access Journals (Sweden)
Xu Jiayi
2017-02-01
[Purpose/significance] This paper aims to discover the factors affecting user behavior in social commerce, a derivative of e-commerce, and to explore its sustainable development and related marketing advice. [Method/process] The paper proposes a theoretical model of the factors affecting user behavior in social commerce by integrating emotional state into the Stimulus-Organism-Response (S-O-R) framework. 277 valid samples were collected by questionnaire and analyzed with PLS. [Result/conclusion] The results show that information quality and tie strength significantly affect users' emotional states, while emotional states positively affect user behavior. In addition, graphic features of business information have an indirect effect on users' emotional states and a direct effect on purchase intention.
Factors Influencing Renewable Energy Production & Supply - A Global Analysis
Ali, Anika; Saqlawi, Juman Al
2016-04-01
Renewable energy is one of the key technologies through which the energy needs of the future can be met in a sustainable and carbon-neutral manner. Increasing the share of renewable energy in the total energy mix of each country is therefore a critical need. While different countries have approached this in different ways, there are some common aspects which influence the pace and effectiveness of renewable energy incorporation. This presentation looks at data and information from 34 selected countries, analyses the patterns, compares the different parameters and identifies the common factors which positively influence renewable energy incorporation. The most successful countries are analysed for their renewable energy performance against their GDP, policy/regulatory initiatives in the field of renewables, landmass, climatic conditions and population to identify the most influencing factors to bring about positive change in renewable energy share.
International Nuclear Information System (INIS)
Wilpert, B.; Maimer, H.; Loroff, C.
2000-01-01
The project's objective is to evaluate the reliability of identifying Human Factors as contributing factors using computer-supported event analysis (CEA), a computer version of SOL (Safety through Organizational Learning). The first step comprised interviews with experts from the nuclear power industry and an evaluation of existing computer-supported event analysis methods; this information was combined into a requirement profile for the CEA software. The next step was the implementation of the software in an iterative process of evaluation, and the project concluded with testing of the CEA software. The tests demonstrated that contributing factors can be validly identified with CEA. In addition, CEA received very positive feedback from the experts. (orig.) [de
Institute of Scientific and Technical Information of China (English)
2011-01-01
By using the factor analysis method and establishing an analysis indicator system covering four aspects (crop production, poultry farming, rural life, and township enterprises), the differences, features, and types of factors influencing rural environmental pollution in the hilly area of Sichuan Province, China, were analyzed. Results show that the major factor influencing rural environmental pollution in the study area is livestock and poultry breeding, followed by crop planting, rural life, and township enterprises. Hence, future pollution prevention and control should start with livestock and poultry breeding. Meanwhile, attention should also be paid to the prevention and control of rural environmental pollution caused by rural life and township enterprise production.
Human factors analysis of U.S. Navy afloat mishaps
Lacy, Rex D.
1998-01-01
The effects of maritime mishaps, which include loss of life as well as environmental and economic considerations, are significant. It has been estimated that over 80 percent of maritime accidents are at least partially attributable to human error. Human error has been extensively studied in a number of fields, particularly aviation. The present research involves application of the Human Factors Accident Classification System (HFACS), developed by the Naval Safety Center, to human error causal f...
An analysis of the factors affecting Marine Corps officer retention
Theilmann, Robert J.
1990-01-01
Approved for public release; distribution unlimited. This thesis examines factors which influence the retention of male, company-grade Marine Corps officers (grades O-1 to O-3) who are within their initial period of obligated service. Data used combined responses from the 1985 DoD Survey of Officer and Enlisted Personnel and the respondents' 1989 status from the officer master file maintained by the Defense Manpower Data Center (DMDC). Logit regression was used to measure the relative impo...
Regression and kriging analysis for grid power factor estimation
Rajesh Guntaka; Harley R. Myler
2014-01-01
The measurement of power factor (PF) in electrical utility grids is a mainstay of load balancing and is also a critical element of transmission and distribution efficiency. The measurement of PF dates back to the earliest periods of electrical power distribution to public grids. In the wide-area distribution grid, measurement of current waveforms is trivial and may be accomplished at any point in the grid using a current tap transformer. However, voltage measurement requires reference to grou...
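The kriging side of the grid power factor estimation described above can be sketched in one dimension; the Gaussian covariance model, measurement locations, and power-factor readings below are all hypothetical, not taken from the article.

```python
import numpy as np

# Minimal 1-D ordinary kriging under an assumed Gaussian covariance model;
# the locations and power-factor readings are hypothetical.
def ordinary_krige(xs, zs, x0, length=2.0, sill=1.0):
    cov = lambda h: sill * np.exp(-(h / length) ** 2)
    n = len(xs)
    # Ordinary kriging system with a Lagrange multiplier enforcing sum(w) = 1:
    # [[C, 1], [1', 0]] @ [w, lam] = [c0, 1]
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(np.abs(xs[:, None] - xs[None, :]))
    K[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = cov(np.abs(xs - x0))
    w = np.linalg.solve(K, rhs)[:n]
    return w @ zs

xs = np.array([0.0, 1.0, 3.0, 4.0])       # measurement locations on the grid
zs = np.array([0.95, 0.97, 0.92, 0.90])   # hypothetical PF readings
pf_between = ordinary_krige(xs, zs, 2.0)  # smooth estimate between stations
```

With no nugget term, kriging is an exact interpolator: querying a sampled location returns the observed reading, while points between stations receive a covariance-weighted estimate.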
Enhanced Phosphoproteomic Profiling Workflow For Growth Factor Signaling Analysis
DEFF Research Database (Denmark)
Sylvester, Marc; Burbridge, Mike; Leclerc, Gregory
2010-01-01
Background Our understanding of complex signaling networks is still fragmentary. Isolated processes have been studied extensively, but cross-talk is omnipresent and precludes intuitive predictions of signaling outcomes. The need for quantitative data on dynamic systems is apparent, especially for our understanding of pathological processes. In our study we create and integrate data on phosphorylations that are initiated by several growth factor receptors. We present an approach for quantitative, time-resolved phosphoproteomic profiling that integrates the important contributions by phosphotyrosines. Methods...
Risk Factors for Gambling Problems: An Analysis by Gender.
Hing, Nerilee; Russell, Alex; Tolchard, Barry; Nower, Lia
2016-06-01
Differences in problem gambling rates between males and females suggest that associated risk factors vary by gender. Previous combined analyses of male and female gambling may have obscured these distinctions. This study aimed to develop separate risk factor models for gambling problems for males and for females, and identify gender-based similarities and differences. It analysed data from the largest prevalence study in Victoria Australia (N = 15,000). Analyses determined factors differentiating non-problem from at-risk gamblers separately for women and men, then compared genders using interaction terms. Separate multivariate analyses determined significant results when controlling for all others. Variables included demographics, gambling behaviour, gambling motivations, money management, and mental and physical health. Significant predictors of at-risk status amongst female gamblers included: 18-24 years old, not speaking English at home, living in a group household, unemployed or not in the workforce, gambling on private betting, electronic gaming machines (EGMs), scratch tickets or bingo, and gambling for reasons other than social reasons, to win money or for general entertainment. For males, risk factors included: 18-24 years old, not speaking English at home, low education, living in a group household, unemployed or not in the workforce, gambling on EGMs, table games, races, sports or lotteries, and gambling for reasons other than social reasons, to win money or for general entertainment. High risk groups requiring appropriate interventions comprise young adults, especially males; middle-aged female EGM gamblers; non-English speaking populations; frequent EGM, table games, race and sports gamblers; and gamblers motivated by escape.
Risk Factors for Gastrointestinal Leak after Bariatric Surgery: MBSAQIP Analysis.
Alizadeh, Reza Fazl; Li, Shiri; Inaba, Colette; Penalosa, Patrick; Hinojosa, Marcelo W; Smith, Brian R; Stamos, Michael J; Nguyen, Ninh T
2018-03-30
Gastrointestinal leak remains one of the most dreaded complications in bariatric surgery. We aimed to evaluate risk factors and the impact of common perioperative interventions on the development of leak in patients who underwent laparoscopic bariatric surgery. Using the 2015 database of accredited centers, data were analyzed for patients who underwent laparoscopic sleeve gastrectomy or Roux-en-Y gastric bypass (LRYGB). Emergent, revisional, and converted cases were excluded. Multivariate logistic regression was used to analyze risk factors for leak, including provocative testing of anastomosis, surgical drain placement, and use of postoperative swallow study. Data from 133,478 patients who underwent laparoscopic sleeve gastrectomy (n = 92,495 [69.3%]) and LRYGB (n = 40,983 [30.7%]) were analyzed. Overall leak rate was 0.7% (938 of 133,478). Factors associated with increased risk for leak were oxygen dependency (adjusted odds ratio [AOR] 1.97), hypoalbuminemia (AOR 1.66), sleep apnea (AOR 1.52), hypertension (AOR 1.36), and diabetes (AOR 1.18). Compared with LRYGB, laparoscopic sleeve gastrectomy was associated with a lower risk of leak (AOR 0.52; 95% CI 0.44 to 0.61; p leak rate was higher in patients with vs without a provocative test (0.8% vs 0.4%, respectively; p leak rate was higher in patients with vs without a surgical drain placed (1.6% vs 0.4%, respectively; p leak rate was similar between patients with vs without swallow study (0.7% vs 0.7%; p = 0.50). The overall rate of gastrointestinal leak in bariatric surgery is low. Certain preoperative factors, procedural type (LRYGB), and interventions (intraoperative provocative test and surgical drain placement) were associated with a higher risk for leaks. Copyright © 2018 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Analysis of factors influencing China's accession to the GPA
Meng, Ye
2008-01-01
China, with a huge market in government procurement, submitted an application to join the WTO GPA and formally began the negotiating process with the other signatories to the Agreement while the current parties were revising the 1994 GPA. International trade is not simply the outcome of market forces, of relative supply and demand. Rather, it is the result of a complex and interlocking network of bargains that are partly economic and partly political. This article discusses the main factors i...
Proteomics Analysis Reveals Previously Uncharacterized Virulence Factors in Vibrio proteolyticus
Directory of Open Access Journals (Sweden)
Ann Ray
2016-07-01
Members of the genus Vibrio include many pathogens of humans and marine animals that share genetic information via horizontal gene transfer. Hence, the Vibrio pan-genome carries the potential to establish new pathogenic strains by sharing virulence determinants, many of which have yet to be characterized. Here, we investigated the virulence properties of Vibrio proteolyticus, a Gram-negative marine bacterium previously identified as part of the Vibrio consortium isolated from diseased corals. We found that V. proteolyticus causes actin cytoskeleton rearrangements followed by cell lysis in HeLa cells in a contact-independent manner. In search of the responsible virulence factor involved, we determined the V. proteolyticus secretome. This proteomics approach revealed various putative virulence factors, including active type VI secretion systems and effectors with virulence toxin domains; however, these type VI secretion systems were not responsible for the observed cytotoxic effects. Further examination of the V. proteolyticus secretome led us to hypothesize and subsequently demonstrate that a secreted hemolysin, belonging to a previously uncharacterized clan of the leukocidin superfamily, was the toxin responsible for the V. proteolyticus-mediated cytotoxicity in both HeLa cells and macrophages. Clearly, there remains an armory of yet-to-be-discovered virulence factors in the Vibrio pan-genome that will undoubtedly provide a wealth of knowledge on how a pathogen can manipulate host cells.
Analysis of Factors Influencing Building Refurbishment Project Performance
Directory of Open Access Journals (Sweden)
Ishak Nurfadzillah
2018-01-01
Presently, the refurbishment approach has become favourable as it creates opportunities to incorporate sustainable value into other building improvements. This approach needs to be implemented due to the overwhelming ratio of existing buildings to new construction, which also contributes to environmental problems. Refurbishment principles aim to minimize the environmental impact and upgrade the performance of an existing building to meet new requirements. In theory, a building project's performance has a direct bearing on its potential for project success. However, in refurbishment building projects the criteria for measurement become wider, because the projects are complex and multi-dimensional, encompassing many factors that reflect the nature of the works. This could therefore be achieved by examining the direct empirical relationship between critical success factors (CSFs) and complexity factors (CFs) when managing the project in relation to delivering success in project performance. The research findings are expected to serve as the basis of future research to establish an appropriate framework that provides information on managing refurbishment building projects and enhancing project management competency for a better-built environment.
Maturation of arteriovenous fistula: Analysis of key factors
Directory of Open Access Journals (Sweden)
Muhammad A. Siddiqui
2017-12-01
The growing proportion of individuals suffering from chronic kidney disease has considerable repercussions for both kidney specialists and primary care. Progressive and permanent renal failure is most frequently treated with hemodialysis. The efficiency of hemodialysis treatment relies on the functional status of vascular access. Determining the type of vascular access has prime significance for maximizing successful maturation of a fistula and avoiding surgical revision. Despite the frequency of arteriovenous fistula procedures, there are no consistent criteria applied before creation of arteriovenous fistulae. Increased prevalence and use of arteriovenous fistulae would result if there were reliable criteria to assess which arteriovenous fistulae are more likely to reach maturity without additional procedures. Published studies assessing the predictive markers of fistula maturation vary to a great extent with regard to definitions, design, study size, patient sample, and clinical factors. As a result, surgeons and specialists must decide which possible risk factors are most likely to occur, as well as which parameters to employ when evaluating the success rate of fistula development in patients awaiting the creation of permanent access. The purpose of this literature review is to discuss the role of patient factors and blood markers in the development of arteriovenous fistulae.
Analysis of psychological factors which interfere in soccer athletes’ behaviour
Directory of Open Access Journals (Sweden)
Constanza Pujals
2008-06-01
The aim of this study is to analyze the psychological factors which interfere in the behaviour of soccer athletes in the juvenile and infant categories. 40 athletes from a soccer school in Maringá – PR were studied, and the instruments used were inventories, interviews, questionnaires and a research diary. Data were collected individually and in groups. Intervention occurred over 12 months through observation and evaluation and addressed the following factors: motivation, anxiety, aggression and self-confidence. Results pointed out that the positive emotions expressed by the athletes were good mood, happiness, relaxation, interest in improving and hope, while the negative emotions were anxiety, rage, aggressiveness, low self-confidence, lack of motivation, insecurity, feeling of failure, pessimism and group instability. Relatives and the coach were also sources of stress and anxiety. Thus, in this sporting context, sport psychology appears to be highly effective in reducing anxiety and aggression as well as in increasing motivation and self-confidence, demonstrating the importance of psychological preparation in sports training.
Analysis of factors temporarily impacting traffic sign readability
Directory of Open Access Journals (Sweden)
Majid Khalilikhah
2016-10-01
Traffic sign readability can be affected by dirt on sign faces. Among damaged signs, however, dirty traffic signs are unique in that their damage is not permanent: they can simply be cleaned rather than replaced. This study aimed to identify the most important factors contributing to traffic sign dirt. To do so, a large number of traffic signs in Utah were measured by deploying a vehicle instrumented with mobile LiDAR imaging and digital photolog technologies. Each individual daytime digital image was inspected for dirt. Location and climate observations obtained from official sources were compiled using ArcGIS throughout the process. To identify factors contributing to traffic sign dirt, the chi-square test was employed. To analyze the data and rank all of the factors by their importance to sign dirt, a random-forest statistical model was used. The analysis indicates that ground elevation, sign mount height, and air pollution had the largest effects on making traffic signs dirty. The findings of this investigation can assist transportation agencies in identifying traffic signs with a higher likelihood of dirt, so that such signs can be scheduled for more frequent cleaning.
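The chi-square independence test named in the abstract above can be sketched on an invented 2x2 contingency table (dirty vs. clean signs at low vs. high elevation; the counts are purely illustrative, not the study's data).

```python
import numpy as np

# Hedged sketch of a chi-square test of independence on an invented
# 2x2 table: sign condition (dirty/clean) by elevation (low/high).
def chi_square_stat(table):
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()   # counts expected under independence
    return float(((table - expected) ** 2 / expected).sum())

obs = np.array([[30, 70],    # low elevation:  dirty, clean
                [10, 90]])   # high elevation: dirty, clean
stat = chi_square_stat(obs)  # df = 1 here; compare with the chi2 critical value
```

A statistic well above the critical value (3.84 at the 5% level for 1 degree of freedom) would flag elevation as associated with sign dirt, which is how such factors are screened before ranking them with a model like random forests.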
A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis
Edwards, Michael C.
2010-01-01
Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…
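For contrast with the MCMC estimation routines this paper combines, here is a minimal non-Bayesian sketch of factor-loading extraction by the principal-component method on synthetic data; the loadings and sample are hypothetical, and this simple method is a baseline, not the paper's estimator.

```python
import numpy as np

# Non-Bayesian baseline: one-factor loading extraction by the
# principal-component method on synthetic, hypothetical data.
rng = np.random.default_rng(1)
n, p = 2000, 4
f = rng.standard_normal(n)                  # latent factor scores
load_true = np.array([0.8, 0.7, 0.6, 0.5])  # hypothetical true loadings
noise = rng.standard_normal((n, p)) * np.sqrt(1 - load_true ** 2)
X = f[:, None] * load_true + noise          # unit-variance item responses

R = np.corrcoef(X, rowvar=False)            # item correlation matrix
vals, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
loadings = vecs[:, -1] * np.sqrt(vals[-1])  # loadings on the first factor
loadings *= np.sign(loadings.sum())         # resolve sign indeterminacy
```

The principal-component loadings approximate, and tend to slightly overestimate, the generating loadings; full item factor analysis, as estimated by MCMC in the paper, models the item uniquenesses explicitly.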
Factors affecting the HIV/AIDS epidemic: An ecological analysis of ...
African Journals Online (AJOL)
Factors affecting the HIV/AIDS epidemic: An ecological analysis of global data. ... Backward multiple linear regression analysis identified the proportion of Muslims, physician density, and adolescent fertility rate as the three most prominent factors linked with the national HIV epidemic. Conclusions: The findings support ...