WorldWideScience

Sample records for models show consistent

  1. Consistently Showing Your Best Side? Intra-individual Consistency in #Selfie Pose Orientation

    Science.gov (United States)

    Lindell, Annukka K.

    2017-01-01

Painted and photographic portraits of others show an asymmetric bias: people favor their left cheek. Both experimental and database studies confirm that the left cheek bias extends to selfies. To date all such selfie studies have been cross-sectional; whether individual selfie-takers tend to consistently favor the same pose orientation, or switch between multiple poses, remains to be determined. The present study thus examined intra-individual consistency in selfie pose orientations. Two hundred selfie-taking participants (100 male and 100 female) were identified by searching #selfie on Instagram. The most recent 10 single-subject selfies for each participant were selected and coded for type of selfie (normal; mirror) and pose orientation (left, midline, right), resulting in a sample of 2000 selfies. Results indicated that selfie-takers do tend to consistently adopt a preferred pose orientation (α = 0.72), with more participants showing an overall left cheek bias (41%) than would be expected by chance (overall right cheek bias = 31.5%; overall midline bias = 19.5%; no overall bias = 8%). Logistic regression modelling, controlling for the repeated measure of participant identity, indicated that sex did not affect pose orientation. However, selfie type proved a significant predictor when comparing left and right cheek poses, with a stronger left cheek bias for mirror than normal selfies. Overall, these novel findings indicate that selfie-takers show intra-individual consistency in pose orientation and, in addition, replicate the previously reported left cheek bias for selfies and other types of portrait, confirming that the left cheek bias also presents within individuals' selfie corpora. PMID:28270790
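The per-participant classification described in this record can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual analysis: the participant IDs, the toy selfie corpora, and the simple majority rule are all assumptions.

```python
from collections import Counter

def overall_bias(poses, threshold=0.5):
    """Classify a participant's dominant pose orientation.

    A participant counts as having an overall bias toward a pose if a
    majority of their selfies use it; this 50% majority rule is an
    assumption, not necessarily the paper's exact classification."""
    counts = Counter(poses)
    pose, n = counts.most_common(1)[0]
    return pose if n / len(poses) > threshold else "none"

# Hypothetical corpora of 10 selfies per participant.
corpus = {
    "p1": ["left"] * 7 + ["right"] * 2 + ["midline"],
    "p2": ["right"] * 6 + ["left"] * 4,
    "p3": ["left", "right", "midline"] * 3 + ["left"],
}

biases = {pid: overall_bias(poses) for pid, poses in corpus.items()}
print(biases)  # {'p1': 'left', 'p2': 'right', 'p3': 'none'}
```

Aggregating such per-participant labels over the full sample yields the bias proportions the abstract reports.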

  2. Self-consistent asset pricing models

    Science.gov (United States)

    Malevergne, Y.; Sornette, D.

    2007-08-01

    We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. When the conditions derived from internal consistency are not met, the model is necessarily incomplete, which means that some sources of risk cannot be replicated (or hedged) by a portfolio of stocks traded on the market, even for infinite economies. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value αi at the origin between an asset i's return and the proxy's return. Self-consistency also introduces “orthogonality” and “normality” conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy portfolio. Two diagnostics based on these orthogonality and normality conditions are implemented on a basket of 323 assets which have been components of the S&P500 in the period from January 1990 to February 2005. These two diagnostics show interesting departures from dynamical self-consistency starting about 2 years before the end of the Internet bubble. Assuming that the CAPM holds with the self-consistency condition, the OLS method automatically obeys the resulting orthogonality and normality conditions and therefore provides a simple way to self-consistently assess the parameters of the model by using proxy portfolios made only of the assets which are used in the CAPM regressions. Finally, the factor decomposition with the
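The orthogonality and normality conditions mentioned in this abstract can be checked numerically: when the proxy is itself a portfolio of the regressed assets, the proxy-weighted betas sum to one and the proxy-weighted alphas vanish. The sketch below uses synthetic returns and equal weights (both assumptions); it is not the paper's S&P500 diagnostic.

```python
import numpy as np

rng = np.random.default_rng(42)
n_obs, n_assets = 2000, 5

# Synthetic asset returns; the identity below holds for any returns.
returns = rng.normal(0.0005, 0.02, size=(n_obs, n_assets))
weights = np.full(n_assets, 1.0 / n_assets)  # proxy portfolio weights
proxy = returns @ weights                    # proxy "market" return

# OLS of each asset on the proxy: r_i = alpha_i + beta_i * r_proxy + eps_i
X = np.column_stack([np.ones(n_obs), proxy])
coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
alphas, betas = coef[0], coef[1]

# Self-consistency conditions: the proxy-weighted betas sum to one and
# the proxy-weighted alphas vanish, by construction of the regression.
print(weights @ betas)   # ~ 1.0
print(weights @ alphas)  # ~ 0.0
```

The identity follows because OLS is linear in the dependent variable: the weighted combination of the fitted regressions is the regression of the proxy on itself.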

  3. Consistent model driven architecture

    Science.gov (United States)

    Niepostyn, Stanisław J.

    2015-09-01

The goal of the MDA is to produce software systems from abstract models in a way that restricts human interaction to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language, so verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. Verifying consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.

  4. Consistent Estimation of Partition Markov Models

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2017-04-01

Full Text Available The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer two questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? To answer these questions, we build a consistent strategy for model selection which consists of the following: given a size-n realization of the process, find a model within the Partition Markov class with a minimal number of parts that represents the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, as n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.
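The partition idea can be roughly sketched as follows: estimate each state's empirical transition row and group states whose rows nearly coincide. The total-variation threshold used here is an illustrative stand-in for the paper's consistent, penalty-based selection criterion, and the example sequence is an assumption.

```python
from collections import defaultdict

def transition_counts(seq, alphabet):
    """Empirical transition counts of a sequence over a finite alphabet."""
    counts = {s: defaultdict(int) for s in alphabet}
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return counts

def transition_row(counts, s, alphabet):
    total = sum(counts[s].values()) or 1
    return [counts[s][a] / total for a in alphabet]

def partition_states(seq, alphabet, tol=0.1):
    """Greedily merge states whose empirical transition rows are close
    in total-variation distance. The threshold tol is an illustrative
    stand-in for the paper's penalty-based criterion."""
    counts = transition_counts(seq, alphabet)
    parts = []
    for s in alphabet:
        row = transition_row(counts, s, alphabet)
        for part in parts:
            ref = transition_row(counts, part[0], alphabet)
            if 0.5 * sum(abs(x - y) for x, y in zip(row, ref)) <= tol:
                part.append(s)
                break
        else:
            parts.append([s])
    return parts

# States 'a' and 'b' always move to 'c': they share one part of L.
seq = list("acbc" * 50)
print(partition_states(seq, ["a", "b", "c"]))  # [['a', 'b'], ['c']]
```

Merging parts reduces the parameter count from one transition row per state to one row per part, which is exactly the saving the model aims for.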

  5. Migraine patients consistently show abnormal vestibular bedside tests.

    Science.gov (United States)

    Maranhão, Eliana Teixeira; Maranhão-Filho, Péricles; Luiz, Ronir Raggio; Vincent, Maurice Borges

    2016-01-01

Migraine and vertigo are common disorders, with lifetime prevalences of 16% and 7% respectively, and co-morbidity around 3.2%. Vestibular syndromes and dizziness occur more frequently in migraine patients. We investigated bedside clinical signs indicative of vestibular dysfunction in migraineurs. To test the hypothesis that vestibulo-ocular reflex, vestibulo-spinal reflex and fall risk (FR) responses as measured by 14 bedside tests are abnormal in migraineurs without vertigo, as compared with controls. Cross-sectional study including sixty individuals: thirty migraineurs (25 women, aged 19-60) and 30 age- and gender-matched healthy controls. Migraineurs showed a tendency to perform worse in almost all tests, although only the Romberg tandem test was statistically different from controls. A combination of four abnormal tests better discriminated the two groups (93.3% specificity). Migraine patients consistently showed abnormal vestibular bedside tests when compared with controls.

  6. Migraine patients consistently show abnormal vestibular bedside tests

    Directory of Open Access Journals (Sweden)

    Eliana Teixeira Maranhão

    2015-01-01

Full Text Available Migraine and vertigo are common disorders, with lifetime prevalences of 16% and 7% respectively, and co-morbidity around 3.2%. Vestibular syndromes and dizziness occur more frequently in migraine patients. We investigated bedside clinical signs indicative of vestibular dysfunction in migraineurs. Objective: To test the hypothesis that vestibulo-ocular reflex, vestibulo-spinal reflex and fall risk (FR) responses as measured by 14 bedside tests are abnormal in migraineurs without vertigo, as compared with controls. Method: Cross-sectional study including sixty individuals – thirty migraineurs, 25 women, 19-60 y-o; and 30 gender/age healthy paired controls. Results: Migraineurs showed a tendency to perform worse in almost all tests, although only the Romberg tandem test was statistically different from controls. A combination of four abnormal tests better discriminated the two groups (93.3% specificity). Conclusion: Migraine patients consistently showed abnormal vestibular bedside tests when compared with controls.

  7. Standard Model Vacuum Stability and Weyl Consistency Conditions

    DEFF Research Database (Denmark)

    Antipin, Oleg; Gillioz, Marc; Krog, Jens

    2013-01-01

At high energy the standard model possesses conformal symmetry at the classical level. This is reflected at the quantum level by relations between the different beta functions of the model. These relations are known as the Weyl consistency conditions. We show that it is possible to satisfy them order by order in perturbation theory, provided that a suitable coupling constant counting scheme is used. As a direct phenomenological application, we study the stability of the standard model vacuum at high energies and compare with previous computations violating the Weyl consistency conditions.

  8. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    Science.gov (United States)

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consist only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  9. Consistent three-equation model for thin films

    Science.gov (United States)

    Richard, Gael; Gisclon, Marguerite; Ruyer-Quil, Christian; Vila, Jean-Paul

    2017-11-01

Numerical simulations of thin films of Newtonian fluids down an inclined plane use reduced models for computational cost reasons. These models are usually derived by averaging the physical equations of fluid mechanics over the fluid depth, with an asymptotic method in the long-wave limit. Two-equation models are based on the mass conservation equation and either the momentum balance equation or the work-energy theorem. We show that there is no two-equation model that is both consistent and theoretically coherent, and that a third variable and a three-equation model are required to resolve all theoretical contradictions. The linear and nonlinear properties of two- and three-equation models are tested on various practical problems. We present a new consistent three-equation model with a simple mathematical structure which allows an easy and reliable numerical resolution. The numerical calculations agree fairly well with experimental measurements or with direct numerical resolutions for neutral stability curves, the speed of kinematic waves and of solitary waves, and the depth profiles of wavy films. The model can also predict the flow reversal at the first capillary trough ahead of the main wave hump.

  10. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

Partial least squares (PLS) path modeling is a component-based approach to structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.
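The ridge-type remedy can be sketched in a few lines: path coefficients are obtained from the (consistent) latent correlations with a ridge term added before inversion. The correlation values and the choice of lambda below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ridge_path_coefficients(R_xx, r_xy, lam=0.1):
    """Ridge-type estimate of path coefficients from the (consistent)
    correlation matrix R_xx among predictor latent variables and their
    correlations r_xy with the outcome. lam is the ridge constant; how
    to choose it (e.g. by cross-validation) is not shown here."""
    p = R_xx.shape[0]
    return np.linalg.solve(R_xx + lam * np.eye(p), r_xy)

# Nearly collinear latent predictors (illustrative values).
R_xx = np.array([[1.00, 0.98],
                 [0.98, 1.00]])
r_xy = np.array([0.50, 0.49])

print(ridge_path_coefficients(R_xx, r_xy, lam=0.0))  # plain (unregularized) solve
print(ridge_path_coefficients(R_xx, r_xy, lam=0.1))  # shrunken, more stable estimates
```

With lam = 0 the near-singular correlation matrix makes the solution highly sensitive to small perturbations in r_xy; the ridge term trades a little bias for that stability.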

  11. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

Full Text Available Partial least squares (PLS) path modeling is a component-based approach to structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.

  12. High-performance speech recognition using consistency modeling

    Science.gov (United States)

    Digalakis, Vassilios; Murveit, Hy; Monaco, Peter; Neumeyer, Leo; Sankar, Ananth

    1994-12-01

The goal of SRI's consistency modeling project is to improve the raw acoustic modeling component of SRI's DECIPHER speech recognition system and develop consistency modeling technology. Consistency modeling aims to reduce the number of improper independence assumptions used in traditional speech recognition algorithms so that the resulting speech recognition hypotheses are more self-consistent and, therefore, more accurate. At the initial stages of this effort, SRI focused on developing the appropriate base technologies for consistency modeling. We first developed the Progressive Search technology that allowed us to perform large-vocabulary continuous speech recognition (LVCSR) experiments. Since its conception and development at SRI, this technique has been adopted by most laboratories, including other ARPA contracting sites, doing research on LVCSR. Another goal of the consistency modeling project is to attack difficult modeling problems in which there is a mismatch between the training and testing phases. Such mismatches may include outlier speakers, different microphones and additive noise. We were able to either develop new, or transfer and evaluate existing, technologies that adapted our baseline genonic HMM recognizer to such difficult conditions.

  13. Modeling and Testing Legacy Data Consistency Requirements

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard

    2003-01-01

An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult. This paper addresses the need for new techniques that enable the modeling and consistency checking for legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...

  14. Simplified models for dark matter face their consistent completions

    Energy Technology Data Exchange (ETDEWEB)

    Gonçalves, Dorival; Machado, Pedro A. N.; No, Jose Miguel

    2017-03-01

    Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent ${SU(2)_{\\mathrm{L}} \\times U(1)_{\\mathrm{Y}}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at $13$ TeV LHC.

  15. Diagnosing a Strong-Fault Model by Conflict and Consistency.

    Science.gov (United States)

    Zhang, Wenfeng; Zhao, Qi; Zhao, Hongbo; Zhou, Gan; Feng, Wenquan

    2018-03-29

The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults; the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably-normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. First, the original strong-fault model is encoded by Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently offer the best candidates based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain – the heat control unit of a spacecraft – where the proposed methods are significantly better than best-first and conflict-directed A* search methods.
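The conflict side of the search can be illustrated on a toy weak-fault system (the paper's contribution is the harder strong-fault case handled with an LTMS over CNF; this sketch only shows how health assumptions plus observations yield minimal conflicts). The two-buffer circuit and its propagation rules are assumptions for illustration.

```python
from itertools import combinations

COMPONENTS = ("c1", "c2")

def propagate(healthy, obs):
    """Propagate values through a two-buffer circuit in -> c1 -> m -> c2 -> out,
    assuming the components in `healthy` behave normally.
    Returns None if the assumptions contradict the observations."""
    vals = dict(obs)
    for _ in COMPONENTS:  # iterate enough times to reach a fixpoint
        if "c1" in healthy and "in" in vals:
            if vals.setdefault("m", vals["in"]) != vals["in"]:
                return None
        if "c2" in healthy and "m" in vals:
            if vals.setdefault("out", vals["m"]) != vals["m"]:
                return None
    return vals

def minimal_conflicts(obs):
    """A health-assumption set is a conflict if it contradicts the
    observations; only subset-minimal conflicts are kept."""
    conflicts = []
    for r in range(1, len(COMPONENTS) + 1):
        for subset in combinations(COMPONENTS, r):
            if any(set(c) <= set(subset) for c in conflicts):
                continue  # already covered by a smaller conflict
            if propagate(set(subset), obs) is None:
                conflicts.append(subset)
    return conflicts

# Input 1 should come out as 1; observing 0 implicates c1 or c2.
print(minimal_conflicts({"in": 1, "out": 0}))  # [('c1', 'c2')]
```

Diagnoses are then the minimal hitting sets of the conflicts (here: c1 faulty, or c2 faulty); a strong-fault model would additionally check each component's specific fault behaviors.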

  16. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that consider each of a patient's past appointment statuses - a time-dependent component - as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits, along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop no-show models using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined based on minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed based on the number of available past appointments. Simulation was employed to test the effectiveness of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the curve of the receiver operating characteristic gradually improved as more appointment history was included, until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in the cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to actually implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.
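A minimal sketch of the time-dependent idea: use each of the k most recent appointment outcomes as a separate predictor in a logistic regression. The data are simulated under assumed coefficients, and a plain gradient-descent fit stands in for whatever estimation routine the authors used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated training data: each row holds a patient's k most recent
# appointment outcomes (1 = no-show), most recent first; the label is
# whether the next appointment was a no-show. The coefficients below
# are assumptions for the simulation, not estimates from the paper.
k, n = 3, 5000
history = rng.binomial(1, 0.2, size=(n, k))
true_w = np.array([-2.0, 1.5, 1.0, 0.5])  # intercept + one weight per lag
logit = true_w[0] + history @ true_w[1:]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Plain gradient-descent logistic regression (a library routine such as
# statsmodels or scikit-learn would normally be used instead).
X = np.column_stack([np.ones(n), history])
w = np.zeros(k + 1)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 1.0 * X.T @ (p - y) / n

print(np.round(w, 2))  # more recent no-shows should carry larger weights
```

Fitting one such model per available history length, from 1 lag up to the maximum, reproduces the paper's family of 26 models.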

  17. Diagnosing a Strong-Fault Model by Conflict and Consistency

    Directory of Open Access Journals (Sweden)

    Wenfeng Zhang

    2018-03-01

Full Text Available The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults; the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably-normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. First, the original strong-fault model is encoded by Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently offer the best candidates based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain – the heat control unit of a spacecraft – where the proposed methods are significantly better than best-first and conflict-directed A* search methods.

  18. Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems

    KAUST Repository

    Garg, Vikram V

    2014-09-27

Background: Models based on the Helmholtz 'slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint-based numerical tools for adaptive mesh refinement and parameter sensitivity analysis. Methods: We show that the direct formulation of the 'slip' model is adjoint inconsistent, and leads to an ill-posed adjoint problem. We propose a modified formulation of the coupled 'slip' model, which is shown to be well-posed, and therefore automatically adjoint-consistent. Results: Numerical examples are presented to illustrate the computation and use of the adjoint solution in two-dimensional microfluidics problems. Conclusions: An adjoint-consistent formulation for Helmholtz 'slip' models of electroosmotic flows has been proposed. This formulation provides adjoint solutions that can be reliably used for mesh refinement and sensitivity analysis.

  19. Detection and quantification of flow consistency in business process models.

    Science.gov (United States)

    Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel; Soffer, Pnina; Weber, Barbara

    2018-01-01

    Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations describing the performance and the computational capabilities of our metrics are reported as well.
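One plausible shape for a flow-consistency metric is the share of edges drawn in the model's dominant direction. The function, the toy layout, and the direction discretization below are illustrative assumptions, not the paper's exact definitions.

```python
def flow_consistency(edges, positions):
    """Share of edges drawn in the model's dominant direction.

    edges: (source, target) pairs; positions: node -> (x, y) layout.
    Each edge is discretized to one of four directions by its larger
    coordinate displacement (an illustrative simplification)."""
    def direction(a, b):
        (x1, y1), (x2, y2) = positions[a], positions[b]
        dx, dy = x2 - x1, y2 - y1
        if abs(dx) >= abs(dy):
            return "right" if dx >= 0 else "left"
        return "down" if dy >= 0 else "up"

    dirs = [direction(a, b) for a, b in edges]
    dominant = max(set(dirs), key=dirs.count)
    return dirs.count(dominant) / len(dirs)

# Hypothetical layout: mostly left-to-right with one back edge.
pos = {"start": (0, 0), "a": (1, 0), "b": (2, 0), "end": (3, 0)}
edges = [("start", "a"), ("a", "b"), ("b", "end"), ("b", "a")]
print(flow_consistency(edges, pos))  # 0.75
```

Different views of flow consistency (e.g. weighting edges by length, or measuring angular deviation rather than a four-way discretization) would give different metrics, which is precisely the design space the paper explores.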

  20. Toward a consistent model for glass dissolution

    International Nuclear Information System (INIS)

    Strachan, D.M.; McGrail, B.P.; Bourcier, W.L.

    1994-01-01

Understanding the process of glass dissolution in aqueous media has advanced significantly over the last 10 years through the efforts of many scientists around the world. Mathematical models describing the glass dissolution process have also advanced from simple empirical functions to structured models based on fundamental principles of physics, chemistry, and thermodynamics. Although borosilicate glass has been selected as the waste form for disposal of high-level wastes in at least 5 countries, there is no international consensus on the fundamental methodology for modeling glass dissolution that could be used in assessing the long-term performance of waste glasses in a geologic repository setting. Each repository program is developing its own model and supporting experimental data. In this paper, we critically evaluate a selected set of these structured models and show that a consistent methodology for modeling glass dissolution processes is available. We also propose a strategy for a future coordinated effort to obtain the model input parameters that are needed for long-term performance assessments of glass in a geologic repository. (author) 4 figs., tabs., 75 refs

  1. Consistency of the MLE under mixture models

    OpenAIRE

    Chen, Jiahua

    2016-01-01

    The large-sample properties of likelihood-based statistical inference under mixture models have received much attention from statisticians. Although the consistency of the nonparametric MLE is regarded as a standard conclusion, many researchers ignore the precise conditions required on the mixture model. An incorrect claim of consistency can lead to false conclusions even if the mixture model under investigation seems well behaved. Under a finite normal mixture model, for instance, the consis...

  2. A CVAR scenario for a standard monetary model using theory-consistent expectations

    DEFF Research Database (Denmark)

    Juselius, Katarina

    2017-01-01

    A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination and shows that all assumptions about the model's shock structure and steady...

  3. Modeling a Consistent Behavior of PLC-Sensors

    Directory of Open Access Journals (Sweden)

    E. V. Kuzmin

    2014-01-01

Full Text Available The article extends a cycle of papers dedicated to the programming and verification of PLC-programs by LTL-specification. This approach makes correctness analysis of PLC-programs available through the model checking method. The model checking method requires the construction of a finite model of a PLC program. For successful verification of required properties, it is important to take into consideration that not all combinations of input signals from the sensors can occur while the PLC works with a control object. This fact demands particular attention in the construction of the PLC-program model. In this paper we propose to describe the consistent behavior of sensors by three groups of LTL-formulas. They will affect the program model, approximating it to the actual behavior of the PLC program. The idea of the LTL-requirements is shown by an example. A PLC program is a description of reactions to input signals from sensors, switches and buttons. In constructing a PLC-program model, the approach to modeling a consistent behavior of PLC sensors makes it possible to focus on modeling precisely these reactions, without extending the program model by additional structures to realize realistic sensor behavior. The consistent behavior of sensors is taken into account only at the stage of checking the conformity of the program model to the required properties, i.e. a property satisfaction proof for the constructed model occurs under the condition that the model contains only those executions of the program that comply with the consistent behavior of the sensors.

  4. Self-consistent assessment of Englert-Schwinger model on atomic properties

    Science.gov (United States)

    Lehtomäki, Jouko; Lopez-Acevedo, Olga

    2017-12-01

Our manuscript investigates a self-consistent solution of the statistical atom model proposed by Berthold-Georg Englert and Julian Schwinger (the ES model) and benchmarks it against atomic Kohn-Sham and two orbital-free models of the Thomas-Fermi-Dirac (TFD)-λvW family. Results show that the ES model generally offers the same accuracy as the well-known TFD-1/5vW model; however, the ES model corrects the failure of the Pauli potential in the near-nucleus region. We also point to the inability to describe low-Z atoms as the foremost concern in improving the present model.

  5. Detection and quantification of flow consistency in business process models

    DEFF Research Database (Denmark)

    Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel

    2017-01-01

Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency.

  6. A Time consistent model for monetary value of man-sievert

    International Nuclear Information System (INIS)

    Na, S.H.; Kim, Sun G.

    2008-01-01

    Full text: Performing a cost-benefit analysis to establish optimum levels of radiation protection under the ALARA principle, we introduce a discrete stepwise model to evaluate the man-sievert monetary value for Korea. The model formula, which is unique and country-specific, is composed of GDP, the nominal risk coefficient for cancer and hereditary effects, the aversion factor against radiation exposure, and the average life expectancy. Unlike previous research on alpha-value assessment, we show different alpha values optimized with respect to various ranges of individual dose, which is more realistic and applicable in the radiation protection area. Employing the constant (real) term of GDP, we show the real values of the man-sievert by year, which remain consistent in time-series comparison even under price-level fluctuation. The GDP deflators of an economy have to be applied to measure one's own consistent value of radiation protection by year. In addition, we recommend that the concept of purchasing power parity be adopted if an international comparison of alpha values in real terms is needed. Finally, we explain how this stepwise model can be generalized simply to other countries without normalizing any country-specific factors. (author)
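The record names the ingredients of the formula but not its coefficients. A minimal sketch of a stepwise, dose-band-dependent alpha value follows; all numbers are illustrative (the 0.057 per sievert figure echoes the ICRP nominal risk coefficient, the GDP and aversion values are invented):

```python
# Hypothetical sketch of a stepwise (dose-band-dependent) alpha value:
# alpha = GDP_per_capita * nominal_risk_coefficient * aversion(dose_band).
# The actual Korean model's structure and coefficients are not given here.
def alpha_value(gdp_per_capita, risk_coeff, aversion):
    """Monetary value of one man-sievert for a given individual dose band."""
    return gdp_per_capita * risk_coeff * aversion

# The aversion factor grows with the individual dose band, so alpha is a
# step function of dose rather than a single constant value.
bands = [(0.1, 1.0), (1.0, 1.5), (10.0, 2.0)]  # (dose in mSv, aversion) - illustrative
alphas = [alpha_value(30000.0, 0.057, aversion) for _, aversion in bands]
assert alphas == sorted(alphas)  # higher dose bands get a higher alpha value
```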

  7. Financial model calibration using consistency hints.

    Science.gov (United States)

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
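The paper's specific hint error functions are not reproduced in the record; the core idea of augmenting a curve-fitting error with a Kullback-Leibler penalty can be sketched as follows (weights and distributions are illustrative assumptions):

```python
import math

# Sketch of the "consistency hint" idea from the abstract: augment a
# curve-fitting error with a Kullback-Leibler penalty that pulls the fitted
# parameters toward distributional consistency. The paper's actual hint
# error functions and EM-type optimizer are not reproduced here.
def kl_divergence(p, q):
    """KL distance between two discrete distributions p and q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def augmented_error(fit_error, p_model, p_consistent, weight=1.0):
    """Total calibration error = fit error + weighted consistency hint error."""
    return fit_error + weight * kl_divergence(p_model, p_consistent)

p = [0.5, 0.5]
q = [0.5, 0.5]
assert kl_divergence(p, q) == 0.0          # identical distributions: no penalty
assert augmented_error(0.3, p, q) == 0.3   # hint error vanishes at consistency
```

In the paper the hint weights are balanced via canonical errors; here a single fixed `weight` stands in for that mechanism.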

  8. Self-consistent nonlinearly polarizable shell-model dynamics for ferroelectric materials

    International Nuclear Information System (INIS)

    Mkam Tchouobiap, S.E.; Kofane, T.C.; Ngabireng, C.M.

    2002-11-01

    We investigate the dynamical properties of the polarizable shell model with a symmetric double Morse-type electron-ion interaction in one ionic species. A variational calculation based on the Self-Consistent Einstein Model (SCEM) shows that a theoretical ferroelectric (FE) transition temperature can be derived which demonstrates the presence of a first-order phase transition for the potassium selenate (K2SeO4) crystal around Tc ≈ 91.5 K. Comparison of the model calculation with the experimental critical temperature yields satisfactory agreement. (author)

  9. Consistently violating the non-Gaussian consistency relation

    International Nuclear Information System (INIS)

    Mooij, Sander; Palma, Gonzalo A.

    2015-01-01

    Non-attractor models of inflation are characterized by the super-horizon evolution of curvature perturbations, introducing a violation of the non-Gaussian consistency relation between the bispectrum's squeezed limit and the power spectrum's spectral index. In this work we show that the bispectrum's squeezed limit of non-attractor models continues to respect a relation dictated by the evolution of the background. We show how to derive this relation using only symmetry arguments, without ever needing to solve the equations of motion for the perturbations
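For reference, the standard single-field (attractor) consistency relation that non-attractor models violate ties the squeezed-limit bispectrum to the spectral tilt; in local-f_NL form:

```latex
% Maldacena's single-field consistency relation: in the squeezed limit the
% bispectrum of curvature perturbations is fixed by the power spectrum's tilt.
\lim_{k_3 \to 0} \frac{B_\zeta(k_1,k_2,k_3)}{P_\zeta(k_1)\,P_\zeta(k_3)} = (1 - n_s),
\qquad
f_{NL}^{\mathrm{sq}} = \frac{5}{12}\,(1 - n_s)
```

The work summarized above shows that in non-attractor models the squeezed limit instead obeys a different, background-dictated relation derivable from symmetry arguments alone.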

  10. Self-consistent model of confinement

    International Nuclear Information System (INIS)

    Swift, A.R.

    1988-01-01

    A model of the large-spatial-distance, zero three-momentum limit of QCD is developed from the hypothesis that there is an infrared singularity. Single quarks and gluons do not propagate because they have infinite energy after renormalization. The Hamiltonian formulation of the path integral is used to quantize QCD with physical, nonpropagating fields. Perturbation theory in the infrared limit is simplified by the absence of self-energy insertions and by the suppression of large classes of diagrams due to vanishing propagators. Remaining terms in the perturbation series are resummed to produce a set of nonlinear, renormalizable integral equations which fix both the confining interaction and the physical propagators. Solutions demonstrate the self-consistency of the concepts of an infrared singularity and nonpropagating fields. The Wilson loop is calculated to provide a general proof of confinement. Bethe-Salpeter equations for quark-antiquark pairs and for two gluons have finite-energy solutions in the color-singlet channel. The choice of gauge is addressed in detail. Large classes of corrections to the model are discussed and shown to support self-consistency

  11. Full self-consistency versus quasiparticle self-consistency in diagrammatic approaches: exactly solvable two-site Hubbard model.

    Science.gov (United States)

    Kutepov, A L

    2015-08-12

    Self-consistent solutions of Hedin's equations (HE) for the two-site Hubbard model (HM) have been studied. They have been found for three-point vertices of increasing complexity (Γ = 1 (GW approximation), Γ1 from first-order perturbation theory, and the exact vertex Γ(E)). Comparison is made between the cases when an additional quasiparticle (QP) approximation for Green's functions is applied during the self-consistent iterative solving of HE and when the QP approximation is not applied. The results obtained with the exact vertex are directly related to the open question of which approximation is more advantageous for future implementations: GW + DMFT or QPGW + DMFT. It is shown that in a regime of strong correlations only the originally proposed GW + DMFT scheme is able to provide reliable results. Vertex corrections based on perturbation theory (PT) systematically improve the GW results when full self-consistency is applied. The application of QP self-consistency combined with PT vertex corrections shows problems similar to those of the exact vertex combined with QP self-consistency. An analysis of Ward identity violation is performed for all approximations studied in this work, and its relation to the general accuracy of the schemes used is provided.

  12. A self-consistent spin-diffusion model for micromagnetics

    KAUST Repository

    Abert, Claas; Ruggeri, Michele; Bruckner, Florian; Vogler, Christoph; Manchon, Aurelien; Praetorius, Dirk; Suess, Dieter

    2016-01-01

    We propose a three-dimensional micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. In contrast to previous methods, we solve for the magnetization dynamics and the electric potential in a self-consistent fashion. This treatment allows for an accurate description of magnetization dependent resistance changes. Moreover, the presented algorithm describes both spin accumulation due to smooth magnetization transitions and due to material interfaces as in multilayer structures. The model and its finite-element implementation are validated by current driven motion of a magnetic vortex structure. In a second experiment, the resistivity of a magnetic multilayer structure in dependence of the tilting angle of the magnetization in the different layers is investigated. Both examples show good agreement with reference simulations and experiments respectively.

  14. Consistent model reduction of polymer chains in solution in dissipative particle dynamics: Model description

    KAUST Repository

    Moreno Chaparro, Nicolas

    2015-06-30

    We introduce a framework for model reduction of polymer chain models for dissipative particle dynamics (DPD) simulations, where the properties governing the phase equilibria, such as the characteristic size of the chain, compressibility, density, and temperature, are preserved. The proposed methodology reduces the number of degrees of freedom required in traditional DPD representations to model equilibrium properties of systems with complex molecules (e.g., linear polymers). Based on geometrical considerations we explicitly account for the correlation between beads in fine-grained DPD models and consistently represent the effect of these correlations in a reduced model, in a practical and simple fashion via power laws and the consistent scaling of the simulation parameters. In order to satisfy the geometrical constraints in the reduced model we introduce bond-angle potentials that account for the changes in the chain free energy after the model reduction. Following this coarse-graining process we represent high-molecular-weight DPD chains (i.e., ≥200 beads per chain) with a significant reduction in the number of particles required (i.e., ≥20 times fewer than in the original system). We show that our methodology has potential applications in modeling systems of high-molecular-weight molecules at large scales, such as diblock copolymers and DNA.

  15. Consistent spectroscopy for an extended gauge model

    International Nuclear Information System (INIS)

    Oliveira Neto, G. de.

    1990-11-01

    The consistent spectroscopy was obtained with a Lagrangian constructed from vector fields with an extended U(1) group symmetry. By consistent spectroscopy is understood the determination of the quantum physical properties described by the model in a manner independent of the possible parametrizations adopted in their description. (L.C.J.A.)

  16. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Kokholm, Thomas

    We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across...

  18. Consistent Stochastic Modelling of Meteocean Design Parameters

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Sterndorff, M. J.

    2000-01-01

    Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...

  19. Consistency and Reconciliation Model In Regional Development Planning

    Directory of Open Access Journals (Sweden)

    Dina Suryawati

    2016-10-01

    Full Text Available The aim of this study was to identify the problems and determine a conceptual model of regional development planning. Regional development planning is a systemic, complex and unstructured process. Therefore, this study used soft systems methodology to outline unstructured issues with a structured approach. The conceptual models constructed in this study are a model of consistency and a model of reconciliation. Regional development planning is a process that must be well integrated with central planning and inter-regional planning documents. Integration and consistency of regional planning documents are very important in order to achieve the development goals that have been set. On the other hand, the process of development planning in a region involves both a technocratic (top-down) and a participatory (bottom-up) system. The two must be balanced, and must not overlap or dominate each other. Keywords: regional, development, planning, consistency, reconciliation

  20. Consistent biokinetic models for the actinide elements

    International Nuclear Information System (INIS)

    Leggett, R.W.

    2001-01-01

    The biokinetic models for Th, Np, Pu, Am and Cm currently recommended by the International Commission on Radiological Protection (ICRP) were developed within a generic framework that depicts gradual burial of skeletal activity in bone volume, depicts recycling of activity released to blood and links excretion to retention and translocation of activity. For other actinide elements such as Ac, Pa, Bk, Cf and Es, the ICRP still uses simplistic retention models that assign all skeletal activity to bone surface and depicts one-directional flow of activity from blood to long-term depositories to excreta. This mixture of updated and older models in ICRP documents has led to inconsistencies in dose estimates and interpretation of bioassay for radionuclides with reasonably similar biokinetics. This paper proposes new biokinetic models for Ac, Pa, Bk, Cf and Es that are consistent with the updated models for Th, Np, Pu, Am and Cm. The proposed models are developed within the ICRP's generic model framework for bone-surface-seeking radionuclides, and an effort has been made to develop parameter values that are consistent with results of comparative biokinetic data on the different actinide elements. (author)

  1. Self-consistent collisional-radiative model for hydrogen atoms: Atom–atom interaction and radiation transport

    International Nuclear Information System (INIS)

    Colonna, G.; Pietanza, L.D.; D’Ammando, G.

    2012-01-01

    Graphical abstract: Self-consistent coupling between radiation, state-to-state kinetics, electron kinetics and fluid dynamics. Highlights: A CR model of a shock wave in hydrogen plasma is presented. All equations are coupled self-consistently. Non-equilibrium electron and level distributions are obtained. The results show non-local effects and non-equilibrium radiation. Abstract: A collisional-radiative model for the hydrogen atom, coupled self-consistently with the Boltzmann equation for free electrons, has been applied to model a shock tube. The kinetic model has been completed by considering atom-atom collisions and the vibrational kinetics of the ground state of hydrogen molecules. The atomic level kinetics has also been coupled with a radiative transport equation to determine the effective absorption and emission coefficients and non-local energy transfer.

  2. Development of a Consistent and Reproducible Porcine Scald Burn Model

    Science.gov (United States)

    Kempf, Margit; Kimble, Roy; Cuttle, Leila

    2016-01-01

    There are very few porcine burn models that replicate scald injuries similar to those encountered by children. We have developed a robust porcine burn model capable of creating reproducible scald burns for a wide range of burn conditions. The study was conducted with juvenile Large White pigs, creating replicates of burn combinations: 50°C for 1, 2, 5 and 10 minutes and 60°C, 70°C, 80°C and 90°C for 5 seconds. Visual wound examination, biopsies and Laser Doppler Imaging were performed at 1 and 24 hours and at 3 and 7 days post-burn. A consistent water temperature was maintained within the scald device for long durations (49.8 ± 0.1°C when set at 50°C). The macroscopic and histologic appearance was consistent between replicates of burn conditions. For 50°C water, 10-minute burns showed significantly deeper tissue injury than all shorter durations at 24 hours post-burn (p ≤ 0.0001), with damage seen to increase until day 3 post-burn. For 5-second burns, by day 7 post-burn the 80°C and 90°C scalds had damage detected significantly deeper in the tissue than the 70°C scalds (p ≤ 0.001). A reliable and safe model of porcine scald burn injury has been successfully developed. The novel apparatus with continually refreshed water improves consistency of scald creation for long exposure times. This model allows the pathophysiology of scald burn wound creation and progression to be examined. PMID:27612153

  3. A non-parametric consistency test of the ΛCDM model with Planck CMB data

    Energy Technology Data Exchange (ETDEWEB)

    Aghamousa, Amir; Shafieloo, Arman [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of); Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr [School of Physics, The University of New South Wales, Sydney NSW 2052 (Australia)

    2017-09-01

    Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP-reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best-fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.
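A minimal numpy-only sketch of the underlying idea (kernel choice, noise level and data are illustrative assumptions, not the Planck analysis): fit a GP to the residuals of the data with respect to the model's best fit and look for structure. Residuals fully consistent with the model reconstruct to a flat function:

```python
import numpy as np

# Sketch of GP-regression as a consistency test: regress the residuals
# (data minus best-fit model prediction) and inspect the reconstruction
# for structure. Kernel and noise level here are illustrative choices.
def rbf_kernel(x1, x2, length=1.0, amp=1.0):
    d = x1[:, None] - x2[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=0.1):
    """Posterior mean of a zero-mean GP with an RBF kernel."""
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    return Ks @ np.linalg.solve(K, y_train)

x = np.linspace(0.0, 10.0, 50)
residuals = np.zeros_like(x)       # data perfectly consistent with the model
mean = gp_posterior_mean(x, residuals, x)
assert np.allclose(mean, 0.0)      # no structure in residuals -> flat reconstruction
```

A genuine inconsistency would instead show up as a reconstructed residual function significantly different from zero relative to its posterior uncertainty.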

  4. Self-consistent field model for strong electrostatic correlations and inhomogeneous dielectric media.

    Science.gov (United States)

    Ma, Manman; Xu, Zhenli

    2014-12-28

    Electrostatic correlations and variable permittivity of electrolytes are essential for exploring many chemical and physical properties of interfaces in aqueous solutions. We propose a continuum electrostatic model for the treatment of these effects in the framework of the self-consistent field theory. The model incorporates a space- or field-dependent dielectric permittivity and an excluded ion-size effect for the correlation energy. This results in a self-energy modified Poisson-Nernst-Planck or Poisson-Boltzmann equation together with state equations for the self energy and the dielectric function. We show that the ionic size is of significant importance in predicting a finite self energy for an ion in an inhomogeneous medium. Asymptotic approximation is proposed for the solution of a generalized Debye-Hückel equation, which has been shown to capture the ionic correlation and dielectric self energy. Through simulating ionic distribution surrounding a macroion, the modified self-consistent field model is shown to agree with particle-based Monte Carlo simulations. Numerical results for symmetric and asymmetric electrolytes demonstrate that the model is able to predict the charge inversion at high correlation regime in the presence of multivalent interfacial ions which is beyond the mean-field theory and also show strong effect to double layer structure due to the space- or field-dependent dielectric permittivity.

  6. Self-consistency in the phonon space of the particle-phonon coupling model

    Science.gov (United States)

    Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.

    2018-04-01

    In the paper the nonlinear generalization of the time blocking approximation (TBA) is presented. The TBA is one of the versions of the extended random-phase approximation (RPA) developed within the Green-function method and the particle-phonon coupling model. In the generalized version of the TBA the self-consistency principle is extended onto the phonon space of the model. The numerical examples show that this nonlinear version of the TBA leads to the convergence of results with respect to enlarging the phonon space of the model.

  7. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Cont, Rama; Kokholm, Thomas

    We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently. Our model reproduces various empirically observed properties of variance swap dynamics and allows for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for options on variance swaps as well as efficient numerical methods for pricing of European options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options...

  8. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
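A conservative, monotone (Godunov-type) update is the kind of reliable numerical method the abstract refers to. The sketch below handles only the hindered-settling convection term with an illustrative Vesilind-type flux and a closed column; the flux parameters, grid and the omission of compression, dispersion and the feed source are all simplifying assumptions:

```python
import numpy as np

# Sketch (not the paper's implementation) of a conservative Godunov update
# for the hindered-settling part of a 1-D settler model,
#   dC/dt + d f(C)/dz = 0,  with f(C) = v0*C*exp(-r*C)  (Vesilind-type flux),
# on a closed column (zero flux through top and bottom).
v0, r = 1.0, 0.5

def f(C):
    return v0 * C * np.exp(-r * C)

def godunov_flux(cl, cr, n=64):
    """Godunov numerical flux for a (possibly non-monotone) flux function."""
    s = np.linspace(min(cl, cr), max(cl, cr), n)
    return f(s).min() if cl <= cr else f(s).max()

def step(C, dz, dt):
    interior = [godunov_flux(C[i], C[i + 1]) for i in range(len(C) - 1)]
    F = np.concatenate(([0.0], interior, [0.0]))  # closed boundaries
    return C - dt / dz * (F[1:] - F[:-1])

C = np.where(np.arange(20) < 10, 1.0, 3.0)  # clearer water above thicker sludge
mass0 = C.sum()
for _ in range(50):
    C = step(C, dz=0.1, dt=0.02)  # dt/dz = 0.2 respects the CFL condition here
assert abs(C.sum() - mass0) < 1e-9  # conservative scheme preserves total mass
assert (C >= 0).all()               # monotone scheme keeps concentrations nonnegative
```

The full consistent methodology additionally requires careful treatment of the discontinuous flux at the feed level and of the degenerate compression term, which this sketch deliberately omits.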

  9. Alfven-wave particle interaction in finite-dimensional self-consistent field model

    International Nuclear Information System (INIS)

    Padhye, N.; Horton, W.

    1998-01-01

    A low-dimensional Hamiltonian model is derived for the acceleration of ions in finite-amplitude Alfvén waves in a finite-pressure plasma sheet. The reduced low-dimensional wave-particle Hamiltonian is useful for describing the reaction of the accelerated ions on the wave amplitudes and phases through the self-consistent fields within the envelope approximation. As an example, the authors show, for a single Alfvén wave in the central plasma sheet of the Earth's geotail (modeled by the linear pinch geometry called the Harris sheet), the time variation of the wave amplitude during the acceleration of fast protons

  10. Self-consistent one-gluon exchange in soliton bag models

    International Nuclear Information System (INIS)

    Dodd, L.R.; Adelaide Univ.; Williams, A.G.

    1988-01-01

    The treatment of soliton bag models as two-point boundary value problems is extended to include self-consistent one-gluon exchange interactions. The colour-magnetic contribution to the nucleon-delta mass splitting is calculated self-consistently in the mean-field, one-gluon-exchange approximation for the Friedberg-Lee and Nielsen-Patkos models. Small glueball mass parameters (m_GB ≈ 500 MeV) are favoured. Comparisons with previous calculations are made. (orig.)

  11. Quantum self-consistency of AdS×Σ brane models

    International Nuclear Information System (INIS)

    Flachi, Antonino; Pujolas, Oriol

    2003-01-01

    Continuing our previous work, we consider a class of higher-dimensional brane models with the topology of AdS_{D1+1} × Σ, where Σ is a one-parameter compact manifold and two branes of codimension one are located at the orbifold fixed points. We consider a setup where such a solution arises from Einstein-Yang-Mills theory and evaluate the one-loop effective potential induced by gauge fields and by a generic bulk scalar field. We show that this type of brane model resolves the gauge hierarchy between the Planck and electroweak scales through redshift effects due to the warp factor a = e^{-πkr}. The value of a is then fixed by minimizing the effective potential. We find that, as in the Randall-Sundrum case, the gauge field contribution to the effective potential stabilizes the hierarchy without fine-tuning as long as the Laplacian Δ_Σ on Σ has a zero eigenvalue. Scalar fields can stabilize the hierarchy depending on the mass and the nonminimal coupling. We also address the quantum self-consistency of the solution, showing that the classical brane solution is not spoiled by quantum effects

  12. Development of a Model for Dynamic Recrystallization Consistent with the Second Derivative Criterion

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2017-11-01

    Full Text Available Dynamic recrystallization (DRX processes are widely used in industrial hot working operations, not only to keep the forming forces low but also to control the microstructure and final properties of the workpiece. According to the second derivative criterion (SDC by Poliak and Jonas, the onset of DRX can be detected from an inflection point in the strain-hardening rate as a function of flow stress. Various models are available that can predict the evolution of flow stress from incipient plastic flow up to steady-state deformation in the presence of DRX. Some of these models have been implemented into finite element codes and are widely used for the design of metal forming processes, but their consistency with the SDC has not been investigated. This work identifies three sources of inconsistencies that models for DRX may exhibit. For a consistent modeling of the DRX kinetics, a new strain-hardening model for the hardening stages III to IV is proposed and combined with consistent recrystallization kinetics. The model is devised in the Kocks-Mecking space based on characteristic transition in the strain-hardening rate. A linear variation of the transition and inflection points is observed for alloy 800H at all tested temperatures and strain rates. The comparison of experimental and model results shows that the model is able to follow the course of the strain-hardening rate very precisely, such that highly accurate flow stress predictions are obtained.
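The SDC itself is easy to state computationally: compute the strain-hardening rate θ = dσ/dε and locate the inflection of θ as a function of flow stress σ via a sign change of d²θ/dσ². A sketch with synthetic, purely illustrative data (not the alloy 800H measurements):

```python
import numpy as np

# Sketch of the second derivative criterion (SDC): the onset of DRX shows up
# as an inflection point in the strain-hardening rate theta = d(sigma)/d(eps)
# plotted against flow stress sigma, i.e. where d^2(theta)/d(sigma)^2 = 0.
def sdc_inflection(sigma, theta):
    """Return the flow stress at the first inflection of theta(sigma), or None."""
    d2 = np.gradient(np.gradient(theta, sigma), sigma)
    sign_change = np.where(np.diff(np.sign(d2)) != 0)[0]
    return sigma[sign_change[0]] if sign_change.size else None

# Synthetic hardening curve with a built-in inflection at sigma = 100
sigma = np.linspace(50.0, 150.0, 201)
theta = 500.0 - 2.0 * sigma + 0.001 * (sigma - 100.0) ** 3
assert abs(sdc_inflection(sigma, theta) - 100.0) < 1.0
```

In practice the experimental θ(σ) data must be smoothed before differentiating twice; the polynomial fitting used for that step is omitted here.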

  14. A new k-epsilon model consistent with Monin-Obukhov similarity theory

    DEFF Research Database (Denmark)

    van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.

    2017-01-01

    A new k-ε model is introduced that is consistent with Monin-Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with...
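For reference, in the neutral limit the equilibrium surface-layer profiles that a MOST-consistent k-ε model must sustain at the inlet are the standard ones (κ the von Kármán constant, z0 the roughness length, Cμ the model constant); the stability-dependent generalization treated in the paper is not reproduced here:

```latex
% Neutral-limit MOST inlet profiles for a RANS k-epsilon model:
u(z) = \frac{u_*}{\kappa}\,\ln\frac{z}{z_0}, \qquad
k = \frac{u_*^2}{\sqrt{C_\mu}}, \qquad
\varepsilon(z) = \frac{u_*^3}{\kappa z}
```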

  15. Thermodynamically consistent model calibration in chemical kinetics

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2011-05-01

    Full Text Available Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function, as well as to estimate thermodynamically feasible values for the parameters of new...
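The thermodynamic constraints in question include cycle (Wegscheider) conditions: around any closed reaction cycle at detailed balance, the product of forward rate constants must equal the product of reverse rate constants. A minimal feasibility check (rate values illustrative, not from the EGF/ERK model):

```python
import math

# Sketch of the thermodynamic constraint underlying TCMC-style calibration:
# for a closed reaction cycle A -> B -> C -> A, the Wegscheider condition
# requires prod(k_forward) == prod(k_reverse).
def wegscheider_consistent(kf, kr, rel_tol=1e-9):
    """kf, kr: forward/reverse rate constants along a closed reaction cycle."""
    return math.isclose(math.prod(kf), math.prod(kr), rel_tol=rel_tol)

assert wegscheider_consistent([2.0, 3.0, 0.25], [1.0, 1.5, 1.0])       # 1.5 == 1.5
assert not wegscheider_consistent([2.0, 3.0, 1.0], [1.0, 1.0, 1.0])    # 6.0 != 1.0
```

In TCMC such conditions enter a constrained optimization over all cycles of the network; this check only verifies a single given cycle.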

  16. Consistent ranking of volatility models

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2006-01-01

    We show that the empirical ranking of volatility models can be inconsistent for the true ranking if the evaluation is based on a proxy for the population measure of volatility. For example, the substitution of a squared return for the conditional variance in the evaluation of ARCH-type models can...... variance in out-of-sample evaluations rather than the squared return. We derive the theoretical results in a general framework that is not specific to the comparison of volatility models. Similar problems can arise in comparisons of forecasting models whenever the predicted variable is a latent variable....
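    The kind of ranking inversion the paper warns about can be reproduced in a few lines. The Monte Carlo below is purely illustrative (constant latent variance, an absolute-error loss, made-up forecasts): ranked against the true conditional variance the unbiased forecast wins, but ranked against the squared-return proxy a badly biased forecast "wins", because E|a − z²| is minimized near the median of z² (≈ 0.455), not its mean:

    ```python
    import numpy as np

    # Illustrative Monte Carlo, not from the paper: two variance "forecasts"
    # are ranked with an absolute-error loss, once against the latent
    # conditional variance and once against the squared-return proxy.
    rng = np.random.default_rng(0)
    h_true = 1.0                         # latent conditional variance (constant)
    z = rng.standard_normal(200_000)
    proxy = h_true * z**2                # squared return as a variance proxy

    forecast_good = 1.0                  # equals the true conditional variance
    forecast_bad = 0.5                   # badly biased forecast

    def mae(forecast, target):
        return np.mean(np.abs(forecast - target))

    # Against the true variance, the unbiased forecast wins ...
    assert mae(forecast_good, h_true) < mae(forecast_bad, h_true)
    # ... but against the noisy proxy the biased forecast "wins": the ranking flips.
    assert mae(forecast_bad, proxy) < mae(forecast_good, proxy)
    ```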

  17. A novel mouse model carrying a human cytoplasmic dynein mutation shows motor behavior deficits consistent with Charcot-Marie-Tooth type 2O disease.

    Science.gov (United States)

    Sabblah, Thywill T; Nandini, Swaran; Ledray, Aaron P; Pasos, Julio; Calderon, Jami L Conley; Love, Rachal; King, Linda E; King, Stephen J

    2018-01-29

    Charcot-Marie-Tooth disease (CMT) is a peripheral neuromuscular disorder in which axonal degeneration causes progressive loss of motor and sensory nerve function. The loss of motor nerve function leads to distal muscle weakness and atrophy, resulting in gait problems and difficulties with walking, running, and balance. A mutation in the cytoplasmic dynein heavy chain (DHC) gene was discovered in 2011 to cause an autosomal dominant form of the disease designated Charcot-Marie-Tooth type 2O disease (CMT2O). The mutation is a single amino acid change of histidine into arginine at amino acid 306 (H306R) in DHC. In order to understand the onset and progression of CMT2, we generated a knock-in mouse carrying the corresponding CMT2O mutation (H304R/+). We examined H304R/+ mouse cohorts in a 12-month longitudinal study of grip strength, tail suspension, and rotarod assays. H304R/+ mice displayed distal muscle weakness and loss of motor coordination phenotypes consistent with those of individuals with CMT2. Analysis of the gastrocnemius of H304R/+ male mice showed prominent defects in neuromuscular junction (NMJ) morphology including reduced size, branching, and complexity. Based on these results, the H304R/+ mouse will be an important model for uncovering functions of dynein in complex organisms, especially related to CMT onset and progression.

  18. A self-consistent model for thermodynamics of multicomponent solid solutions

    International Nuclear Information System (INIS)

    Svoboda, J.; Fischer, F.D.

    2016-01-01

    The self-consistent concept recently published in this journal (108, 27–30, 2015) is extended from a binary to a multicomponent system. This is possible by exploiting the trapping concept as a basis for including the interaction of atoms in terms of pairs (e.g. A–A, B–B, C–C…) and couples (e.g. A–B, B–C, …) in a multicomponent system with A as solvent and B, C, … as dilute solutes. The model results in a formulation of the Gibbs energy, which can be minimized. Examples show that couple and pair formation may influence the equilibrium Gibbs energy markedly.

  19. Consistent Conformal Extensions of the Standard Model arXiv

    CERN Document Server

    Loebbert, Florian; Plefka, Jan

    The question of whether classically conformal modifications of the standard model are consistent with experimental observations has recently been subject to renewed interest. The method of Gildener and Weinberg provides a natural framework for the study of the effective potential of the resulting multi-scalar standard model extensions. This approach relies on the assumption of the ordinary loop hierarchy $\lambda_\text{s} \sim g^2_\text{g}$ of scalar and gauge couplings. On the other hand, Andreassen, Frost and Schwartz recently argued that in the (single-scalar) standard model, gauge invariant results require the consistent scaling $\lambda_\text{s} \sim g^4_\text{g}$. In the present paper we contrast these two hierarchy assumptions and illustrate the differences in the phenomenological predictions of minimal conformal extensions of the standard model.

  20. Modeling self-consistent multi-class dynamic traffic flow

    Science.gov (United States)

    Cho, Hsun-Jung; Lo, Shih-Ching

    2002-09-01

    In this study, we present a systematic self-consistent multiclass multilane traffic model derived from the vehicular Boltzmann equation and the traffic dispersion model. The multilane domain is considered as a two-dimensional space and the interaction among vehicles in the domain is described by a dispersion model. The reason we consider a multilane domain as a two-dimensional space is that the driving behavior of road users may not be restricted by lanes, especially for motorcyclists. The dispersion model, which is a nonlinear Poisson equation, is derived from car-following theory and the equilibrium assumption. Under the concept that all kinds of users share the finite road section, the density is distributed over the road by the dispersion model. In addition, the dynamic evolution of the traffic flow is determined by the systematic gas-kinetic model derived from the Boltzmann equation. Multiplying the Boltzmann equation by the zeroth-, first- and second-order moment functions, integrating both sides of the equation and using the chain rule, we can derive the continuity, motion and variance equations, respectively. However, the second-order moment function, the square of the individual velocity, which was employed in previous research, does not have a clear physical meaning in traffic flow. Although the second-order expansion results in the velocity variance equation, additional terms may be generated. The velocity variance equation we propose is derived by multiplying the Boltzmann equation by the individual velocity variance. It modifies the previous model and yields a new gas-kinetic traffic flow model. By coupling the gas-kinetic model and the dispersion model, a self-consistent system is obtained.
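    The moment hierarchy described above can be written schematically as follows. This is the standard gas-kinetic form shown only for orientation; the paper's own equations add dispersion and interaction terms, and here ρ, V and Θ denote density, mean speed and velocity variance:

    ```latex
    % Schematic moment equations obtained by multiplying the Boltzmann equation
    % by 1, v, and (v - V)^2 and integrating over velocity; interaction and
    % relaxation terms are abbreviated as A and B, and flux terms are omitted.
    \begin{align}
      &\partial_t \rho + \partial_x(\rho V) = 0
        && \text{(zeroth moment: continuity)}\\
      &\partial_t V + V\,\partial_x V
        = -\frac{1}{\rho}\,\partial_x(\rho\,\Theta) + A
        && \text{(first moment: motion)}\\
      &\partial_t \Theta + V\,\partial_x \Theta
        = -2\,\Theta\,\partial_x V + B
        && \text{(second moment: velocity variance)}
    \end{align}
    ```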

  1. Possible world based consistency learning model for clustering and classifying uncertain data.

    Science.gov (United States)

    Liu, Han; Zhang, Xianchao; Zhang, Xiaotong

    2018-06-01

    The possible world model has been shown to be effective for handling various types of data uncertainty in uncertain data management. However, few uncertain data clustering and classification algorithms have been proposed based on possible worlds. Moreover, existing possible world based algorithms suffer from the following issues: (1) they deal with each possible world independently and ignore the consistency principle across different possible worlds; (2) they require an extra post-processing procedure to obtain the final result, so that their effectiveness relies heavily on the post-processing method and their efficiency is also limited. In this paper, we propose a novel possible world based consistency learning model for uncertain data, which can be extended both for clustering and classifying uncertain data. This model utilizes the consistency principle to learn a consensus affinity matrix for uncertain data, which can make full use of the information across different possible worlds and thereby improve the clustering and classification performance. Meanwhile, the model imposes a new rank constraint on the Laplacian matrix of the consensus affinity matrix, thereby ensuring that the number of connected components in the consensus affinity matrix is exactly equal to the number of classes. This also means that the clustering and classification results can be obtained directly, without any post-processing procedure. Furthermore, for the clustering and classification tasks, we derive efficient optimization methods to solve the proposed model. Experimental results on real benchmark datasets and real world uncertain datasets show that the proposed model outperforms the state-of-the-art uncertain data clustering and classification algorithms in effectiveness and performs competitively in efficiency. Copyright © 2018 Elsevier Ltd. All rights reserved.
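    A minimal numpy sketch of the consensus-plus-rank idea follows. It is illustrative only: the paper learns the consensus affinity matrix by optimization under a rank constraint, whereas here we simply average per-world affinities and threshold, then use the fact that the number of (near-)zero Laplacian eigenvalues equals the number of connected components, i.e. clusters:

    ```python
    import numpy as np

    # Toy sketch: affinity matrices from three sampled "possible worlds" of an
    # uncertain 6-point data set are averaged into a consensus affinity; the
    # zero eigenvalues of its Laplacian count the connected components.
    def block_affinity(edges, n=6):
        w = np.zeros((n, n))
        for i, j in edges:
            w[i, j] = w[j, i] = 1.0
        return w

    within = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
    worlds = [
        block_affinity(within),                 # world 1: clean two-cluster structure
        block_affinity(within + [(2, 3)]),      # world 2: one spurious cross edge
        block_affinity(within),                 # world 3: clean again
    ]
    consensus = np.mean(worlds, axis=0)
    consensus[consensus < 0.5] = 0.0            # drop edges unsupported across worlds

    laplacian = np.diag(consensus.sum(axis=1)) - consensus
    eigvals = np.linalg.eigvalsh(laplacian)
    n_components = int(np.sum(np.abs(eigvals) < 1e-8))  # zero eigenvalues <=> clusters
    ```

    Here the spurious cross edge appears in only one world, so the consensus suppresses it and the Laplacian spectrum directly reports two clusters, with no post-processing step.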

  2. Topologically Consistent Models for Efficient Big Geo-Spatio Data Distribution

    Science.gov (United States)

    Jahn, M. W.; Bradley, P. E.; Doori, M. Al; Breunig, M.

    2017-10-01

    Geo-spatio-temporal topology models are likely to become a key concept to check the consistency of 3D (spatial space) and 4D (spatial + temporal space) models for emerging GIS applications such as subsurface reservoir modelling or the simulation of energy and water supply of mega or smart cities. Furthermore, the data management for complex models consisting of big geo-spatial data is a challenge for GIS and geo-database research. General challenges, concepts, and techniques of big geo-spatial data management are presented. In this paper we introduce a sound mathematical approach for a topologically consistent geo-spatio-temporal model based on the concept of the incidence graph. We redesign DB4GeO, our service-based geo-spatio-temporal database architecture, on the way to the parallel management of massive geo-spatial data. Approaches for a new geo-spatio-temporal and object model of DB4GeO meeting the requirements of big geo-spatial data are discussed in detail. Finally, a conclusion and outlook on our future research are given on the way to support the processing of geo-analytics and -simulations in a parallel and distributed system environment.
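    As a toy illustration of an incidence-graph consistency check of the kind discussed above (the triangulation and the manifold criterion below are illustrative assumptions, not DB4GeO's actual data model):

    ```python
    # Minimal sketch: the incidence graph maps each edge to the faces it bounds.
    # In a topologically consistent 2-manifold surface, every edge is shared by
    # at most two triangles; edges with exactly one face lie on the boundary.
    from collections import defaultdict

    triangles = [(0, 1, 2), (0, 2, 3), (0, 1, 3)]   # toy triangulated surface patch

    edge_to_faces = defaultdict(list)               # incidence graph: edge -> faces
    for tri in triangles:
        a, b, c = tri
        for edge in [(a, b), (b, c), (a, c)]:
            edge_to_faces[tuple(sorted(edge))].append(tri)

    # Consistency check: no edge may bound more than two faces.
    is_manifold = all(len(faces) <= 2 for faces in edge_to_faces.values())
    boundary_edges = [e for e, faces in edge_to_faces.items() if len(faces) == 1]
    ```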

  3. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian model averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
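    The BIC-weighting step behind Bayesian model averaging can be sketched in a few lines. The example below is illustrative (a plain cross-sectional regression, not the paper's dynamic panel setup): two linear models are scored by BIC, the scores are turned into approximate posterior model weights, and the BMA point estimate of the slope is the weight-averaged estimate:

    ```python
    import numpy as np

    # Hedged sketch of BIC-based model averaging on made-up data.
    rng = np.random.default_rng(1)
    n = 200
    x = rng.standard_normal(n)
    y = 1.0 + 2.0 * x + rng.standard_normal(n)     # true model includes x

    def fit_bic(X, y):
        """OLS fit; returns coefficients and BIC = n*log(SSE/n) + k*log(n)."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        k = X.shape[1]
        return beta, len(y) * np.log(sse / len(y)) + k * np.log(len(y))

    X_full = np.column_stack([np.ones(n), x])      # intercept + slope
    X_null = np.ones((n, 1))                       # intercept only
    (beta_full, bic_full), (beta_null, bic_null) = fit_bic(X_full, y), fit_bic(X_null, y)

    # BIC -> approximate posterior model probabilities (equal model priors).
    b = np.array([bic_full, bic_null])
    w = np.exp(-0.5 * (b - b.min()))
    w /= w.sum()
    bma_slope = w[0] * beta_full[1] + w[1] * 0.0   # null model has slope 0
    ```

    With substantial model uncertainty the weights spread across specifications and the BMA estimator shrinks accordingly; here the signal is strong, so almost all weight lands on the full model.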

  4. Self-consistent mean-field models for nuclear structure

    International Nuclear Information System (INIS)

    Bender, Michael; Heenen, Paul-Henri; Reinhard, Paul-Gerhard

    2003-01-01

    The authors review the present status of self-consistent mean-field (SCMF) models for describing nuclear structure and low-energy dynamics. These models are presented as effective energy-density functionals. The three most widely used variants of SCMF's based on a Skyrme energy functional, a Gogny force, and a relativistic mean-field Lagrangian are considered side by side. The crucial role of the treatment of pairing correlations is pointed out in each case. The authors discuss other related nuclear structure models and present several extensions beyond the mean-field model which are currently used. Phenomenological adjustment of the model parameters is discussed in detail. The performance quality of the SCMF model is demonstrated for a broad range of typical applications

  5. Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models

    KAUST Repository

    Vignal, Philippe

    2016-01-01

    of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation. This model was put forward to model microstructure

  6. Aggregated wind power plant models consisting of IEC wind turbine models

    DEFF Research Database (Denmark)

    Altin, Müfit; Göksu, Ömer; Hansen, Anca Daniela

    2015-01-01

    The common practice regarding the modelling of large generation components has been to make use of models representing the performance of the individual components with a required level of accuracy and details. Owing to the rapid increase of wind power plants comprising large number of wind...... turbines, parameters and models to represent each individual wind turbine in detail makes it necessary to develop aggregated wind power plant models considering the simulation time for power system stability studies. In this paper, aggregated wind power plant models consisting of the IEC 61400-27 variable...... speed wind turbine models (type 3 and type 4) with a power plant controller is presented. The performance of the detailed benchmark wind power plant model and the aggregated model are compared by means of simulations for the specified test cases. Consequently, the results are summarized and discussed...

  7. Estimating long-term volatility parameters for market-consistent models

    African Journals Online (AJOL)

    Contemporary actuarial and accounting practices (APN 110 in the South African context) require the use of market-consistent models for the valuation of embedded investment derivatives. These models have to be calibrated with accurate and up-to-date market data. Arguably, the most important variable in the valuation of ...

  8. Self-consistent modelling of X-ray photoelectron spectra from air-exposed polycrystalline TiN thin films

    Energy Technology Data Exchange (ETDEWEB)

    Greczynski, G., E-mail: grzgr@ifm.liu.se; Hultman, L.

    2016-11-30

    Highlights: • We present the first self-consistent model of TiN core level spectra with cross-peak qualitative and quantitative agreement. • The model is tested for a series of TiN thin films oxidized to different extents by varying the venting temperature. • The conventional deconvolution process relies on reference binding energies that typically show a large spread, introducing ambiguity. • By imposing the requirement of quantitative cross-peak self-consistency, the reliability of the extracted chemical information is enhanced. • We propose that cross-peak self-consistency should be a prerequisite for reliable XPS peak modelling. - Abstract: We present the first self-consistent modelling of x-ray photoelectron spectroscopy (XPS) Ti 2p, N 1s, O 1s, and C 1s core level spectra with cross-peak quantitative agreement for a series of TiN thin films grown by dc magnetron sputtering and oxidized to different extents by varying the venting temperature T{sub v} of the vacuum chamber before removing the deposited samples. The film series so obtained constitutes a model case for XPS application studies, where a certain degree of atmosphere exposure during sample transfer to the XPS instrument is unavoidable. The challenge is to extract information about surface chemistry without invoking destructive pre-cleaning with noble gas ions. All TiN surfaces are thus analyzed in the as-received state by XPS using monochromatic Al Kα radiation (hν = 1486.6 eV). Details of line shapes and relative peak areas obtained from deconvolution of the reference Ti 2p and N 1s spectra representative of a native TiN surface serve as an input to model complex core level signals from air-exposed surfaces, where contributions from oxides and oxynitrides make the task very challenging considering the influence of the whole deposition process at hand. The essential part of the presented approach is that the deconvolution process is not only guided by the comparison to the reference binding energy values that often show

  9. Nonparametric test of consistency between cosmological models and multiband CMB measurements

    Energy Technology Data Exchange (ETDEWEB)

    Aghamousa, Amir [Asia Pacific Center for Theoretical Physics, Pohang, Gyeongbuk 790-784 (Korea, Republic of); Shafieloo, Arman, E-mail: amir@apctp.org, E-mail: shafieloo@kasi.re.kr [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of)

    2015-06-01

    We present a novel approach to test the consistency of cosmological models with multiband CMB data using a nonparametric approach. In our analysis we calibrate the REACT (Risk Estimation and Adaptation after Coordinate Transformation) confidence levels associated with distances in function space (confidence distances) based on Monte Carlo simulations in order to test the consistency of an assumed cosmological model with observation. To show the applicability of our algorithm, we confront Planck 2013 temperature data with the concordance model of cosmology, considering two different Planck spectra combinations. In order to have an accurate quantitative statistical measure to compare the data with the theoretical expectations, we calibrate REACT confidence distances and perform a bias control using many realizations of the data. Our results in this work using Planck 2013 temperature data put the best fit ΛCDM model at 95% (∼ 2σ) confidence distance from the center of the nonparametric confidence set, while repeating the analysis excluding the Planck 217 × 217 GHz spectrum data, the best fit ΛCDM model shifts to 70% (∼ 1σ) confidence distance. The most prominent features in the data deviating from the best fit ΛCDM model seem to be at low multipoles 18 < ℓ < 26 at greater than 2σ, ℓ ∼ 750 at ∼1 to 2σ and ℓ ∼ 1800 at greater than 2σ level. Excluding the 217 × 217 GHz spectrum, the feature at ℓ ∼ 1800 becomes substantially less significant, at ∼1 to 2σ confidence level. The results of our analysis based on the new approach we propose in this work are in agreement with analyses done using alternative methods.

  10. A semi-nonparametric mixture model for selecting functionally consistent proteins.

    Science.gov (United States)

    Yu, Lianbo; Doerge, Rw

    2010-09-28

    High-throughput technologies have led to a new era of proteomics. Although protein microarray experiments are becoming more commonplace, there are a variety of experimental and statistical issues that have yet to be addressed, and that will carry over to new high-throughput technologies unless they are investigated. One of the largest of these challenges is the selection of functionally consistent proteins. We present a novel semi-nonparametric mixture model for classifying proteins as consistent or inconsistent while controlling the false discovery rate and the false non-discovery rate. The performance of the proposed approach is compared to current methods via simulation under a variety of experimental conditions. We provide a statistical method for selecting functionally consistent proteins in the context of protein microarray experiments, but the proposed semi-nonparametric mixture model method can certainly be generalized to solve other mixture data problems. The main advantage of this approach is that it provides the posterior probability of consistency for each protein.
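    The posterior-probability idea can be sketched with a fully parametric two-component mixture (illustrative only; the paper's model is semi-nonparametric, and the densities, mixing weight and cutoff below are made-up assumptions): scores of "inconsistent" proteins follow one component, scores of "consistent" proteins another, and Bayes' rule gives each protein a posterior probability of consistency that can then be thresholded:

    ```python
    import numpy as np

    # Hypothetical mixture: inconsistent scores ~ N(0,1), consistent ~ N(3,1),
    # prior mixing weight pi1 for consistency.
    def normal_pdf(x, mu, sigma=1.0):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    def posterior_consistent(score, pi1=0.3, mu0=0.0, mu1=3.0):
        """Posterior probability that a protein with this score is consistent."""
        f0 = (1 - pi1) * normal_pdf(score, mu0)   # inconsistent component
        f1 = pi1 * normal_pdf(score, mu1)         # consistent component
        return f1 / (f0 + f1)

    scores = np.array([-0.5, 0.2, 1.5, 2.8, 4.0])
    post = posterior_consistent(scores)
    selected = scores[post > 0.9]     # proteins called "consistent" at this cutoff
    ```

    Averaging (1 − posterior) over the selected set estimates the false discovery rate of the call, which is how such posteriors connect to FDR control.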

  11. Consistent constraints on the Standard Model Effective Field Theory

    International Nuclear Information System (INIS)

    Berthier, Laure; Trott, Michael

    2016-01-01

    We develop the global constraint picture in the (linear) effective field theory generalisation of the Standard Model, incorporating data from detectors that operated at PEP, PETRA, TRISTAN, SpS, Tevatron, SLAC, LEPI and LEP II, as well as low energy precision data. We fit one hundred and three observables. We develop a theory error metric for this effective field theory, which is required when constraints on parameters at leading order in the power counting are to be pushed to the percent level, or beyond, unless the cut off scale is assumed to be large, Λ≳ 3 TeV. We more consistently incorporate theoretical errors in this work, avoiding this assumption, and as a direct consequence bounds on some leading parameters are relaxed. We show how an S,T analysis is modified by the theory errors we include as an illustrative example.

  12. Self-Consistent Model of Magnetospheric Electric Field, Ring Current, Plasmasphere, and Electromagnetic Ion Cyclotron Waves: Initial Results

    Science.gov (United States)

    Gamayunov, K. V.; Khazanov, G. V.; Liemohn, M. W.; Fok, M.-C.; Ridley, A. J.

    2009-01-01

    Further development of our self-consistent model of interacting ring current (RC) ions and electromagnetic ion cyclotron (EMIC) waves is presented. This model incorporates large scale magnetosphere-ionosphere coupling and treats self-consistently not only EMIC waves and RC ions, but also the magnetospheric electric field, RC, and plasmasphere. Initial simulations indicate that the region beyond geostationary orbit should be included in the simulation of the magnetosphere-ionosphere coupling. Additionally, a self-consistent description, based on first principles, of the ionospheric conductance is required. These initial simulations further show that in order to model the EMIC wave distribution and wave spectral properties accurately, the plasmasphere should also be simulated self-consistently, since its fine structure requires as much care as that of the RC. Finally, an effect of the finite time needed to reestablish a new potential pattern throughout the ionosphere and to communicate between the ionosphere and the equatorial magnetosphere cannot be ignored.

  13. Consistency checks in beam emission modeling for neutral beam injectors

    International Nuclear Information System (INIS)

    Punyapu, Bharathi; Vattipalle, Prahlad; Sharma, Sanjeev Kumar; Baruah, Ujjwal Kumar; Crowley, Brendan

    2015-01-01

    In positive neutral beam systems, beam parameters such as ion species fractions, power fractions and beam divergence are routinely measured using the Doppler shifted beam emission spectrum. The accuracy with which these parameters are estimated depends on the accuracy of the atomic modeling involved in the estimations. In this work, an effective procedure to check the consistency of the beam emission modeling in neutral beam injectors is proposed. As a first consistency check, at constant beam voltage and current, the intensity of the beam emission spectrum is measured while varying the pressure in the neutralizer. Then, the scaling with pressure of the measured intensities of the un-shifted (target) and Doppler shifted (projectile) components of the beam emission spectrum is studied. If the un-shifted component scales with pressure, then the intensity of this component is used as a second consistency check on the beam emission modeling. As a further check, the modeled beam fractions and the emission cross sections of projectile and target are used to predict the intensity of the un-shifted component, which is then compared with the measured target intensity. The agreement between the predicted and measured target intensities provides the degree of discrepancy in the beam emission modeling. In order to test this methodology, a systematic analysis of Doppler shift spectroscopy data obtained on the JET neutral beam test stand was carried out.

  14. Showing that the race model inequality is not violated

    DEFF Research Database (Denmark)

    Gondan, Matthias; Riehl, Verena; Blurton, Steven Paul

    2012-01-01

    important being race models and coactivation models. Redundancy gains consistent with the race model have an upper limit, however, which is given by the well-known race model inequality (Miller, 1982). A number of statistical tests have been proposed for testing the race model inequality in single...... participants and groups of participants. All of these tests use the race model as the null hypothesis, and rejection of the null hypothesis is considered evidence in favor of coactivation. We introduce a statistical test in which the race model prediction is the alternative hypothesis. This test controls...
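    The quantity at the heart of both the classical tests and the one described above is Miller's (1982) pointwise bound: the redundant-signal RT distribution F12(t) must not exceed F1(t) + F2(t) at any t. A short sketch on made-up reaction times (the paper's statistical machinery is far more involved; this only shows what is being tested):

    ```python
    import numpy as np

    # Illustrative check of the race model inequality with empirical CDFs.
    def ecdf(sample, t):
        sample = np.asarray(sample)
        return np.array([np.mean(sample <= ti) for ti in np.atleast_1d(t)])

    rt_single1 = [320, 340, 360, 380, 400]     # unimodal channel 1 RTs (ms), made up
    rt_single2 = [310, 335, 355, 385, 410]     # unimodal channel 2 RTs (ms), made up
    rt_redundant = [315, 330, 345, 365, 390]   # redundant-signal RTs (ms), made up

    t_grid = np.arange(300, 411, 5)
    bound = ecdf(rt_single1, t_grid) + ecdf(rt_single2, t_grid)
    f12 = ecdf(rt_redundant, t_grid)
    violation = np.any(f12 > bound + 1e-12)    # True would suggest coactivation
    ```

    For these data the redundant-signal gain stays within the race model bound at every time point, i.e. the inequality is not violated.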

  15. Self-consistent modeling of amorphous silicon devices

    International Nuclear Information System (INIS)

    Hack, M.

    1987-01-01

    The authors developed a computer model to describe the steady-state behaviour of a range of amorphous silicon devices. It is based on the complete set of transport equations and takes into account the important role played by the continuous distribution of localized states in the mobility gap of amorphous silicon. Using one set of parameters, they have been able to self-consistently simulate the current-voltage characteristics of p-i-n (or n-i-p) solar cells under illumination, the dark behaviour of field-effect transistors, p-i-n diodes and n-i-n diodes in both the ohmic and space charge limited regimes. This model also describes the steady-state photoconductivity of amorphous silicon, in particular, its dependence on temperature, doping and illumination intensity

  16. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    Science.gov (United States)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. In particular, we focus on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes a probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting results on industrial data sets.

  17. Branch-based model for the diameters of the pulmonary airways: accounting for departures from self-consistency and registration errors.

    Science.gov (United States)

    Neradilek, Moni B; Polissar, Nayak L; Einstein, Daniel R; Glenny, Robb W; Minard, Kevin R; Carson, James P; Jiao, Xiangmin; Jacob, Richard E; Cox, Timothy C; Postlethwait, Edward M; Corley, Richard A

    2012-06-01

    We examine a previously published branch-based approach for modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that take account of error. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys, and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it may likely lead to an incorrect representation of the diameter geometry. The new variance model can be used instead. Measurement error has an important impact on the estimated morphometry models and needs to be addressed in the analysis. Copyright © 2012 Wiley Periodicals, Inc.

  18. Consistency Across Standards or Standards in a New Business Model

    Science.gov (United States)

    Russo, Dane M.

    2010-01-01

    Presentation topics include: standards in a changing business model, the new National Space Policy is driving change, a new paradigm for human spaceflight, consistency across standards, the purpose of standards, danger of over-prescriptive standards, a balance is needed (between prescriptive and general standards), enabling versus inhibiting, characteristics of success-oriented standards, characteristics of success-oriented standards, and conclusions. Additional slides include NASA Procedural Requirements 8705.2B identifies human rating standards and requirements, draft health and medical standards for human rating, what's been done, government oversight models, examples of consistency from anthropometry, examples of inconsistency from air quality and appendices of government and non-governmental human factors standards.

  19. Self-Consistent Dynamical Model of the Broad Line Region

    Energy Technology Data Exchange (ETDEWEB)

    Czerny, Bozena [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Li, Yan-Rong [Key Laboratory for Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing (China); Sredzinska, Justyna; Hryniewicz, Krzysztof [Copernicus Astronomical Center, Polish Academy of Sciences, Warsaw (Poland); Panda, Swayam [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Copernicus Astronomical Center, Polish Academy of Sciences, Warsaw (Poland); Wildy, Conor [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Karas, Vladimir, E-mail: bcz@cft.edu.pl [Astronomical Institute, Czech Academy of Sciences, Prague (Czech Republic)

    2017-06-22

    We develop a self-consistent description of the Broad Line Region based on the concept of a failed wind powered by radiation pressure acting on a dusty accretion disk atmosphere in Keplerian motion. The material raised high above the disk is illuminated, dust evaporates, and the matter falls back toward the disk. This material is the source of emission lines. The model predicts the inner and outer radius of the region, the cloud dynamics under the dust radiation pressure and, subsequently, the gravitational field of the central black hole, which results in asymmetry between the rise and fall. Knowledge of the dynamics allows us to predict the shapes of the emission lines as functions of the basic parameters of an active nucleus: black hole mass, accretion rate, black hole spin (or accretion efficiency) and the viewing angle with respect to the symmetry axis. Here we show preliminary results based on analytical approximations to the cloud motion.

  20. Self-Consistent Dynamical Model of the Broad Line Region

    Directory of Open Access Journals (Sweden)

    Bozena Czerny

    2017-06-01

    Full Text Available We develop a self-consistent description of the Broad Line Region based on the concept of a failed wind powered by radiation pressure acting on a dusty accretion disk atmosphere in Keplerian motion. The material raised high above the disk is illuminated, dust evaporates, and the matter falls back toward the disk. This material is the source of emission lines. The model predicts the inner and outer radius of the region, the cloud dynamics under the dust radiation pressure and, subsequently, the gravitational field of the central black hole, which results in asymmetry between the rise and fall. Knowledge of the dynamics allows us to predict the shapes of the emission lines as functions of the basic parameters of an active nucleus: black hole mass, accretion rate, black hole spin (or accretion efficiency) and the viewing angle with respect to the symmetry axis. Here we show preliminary results based on analytical approximations to the cloud motion.

  1. Consistency of the tachyon warm inflationary universe models

    International Nuclear Information System (INIS)

    Zhang, Xiao-Min; Zhu, Jian-Yang

    2014-01-01

    This study concerns the consistency of the tachyon warm inflationary models. A linear stability analysis is performed to find the slow-roll conditions, characterized by the potential slow-roll (PSR) parameters, for the existence of a tachyon warm inflationary attractor in the system. The PSR parameters in the tachyon warm inflationary models are redefined. Two cases, an exponential potential and an inverse power-law potential, are studied for the dissipative coefficients Γ = Γ₀ and Γ = Γ(φ), respectively. A crucial condition is obtained for a tachyon warm inflationary model, characterized by the Hubble slow-roll (HSR) parameter ε_H, and the condition is extendable to some other inflationary models as well. A proper number of e-folds is obtained in both cases of tachyon warm inflation, in contrast to existing works. It is also found that a constant dissipative coefficient (Γ = Γ₀) is usually not a suitable assumption for a warm inflationary model

  2. Consistency, Verification, and Validation of Turbulence Models for Reynolds-Averaged Navier-Stokes Applications

    Science.gov (United States)

    Rumsey, Christopher L.

    2009-01-01

    In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.
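
A concrete way to "demonstrate consistency with grid refinement" is to check the observed order of accuracy against the scheme's formal order via Richardson extrapolation. The sketch below is illustrative only; the error values are hypothetical, not taken from the paper's cases:

```python
import math

def observed_order(e_coarse, e_fine, r):
    """Observed order of accuracy from discretization errors on two
    grids with refinement ratio r (Richardson extrapolation)."""
    return math.log(e_coarse / e_fine) / math.log(r)

# Hypothetical errors from a grid study that halves the spacing (r = 2):
errors = [4.0e-2, 1.1e-2, 2.8e-3]
p01 = observed_order(errors[0], errors[1], 2.0)
p12 = observed_order(errors[1], errors[2], 2.0)
# Both estimates should approach the formal order (~2 for a
# second-order scheme) as the grid is refined.
```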

  3. Analytic Intermodel Consistent Modeling of Volumetric Human Lung Dynamics.

    Science.gov (United States)

    Ilegbusi, Olusegun; Seyfi, Behnaz; Neylon, John; Santhanam, Anand P

    2015-10-01

    The human lung undergoes breathing-induced deformation in the form of inhalation and exhalation. Modeling the dynamics is numerically complicated by the lack of information on lung elastic behavior and fluid-structure interactions between air and the tissue. A mathematical method is developed to integrate deformation results from a deformable image registration (DIR) and physics-based modeling approaches in order to represent consistent volumetric lung dynamics. The computational fluid dynamics (CFD) simulation assumes the lung is a poro-elastic medium with spatially distributed elastic property. Simulation is performed on a 3D lung geometry reconstructed from a four-dimensional computed tomography (4DCT) dataset of a human subject. The heterogeneous Young's modulus (YM) is estimated from a linear elastic deformation model with the same lung geometry and 4D lung DIR. The deformation obtained from the CFD is then coupled with the displacement obtained from the 4D lung DIR by means of the Tikhonov regularization (TR) algorithm. The numerical results include 4DCT registration, CFD, and optimal displacement data, which collectively provide a consistent estimate of the volumetric lung dynamics. The fusion method is validated by comparing the optimal displacement with the results obtained from the 4DCT registration.
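
As an illustration of the fusion step, a pointwise Tikhonov-style coupling of the CFD and DIR displacement fields reduces to a weighted average. This is a simplified sketch (the field values and the scalar regularization weight `lam` are hypothetical; the paper's actual TR formulation may include additional smoothness terms):

```python
import numpy as np

def fuse_displacements(u_cfd, u_dir, lam=1.0):
    """Pointwise Tikhonov-style fusion: minimize
    ||u - u_cfd||^2 + lam * ||u - u_dir||^2,
    whose closed-form minimizer is a weighted average."""
    return (u_cfd + lam * u_dir) / (1.0 + lam)

# Hypothetical displacement values (mm) at three voxels:
u_cfd = np.array([1.0, 2.0, 3.0])   # physics-based (CFD) estimate
u_dir = np.array([1.2, 1.8, 3.4])   # image-registration (DIR) estimate
u_opt = fuse_displacements(u_cfd, u_dir, lam=2.0)
# Larger lam pulls the optimal field toward the DIR measurement.
```

In the limit lam → ∞ the fused field reproduces the DIR displacement exactly, which is the sense in which the regularization "couples" the two solutions.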

  4. Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems

    KAUST Repository

    Garg, Vikram V; Prudhomme, Serge; van der Zee, Kris G; Carey, Graham F

    2014-01-01

    Models based on the Helmholtz `slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint

  5. Toward a consistent modeling framework to assess multi-sectoral climate impacts.

    Science.gov (United States)

    Monier, Erwan; Paltsev, Sergey; Sokolov, Andrei; Chen, Y-H Henry; Gao, Xiang; Ejaz, Qudsia; Couzo, Evan; Schlosser, C Adam; Dutkiewicz, Stephanie; Fant, Charles; Scott, Jeffery; Kicklighter, David; Morris, Jennifer; Jacoby, Henry; Prinn, Ronald; Haigh, Martin

    2018-02-13

    Efforts to estimate the physical and economic impacts of future climate change face substantial challenges. To enrich the currently popular approaches to impact analysis-which involve evaluation of a damage function or multi-model comparisons based on a limited number of standardized scenarios-we propose integrating a geospatially resolved physical representation of impacts into a coupled human-Earth system modeling framework. Large internationally coordinated exercises cannot easily respond to new policy targets and the implementation of standard scenarios across models, institutions and research communities can yield inconsistent estimates. Here, we argue for a shift toward the use of a self-consistent integrated modeling framework to assess climate impacts, and discuss ways the integrated assessment modeling community can move in this direction. We then demonstrate the capabilities of such a modeling framework by conducting a multi-sectoral assessment of climate impacts under a range of consistent and integrated economic and climate scenarios that are responsive to new policies and business expectations.

  6. A self-consistent upward leader propagation model

    International Nuclear Information System (INIS)

    Becerra, Marley; Cooray, Vernon

    2006-01-01

    Knowledge of the initiation and propagation of an upward moving connecting leader in the presence of a downward moving lightning stepped leader is essential for determining the lateral attraction distance of a lightning flash by any grounded structure. Even though different models that simulate this phenomenon are available in the literature, they do not take into account the latest developments in the physics of leader discharges. The leader model proposed here simulates the advancement of positive upward leaders by appealing to the presently understood physics of that process. The model properly simulates the upward continuous progression of the positive connecting leaders from their inception to the final connection with the downward stepped leader (final jump). Thus, the main physical properties of upward leaders, namely the charge per unit length, the injected current, the channel gradient and the leader velocity, are self-consistently obtained. The obtained results are compared with an altitude-triggered lightning experiment and there is good agreement between the model predictions and the measured leader current and the experimentally inferred spatial and temporal location of the final jump. It is also found that the usual assumption of constant charge per unit length, based on laboratory experiments, is not valid for lightning upward connecting leaders.

  7. Traffic Multiresolution Modeling and Consistency Analysis of Urban Expressway Based on Asynchronous Integration Strategy

    Directory of Open Access Journals (Sweden)

    Liyan Zhang

    2017-01-01

    Full Text Available The paper studies multiresolution traffic flow simulation model of urban expressway. Firstly, compared with two-level hybrid model, three-level multiresolution hybrid model has been chosen. Then, multiresolution simulation framework and integration strategies are introduced. Thirdly, the paper proposes an urban expressway multiresolution traffic simulation model by asynchronous integration strategy based on Set Theory, which includes three submodels: macromodel, mesomodel, and micromodel. After that, the applicable conditions and derivation process of the three submodels are discussed in detail. In addition, in order to simulate and evaluate the multiresolution model, "simple simulation scenario" of North-South Elevated Expressway in Shanghai has been established. The simulation results showed the following. (1) Volume-density relationships of the three submodels are consistent with detector data. (2) When traffic density is high, macromodel has a high precision and smaller error and the dispersion of results is smaller. Compared with macromodel, simulation accuracies of micromodel and mesomodel are lower but errors are bigger. (3) Multiresolution model can simulate characteristics of traffic flow, capture traffic wave, and keep the consistency of traffic state transition. Finally, the results showed that the novel multiresolution model can have higher simulation accuracy and it is feasible and effective in the real traffic simulation scenario.

  8. Simulation of recrystallization textures in FCC materials based on a self consistent model

    International Nuclear Information System (INIS)

    Bolmaro, R.E; Roatta, A; Fourty, A.L; Signorelli, J.W; Bertinetti, M.A

    2004-01-01

    The development of re-crystallization textures in FCC polycrystalline materials has been a long-standing scientific problem. The appearance of the so-called cubic component in high stacking fault energy laminated FCC materials is not an entirely understood phenomenon. This work approaches the problem using a self-consistent simulation technique of homogenization. The information on first preferential neighbors is used in the model to consider grain boundary energies and intra granular misorientations and to treat the growth of grains and the mobility of the grain boundary. The energies accumulated by deformations are taken as conducting energies of the nucleation and the later growth is statistically governed by the grain boundary energies. The model shows the correct trend for re-crystallization textures obtained from previously simulated deformation textures for high and low stacking fault energy FCC materials. The model's topological representation is discussed. (CW)

  9. Self-consistent atmosphere modeling with cloud formation for low-mass stars and exoplanets

    Science.gov (United States)

    Juncher, Diana; Jørgensen, Uffe G.; Helling, Christiane

    2017-12-01

    Context. Low-mass stars and extrasolar planets have ultra-cool atmospheres where a rich chemistry occurs and clouds form. The increasing amount of spectroscopic observations for extrasolar planets requires self-consistent model atmosphere simulations to consistently include the formation processes that determine cloud formation and their feedback onto the atmosphere. Aims: Our aim is to complement the MARCS model atmosphere suite with simulations applicable to low-mass stars and exoplanets in preparation of E-ELT, JWST, PLATO and other upcoming facilities. Methods: The MARCS code calculates stellar atmosphere models, providing self-consistent solutions of the radiative transfer and the atmospheric structure and chemistry. We combine MARCS with a kinetic model that describes cloud formation in ultra-cool atmospheres (seed formation, growth/evaporation, gravitational settling, convective mixing, element depletion). Results: We present a small grid of self-consistently calculated atmosphere models for Teff = 2000-3000 K with solar initial abundances and log (g) = 4.5. Cloud formation in stellar and sub-stellar atmospheres appears for Teff day-night energy transport and no temperature inversion.

  10. Self-Consistent Atmosphere Models of the Most Extreme Hot Jupiters

    Science.gov (United States)

    Lothringer, Joshua; Barman, Travis

    2018-01-01

    We present a detailed look at self-consistent PHOENIX atmosphere models of the most highly irradiated hot Jupiters known to exist. These hot Jupiters typically have equilibrium temperatures approaching and sometimes exceeding 3000 K, orbiting A, F, and early-G type stars on orbits less than 0.03 AU (10x closer than Mercury is to the Sun). The most extreme example, KELT-9b, is the hottest known hot Jupiter with a measured dayside temperature of 4600 K. Many of the planets we model have recently attracted attention with high profile discoveries, including temperature inversions in WASP-33b and WASP-121b, changing phase curve offsets possibly caused by magnetohydrodynamic effects in HAT-P-7b, and TiO in WASP-19b. Our modeling provides a look at the a priori expectations for these planets and helps us understand these recent discoveries. We show that, in the hottest cases, all molecules are dissociated down to relatively high pressures. These planets may have detectable temperature inversions, more akin to thermospheres than stratospheres in that an optical absorber like TiO or VO is not needed. Instead, the inversions are created by a lack of cooling in the IR combined with heating from atoms and ions at UV and blue optical wavelengths. We also reevaluate some of the assumptions that have been made in retrieval analyses of these planets.

  11. Consistent probabilities in loop quantum cosmology

    International Nuclear Information System (INIS)

    Craig, David A; Singh, Parampreet

    2013-01-01

    A fundamental issue for any quantum cosmological theory is to specify how probabilities can be assigned to various quantum events or sequences of events such as the occurrence of singularities or bounces. In previous work, we have demonstrated how this issue can be successfully addressed within the consistent histories approach to quantum theory for Wheeler–DeWitt-quantized cosmological models. In this work, we generalize that analysis to the exactly solvable loop quantization of a spatially flat, homogeneous and isotropic cosmology sourced with a massless, minimally coupled scalar field known as sLQC. We provide an explicit, rigorous and complete decoherent-histories formulation for this model and compute the probabilities for the occurrence of a quantum bounce versus a singularity. Using the scalar field as an emergent internal time, we show for generic states that the probability for a singularity to occur in this model is zero, and that of a bounce is unity, complementing earlier studies of the expectation values of the volume and matter density in this theory. We also show from the consistent histories point of view that all states in this model, whether quantum or classical, achieve arbitrarily large volume in the limit of infinite ‘past’ or ‘future’ scalar ‘time’, in the sense that the wave function evaluated at any arbitrary fixed value of the volume vanishes in that limit. Finally, we briefly discuss certain misconceptions concerning the utility of the consistent histories approach in these models. (paper)

  12. A paradigm shift toward a consistent modeling framework to assess climate impacts

    Science.gov (United States)

    Monier, E.; Paltsev, S.; Sokolov, A. P.; Fant, C.; Chen, H.; Gao, X.; Schlosser, C. A.; Scott, J. R.; Dutkiewicz, S.; Ejaz, Q.; Couzo, E. A.; Prinn, R. G.; Haigh, M.

    2017-12-01

    Estimates of physical and economic impacts of future climate change are subject to substantial challenges. To enrich the currently popular approaches of assessing climate impacts by evaluating a damage function or by multi-model comparisons based on the Representative Concentration Pathways (RCPs), we focus here on integrating impacts into a self-consistent coupled human and Earth system modeling framework that includes modules that represent multiple physical impacts. In a sample application we show that this framework is capable of investigating the physical impacts of climate change and socio-economic stressors. The projected climate impacts vary dramatically across the globe in a set of scenarios with global mean warming ranging between 2.4°C and 3.6°C above pre-industrial by 2100. Unabated emissions lead to substantial sea level rise, acidification that impacts the base of the oceanic food chain, air pollution that exceeds health standards by tenfold, water stress that impacts an additional 1 to 2 billion people globally and agricultural productivity that decreases substantially in many parts of the world. We compare the outcomes from these forward-looking scenarios against the common goal described by the target-driven scenario of 2°C, which results in much smaller impacts. It is challenging for large internationally coordinated exercises to respond quickly to new policy targets. We propose that a paradigm shift toward a self-consistent modeling framework to assess climate impacts is needed to produce information relevant to evolving global climate policy and mitigation strategies in a timely way.

  13. Consistent partnership formation: application to a sexually transmitted disease model.

    Science.gov (United States)

    Artzrouni, Marc; Deuchert, Eva

    2012-02-01

    We apply a consistent sexual partnership formation model which hinges on the assumption that one gender's choices drive the process (male or female dominant model). The other gender's behavior is imputed. The model is fitted to UK sexual behavior data and applied to a simple incidence model of HSV-2. With a male dominant model (which assumes accurate male reports on numbers of partners) the modeled incidences of HSV-2 are 77% higher for men and 50% higher for women than with a female dominant model (which assumes accurate female reports). Although highly stylized, our simple incidence model sheds light on the inconsistent results one can obtain with misreported data on sexual activity and age preferences. Copyright © 2011 Elsevier Inc. All rights reserved.
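
The consistency constraint at the heart of such models is a simple bookkeeping identity: the total number of new partnerships implied by one gender's reports must equal the total implied for the other. A minimal sketch, with hypothetical population sizes and partner-acquisition rates:

```python
def imputed_rate(dominant_rate, n_dominant, n_other):
    """In a one-gender-dominant model, the other gender's mean partner
    acquisition rate is imputed so that total partnerships balance:
    n_dominant * dominant_rate = n_other * imputed_rate."""
    return dominant_rate * n_dominant / n_other

# Hypothetical population: 1000 men reporting 1.2 new partners/year,
# 1100 women. A male-dominant model imputes the female rate:
c_f = imputed_rate(1.2, 1000, 1100)

# If women instead reported 0.8 partners/year, a female-dominant model
# would impute the male rate:
c_m = imputed_rate(0.8, 1100, 1000)
# The gap between reported and imputed rates reflects the inconsistency
# in survey reports that drives the divergent incidence estimates.
```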

  14. Self-consistent approach for neutral community models with speciation

    Science.gov (United States)

    Haegeman, Bart; Etienne, Rampal S.

    2010-03-01

    Hubbell’s neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is particularly simple, describing speciation as a point-mutation event at the birth of a single individual. The stationary species abundance distribution of the basic model, which can be solved exactly, fits empirical data of distributions of species’ abundances surprisingly well. More realistic speciation models have been proposed, such as the random-fission model in which new species appear by splitting up existing species. However, no analytical solution is available for these models, impeding quantitative comparison with data. Here, we present a self-consistent approximation method for neutral community models with various speciation modes, including random fission. We derive explicit formulas for the stationary species abundance distribution, which agree very well with simulations. We expect that our approximation method will be useful to study other speciation processes in neutral community models as well.

  15. Self-consistent modeling of electron cyclotron resonance ion sources

    International Nuclear Information System (INIS)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lecot, C.

    2004-01-01

    In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to perfectly model the different parts of these sources: (i) magnetic configuration; (ii) plasma characteristics; (iii) extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at the extraction influenced by the hexapole). However the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, whether a biased probe is installed or not. These input parameters are used to feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  16. Self-consistent modeling of electron cyclotron resonance ion sources

    Science.gov (United States)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lécot, C.

    2004-05-01

    In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to perfectly model the different parts of these sources: (i) magnetic configuration; (ii) plasma characteristics; (iii) extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at the extraction influenced by the hexapole). However the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, whether a biased probe is installed or not. These input parameters are used to feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  17. An Improved Cognitive Model of the Iowa and Soochow Gambling Tasks With Regard to Model Fitting Performance and Tests of Parameter Consistency

    Directory of Open Access Journals (Sweden)

    Junyi Dai

    2015-03-01

    Full Text Available The Iowa Gambling Task (IGT) and the Soochow Gambling Task (SGT) are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning model (EVL) and the prospect valence learning model (PVL), have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79) and 27 control participants (mean age 35; SD 10.44) completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data and their performances were compared to that of a statistical baseline model to find a best fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models.
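
The winning model's three components can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the parameter values (alpha, lam, A, theta) are hypothetical, and the trial-independent choice rule is represented by a softmax with a fixed sensitivity theta:

```python
import math

def prospect_utility(x, alpha=0.5, lam=1.5):
    """Prospect-type utility treating gains and losses separately:
    diminishing sensitivity (alpha) and loss aversion (lam)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def update(E, chosen, payoff, A=0.8, alpha=0.5, lam=1.5):
    """Decay-reinforcement rule: all expectancies decay by A, and the
    chosen deck is reinforced by the utility of its payoff."""
    E = [A * e for e in E]
    E[chosen] += prospect_utility(payoff, alpha, lam)
    return E

def softmax_choice(E, theta):
    """Trial-independent choice rule: softmax with fixed sensitivity."""
    w = [math.exp(theta * e) for e in E]
    z = sum(w)
    return [v / z for v in w]

E = [0.0, 0.0, 0.0, 0.0]               # four decks, as in the IGT
E = update(E, chosen=0, payoff=100)    # deck A pays a gain
E = update(E, chosen=1, payoff=-50)    # deck B delivers a loss
probs = softmax_choice(E, theta=0.2)   # choice probabilities next trial
```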

  18. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    Science.gov (United States)

    Li, Jiahui; Yu, Qiqing

    2016-01-01

    Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. Simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  19. Self-Consistent Generation of Primordial Continental Crust in Global Mantle Convection Models

    Science.gov (United States)

    Jain, C.; Rozel, A.; Tackley, P. J.

    2017-12-01

    We present the generation of primordial continental crust (TTG rocks) using self-consistent and evolutionary thermochemical mantle convection models (Tackley, PEPI 2008). Numerical modelling commonly shows that mantle convection and continents have strong feedbacks on each other. However in most studies, continents are inserted a priori while basaltic (oceanic) crust is generated self-consistently in some models (Lourenco et al., EPSL 2016). Formation of primordial continental crust happened by fractional melting and crystallisation in episodes of relatively rapid growth from late Archean to late Proterozoic eras (3-1 Ga) (Hawkesworth & Kemp, Nature 2006) and it has also been linked to the onset of plate tectonics around 3 Ga. It takes several stages of differentiation to generate Tonalite-Trondhjemite-Granodiorite (TTG) rocks or proto-continents. First, the basaltic magma is extracted from the pyrolitic mantle which is both erupted at the surface and intruded at the base of the crust. Second, it goes through eclogitic transformation and then partially melts to form TTGs (Rudnick, Nature 1995; Herzberg & Rudnick, Lithos 2012). TTGs account for the majority of the Archean continental crust. Based on the melting conditions proposed by Moyen (Lithos 2011), the feasibility of generating TTG rocks in numerical simulations has already been demonstrated by Rozel et al. (Nature, 2017). Here, we have developed the code further by parameterising TTG formation. We vary the ratio of intrusive (plutonic) and extrusive (volcanic) magmatism (Crisp, Volcanol. Geotherm. 1984) to study the relative volumes of three petrological TTG compositions as reported from field data (Moyen, Lithos 2011). Furthermore, we systematically vary parameters such as friction coefficient, initial core temperature and composition-dependent viscosity to investigate the global tectonic regime of early Earth. Continental crust can also be destroyed by subduction or delamination. We will investigate

  20. Mean-field theory and self-consistent dynamo modeling

    International Nuclear Information System (INIS)

    Yoshizawa, Akira; Yokoi, Nobumitsu

    2001-12-01

    Mean-field theory of dynamo is discussed with emphasis on the statistical formulation of turbulence effects on the magnetohydrodynamic equations and the construction of a self-consistent dynamo model. The dynamo mechanism is sought in the combination of the turbulent residual-helicity and cross-helicity effects. On the basis of this mechanism, discussions are made on the generation of planetary magnetic fields such as geomagnetic field and sunspots and on the occurrence of flow by magnetic fields in planetary and fusion phenomena. (author)

  1. Classical and Quantum Consistency of the DGP Model

    CERN Document Server

    Nicolis, Alberto; Rattazzi, Riccardo

    2004-01-01

    We study the Dvali-Gabadadze-Porrati model by the method of the boundary effective action. The truncation of this action to the bending mode \\pi consistently describes physics in a wide range of regimes both at the classical and at the quantum level. The Vainshtein effect, which restores agreement with precise tests of general relativity, follows straightforwardly. We give a simple and general proof of stability, i.e. absence of ghosts in the fluctuations, valid for most of the relevant cases, like for instance the spherical source in asymptotically flat space. However we confirm that around certain interesting self-accelerating cosmological solutions there is a ghost. We consider the issue of quantum corrections. Around flat space \\pi becomes strongly coupled below a macroscopic length of 1000 km, thus impairing the predictivity of the model. Indeed the tower of higher dimensional operators which is expected by a generic UV completion of the model limits predictivity at even larger length scales. We outline ...

  2. Consistency maintenance for constraint in role-based access control model

    Institute of Scientific and Technical Information of China (English)

    韩伟力; 陈刚; 尹建伟; 董金祥

    2002-01-01

    Constraint is an important aspect of role-based access control and is sometimes argued to be the principal motivation for role-based access control (RBAC). But so far few authors have discussed consistency maintenance for constraint in the RBAC model. Based on research on constraints among roles and types of inconsistency among constraints, this paper introduces corresponding formal rules, rule-based reasoning and corresponding methods to detect, avoid and resolve these inconsistencies. Finally, the paper briefly introduces the application of consistency maintenance in ZD-PDM, an enterprise-oriented product data management (PDM) system.

  4. A Theoretically Consistent Framework for Modelling Lagrangian Particle Deposition in Plant Canopies

    Science.gov (United States)

    Bailey, Brian N.; Stoll, Rob; Pardyjak, Eric R.

    2018-06-01

    We present a theoretically consistent framework for modelling Lagrangian particle deposition in plant canopies. The primary focus is on describing the probability of particles encountering canopy elements (i.e., potential deposition), and provides a consistent means for including the effects of imperfect deposition through any appropriate sub-model for deposition efficiency. Some aspects of the framework draw upon an analogy to radiation propagation through a turbid medium with which to develop model theory. The present method is compared against one of the most commonly used heuristic Lagrangian frameworks, namely that originally developed by Legg and Powell (Agricultural Meteorology, 1979, Vol. 20, 47-67), which is shown to be theoretically inconsistent. A recommendation is made to discontinue the use of this heuristic approach in favour of the theoretically consistent framework developed herein, which is no more difficult to apply under equivalent assumptions. The proposed framework has the additional advantage that it can be applied to arbitrary canopy geometries given readily measurable parameters describing vegetation structure.
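
The turbid-medium analogy gives the probability of a particle encountering a canopy element over a short path step in Beer-Lambert form. A minimal sketch, assuming a leaf area density `a` and projection factor `G` (values hypothetical), with imperfect deposition folded in through a separate efficiency factor:

```python
import math

def encounter_probability(a, G, ds):
    """Probability that a particle traversing a path segment of length
    ds (m) encounters a canopy element, by analogy with attenuation in
    a turbid medium: P = 1 - exp(-G * a * ds), with leaf area density
    a (m^2/m^3) and projection factor G."""
    return 1.0 - math.exp(-G * a * ds)

# Hypothetical values: a = 0.5 m^2/m^3, spherical leaf angle
# distribution (G = 0.5), Lagrangian step of 0.1 m:
p_step = encounter_probability(0.5, 0.5, 0.1)

# Potential deposition becomes actual deposition only after applying a
# sub-model deposition efficiency efficiency_E in [0, 1]:
efficiency_E = 0.3                      # hypothetical
p_deposit = efficiency_E * p_step
```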

  5. Final Report Fermionic Symmetries and Self consistent Shell Model

    International Nuclear Information System (INIS)

    Zamick, Larry

    2008-01-01

    In this final report in the field of theoretical nuclear physics we note important accomplishments. We were confronted with 'anomalous' magnetic moments by the experimentalists and were able to explain them. We found unexpected partial dynamical symmetries, completely unknown before, and were able to explain them to a large extent. The importance of a self-consistent shell model was emphasized.

  6. Duchenne muscular dystrophy models show their age

    OpenAIRE

    Chamberlain, Jeffrey S.

    2010-01-01

    The lack of appropriate animal models has hampered efforts to develop therapies for Duchenne muscular dystrophy (DMD). A new mouse model lacking both dystrophin and telomerase (Sacco et al., 2010) closely mimics the pathological progression of human DMD and shows that muscle stem cell activity is a key determinant of disease severity.

  7. A model for cytoplasmic rheology consistent with magnetic twisting cytometry.

    Science.gov (United States)

    Butler, J P; Kelly, S M

    1998-01-01

    Magnetic twisting cytometry is gaining wide applicability as a tool for the investigation of the rheological properties of cells and the mechanical properties of receptor-cytoskeletal interactions. Current technology involves the application and release of magnetically induced torques on small magnetic particles bound to or inside cells, with measurements of the resulting angular rotation of the particles. The properties of purely elastic or purely viscous materials can be determined by the angular strain and strain rate, respectively. However, the cytoskeleton and its linkage to cell surface receptors display elastic, viscous, and even plastic deformation, and the simultaneous characterization of these properties using only elastic or viscous models is internally inconsistent. Data interpretation is complicated by the fact that in current technology, the applied torques are not constant in time, but decrease as the particles rotate. This paper describes an internally consistent model consisting of a parallel viscoelastic element in series with a parallel viscoelastic element, and one approach to quantitative parameter evaluation. The unified model reproduces all essential features seen in data obtained from a wide variety of cell populations, and contains the pure elastic, viscoelastic, and viscous cases as subsets.
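
    The series arrangement of two parallel (Voigt) spring-dashpot elements described above can be sketched numerically. The following illustrative creep calculation applies a constant torque; parameter values are chosen for demonstration only and do not reproduce the paper's parameter evaluation procedure.

```python
# Sketch (assumption): two Kelvin-Voigt elements (spring k_i parallel to
# dashpot c_i) connected in series, forward-Euler integration of the creep
# response under constant applied torque.

def creep_response(torque, k1, c1, k2, c2, t_end, dt=1e-4):
    """Total angular strain of two Voigt elements in series under constant torque."""
    th1 = th2 = 0.0
    for _ in range(int(t_end / dt)):
        th1 += dt * (torque - k1 * th1) / c1  # element 1: T = k1*th1 + c1*th1'
        th2 += dt * (torque - k2 * th2) / c2  # element 2: T = k2*th2 + c2*th2'
    return th1 + th2

# At long times both dashpots relax and the strain approaches T/k1 + T/k2.
theta_inf = creep_response(torque=1.0, k1=2.0, c1=0.5, k2=4.0, c2=1.0, t_end=10.0)
```

    The short-time response is governed by the two time constants c1/k1 and c2/k2, which is what lets such a model capture elastic, viscoelastic, and viscous limits as special cases.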

  8. New geometric design consistency model based on operating speed profiles for road safety evaluation.

    Science.gov (United States)

    Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo

    2013-12-01

    To assist in the ongoing effort to reduce road fatalities, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. The methodology is based on the analysis of road geometric design consistency, a value that serves as a surrogate measure of the safety level of a two-lane rural road segment. The consistency model presented in this paper is based on continuous operating speed profiles. The models used for their construction were obtained with an innovative GPS-based data collection method that records continuous operating speeds of individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a closer approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measures based on global and local operating speed were evaluated. The final consistency model takes into account not only the global dispersion of the operating speed, but also indexes that capture local decelerations and speeds in excess of posted speeds. The crash frequency of each study site was considered in developing the model, which allows the number of crashes on a road segment to be estimated from its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Behavioral Consistency of C and Verilog Programs Using Bounded Model Checking

    National Research Council Canada - National Science Library

    Clarke, Edmund; Kroening, Daniel; Yorav, Karen

    2003-01-01

    We present an algorithm that checks behavioral consistency between an ANSI-C program and a circuit given in Verilog using Bounded Model Checking, and describe experimental results on various reactive ...

  10. A consistency assessment of coupled cohesive zone models for mixed-mode debonding problems

    Directory of Open Access Journals (Sweden)

    R. Dimitri

    2014-07-01

    Full Text Available Due to their simplicity, cohesive zone models (CZMs are very attractive to describe mixed-mode failure and debonding processes of materials and interfaces. Although a large number of coupled CZMs have been proposed, and despite the extensive related literature, little attention has been devoted to ensuring the consistency of these models for mixed-mode conditions, primarily in a thermodynamical sense. A lack of consistency may affect the local or global response of a mechanical system. This contribution deals with the consistency check for some widely used exponential and bilinear mixed-mode CZMs. The coupling effect on stresses and energy dissipation is first investigated and the path-dependence of the mixed-mode debonding work of separation is analytically evaluated. Analytical predictions are also compared with results from numerical implementations, where the interface is described with zero-thickness contact elements. A node-to-segment strategy is here adopted, which incorporates decohesion and contact within a unified framework. A new thermodynamically consistent mixed-mode CZ model based on a reformulation of the Xu-Needleman model as modified by van den Bosch et al. is finally proposed and derived by applying the Coleman and Noll procedure in accordance with the second law of thermodynamics. The model holds monolithically for loading and unloading processes, as well as for decohesion and contact, and its performance is demonstrated through suitable examples.
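
    As a concrete illustration of the traction-separation laws discussed above, the following is a minimal single-mode bilinear (triangular) cohesive law. Variable names and parameter values are ours, not the paper's, and the mixed-mode coupling whose consistency the paper analyses is omitted.

```python
# Hedged sketch of a bilinear cohesive zone law (single mode only).
# Illustrative parameters: peak traction t_max, opening at peak delta_0,
# opening at full decohesion delta_f.

def bilinear_traction(delta, t_max, delta_0, delta_f):
    """Traction at opening delta: linear rise to t_max, then linear softening to 0."""
    if delta <= 0 or delta >= delta_f:
        return 0.0
    if delta <= delta_0:
        return t_max * delta / delta_0
    return t_max * (delta_f - delta) / (delta_f - delta_0)

def work_of_separation(t_max, delta_0, delta_f, n=100_000):
    """Midpoint-rule integral of the traction-separation curve (fracture energy)."""
    h = delta_f / n
    return sum(bilinear_traction((i + 0.5) * h, t_max, delta_0, delta_f) * h
               for i in range(n))

# For the triangular law the fracture energy is 0.5 * t_max * delta_f.
G_c = work_of_separation(t_max=10.0, delta_0=0.01, delta_f=0.1)
```

    The path-dependence issue examined in the paper arises only once two such laws are coupled across modes; this single-mode law dissipates the same energy for any monotonic opening history.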

  11. A new self-consistent model for thermodynamics of binary solutions

    Czech Academy of Sciences Publication Activity Database

    Svoboda, Jiří; Shan, Y. V.; Fischer, F. D.

    2015-01-01

    Roč. 108, NOV (2015), s. 27-30 ISSN 1359-6462 R&D Projects: GA ČR(CZ) GA14-24252S Institutional support: RVO:68081723 Keywords : Thermodynamics * Analytical methods * CALPHAD * Phase diagram * Self-consistent model Subject RIV: BJ - Thermodynamics Impact factor: 3.305, year: 2015

  12. A comprehensive, consistent and systematic mathematical model of PEM fuel cells

    International Nuclear Information System (INIS)

    Baschuk, J.J.; Li Xianguo

    2009-01-01

    This paper presents a comprehensive, consistent and systematic mathematical model for PEM fuel cells that can be used as the general formulation for the simulation and analysis of PEM fuel cells. As an illustration, the model is applied to an isothermal, steady state, two-dimensional PEM fuel cell. Water is assumed to be in either the gas phase or as a liquid phase in the pores of the polymer electrolyte. The model includes the transport of gas in the gas flow channels, electrode backing and catalyst layers; the transport of water and hydronium in the polymer electrolyte of the catalyst and polymer electrolyte layers; and the transport of electrical current in the solid phase. Water and ion transport in the polymer electrolyte was modeled using the generalized Stefan-Maxwell equations, based on non-equilibrium thermodynamics. Model simulations show that the bulk, convective gas velocity facilitates hydrogen transport from the gas flow channels to the anode catalyst layers, but inhibits oxygen transport. While some of the water required by the anode is supplied by the water produced in the cathode, the majority of water must be supplied by the anode gas phase, making operation with fully humidified reactants necessary. The length of the gas flow channel has a significant effect on the current production of the PEM fuel cell, with a longer channel length having a lower performance relative to a shorter channel length. This lower performance is caused by a greater variation in water content within the longer channel length.

  13. A consistent modelling methodology for secondary settling tanks in wastewater treatment.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Nopens, Ingmar

    2011-03-01

    The aim of this contribution is partly to build consensus on a consistent modelling methodology (CMM) of complex real processes in wastewater treatment by combining classical concepts with results from applied mathematics, and partly to apply it to the clarification-thickening process in the secondary settling tank. In the CMM, the real process should be approximated by a mathematical model (process model; ordinary or partial differential equation (ODE or PDE)), which in turn is approximated by a simulation model (numerical method) implemented on a computer. These steps have often not been carried out correctly. The secondary settling tank was chosen as a case since this is one of the most complex processes in a wastewater treatment plant, and simulation models developed decades ago have no guarantee of satisfying fundamental mathematical and physical properties. Nevertheless, such methods are still used in commercial tools to date. This becomes particularly important as state-of-the-art practice moves towards plant-wide modelling. There, all submodels interact, and errors propagate through the model, severely hampering any calibration effort and, hence, the predictive purpose of the model. The CMM is described by applying it first to a simple conversion process in the biological reactor, yielding an ODE solver, and then to the solid-liquid separation in the secondary settling tank, yielding a PDE solver. The time has come to incorporate established mathematical techniques into environmental engineering, and wastewater treatment modelling in particular, and to use proven, reliable and consistent simulation models. Copyright © 2011 Elsevier Ltd. All rights reserved.
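
    The chain "real process → process model (ODE) → simulation model (numerical method)" can be illustrated on a simple conversion process like the reactor example the abstract mentions. The rate law and values below are our own assumptions, not the authors':

```python
# Minimal CMM-style illustration: the process model is the ODE s' = -k*s
# (first-order substrate conversion); the simulation model is explicit Euler.
import math

def euler_decay(s0, k, t_end, dt):
    """Explicit-Euler simulation model for s' = -k*s."""
    s = s0
    for _ in range(int(t_end / dt)):
        s += dt * (-k * s)
    return s

exact = 1.0 * math.exp(-0.5 * 2.0)            # process-model solution at t = 2
coarse = euler_decay(1.0, 0.5, 2.0, dt=0.1)   # crude simulation model
fine = euler_decay(1.0, 0.5, 2.0, dt=0.001)   # refined simulation model
```

    Refining the simulation model makes it converge to the process model; the paper's point is that this convergence (and the physical soundness of the scheme) must be verified rather than assumed, especially for the settling-tank PDE.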

  14. Detecting consistent patterns of directional adaptation using differential selection codon models.

    Science.gov (United States)

    Parto, Sahar; Lartillot, Nicolas

    2017-06-23

    Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.

  15. Requirements for UML and OWL Integration Tool for User Data Consistency Modeling and Testing

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard; Oleshchuk, V. A.

    2003-01-01

    The amount of data available on the Internet is continuously increasing; consequently, there is a growing need for tools that help to analyse the data. Testing of consistency among data received from different sources is made difficult by the number of different languages and schemas being used. In this paper we analyze requirements for a tool that supports integration of UML models and ontologies written in languages like the W3C Web Ontology Language (OWL). The tool can be used in the following way: after loading two legacy models into the tool, the tool user connects them by inserting modeling …; an important part of this technique is the attaching of OCL expressions to special boolean class attributes that we call consistency attributes. The resulting integration model can be used for automatic consistency testing of two instances of the legacy models by automatically instantiating the whole integration model.

  16. Single, Complete, Probability Spaces Consistent With EPR-Bohm-Bell Experimental Data

    Science.gov (United States)

    Avis, David; Fischer, Paul; Hilbert, Astrid; Khrennikov, Andrei

    2009-03-01

    We show that paradoxical consequences of violations of Bell's inequality are induced by the use of an unsuitable probabilistic description for the EPR-Bohm-Bell experiment. The conventional description (due to Bell) is based on a combination of statistical data collected for different settings of polarization beam splitters (PBSs). In fact, such data consist of conditional probabilities which only partially define a probability space. Ignoring this conditioning leads to apparent contradictions in the classical probabilistic model (due to Kolmogorov). We show how to make a completely consistent probabilistic model by taking into account the probabilities of selecting the settings of the PBSs. Our model both matches the experimental data and is consistent with classical probability theory.

  17. Self-consistent model calculations of the ordered S-matrix and the cylinder correction

    International Nuclear Information System (INIS)

    Millan, J.

    1977-11-01

    The multiperipheral ordered bootstrap of Rosenzweig and Veneziano is studied by using dual triple-Regge couplings exhibiting the required threshold behavior. In the interval -0.5 ≤ t ≤ 0.8 GeV², self-consistent reggeon couplings and propagators are obtained for values of Regge slopes and intercepts consistent with the physical values for the leading natural-parity Regge trajectories. Cylinder effects on planar pole positions and couplings are calculated. By use of an unsymmetrical planar π-ρ reggeon loop model, self-consistent solutions are obtained for the unnatural-parity mesons in the interval -0.5 ≤ t ≤ 0.6 GeV². With the effects of other Regge poles neglected, the model gives a value of the π-η splitting consistent with experiment. 24 figures, 1 table, 25 references

  18. Thermodynamically consistent model of brittle oil shales under overpressure

    Science.gov (United States)

    Izvekov, Oleg

    2016-04-01

    The concept of dual porosity is a common way to simulate oil shale production. In the frame of this concept the porous fractured medium is considered as a superposition of two permeable continua with mass exchange. As a rule, the concept does not take into account such well-known phenomena as slip along natural fractures, overpressure in the low-permeability matrix, and so on. Overpressure can lead to the development of secondary fractures in the low-permeability matrix during drilling and during the pressure reduction that accompanies production. In this work a new thermodynamically consistent model which generalizes the dual-porosity model is proposed. Its particularities are as follows. The set of natural fractures is considered as a permeable continuum. Damage mechanics is applied to the simulation of secondary fracture development in the low-permeability matrix. Slip along natural fractures is simulated in the frame of plasticity theory with the Drucker-Prager criterion.

  19. Creation of Consistent Burn Wounds: A Rat Model

    Directory of Open Access Journals (Sweden)

    Elijah Zhengyang Cai

    2014-07-01

    Full Text Available Background: Burn infliction techniques are poorly described in rat models. An accurate study can only be achieved with wounds that are uniform in size and depth. We describe a simple reproducible method for creating consistent burn wounds in rats. Methods: Ten male Sprague-Dawley rats were anesthetized and the dorsum shaved. A 100 g cylindrical stainless-steel rod (1 cm diameter) was heated to 100℃ in boiling water. Temperature was monitored using a thermocouple. We performed two consecutive toe-pinch tests on different limbs to assess the depth of sedation. Burn infliction was limited to the loin. The skin was pulled upwards, away from the underlying viscera, creating a flat surface. The rod rested on its own weight for 5, 10, and 20 seconds at three different sites on each rat. Wounds were evaluated for size, morphology and depth. Results: Average wound size was 0.9957 cm2 (standard deviation [SD] 0.1845; n=30). Wounds created with a duration of 5 seconds were pale, with an indistinct margin of erythema. Wounds of 10 and 20 seconds were well-defined, uniformly brown, with a rim of erythema. Average depths of tissue damage were 1.30 mm (SD 0.424), 2.35 mm (SD 0.071), and 2.60 mm (SD 0.283) for durations of 5, 10, and 20 seconds respectively. Burn duration of 5 seconds resulted in full-thickness damage. Burn durations of 10 and 20 seconds resulted in full-thickness damage involving subjacent skeletal muscle. Conclusions: This is a simple reproducible method for creating burn wounds consistent in size and depth in a rat burn model.

  20. A consistent transported PDF model for treating differential molecular diffusion

    Science.gov (United States)

    Wang, Haifeng; Zhang, Pei

    2016-11-01

    Differential molecular diffusion is a fundamentally significant phenomenon in all multi-component turbulent reacting or non-reacting flows, caused by the different rates of molecular diffusion of energy and species concentrations. In the transported probability density function (PDF) method, differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model is introduced to remedy this problem and yield a consistent scalar variance prediction. The model formulation along with its numerical implementation is discussed, and the model validation is conducted in a turbulent mixing layer problem.

  1. Reporting consistently on CSR

    DEFF Research Database (Denmark)

    Thomsen, Christa; Nielsen, Anne Ellerup

    2006-01-01

    This chapter first outlines theory and literature on CSR and Stakeholder Relations, focusing on the different perspectives and the contextual and dynamic character of the CSR concept. CSR reporting challenges are discussed and a model of analysis is proposed. Next, our paper presents the results of a case study showing that companies use different and not necessarily consistent strategies for reporting on CSR. Finally, the implications for managerial practice are discussed. The chapter concludes by highlighting the value and awareness of the discourse and the discourse types adopted in the reporting material. By implementing consistent discourse strategies that interact according to a well-defined pattern or order, it is possible to communicate a strong social commitment on the one hand, and to take into consideration the expectations of the shareholders and the other stakeholders …

  2. A self-consistent model of an isothermal tokamak

    Science.gov (United States)

    McNamara, Steven; Lilley, Matthew

    2014-10-01

    Continued progress in liquid lithium coating technologies has made the development of a beam-driven tokamak with minimal edge recycling a feasible possibility. Such devices are characterised by improved confinement due to their inherent stability and the suppression of thermal conduction. Particle and energy confinement become intrinsically linked, and the plasma thermal energy content is governed by the injected beam. A self-consistent model of a purely beam-fuelled isothermal tokamak is presented, including calculations of the density profile, bulk species temperature ratios and the fusion output. Stability considerations constrain the operating parameters; regions of stable operation are identified and their suitability for potential reactor applications discussed.

  3. Consistent momentum space regularization/renormalization of supersymmetric quantum field theories: the three-loop β-function for the Wess-Zumino model

    International Nuclear Information System (INIS)

    Carneiro, David; Sampaio, Marcos; Nemes, Maria Carolina; Scarpelli, Antonio Paulo Baeta

    2003-01-01

    We compute the three-loop β function of the Wess-Zumino model to motivate implicit regularization (IR) as a consistent and practical momentum-space framework for studying supersymmetric quantum field theories. In this framework, which works essentially in the physical dimension of the theory, we show that ultraviolet divergences are clearly disentangled from infrared divergences. We obtain consistent results which motivate the method as a good choice for studying supersymmetry anomalies in quantum field theories. (author)

  4. Towards an Information Model of Consistency Maintenance in Distributed Interactive Applications

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2008-01-01

    Full Text Available A novel framework to model and explore predictive contract mechanisms in distributed interactive applications (DIAs using information theory is proposed. In our model, the entity state update scheme is modelled as an information generation, encoding, and reconstruction process. Such a perspective facilitates a quantitative measurement of state fidelity loss as a result of the distribution protocol. Results from an experimental study on a first-person shooter game are used to illustrate the utility of this measurement process. We contend that our proposed model is a starting point to reframe and analyse consistency maintenance in DIAs as a problem in distributed interactive media compression.
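
    A toy version of the predictive contract mechanisms the paper models can make the trade-off concrete (this sketch is our own construction, not the authors' code): the sender transmits a position update only when the receiver's linear extrapolation drifts beyond a threshold, trading update traffic against state fidelity loss.

```python
# Dead-reckoning style predictive contract: count updates sent and the
# mean reconstruction error for a given error threshold.

def simulate(path, threshold):
    sent = 0
    pred_pos, pred_vel = path[0], 0.0
    last = path[0]
    errors = []
    for true_pos in path[1:]:
        pred_pos += pred_vel                # receiver extrapolates linearly
        if abs(true_pos - pred_pos) > threshold:
            pred_vel = true_pos - last      # fresh velocity estimate
            pred_pos = true_pos             # update packet resynchronises state
            sent += 1
        errors.append(abs(true_pos - pred_pos))
        last = true_pos
    return sent, sum(errors) / len(errors)

path = [0.05 * t * t for t in range(100)]   # accelerating entity
sent_tight, err_tight = simulate(path, threshold=0.5)
sent_loose, err_loose = simulate(path, threshold=5.0)
```

    A tighter threshold sends more updates and loses less fidelity; in the paper's information-theoretic framing, this is the rate-distortion trade-off of the state update scheme.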

  5. A Self-Consistent Fault Slip Model for the 2011 Tohoku Earthquake and Tsunami

    Science.gov (United States)

    Yamazaki, Yoshiki; Cheung, Kwok Fai; Lay, Thorne

    2018-02-01

    The unprecedented geophysical and hydrographic data sets from the 2011 Tohoku earthquake and tsunami have facilitated numerous modeling and inversion analyses for a wide range of dislocation models. Significant uncertainties remain in the slip distribution as well as the possible contribution of tsunami excitation from submarine slumping or anelastic wedge deformation. We seek a self-consistent model for the primary teleseismic and tsunami observations through an iterative approach that begins with downsampling of a finite fault model inverted from global seismic records. Direct adjustment of the fault displacement guided by high-resolution forward modeling of near-field tsunami waveform and runup measurements improves the features that are not satisfactorily accounted for by the seismic wave inversion. The results show acute sensitivity of the runup to impulsive tsunami waves generated by near-trench slip. The adjusted finite fault model is able to reproduce the DART records across the Pacific Ocean in forward modeling of the far-field tsunami as well as the global seismic records through a finer-scale subfault moment- and rake-constrained inversion, thereby validating its ability to account for the tsunami and teleseismic observations without requiring an exotic source. The upsampled final model gives reasonably good fits to onshore and offshore geodetic observations, although early afterslip effects and wedge faulting cannot be reliably accounted for. The large predicted slip of over 20 m at shallow depth extending northward to 39.7°N indicates extensive rerupture and reduced seismic hazard of the 1896 tsunami earthquake zone, as inferred to varying extents by several recent joint and tsunami-only inversions.

  6. Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States

    Science.gov (United States)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.

    2017-01-01

    This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.

  7. "A Simplified 'Benchmark' Stock-flow Consistent (SFC) Post-Keynesian Growth Model"

    OpenAIRE

    Claudio H. Dos Santos; Gennaro Zezza

    2007-01-01

    Despite being arguably one of the most active areas of research in heterodox macroeconomics, the study of the dynamic properties of stock-flow consistent (SFC) growth models of financially sophisticated economies is still in its early stages. This paper attempts to offer a contribution to this line of research by presenting a simplified Post-Keynesian SFC growth model with well-defined dynamic properties, and using it to shed light on the merits and limitations of the current heterodox SFC li...

  8. Large scale Bayesian nuclear data evaluation with consistent model defects

    International Nuclear Information System (INIS)

    Schnabel, G

    2015-01-01

    Monte Carlo sampling schemes of available evaluation methods. The second improvement concerns Bayesian evaluation methods based on a certain simplification of the nuclear model. These methods were restricted to the consistent evaluation of tens of thousands of observables. In this thesis, a new evaluation scheme has been developed, which is mathematically equivalent to existing methods, but allows the consistent evaluation of dozens of millions of observables. The new scheme is suited for the implementation as a database application. The realization of such an application with public access can help to accelerate the production of reliable nuclear data sets. Furthermore, in combination with the novel treatment of model deficiencies, problems of the model and the experimental data can be tracked down without user interaction. This feature can foster the development of nuclear models with high predictive power. (author) [de

  9. Thermodynamically consistent mesoscopic model of the ferro/paramagnetic transition

    Czech Academy of Sciences Publication Activity Database

    Benešová, Barbora; Kružík, Martin; Roubíček, Tomáš

    2013-01-01

    Roč. 64, Č. 1 (2013), s. 1-28 ISSN 0044-2275 R&D Projects: GA AV ČR IAA100750802; GA ČR GA106/09/1573; GA ČR GAP201/10/0357 Grant - others:GA ČR(CZ) GA106/08/1397; GA MŠk(CZ) LC06052 Program:GA; LC Institutional support: RVO:67985556 Keywords : ferro-para-magnetism * evolution * thermodynamics Subject RIV: BA - General Mathematics; BA - General Mathematics (UT-L) Impact factor: 1.214, year: 2013 http://library.utia.cas.cz/separaty/2012/MTR/kruzik-thermodynamically consistent mesoscopic model of the ferro-paramagnetic transition.pdf

  10. Self-consistent imbedding and the ellipsoidal model for porous rocks

    International Nuclear Information System (INIS)

    Korringa, J.; Brown, R.J.S.; Thompson, D.D.; Runge, R.J.

    1979-01-01

    Equations are obtained for the effective elastic moduli for a model of an isotropic, heterogeneous, porous medium. The mathematical model used for computation is abstract in that it is not simply a rigorous computation for a composite medium of some idealized geometry, although the computation contains individual steps which are just that. Both the solid part and the pore space are represented by ellipsoidal or spherical 'grains' or 'pores' of various sizes and shapes. The strain of each grain, caused by external forces applied to the medium, is calculated in a self-consistent imbedding (SCI) approximation, which replaces the true surroundings of any given grain or pore by an isotropic medium defined by the effective moduli to be computed. The ellipsoidal nature of the shapes allows us to use Eshelby's theoretical treatment of a single ellipsoidal inclusion in an infinite homogeneous medium. Results are compared with the literature, and discrepancies are found with all published accounts of this problem. Deviations from the work of Wu, of Walsh, and of O'Connell and Budiansky are attributed to a substitution made by these authors which, though an identity for the exact quantities involved, is only approximate in the SCI calculation. This reduces the validity of the equations to first-order effects only. Differences with the results of Kuster and Toksoez are attributed to the fact that the computation of these authors is not self-consistent in the sense used here. A result seems to be a stiffening of the medium, as if the pores are held apart. For spherical grains and pores, their calculated moduli are those given by the Hashin-Shtrikman upper bounds. Our calculation reproduces, in the case of spheres, an early result of Budiansky. An additional feature of our work is that the algebra is simpler than in earlier work. We also incorporate into the theory the possibility that fluid-filled pores are interconnected.

  11. Volume of the steady-state space of financial flows in a monetary stock-flow-consistent model

    Science.gov (United States)

    Hazan, Aurélien

    2017-05-01

    We show that a steady-state stock-flow consistent macro-economic model can be represented as a Constraint Satisfaction Problem (CSP). The set of solutions is a polytope, whose volume depends on the constraints applied and reveals the potential fragility of the economic circuit, with no need to study the dynamics. Several exact and approximate methods to compute the volume, inspired by operations research and by the analysis of metabolic networks, are compared. We also introduce a random transaction matrix, and study the particular case of linear flows with respect to money stocks.
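
    As a sketch of the simplest kind of volume computation alluded to above, rejection sampling estimates the volume of a polytope {x : Ax ≤ b} from the hit rate inside a bounding box. The constraint matrix below is an illustrative 2-D example, not the paper's transaction-flow constraints.

```python
import random

# Monte Carlo (rejection sampling) volume estimate for {x : A x <= b}.
def polytope_volume(A, b, box, n=200_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in box]
        if all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b)):
            hits += 1
    box_vol = 1.0
    for lo, hi in box:
        box_vol *= hi - lo
    return box_vol * hits / n

# 2-D check: the simplex x >= 0, y >= 0, x + y <= 1 has area 0.5.
A = [[-1, 0], [0, -1], [1, 1]]
b = [0, 0, 1]
vol = polytope_volume(A, b, box=[(0, 1), (0, 1)])
```

    Rejection sampling degrades quickly with dimension, which is why the metabolic-network literature the paper draws on uses hit-and-run samplers and related approximate methods for high-dimensional polytopes.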

  12. Towards a consistent geochemical model for prediction of uranium(VI) removal from groundwater by ferrihydrite

    International Nuclear Information System (INIS)

    Gustafsson, Jon Petter; Daessman, Ellinor; Baeckstroem, Mattias

    2009-01-01

    Uranium(VI), which is often elevated in granitoidic groundwaters, is known to adsorb strongly to Fe (hydr)oxides under certain conditions. This process can be used in water treatment to remove U(VI). To develop a consistent geochemical model for U(VI) adsorption to ferrihydrite, batch experiments were performed and previous data sets reviewed to optimize a set of surface complexation constants using the 3-plane CD-MUSIC model. To consider the effect of dissolved organic matter (DOM) on U(VI) speciation, new parameters for the Stockholm Humic Model (SHM) were optimized using previously published data. The model, which was constrained by available extended X-ray absorption fine structure (EXAFS) spectroscopy evidence, fitted the data well when the surface sites were divided into low- and high-affinity binding sites. Application of the model concept to other published data sets revealed differences in the reactivity of different ferrihydrites towards U(VI). Use of the optimized SHM parameters for U(VI)-DOM complexation showed that this process is important for U(VI) speciation at low pH. However, in neutral to alkaline waters with substantial carbonate present, Ca-U-CO3 complexes predominate. The calibrated geochemical model was used to simulate U(VI) adsorption to ferrihydrite for a hypothetical groundwater in the presence of several competitive ions. The results showed that U(VI) adsorption was strong between pH 5 and 8. Even near the calcite saturation limit, where U(VI) adsorption was weakest according to the model, the adsorption percentage was predicted to be >80%. Hence U(VI) adsorption to ferrihydrite-containing sorbents may be used as a method to bring U(VI) concentrations down to acceptable levels in groundwater.

  13. Modeling Patient No-Show History and Predicting Future Outpatient Appointment Behavior in the Veterans Health Administration.

    Science.gov (United States)

    Goffman, Rachel M; Harris, Shannon L; May, Jerrold H; Milicevic, Aleksandra S; Monte, Robert J; Myaskovsky, Larissa; Rodriguez, Keri L; Tjader, Youxu C; Vargas, Dominic L

    2017-05-01

    Missed appointments reduce the efficiency of the health care system and negatively impact access to care for all patients. Identifying patients at risk for missing an appointment could help health care systems and providers better target interventions to reduce patient no-shows. Our aim was to develop and test a predictive model that identifies patients who have a high probability of missing their outpatient appointments. Demographic information, appointment characteristics, and attendance history were drawn from existing data sets from four Veterans Affairs health care facilities within six separate service areas. Past attendance behavior was modeled using an empirical Markov model based on up to 10 previous appointments. Using logistic regression, we developed 24 unique predictive models. We implemented the models and tested an intervention strategy using live reminder calls placed 24, 48, and 72 hours ahead of time. The pilot study targeted 1,754 high-risk patients, whose probability of missing an appointment was predicted to be at least 0.2. Our results indicate that three variables were consistently related to a patient's no-show probability in all 24 models: past attendance behavior, the age of the appointment, and having multiple appointments scheduled on that day. After the intervention was implemented, the no-show rate in the pilot group was reduced from the expected value of 35% to 12.16% (p value < 0.0001). The predictive model accurately identified patients who were more likely to miss their appointments. Applying the model in practice enables clinics to apply more intensive intervention measures to high-risk patients. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
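
    The empirical Markov idea above, estimating a patient's no-show risk from the transitions in their own attendance history, can be sketched as follows; the 0/1 history encoding and the sample history are assumptions for illustration, not the study's data.

```python
# Hedged sketch of an empirical Markov model of attendance: estimate a
# patient's no-show probability from transition frequencies in their own
# history. Encoding (1 = attended, 0 = no-show) is an illustrative assumption.

def transition_probs(history):
    """Return {prev_outcome: P(no-show | prev_outcome)} from a 0/1 history."""
    counts = {0: [0, 0], 1: [0, 0]}          # prev -> [n_noshow, n_total]
    for prev, nxt in zip(history, history[1:]):
        counts[prev][1] += 1
        if nxt == 0:
            counts[prev][0] += 1
    return {p: (c[0] / c[1] if c[1] else None) for p, c in counts.items()}

history = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]     # ten most recent appointments
probs = transition_probs(history)
# probs[1]: chance of a no-show right after an attended visit
# probs[0]: chance of a no-show right after a missed visit
```

In a full pipeline such transition estimates would enter a logistic regression alongside demographic and appointment features, as the study describes.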

  14. A self-consistent model for polycrystal deformation. Description and implementation

    International Nuclear Information System (INIS)

    Clausen, B.; Lorentzen, T.

    1997-04-01

    This report is a manual for the ANSI C implementation of an incremental elastic-plastic rate-insensitive self-consistent polycrystal deformation model based on (Hutchinson 1970). The model is furthermore described in the Ph.D. thesis by Clausen (Clausen 1997). The structure of the main program, sc_model.c, and its subroutines are described with flow-charts. Likewise the pre-processor, sc_ini.c, is described with a flowchart. Default values of all the input parameters are given in the pre-processor, but the user is able to select from other pre-defined values or enter new values. A sample calculation is made and the results are presented as plots and examples of the output files are shown. (au) 4 tabs., 28 ills., 17 refs

  15. Self-consistent modelling of resonant tunnelling structures

    DEFF Research Database (Denmark)

    Fiig, T.; Jauho, A.P.

    1992-01-01

    We report a comprehensive study of the effects of self-consistency on the I-V-characteristics of resonant tunnelling structures. The calculational method is based on a simultaneous solution of the effective-mass Schrödinger equation and the Poisson equation, and the current is evaluated … applied voltages and carrier densities at the emitter-barrier interface. We include the two-dimensional accumulation layer charge and the quantum well charge in our self-consistent scheme. We discuss the evaluation of the current contribution originating from the two-dimensional accumulation layer charges …, and our qualitative estimates seem consistent with recent experimental studies. The intrinsic bistability of resonant tunnelling diodes is analyzed within several different approximation schemes.
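
    The simultaneous Schrödinger-Poisson solution described above amounts to a fixed-point iteration; the sketch below shows the structure of such a loop in dimensionless units for a hard-wall well with a single occupied state. Grid size, coupling strength and mixing factor are chosen for illustration, not taken from the paper.

```python
import numpy as np

# Minimal self-consistent Schrodinger-Poisson loop in 1D (dimensionless units,
# hard-wall box, one occupied state). Structural sketch only; parameters are
# illustrative assumptions.
N = 100                        # interior grid points on (0, 1)
dx = 1.0 / (N + 1)
coupling = 10.0                # strength of the Hartree-like term (assumed)
beta = 0.5                     # linear mixing factor

# Second-difference operator L ~ -d^2/dx^2 with Dirichlet boundaries
L = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dx**2

V = np.zeros(N)                # start from the empty well
for it in range(200):
    # Schrodinger step: ground state of H = -d^2/dx^2 + V
    energies, states = np.linalg.eigh(L + np.diag(V))
    psi = states[:, 0]
    psi /= np.sqrt(np.sum(psi**2) * dx)      # normalize to unit probability
    density = psi**2

    # Poisson step: -phi'' = coupling * density (Dirichlet boundaries)
    phi = np.linalg.solve(L, coupling * density)

    V_new = (1.0 - beta) * V + beta * phi    # mix old and new potentials
    if np.max(np.abs(V_new - V)) < 1e-10:    # converged to a fixed point
        V = V_new
        break
    V = V_new

ground_energy = energies[0]    # repulsion raises it above the empty-well value pi^2
```

Linear mixing is the simplest stabilizer; production codes typically use Anderson or Broyden mixing for the same loop.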

  16. Consistency analysis of subspace identification methods based on a linear regression approach

    DEFF Research Database (Denmark)

    Knudsen, Torben

    2001-01-01

    In the literature, results can be found which claim consistency for the subspace method under certain quite weak assumptions. Unfortunately, a new result gives a counterexample showing inconsistency under these assumptions and then gives new, stricter sufficient assumptions which, however, do not include important model structures such as Box-Jenkins. Based on a simple least-squares approach, this paper shows the possible inconsistency under the weak assumptions and develops only slightly stricter assumptions sufficient for consistency and which include any model structure…

  17. Overlap function and Regge cut in a self-consistent multi-Regge model

    International Nuclear Information System (INIS)

    Banerjee, H.; Mallik, S.

    1977-01-01

    A self-consistent multi-Regge model with unit intercept for the input trajectory is presented. Violation of unitarity is avoided in the model by assuming the vanishing of the pomeron-pomeron-hadron vertex, as the mass of either pomeron tends to zero. The model yields an output Regge pole in the inelastic overlap function which for t>0 lies on the r.h.s. of the moving branch point in the complex J-plane, but for t<0 moves to unphysical sheets. The leading Regge-cut contribution to the forward diffraction amplitude can be negative, so that the total cross section predicted by the model attains a limiting value from below

  18. Overlap function and Regge cut in a self-consistent multi-Regge model

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, H [Saha Inst. of Nuclear Physics, Calcutta (India); Mallik, S [Bern Univ. (Switzerland). Inst. fuer Theoretische Physik

    1977-04-21

    A self-consistent multi-Regge model with unit intercept for the input trajectory is presented. Violation of unitarity is avoided in the model by assuming the vanishing of the pomeron-pomeron-hadron vertex, as the mass of either pomeron tends to zero. The model yields an output Regge pole in the inelastic overlap function which for t>0 lies on the r.h.s. of the moving branch point in the complex J-plane, but for t<0 moves to unphysical sheets. The leading Regge-cut contribution to the forward diffraction amplitude can be negative, so that the total cross section predicted by the model attains a limiting value from below.

  19. Consistency of color representation in smart phones.

    Science.gov (United States)

    Dain, Stephen J; Kwan, Benjamin; Wong, Leslie

    2016-03-01

    One of the barriers to the construction of consistent computer-based color vision tests has been the variety of monitors and computers. Consistency of color on a variety of screens has necessitated calibration of each setup individually. Color vision examination with a carefully controlled display has, as a consequence, been a laboratory rather than a clinical activity. Inevitably, smart phones have become a vehicle for color vision tests. They have the advantage that the processor and screen are associated, and there are fewer models of smart phones than permutations of computers and monitors. Colorimetric consistency of display within a model may be a given. It may extend across models from the same manufacturer but is unlikely to extend between manufacturers, especially where technologies vary. In this study, we measured the same set of colors in a JPEG file displayed on 11 samples of each of four models of smart phone (iPhone 4s, iPhone 5, Samsung Galaxy S3, and Samsung Galaxy S4) using a Photo Research PR-730. The iPhones have white-LED-backlit LCDs and the Samsungs have OLED displays. The color gamut varies between models; comparison with the sRGB space shows 61%, 85%, 117%, and 110% coverage, respectively. The iPhones differ markedly from the Samsungs and from one another. This indicates that model-specific color lookup tables will be needed. Within each model, the primaries were quite consistent (despite the age of the phones varying within each sample). The worst case in each model was the blue primary; the 95th percentile limits in the v' coordinate were ±0.008 for the iPhone 4s and ±0.004 for the other three models. The u'v' variation in white points was ±0.004 for the iPhone 4s and ±0.002 for the others, although the spread of white points between models was u'v' ±0.007. The differences are essentially the same for primaries at low luminance. The variation of colors intermediate between the primaries (e.g., red-purple, orange) mirrors the variation in the primaries. The variation in
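
    The u'v' coordinates and gamut comparisons quoted above follow from standard CIE formulas; the sketch below converts xy primaries to CIE 1976 u'v' and computes a gamut triangle's area, using the published sRGB primaries as a stand-in for measured phone primaries.

```python
# CIE 1976 u'v' chromaticity from CIE 1931 xy, plus gamut-triangle area via
# the shoelace formula. The sRGB primaries are standard reference values,
# used here to illustrate the kind of gamut comparison reported above.

def xy_to_uv(x, y):
    """Convert CIE 1931 xy to CIE 1976 u'v'."""
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

def triangle_area(points):
    """Shoelace area of a triangle given three (u', v') vertices."""
    (x1, y1), (x2, y2), (x3, y3) = points
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

srgb_xy = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]   # R, G, B primaries
srgb_uv = [xy_to_uv(*p) for p in srgb_xy]
area = triangle_area(srgb_uv)    # sRGB gamut area in the u'v' plane
```

Ratios of such areas, computed from measured primaries, give percentage gamut coverage figures like the 61-117% values reported in the abstract.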

  20. Comment on self-consistent model of black hole formation and evaporation

    International Nuclear Information System (INIS)

    Ho, Pei-Ming

    2015-01-01

    In an earlier work, Kawai et al. proposed a model of black-hole formation and evaporation, in which the geometry of a collapsing shell of null dust is studied, including consistently the back reaction of its Hawking radiation. In this note, we illuminate the implications of their work, focusing on the resolution of the information loss paradox and the problem of the firewall.

  1. Consistency of climate change projections from multiple global and regional model intercomparison projects

    Science.gov (United States)

    Fernández, J.; Frías, M. D.; Cabos, W. D.; Cofiño, A. S.; Domínguez, M.; Fita, L.; Gaertner, M. A.; García-Díez, M.; Gutiérrez, J. M.; Jiménez-Guerrero, P.; Liguori, G.; Montávez, J. P.; Romera, R.; Sánchez, E.

    2018-03-01

    We present an unprecedented ensemble of 196 future climate projections arising from different global and regional model intercomparison projects (MIPs): CMIP3, CMIP5, ENSEMBLES, ESCENA, EURO- and Med-CORDEX. This multi-MIP ensemble includes all regional climate model (RCM) projections publicly available to date, along with their driving global climate models (GCMs). We illustrate consistent and conflicting messages using continental Spain and the Balearic Islands as target region. The study considers near future (2021-2050) changes and their dependence on several uncertainty sources sampled in the multi-MIP ensemble: GCM, future scenario, internal variability, RCM, and spatial resolution. This initial work focuses on mean seasonal precipitation and temperature changes. The results show that the potential GCM-RCM combinations have been explored very unevenly, with favoured GCMs and large ensembles of a few RCMs that do not respond to any ensemble design. Therefore, the grand-ensemble is weighted towards a few models. The selection of a balanced, credible sub-ensemble is challenged in this study by illustrating several conflicting responses between the RCM and its driving GCM and among different RCMs. Sub-ensembles from different initiatives are dominated by different uncertainty sources, with the driving GCM being the main contributor to uncertainty in the grand-ensemble. For this analysis of near-future changes, the emission scenario does not contribute strong uncertainty. Despite the extra computational effort, for mean seasonal changes the increase in resolution does not lead to important changes.

  2. Show-Bix &

    DEFF Research Database (Denmark)

    2014-01-01

    The anti-reenactment 'Show-Bix &' consists of 5 slide projectors, a dial phone, quintophonic sound, and interactive elements. A responsive interface will enable the slide projectors to show copies of original slides from the Show-Bix piece ”March på Stedet”, 265 images in total. The copies are…

  3. A self-consistent model for polycrystal deformation. Description and implementation

    Energy Technology Data Exchange (ETDEWEB)

    Clausen, B.; Lorentzen, T.

    1997-04-01

    This report is a manual for the ANSI C implementation of an incremental elastic-plastic rate-insensitive self-consistent polycrystal deformation model based on (Hutchinson 1970). The model is furthermore described in the Ph.D. thesis by Clausen (Clausen 1997). The structure of the main program, sc_model.c, and its subroutines are described with flow-charts. Likewise the pre-processor, sc_ini.c, is described with a flowchart. Default values of all the input parameters are given in the pre-processor, but the user is able to select from other pre-defined values or enter new values. A sample calculation is made and the results are presented as plots and examples of the output files are shown. (au) 4 tabs., 28 ills., 17 refs.

  4. Macroscopic self-consistent model for external-reflection near-field microscopy

    International Nuclear Information System (INIS)

    Berntsen, S.; Bozhevolnaya, E.; Bozhevolnyi, S.

    1993-01-01

    The self-consistent macroscopic approach based on the Maxwell equations in two-dimensional geometry is developed to describe tip-surface interaction in external-reflection near-field microscopy. The problem is reduced to a single one-dimensional integral equation in terms of the Fourier components of the field at the plane of the sample surface. This equation is extended to take into account a pointlike scatterer placed on the sample surface. The power of light propagating toward the detector as the fiber mode is expressed by using the self-consistent field at the tip surface. Numerical results for trapezium-shaped tips are presented. The authors show that the sharper tip and the more confined fiber mode result in better resolution of the near-field microscope. Moreover, it is found that the tip-surface distance should not be too small so that better resolution is ensured. 14 refs., 10 figs

  5. Consistent robustness analysis (CRA) identifies biologically relevant properties of regulatory network models.

    Science.gov (United States)

    Saithong, Treenut; Painter, Kevin J; Millar, Andrew J

    2010-12-16

    A number of studies have previously demonstrated that "goodness of fit" is insufficient in reliably classifying the credibility of a biological model. Robustness and/or sensitivity analysis is commonly employed as a secondary method for evaluating the suitability of a particular model. The results of such analyses invariably depend on the particular parameter set tested, yet many parameter values for biological models are uncertain. Here, we propose a novel robustness analysis that aims to determine the "common robustness" of the model with multiple, biologically plausible parameter sets, rather than the local robustness for a particular parameter set. Our method is applied to two published models of the Arabidopsis circadian clock (the one-loop [1] and two-loop [2] models). The results reinforce current findings suggesting the greater reliability of the two-loop model and pinpoint the crucial role of TOC1 in the circadian network. Consistent Robustness Analysis can indicate both the relative plausibility of different models and also the critical components and processes controlling each model.
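
    The idea of scoring robustness over many plausible parameter sets, rather than around one nominal set, can be sketched with a toy model; the linear-oscillator period and the tolerance thresholds below are illustrative assumptions, not the circadian-clock models of the study.

```python
import numpy as np

# "Common robustness" sketch: score robustness for many plausible parameter
# sets and aggregate, instead of perturbing a single nominal set. The toy
# model (period of a linear oscillator, T = 2*pi*sqrt(m/k)) and the
# perturbation/tolerance levels are illustrative assumptions.
rng = np.random.default_rng(0)

def period(k, m):
    return 2.0 * np.pi * np.sqrt(m / k)

def robustness(k, m, n_perturb=500, rel=0.1, tol=0.05):
    """Fraction of +/-10% parameter perturbations that keep the period
    within 5% of its unperturbed value."""
    base = period(k, m)
    kk = k * (1.0 + rel * (2.0 * rng.random(n_perturb) - 1.0))
    mm = m * (1.0 + rel * (2.0 * rng.random(n_perturb) - 1.0))
    return np.mean(np.abs(period(kk, mm) - base) / base < tol)

# several plausible parameter sets, not one nominal set
sets = [(k, m) for k in (0.5, 1.0, 2.0) for m in (0.5, 1.0, 2.0)]
scores = [robustness(k, m) for k, m in sets]
common_robustness = float(np.mean(scores))   # aggregate over all sets
```

Comparing this aggregate score between competing models mirrors the paper's comparison of the one-loop and two-loop clock models.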

  6. Hydrologic consistency as a basis for assessing complexity of monthly water balance models for the continental United States

    Science.gov (United States)

    Martinez, Guillermo F.; Gupta, Hoshin V.

    2011-12-01

    Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
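
    Combining an information criterion with a consistency filter can be sketched as follows; the polynomial candidates, synthetic data, and the positivity check standing in for a hydrologic-consistency test are all illustrative assumptions.

```python
import numpy as np

# Sketch: select model complexity by AIC, but only among candidates passing a
# consistency test. Candidates are polynomial fits; the positivity check is a
# toy stand-in for a hydrologic-consistency constraint (e.g. no negative flow).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.1, x.size)   # true model is quadratic

best = None
for deg in range(1, 6):
    coef = np.polyfit(x, y, deg)
    pred = np.polyval(coef, x)
    if np.any(pred < 0.0):                 # fails the consistency test
        continue
    rss = float(np.sum((y - pred)**2))
    k = deg + 1                            # number of fitted parameters
    aic = x.size * np.log(rss / x.size) + 2 * k   # Gaussian-error AIC
    if best is None or aic < best[0]:
        best = (aic, deg)

best_degree = best[1]
```

The filter rejects structures before ranking, so a model can never win on fit alone while violating the consistency requirement.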

  7. Precommitted Investment Strategy versus Time-Consistent Investment Strategy for a Dual Risk Model

    Directory of Open Access Journals (Sweden)

    Lidong Zhang

    2014-01-01

    We are concerned with the optimal investment strategy for a dual risk model. We assume that the company can invest in a risk-free asset and a risky asset; short-selling and borrowing money are allowed. Due to the lack of the iterated-expectation property, the Bellman Optimization Principle does not hold, so we investigate the precommitted strategy and the time-consistent strategy, respectively. We take three steps to derive the precommitted investment strategy. Furthermore, the time-consistent investment strategy is also obtained by solving the extended Hamilton-Jacobi-Bellman equations. We compare the precommitted strategy with the time-consistent strategy and find that these different strategies have different advantages: the former maximizes the value function at the initial time t=0, while the latter is time-consistent over the whole time horizon. Finally, numerical analysis is presented for our results.

  8. Self-consistent modeling of radio-frequency plasma generation in stellarators

    Energy Technology Data Exchange (ETDEWEB)

    Moiseenko, V. E., E-mail: moiseenk@ipp.kharkov.ua; Stadnik, Yu. S., E-mail: stadnikys@kipt.kharkov.ua [National Academy of Sciences of Ukraine, National Science Center Kharkov Institute of Physics and Technology (Ukraine); Lysoivan, A. I., E-mail: a.lyssoivan@fz-juelich.de [Royal Military Academy, EURATOM-Belgian State Association, Laboratory for Plasma Physics (Belgium); Korovin, V. B. [National Academy of Sciences of Ukraine, National Science Center Kharkov Institute of Physics and Technology (Ukraine)

    2013-11-15

    A self-consistent model of radio-frequency (RF) plasma generation in stellarators in the ion cyclotron frequency range is described. The model includes equations for the particle and energy balance and boundary conditions for Maxwell’s equations. The equation of charged particle balance takes into account the influx of particles due to ionization and their loss via diffusion and convection. The equation of electron energy balance takes into account the RF heating power source, as well as energy losses due to the excitation and electron-impact ionization of gas atoms, energy exchange via Coulomb collisions, and plasma heat conduction. The deposited RF power is calculated by solving the boundary problem for Maxwell’s equations. When describing the dissipation of the energy of the RF field, collisional absorption and Landau damping are taken into account. At each time step, Maxwell’s equations are solved for the current profiles of the plasma density and plasma temperature. The calculations are performed for a cylindrical plasma. The plasma is assumed to be axisymmetric and homogeneous along the plasma column. The system of balance equations is solved using the Crank–Nicolson scheme. Maxwell’s equations are solved in a one-dimensional approximation by using the Fourier transformation along the azimuthal and longitudinal coordinates. Results of simulations of RF plasma generation in the Uragan-2M stellarator by using a frame antenna operating at frequencies lower than the ion cyclotron frequency are presented. The calculations show that the slow wave generated by the antenna is efficiently absorbed at the periphery of the plasma column, due to which only a small fraction of the input power reaches the confinement region. As a result, the temperature on the axis of the plasma column remains low, whereas at the periphery it is substantially higher. This leads to strong absorption of the RF field at the periphery via the Landau mechanism.
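
    The Crank-Nicolson scheme mentioned above can be illustrated on the simplest balance-type equation, 1D heat conduction with Dirichlet boundaries; the grid, time step and diffusivity below are assumptions for illustration, not the stellarator model's parameters.

```python
import numpy as np

# Crank-Nicolson step for u_t = D u_xx with Dirichlet boundaries:
#   (I + r/2 L) u_new = (I - r/2 L) u_old,
# where L is the second-difference operator and r = D*dt/dx^2.
N = 99
dx = 1.0 / (N + 1)
dt = 1e-4
D = 1.0
r = D * dt / dx**2

L = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
A = np.eye(N) + 0.5 * r * L        # implicit half-step
B = np.eye(N) - 0.5 * r * L        # explicit half-step

x = dx * np.arange(1, N + 1)
u = np.sin(np.pi * x)              # single Fourier mode: exact decay exp(-pi^2 D t)
for _ in range(100):               # advance to t = 0.01
    u = np.linalg.solve(A, B @ u)

peak = float(u.max())              # analytic value: exp(-pi^2 * 0.01)
```

Averaging the implicit and explicit halves makes the scheme second-order accurate in time and unconditionally stable, which is why it suits stiff balance equations.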

  9. Self-consistent Bulge/Disk/Halo Galaxy Dynamical Modeling Using Integral Field Kinematics

    Science.gov (United States)

    Taranu, D. S.; Obreschkow, D.; Dubinski, J. J.; Fogarty, L. M. R.; van de Sande, J.; Catinella, B.; Cortese, L.; Moffett, A.; Robotham, A. S. G.; Allen, J. T.; Bland-Hawthorn, J.; Bryant, J. J.; Colless, M.; Croom, S. M.; D'Eugenio, F.; Davies, R. L.; Drinkwater, M. J.; Driver, S. P.; Goodwin, M.; Konstantopoulos, I. S.; Lawrence, J. S.; López-Sánchez, Á. R.; Lorente, N. P. F.; Medling, A. M.; Mould, J. R.; Owers, M. S.; Power, C.; Richards, S. N.; Tonini, C.

    2017-11-01

    We introduce a method for modeling disk galaxies designed to take full advantage of data from integral field spectroscopy (IFS). The method fits equilibrium models to simultaneously reproduce the surface brightness, rotation, and velocity dispersion profiles of a galaxy. The models are fully self-consistent 6D distribution functions for a galaxy with a Sérsic profile stellar bulge, exponential disk, and parametric dark-matter halo, generated by an updated version of GalactICS. By creating realistic flux-weighted maps of the kinematic moments (flux, mean velocity, and dispersion), we simultaneously fit photometric and spectroscopic data using both maximum-likelihood and Bayesian (MCMC) techniques. We apply the method to a GAMA spiral galaxy (G79635) with kinematics from the SAMI Galaxy Survey and deep g- and r-band photometry from the VST-KiDS survey, comparing parameter constraints with those from traditional 2D bulge-disk decomposition. Our method returns broadly consistent results for shared parameters while constraining the mass-to-light ratios of stellar components and reproducing the H I-inferred circular velocity well beyond the limits of the SAMI data. Although the method is tailored for fitting integral field kinematic data, it can use other dynamical constraints like central fiber dispersions and H I circular velocities, and is well-suited for modeling galaxies with a combination of deep imaging and H I and/or optical spectra (resolved or otherwise). Our implementation (MagRite) is computationally efficient and can generate well-resolved models and kinematic maps in under a minute on modern processors.

  10. Validity test and its consistency in the construction of patient loyalty model

    Science.gov (United States)

    Yanuar, Ferra

    2016-04-01

    The main objective of the present study is to demonstrate the estimation of validity values and their consistency based on a structural equation model. The estimation method was then applied to empirical data for the construction of a patient loyalty model. In the hypothesized model, service quality, patient satisfaction and patient loyalty were determined simultaneously, and each factor was measured by several indicator variables. The respondents involved in this study were patients who had received healthcare at Puskesmas in Padang, West Sumatera. All 394 respondents with complete information were included in the analysis. This study found that each construct (service quality, patient satisfaction and patient loyalty) was valid, meaning that all hypothesized indicator variables were significant measures of their corresponding latent variable. Service quality is measured best by tangibles, patient satisfaction by satisfaction with service, and patient loyalty by good service quality. In the structural equation, this study found that patient loyalty was affected positively and directly by patient satisfaction, while service quality affected patient loyalty indirectly, with patient satisfaction as a mediator between the two latent variables. Both structural equations were also valid. This study also showed that the validity values obtained here were consistent, based on a simulation study using a bootstrap approach.
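
    The bootstrap consistency check described above can be sketched as follows; the one-indicator data-generating model is an assumption for illustration, with only the sample size (394) taken from the study.

```python
import numpy as np

# Bootstrap check that an estimated "validity" coefficient is stable across
# resamples, in the spirit of the simulation study above. The data-generating
# model (one indicator loading on a latent proxy) is an illustrative assumption.
rng = np.random.default_rng(42)
n = 394                                    # same sample size as the study
latent = rng.normal(size=n)                # latent "satisfaction" proxy
indicator = 0.8 * latent + rng.normal(scale=0.6, size=n)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

point = corr(latent, indicator)            # point estimate of the loading proxy
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)       # resample respondents with replacement
    boot.append(corr(latent[idx], indicator[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile bootstrap interval
```

A narrow interval containing the point estimate indicates the validity value is consistent across resamples, which is the criterion used in the study's simulation.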

  11. Dynamic consistency of leader/fringe models of exhaustible resource markets

    International Nuclear Information System (INIS)

    Pelot, R.P.

    1990-01-01

    A dynamic feedback pricing model is developed for a leader/fringe supply market of exhaustible resources. The discrete game optimization model includes marginal costs which may be quadratic functions of cumulative production, a linear demand curve and variable length periods. The multiperiod formulation is based on the nesting of later periods' Kuhn-Tucker conditions into earlier periods' optimizations. This procedure leads to dynamically consistent solutions where the leader's strategy is credible as he has no incentive to alter his original plan at some later stage. A static leader-fringe model may yield multiple local optima. This can result in the leader forcing the fringe to produce at their capacity constraint, which would otherwise be non-binding if it is greater than the fringe's unconstrained optimal production rate. Conditions are developed where the optimal solution occurs at a corner where constraints meet, of which limit pricing is a special case. The 2-period leader/fringe feedback model is compared to the computationally simpler open-loop model. Under certain conditions, the open-loop model yields the same result as the feedback model. A multiperiod feedback model of the world oil market with OPEC as price-leader and the remaining world oil suppliers comprising the fringe is compared with the open-loop solution. The optimal profits and prices are very similar, but large differences in production rates may occur. The exhaustion date predicted by the open-loop model may also differ from the feedback outcome. Some numerical tests result in non-contiguous production periods for a player or limit pricing phases. 85 refs., 60 figs., 30 tabs

  12. Consistent phase-change modeling for CO2-based heat mining operation

    DEFF Research Database (Denmark)

    Singh, Ashok Kumar; Veje, Christian

    2017-01-01

    The accuracy of mathematical modeling of phase-change phenomena is limited if a simple, less accurate equation of state completes the governing partial differential equation. However, fluid properties (such as density, dynamic viscosity and compressibility) and saturation state are calculated using a highly accurate, complex equation of state. This leads to unstable and inaccurate simulation, as the equation of state and governing partial differential equations are mutually inconsistent. In this study, the volume-translated Peng–Robinson equation of state was used to model the liquid–gas phase transition with more accuracy and consistency. Calculation of fluid properties and saturation state were based on the volume-translated Peng–Robinson equation of state and the results verified. The present model has been applied to a scenario to simulate a CO2-based heat mining process. In this paper…
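
    The Peng-Robinson equation of state underlying the model can be sketched for the vapor-phase compressibility factor of CO2; the cubic and the a, b, kappa expressions are the standard PR (1976) forms, while the volume translation used in the paper is omitted and the conditions are chosen for illustration.

```python
import numpy as np

# Peng-Robinson compressibility factor for CO2 near ambient conditions.
# Standard PR forms; the paper's volume translation is omitted in this sketch.
R = 8.314                                  # J/(mol K)
Tc, Pc, omega = 304.13, 7.377e6, 0.2239    # CO2 critical constants

def pr_Z(T, P):
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T)**2
    B = b * P / (R * T)
    # Z^3 - (1 - B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-10].real
    return float(real.max())               # largest real root: vapor-phase Z

Z = pr_Z(320.0, 1.0e5)                     # gaseous CO2 at 320 K, 1 bar
```

At multiple real roots the largest corresponds to the vapor phase and the smallest to the liquid, which is how saturation states are resolved in such models.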

  13. A thermodynamically consistent model for granular-fluid mixtures considering pore pressure evolution and hypoplastic behavior

    Science.gov (United States)

    Hess, Julian; Wang, Yongqi

    2016-11-01

    A new mixture model for granular-fluid flows, which is thermodynamically consistent with the entropy principle, is presented. The extra pore pressure, described by a pressure diffusion equation, and the hypoplastic material behavior, obeying a transport equation, are taken into account. The model is applied to granular-fluid flows, using a closing assumption in conjunction with the dynamic fluid pressure to describe the pressure-like residual unknowns, thereby overcoming previous uncertainties in the modeling process. Besides the thermodynamically consistent modeling, numerical simulations are carried out and demonstrate physically reasonable results, including simple shear flow, in order to investigate the vertical distribution of the physical quantities, and a mixture flow down an inclined plane by means of the depth-integrated model. The results presented give insight into the ability of the deduced model to capture the key characteristics of granular-fluid flows. We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) for this work within the Project Number WA 2610/3-1.

  14. Self-consistent Random Phase Approximation applied to a schematic model of the field theory

    International Nuclear Information System (INIS)

    Bertrand, Thierry

    1998-01-01

    The self-consistent Random Phase Approximation (SCRPA) is a method that allows, within mean-field theory, the inclusion of correlations in the ground and excited states. It has the advantage of not violating the Pauli principle, in contrast to RPA, which is based on the quasi-bosonic approximation; in addition, numerous applications in different domains of physics suggest a possible variational character, although the latter remains to be formally demonstrated. The first model studied with SCRPA is the anharmonic oscillator in the region where one of its symmetries is spontaneously broken. The ground-state energy is reproduced by SCRPA more accurately than by RPA, and with no violation of the Ritz variational principle, which is not the case for the latter approximation. SCRPA is equally successful for the ground-state energy of a model mixing bosons and fermions. At the transition point SCRPA corrects RPA drastically, but far from this region the correction becomes negligible, both methods being of similar precision. In the deformed region a spurious mode occurs in RPA due to the microscopic character of the model. SCRPA reproduces this mode very accurately, and it coincides with an excitation in the exact spectrum.

  15. A pedestal temperature model with self-consistent calculation of safety factor and magnetic shear

    International Nuclear Information System (INIS)

    Onjun, T; Siriburanon, T; Onjun, O

    2008-01-01

    A pedestal model based on theory-motivated models for the pedestal width and the pedestal pressure gradient is developed for the temperature at the top of the H-mode pedestal. A pedestal width model based on magnetic shear and flow shear stabilization is used in this study, where the pedestal pressure gradient is assumed to be limited by the first stability limit of the infinite-n ballooning mode instability. This pedestal model is implemented in the 1.5D BALDUR integrated predictive modeling code, where the safety factor and magnetic shear are solved self-consistently in both the core and pedestal regions. With this self-consistent approach for calculating the safety factor and magnetic shear, the effect of the bootstrap current can be correctly included in the pedestal model. The pedestal model is used to provide the boundary conditions in the simulations, and the Multi-mode core transport model is used to describe the core transport. This new integrated modeling procedure of the BALDUR code is used to predict the temperature and density profiles of 26 H-mode discharges. Simulations are carried out for 13 discharges in the Joint European Torus and 13 discharges in the DIII-D tokamak. The average root-mean-square deviation between experimental data and the predicted profiles of the temperature and the density, normalized by their central values, is found to be about 14%.

  16. Self-consistent model for pulsed direct-current N2 glow discharge

    International Nuclear Information System (INIS)

    Liu Chengsen

    2005-01-01

    A self-consistent analysis of a pulsed direct-current (DC) N2 glow discharge is presented. The model is based on a numerical solution of the continuity equations for electrons and ions coupled with Poisson's equation. The spatial-temporal variations of the ionic and electronic densities and the electric field are obtained. The electric field structure exhibits all the characteristic regions of a typical glow discharge (the cathode fall, the negative glow, and the positive column). Current-voltage characteristics of the discharge can be obtained from the model. The calculated current-voltage results, using a constant secondary electron emission coefficient at a gas pressure of 133.32 Pa, are in reasonable agreement with experiment. (authors)
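The record above couples electron/ion continuity equations to Poisson's equation. As a minimal illustration of the field-solver half of such a scheme, the sketch below solves the 1-D Poisson equation for the potential across a discharge gap with Dirichlet electrode boundary conditions; the grid size, applied voltage, and (here zero) charge density are hypothetical, and the continuity/transport part of the model is omitted entirely.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def solve_poisson_1d(rho, dx, phi_left, phi_right):
    """Solve d2(phi)/dx2 = -rho/eps0 on a uniform grid, Dirichlet BCs.

    rho: net charge density (C/m^3) at the interior nodes.
    Returns the potential at all nodes, boundaries included.
    """
    n = len(rho)                      # number of interior nodes
    A = np.zeros((n, n))
    b = -rho * dx**2 / EPS0
    for i in range(n):
        A[i, i] = -2.0                # standard 3-point Laplacian stencil
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    b[0] -= phi_left                  # move known boundary values to the RHS
    b[-1] -= phi_right
    phi_int = np.linalg.solve(A, b)
    return np.concatenate(([phi_left], phi_int, [phi_right]))

# Charge-free gap: the potential must drop linearly from cathode to anode.
phi = solve_poisson_1d(np.zeros(99), dx=1e-5, phi_left=-300.0, phi_right=0.0)
```

In a real discharge model this solve would be repeated every time step, with `rho` built from the updated ion and electron densities.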

  17. Self-consistent Dark Matter simplified models with an s-channel scalar mediator

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Nicole F.; Busoni, Giorgio; Sanderson, Isaac W., E-mail: n.bell@unimelb.edu.au, E-mail: giorgio.busoni@unimelb.edu.au, E-mail: isanderson@student.unimelb.edu.au [ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, The University of Melbourne, Victoria 3010 (Australia)

    2017-03-01

    We examine Simplified Models in which fermionic DM interacts with Standard Model (SM) fermions via the exchange of an s-channel scalar mediator. The single-mediator version of this model is not gauge invariant, and instead we must consider models with two scalar mediators which mix and interfere. The minimal gauge invariant scenario involves the mixing of a new singlet scalar with the Standard Model Higgs boson, and is tightly constrained. We construct two Higgs doublet model (2HDM) extensions of this scenario, where the singlet mixes with the 2nd Higgs doublet. Compared with the one doublet model, this provides greater freedom for the masses and mixing angle of the scalar mediators, and their coupling to SM fermions. We outline constraints on these models, and discuss Yukawa structures that allow enhanced couplings, yet keep potentially dangerous flavour violating processes under control. We examine the direct detection phenomenology of these models, accounting for interference of the scalar mediators, and interference of different quarks in the nucleus. Regions of parameter space consistent with direct detection measurements are determined.

  18. Self-consistent Dark Matter simplified models with an s-channel scalar mediator

    International Nuclear Information System (INIS)

    Bell, Nicole F.; Busoni, Giorgio; Sanderson, Isaac W.

    2017-01-01

    We examine Simplified Models in which fermionic DM interacts with Standard Model (SM) fermions via the exchange of an s-channel scalar mediator. The single-mediator version of this model is not gauge invariant, and instead we must consider models with two scalar mediators which mix and interfere. The minimal gauge invariant scenario involves the mixing of a new singlet scalar with the Standard Model Higgs boson, and is tightly constrained. We construct two Higgs doublet model (2HDM) extensions of this scenario, where the singlet mixes with the 2nd Higgs doublet. Compared with the one doublet model, this provides greater freedom for the masses and mixing angle of the scalar mediators, and their coupling to SM fermions. We outline constraints on these models, and discuss Yukawa structures that allow enhanced couplings, yet keep potentially dangerous flavour violating processes under control. We examine the direct detection phenomenology of these models, accounting for interference of the scalar mediators, and interference of different quarks in the nucleus. Regions of parameter space consistent with direct detection measurements are determined.

  19. Self-consistent Modeling of Elastic Anisotropy in Shale

    Science.gov (United States)

    Kanitpanyacharoen, W.; Wenk, H.; Matthies, S.; Vasin, R.

    2012-12-01

    Elastic anisotropy in clay-rich sedimentary rocks has increasingly received attention because of its significance for prospecting of petroleum deposits, as well as for seals in the context of nuclear waste and CO2 sequestration. The orientation of component minerals and pores/fractures is a critical factor that influences elastic anisotropy. In this study, we investigate lattice and shape preferred orientation (LPO and SPO) of three shales from the North Sea in the UK, the Qusaiba Formation in Saudi Arabia, and the Officer Basin in Australia (referred to as N1, Qu3, and L1905, respectively) to calculate elastic properties and compare them with experimental results. Synchrotron hard X-ray diffraction and microtomography experiments were performed to quantify LPO, weight proportions, and three-dimensional SPO of the constituent minerals and pores. Our preliminary results show that the degree of LPO and the total amount of clays are highest in Qu3 (3.3-6.5 m.r.d. and 74 vol%), moderately high in N1 (2.4-5.6 m.r.d. and 70 vol%), and lowest in L1905 (2.3-2.5 m.r.d. and 42 vol%). In addition, porosity is as low as 2% in Qu3, while it reaches 6% in L1905 and 8% in N1. Based on this information and the single crystal elastic properties of the mineral components, we apply a self-consistent averaging method to calculate macroscopic elastic properties and the corresponding seismic velocities for the different shales. The elastic model is then compared with acoustic velocities measured on the same samples. The P-wave velocities measured from Qu3 (4.1-5.3 km/s, 26.3% anisotropy) are faster than those obtained from L1905 (3.9-4.7 km/s, 18.6%) and N1 (3.6-4.3 km/s, 17.7%). By making adjustments for pore structure (aspect ratio) and the single crystal elastic properties of clay minerals, a good agreement between our calculation and the ultrasonic measurements is obtained.
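The abstract applies a self-consistent averaging method to single-crystal elastic constants. A full self-consistent scheme is considerably more involved; as a simpler stand-in for the homogenization idea, the sketch below computes the Voigt and Reuss bounds and their Hill average for a two-phase aggregate. The phase fractions and bulk moduli are hypothetical illustrative values, not data from the study.

```python
def voigt_reuss_hill(fractions, moduli):
    """Average an elastic modulus over mineral phases.

    Voigt (uniform strain, arithmetic mean) and Reuss (uniform stress,
    harmonic mean) bound the true effective modulus from above and
    below; their average (Hill) is a common single-value estimate.
    """
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    voigt = sum(f * m for f, m in zip(fractions, moduli))
    reuss = 1.0 / sum(f / m for f, m in zip(fractions, moduli))
    return voigt, reuss, 0.5 * (voigt + reuss)

# Hypothetical two-phase shale: 70% clay (K ~ 25 GPa), 30% quartz (K ~ 37 GPa).
voigt, reuss, hill = voigt_reuss_hill([0.7, 0.3], [25.0, 37.0])
```

A self-consistent scheme goes further by also accounting for grain orientation distributions and pore shape, which is what allows the study to model the observed anisotropy.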

  20. Height-Diameter Models for Mixed-Species Forests Consisting of Spruce, Fir, and Beech

    Directory of Open Access Journals (Sweden)

    Petráš Rudolf

    2014-06-01

    Full Text Available Height-diameter models define the general relationship between tree height and diameter at each growth stage of the forest stand. This paper presents generalized height-diameter models for mixed-species forest stands consisting of Norway spruce (Picea abies Karst.), Silver fir (Abies alba L.), and European beech (Fagus sylvatica L.) from Slovakia. The models were derived using two growth functions from the exponential family: the two-parameter Michailoff and the three-parameter Korf function. Generalized height-diameter functions must normally be constrained to pass through the mean stand diameter and height, and then the final growth model has only one or two parameters to be estimated. These “free” parameters are then expressed over the quadratic mean diameter, height and stand age, and the final mathematical form of the model is obtained. The study material included 50 long-term experimental plots located in the Western Carpathians. The plots were established 40-50 years ago and have been repeatedly measured at 5 to 10-year intervals. The dataset includes 7,950 height measurements of spruce, 21,661 of fir and 5,794 of beech. As many as 9 regression models were derived for each species. Although the goodness of fit of all models showed that they were generally well suited to the data, the best results were obtained for silver fir. The coefficient of determination ranged from 0.946 to 0.948, RMSE (m) was in the interval 1.94-1.97, and the bias (m) was -0.031 to 0.063. Parameter estimation was slightly less precise for spruce, and the estimates obtained for beech were the least precise. The coefficient of determination for beech was 0.854-0.860, RMSE (m) 2.67-2.72, and the bias (m) ranged from -0.144 to -0.056. The majority of models using Korf’s formula produced slightly better estimations than Michailoff’s, and it proved immaterial which estimated parameter was fixed and which parameters
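The two-parameter Michailoff curve named in the abstract is commonly written h = 1.3 + a·exp(−b/d), where d is the diameter at breast height and 1.3 m is the measurement height. As a hedged sketch of how such a curve is fitted, the code below recovers hypothetical parameters from synthetic height-diameter data; this is not the paper's dataset, nor its generalized form in which the parameters depend on stand variables.

```python
import numpy as np
from scipy.optimize import curve_fit

def michailoff(d, a, b):
    """Two-parameter Michailoff height curve: h = 1.3 + a * exp(-b / d)."""
    return 1.3 + a * np.exp(-b / d)

# Synthetic stand: diameters 8-50 cm, hypothetical parameters a=32, b=9,
# with 0.5 m Gaussian measurement noise on the heights.
rng = np.random.default_rng(0)
d = rng.uniform(8.0, 50.0, 300)
h_obs = michailoff(d, 32.0, 9.0) + rng.normal(0.0, 0.5, d.size)

(a_hat, b_hat), _ = curve_fit(michailoff, d, h_obs, p0=(30.0, 10.0))
```

The generalized models of the paper additionally constrain the curve through the quadratic mean diameter and mean height, leaving only the remaining "free" parameter(s) to estimate.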

  1. Consistency and discrepancy in the atmospheric response to Arctic sea-ice loss across climate models

    Science.gov (United States)

    Screen, James A.; Deser, Clara; Smith, Doug M.; Zhang, Xiangdong; Blackport, Russell; Kushner, Paul J.; Oudar, Thomas; McCusker, Kelly E.; Sun, Lantao

    2018-03-01

    The decline of Arctic sea ice is an integral part of anthropogenic climate change. Sea-ice loss is already having a significant impact on Arctic communities and ecosystems. Its role as a cause of climate changes outside of the Arctic has also attracted much scientific interest. Evidence is mounting that Arctic sea-ice loss can affect weather and climate throughout the Northern Hemisphere. The remote impacts of Arctic sea-ice loss can only be properly represented using models that simulate interactions among the ocean, sea ice, land and atmosphere. A synthesis of six such experiments with different models shows consistent hemispheric-wide atmospheric warming, strongest in the mid-to-high-latitude lower troposphere; an intensification of the wintertime Aleutian Low and, in most cases, the Siberian High; a weakening of the Icelandic Low; and a reduction in strength and southward shift of the mid-latitude westerly winds in winter. The atmospheric circulation response seems to be sensitive to the magnitude and geographic pattern of sea-ice loss and, in some cases, to the background climate state. However, it is unclear whether current-generation climate models respond too weakly to sea-ice change. We advocate for coordinated experiments that use different models and observational constraints to quantify the climate response to Arctic sea-ice loss.

  2. Precommitted Investment Strategy versus Time-Consistent Investment Strategy for a General Risk Model with Diffusion

    Directory of Open Access Journals (Sweden)

    Lidong Zhang

    2014-01-01

    Full Text Available We mainly study a general risk model and investigate the precommitted strategy and the time-consistent strategy under the mean-variance criterion, respectively. A Lagrange method is proposed to derive the precommitted investment strategy. Meanwhile, from the game-theoretic perspective, we find the time-consistent investment strategy by solving the extended Hamilton-Jacobi-Bellman equations. By comparing the precommitted strategy with the time-consistent strategy, we find that the company under the time-consistent strategy has to give up the better current utility in order to keep a consistent satisfaction over the whole time horizon. Furthermore, we theoretically and numerically examine the effect of the parameters on these two optimal strategies and the corresponding value functions.

  3. Commensurate comparisons of models with energy budget observations reveal consistent climate sensitivities

    Science.gov (United States)

    Armour, K.

    2017-12-01

    Global energy budget observations have been widely used to constrain the effective, or instantaneous climate sensitivity (ICS), producing median estimates around 2°C (Otto et al. 2013; Lewis & Curry 2015). A key question is whether the comprehensive climate models used to project future warming are consistent with these energy budget estimates of ICS. Yet, performing such comparisons has proven challenging. Within models, values of ICS robustly vary over time, as surface temperature patterns evolve with transient warming, and are generally smaller than the values of equilibrium climate sensitivity (ECS). Naively comparing values of ECS in CMIP5 models (median of about 3.4°C) to observation-based values of ICS has led to the suggestion that models are overly sensitive. This apparent discrepancy can partially be resolved by (i) comparing observation-based values of ICS to model values of ICS relevant for historical warming (Armour 2017; Proistosescu & Huybers 2017); (ii) taking into account the "efficacies" of non-CO2 radiative forcing agents (Marvel et al. 2015); and (iii) accounting for the sparseness of historical temperature observations and differences in sea-surface temperature and near-surface air temperature over the oceans (Richardson et al. 2016). Another potential source of discrepancy is a mismatch between observed and simulated surface temperature patterns over recent decades, due to either natural variability or model deficiencies in simulating historical warming patterns. The nature of the mismatch is such that simulated patterns can lead to more positive radiative feedbacks (higher ICS) relative to those engendered by observed patterns. The magnitude of this effect has not yet been addressed. Here we outline an approach to perform fully commensurate comparisons of climate models with global energy budget observations that take all of the above effects into account. We find that when apples-to-apples comparisons are made, values of ICS in models are

  4. Shingle 2.0: generalising self-consistent and automated domain discretisation for multi-scale geophysical models

    Science.gov (United States)

    Candy, Adam S.; Pietrzak, Julie D.

    2018-01-01

    The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations, to ensure a provenance in model data handling and initialisation, and a challenge to conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, considering it much like a numerical simulation model problem of its own. It introduces a generalised, extensible, self-documenting approach to carefully describe, and necessarily fully, the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level natural language based abstractions that enable full accounts of provenance, sharing, and distribution. Together with this description, a generalised consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust and repeatable, quick-to-draft, rigorously verified, and consistent with the source data throughout. This interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated to expected discrete characteristics and metrics. Library code, verification tests, and examples available in the repository at https://github.com/shingleproject/Shingle. Further details of the project presented at http://shingleproject.org.

  5. Development of a self-consistent lightning NOx simulation in large-scale 3-D models

    Science.gov (United States)

    Luo, Chao; Wang, Yuhang; Koshak, William J.

    2017-03-01

    We seek to develop a self-consistent representation of lightning NOx (LNOx) simulation in a large-scale 3-D model. Lightning flash rates are parameterized functions of meteorological variables related to convection. We examine a suite of such variables and find that convective available potential energy and cloud top height give the best estimates compared to July 2010 observations from ground-based lightning observation networks. Previous models often use lightning NOx vertical profiles derived from cloud-resolving model simulations. An implicit assumption of such an approach is that the postconvection lightning NOx vertical distribution is the same for all deep convection, regardless of geographic location, time of year, or meteorological environment. Detailed observations of the lightning channel segment altitude distribution derived from the NASA Lightning Nitrogen Oxides Model can be used to obtain the LNOx emission profile. Coupling such a profile with model convective transport leads to a more self-consistent lightning distribution compared to using prescribed postconvection profiles. We find that convective redistribution appears to be a more important factor than preconvection LNOx profile selection, providing another reason for linking the strength of convective transport to LNOx distribution.
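The record above parameterizes lightning flash rates as functions of convection-related variables such as convective available potential energy and cloud top height. One widely used member of that family (not necessarily the scheme adopted in the paper) is the Price and Rind continental cloud-top-height power law, sketched below purely to illustrate the "flash rate from a convective proxy" approach.

```python
def flash_rate_continental(cloud_top_km):
    """Price & Rind (1992) continental flash-rate parameterization.

    F = 3.44e-5 * H**4.9, with F in flashes per minute and H the
    convective cloud top height in km. Shown only as an example of a
    flash-rate parameterization; the paper evaluates several proxies.
    """
    return 3.44e-5 * cloud_top_km ** 4.9

# The steep H^4.9 dependence: deep convection dominates the flash count.
rates = {h: flash_rate_continental(h) for h in (8.0, 12.0, 16.0)}
```

In a large-scale model, a rate like this would be evaluated per convective column and then combined with an LNOx production per flash and a vertical emission profile, which is where the abstract's segment-altitude-based profile enters.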

  6. Consistent modelling of wind turbine noise propagation from source to receiver.

    Science.gov (United States)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick

    2017-11-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  7. Interstellar turbulence model : A self-consistent coupling of plasma and neutral fluids

    International Nuclear Information System (INIS)

    Shaikh, Dastgeer; Zank, Gary P.; Pogorelov, Nikolai

    2006-01-01

    We present results of a preliminary investigation of interstellar turbulence based on a self-consistent two-dimensional fluid simulation model. Our model describes a partially ionized magnetofluid interstellar medium (ISM) that couples a neutral hydrogen fluid to a plasma through charge exchange interactions and assumes that the ISM turbulent correlation scales are much bigger than the shock characteristic length-scales, but smaller than the charge exchange mean free path length-scales. The shocks have no influence on the ISM turbulent fluctuations. We find that nonlinear interactions in coupled plasma-neutral ISM turbulence are influenced substantially by charge exchange processes

  8. Self-consistent modelling of ICRH

    International Nuclear Information System (INIS)

    Hellsten, T.; Hedin, J.; Johnson, T.; Laxaaback, M.; Tennfors, E.

    2001-01-01

    The performance of ICRH is often sensitive to the shape of the high energy part of the distribution functions of the resonating species. This requires self-consistent calculations of the distribution functions and the wave-field. In addition to the wave-particle interactions and Coulomb collisions the effects of the finite orbit width and the RF-induced spatial transport are found to be important. The inward drift dominates in general even for a symmetric toroidal wave spectrum in the centre of the plasma. An inward drift does not necessarily produce a more peaked heating profile. On the contrary, for low concentrations of hydrogen minority in deuterium plasmas it can even give rise to broader profiles. (author)

  9. Self-consistent electrodynamic scattering in the symmetric Bragg case

    International Nuclear Information System (INIS)

    Campos, H.S.

    1988-01-01

    We have analyzed the symmetric Bragg case, introducing a model of self-consistent scattering for two elliptically polarized beams. The crystal is taken as a set of mathematical planes, each of them defined by a surface density of dipoles. We have treated the mesofield and the epifield differently from Ewald's theory, and we assumed a plane of dipoles and the associated fields as a self-consistent scattering unit. The exact analytical treatment, when applied to any two neighbouring planes, results in a general and self-consistent Bragg equation in terms of the amplitude and phase variations. The generalized solution for the set of N planes was obtained after introducing an absorption factor in the incident radiation, in two ways: (i) analytically, through a rule of field similarity, which states that incidence occurs on both faces of all the crystal planes, and also through a matricial development with the Chebyshev polynomials; (ii) numerically, by calculating iteratively the reflectivity, the reflection phase, the transmissivity, the transmission phase and the energy. The results are shown through reflection and transmission curves, which are characteristic of both the kinematical and the dynamical theories. The conservation of energy resulting from Ewald's self-consistency principle is used. In the absorption case, the results show that absorption is not the only cause of the asymmetric form of the reflection curves. The model contains the basic elements for a unified, microscopic, self-consistent, vectorial and exact formulation for interpreting X-ray diffraction in perfect crystals. (author)

  10. Self-consistent finite-temperature model of atom-laser coherence properties

    International Nuclear Information System (INIS)

    Fergusson, J.R.; Geddes, A.J.; Hutchinson, D.A.W.

    2005-01-01

    We present a mean-field model of a continuous-wave atom laser with Raman output coupling. The noncondensate is pumped at a fixed input rate which, in turn, pumps the condensate through a two-body scattering process obeying the Fermi golden rule. The gas is then coupled out by a Gaussian beam from the system, and the temperature and particle number are self-consistently evaluated against equilibrium constraints. We observe the dependence of the second-order coherence of the output upon the width of the output-coupling beam, and note that even in the presence of a highly coherent trapped gas, perfect coherence of the output matter wave is not guaranteed

  11. Consistent initial conditions for the Saint-Venant equations in river network modeling

    Directory of Open Access Journals (Sweden)

    C.-W. Yu

    2017-09-01

    Full Text Available Initial conditions for flows and depths (cross-sectional areas throughout a river network are required for any time-marching (unsteady solution of the one-dimensional (1-D hydrodynamic Saint-Venant equations. For a river network modeled with several Strahler orders of tributaries, comprehensive and consistent synoptic data are typically lacking and synthetic starting conditions are needed. Because of underlying nonlinearity, poorly defined or inconsistent initial conditions can lead to convergence problems and long spin-up times in an unsteady solver. Two new approaches are defined and demonstrated herein for computing flows and cross-sectional areas (or depths. These methods can produce an initial condition data set that is consistent with modeled landscape runoff and river geometry boundary conditions at the initial time. These new methods are (1 the pseudo time-marching method (PTM that iterates toward a steady-state initial condition using an unsteady Saint-Venant solver and (2 the steady-solution method (SSM that makes use of graph theory for initial flow rates and solution of a steady-state 1-D momentum equation for the channel cross-sectional areas. The PTM is shown to be adequate for short river reaches but is significantly slower and has occasional non-convergent behavior for large river networks. The SSM approach is shown to provide a rapid solution of consistent initial conditions for both small and large networks, albeit with the requirement that additional code must be written rather than applying an existing unsteady Saint-Venant solver.
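The steady-solution method described above uses graph theory to obtain consistent initial flow rates. A minimal sketch of that idea, under the simplifying assumption that the steady flow leaving each reach is just its local runoff plus all accumulated upstream flow, is a topological-order traversal of the network; the three-reach network and inflow values below are hypothetical.

```python
from collections import defaultdict, deque

def steady_flows(downstream, lateral_inflow):
    """Accumulate steady-state flow down a river network.

    downstream: dict mapping each reach to its downstream reach
        (None at the outlet).
    lateral_inflow: dict mapping each reach to the local runoff
        entering it (m^3/s).
    Returns the total flow leaving each reach, visiting reaches in
    topological order so upstream flow is known before it is passed on.
    """
    indeg = defaultdict(int)
    for r, d in downstream.items():
        if d is not None:
            indeg[d] += 1
    flow = dict(lateral_inflow)
    queue = deque(r for r in downstream if indeg[r] == 0)
    while queue:
        r = queue.popleft()
        d = downstream[r]
        if d is not None:
            flow[d] += flow[r]          # pass accumulated flow downstream
            indeg[d] -= 1
            if indeg[d] == 0:
                queue.append(d)
    return flow

# Hypothetical network: headwater reaches A and B join into outlet reach C.
net = {"A": "C", "B": "C", "C": None}
q = steady_flows(net, {"A": 1.0, "B": 2.0, "C": 0.5})
```

In the SSM proper, these flow rates would then feed a steady 1-D momentum solve for the cross-sectional areas; the graph step only fixes the discharge field.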

  12. Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models

    KAUST Repository

    Vignal, Philippe

    2016-02-11

    Phase-field models are emerging as a promising strategy to simulate interfacial phenomena. Rather than tracking interfaces explicitly as done in sharp interface descriptions, these models use a diffuse order parameter to monitor interfaces implicitly. This implicit description, as well as solid physical and mathematical footings, allows phase-field models to overcome problems encountered by their predecessors. Nonetheless, the method has significant drawbacks. The phase-field framework relies on the solution of high-order, nonlinear partial differential equations. Solving these equations entails a considerable computational cost, so finding efficient strategies to handle them is important. Also, standard discretization strategies can often lead to incorrect solutions. This happens because, for numerical solutions to phase-field equations to be valid, physical conditions such as mass conservation and free energy monotonicity need to be guaranteed. In this work, we focus on the development of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation. This model was put forward to model microstructure evolution. The algorithm developed is conservative, guarantees energy stability, and is second-order accurate in time. The second part of the thesis presents two numerical schemes that generalize the literature regarding energy-stable methods for conserved and non-conserved phase-field models. The time discretization strategies can conserve mass if needed, are energy-stable, and second-order accurate in time. We also develop an adaptive time-stepping strategy, which can be applied to any second-order accurate scheme. This time-adaptive strategy relies on a backward approximation to give an accurate error estimator. The spatial discretization, in both parts, relies on a mixed finite element formulation and isogeometric analysis. The codes are
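The thesis abstract describes an adaptive time-stepping strategy driven by an error estimator. As a generic illustration of error-based step control (not the backward-approximation estimator of the thesis, and applied to a scalar ODE rather than a phase-field PDE), the sketch below compares a first-order and a second-order step to grow or shrink dt.

```python
def adaptive_integrate(f, y0, t0, t1, dt0, tol):
    """Integrate dy/dt = f(t, y) with a simple adaptive step controller.

    The local error is estimated as the difference between a
    second-order (Heun) step and a first-order (Euler) step; dt is
    rescaled to keep that estimate near tol. Steps whose estimate
    exceeds tol are rejected and retried with a smaller dt.
    """
    t, y, dt = t0, y0, dt0
    while t < t1:
        dt = min(dt, t1 - t)                   # do not overshoot t1
        k1 = f(t, y)
        k2 = f(t + dt, y + dt * k1)
        y_heun = y + 0.5 * dt * (k1 + k2)
        err = abs(y_heun - (y + dt * k1))      # Euler vs Heun difference
        if err <= tol or dt < 1e-12:
            t, y = t + dt, y_heun              # accept the step
        dt *= 0.9 * (tol / max(err, 1e-16)) ** 0.5
    return y

# dy/dt = -y with y(0) = 1; the exact solution at t = 1 is exp(-1).
y_end = adaptive_integrate(lambda t, y: -y, 1.0, 0.0, 1.0, 0.1, 1e-6)
```

For a second-order-in-time phase-field scheme the same control loop applies; only the error estimator and the (much more expensive) step evaluation change.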

  13. Consistency of Trend Break Point Estimator with Underspecified Break Number

    Directory of Open Access Journals (Sweden)

    Jingjing Yang

    2017-01-01

    Full Text Available This paper discusses the consistency of trend break point estimators when the number of breaks is underspecified. The consistency of break point estimators in a simple location model with level shifts has been well documented by researchers under various settings, including extensions such as allowing a time trend in the model. Despite the consistency of break point estimators of level shifts, there are few papers on the consistency of trend shift break point estimators in the presence of an underspecified break number. The simulation study and asymptotic analysis in this paper show that the trend shift break point estimator does not converge to the true break points when the break number is underspecified. In the case of two trend shifts, the inconsistency problem worsens if the magnitudes of the breaks are similar and the breaks are either both positive or both negative. The limiting distribution for the trend break point estimator is developed and closely approximates the finite sample performance.
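The break point estimators discussed above are least-squares estimators. As a minimal sketch of the underlying idea, the code below estimates a single level-shift break date by minimizing the total sum of squared residuals over all candidate dates; this is the simple location-model case the abstract cites as well understood, not the trend-shift case it analyzes.

```python
def estimate_break(y):
    """Least-squares estimate of a single level-shift break date.

    For each candidate break k, fit separate means to y[:k] and y[k:]
    and keep the k that minimizes the total sum of squared residuals.
    """
    n = len(y)
    best_k, best_ssr = None, float("inf")
    for k in range(1, n):
        m1 = sum(y[:k]) / k
        m2 = sum(y[k:]) / (n - k)
        ssr = (sum((v - m1) ** 2 for v in y[:k])
               + sum((v - m2) ** 2 for v in y[k:]))
        if ssr < best_ssr:
            best_k, best_ssr = k, ssr
    return best_k

# Noise-free series of length 100 with a unit level shift at t = 60.
series = [0.0] * 60 + [1.0] * 40
k_hat = estimate_break(series)
```

The paper's point is that the analogous estimator for trend shifts loses this consistency when the assumed number of breaks is smaller than the true number.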

  14. Implicit implementation and consistent tangent modulus of a viscoplastic model for polymers

    OpenAIRE

    ACHOUR, Nadia; CHATZIGEORGIOU, George; MERAGHNI, Fodil; CHEMISKY, Yves; FITOUSSI, Joseph

    2015-01-01

    In this work, the phenomenological viscoplastic DSGZ model (Duan et al., 2001 [13]), developed for glassy or semi-crystalline polymers, is numerically implemented in a three-dimensional framework, following an implicit formulation. The computational methodology is based on the radial return mapping algorithm. This implicit formulation leads to the definition of the consistent tangent modulus which permits the implementation in incremental micromechanical scale transition analysis. The extende...

  15. Self-consistent theory of hadron-nucleus scattering. Application to pion physics

    International Nuclear Information System (INIS)

    Johnson, M.B.

    1981-01-01

    The first part of this set of two seminars will consist of a review of several of the important accomplishments made in the last few years in the field of pion-nucleus physics. Next I discuss some questions raised by these accomplishments and show that, for some very natural reasons, the commonly employed theoretical methods cannot be applied to answer these questions. This situation leads to the idea of self-consistency, which is first explained in a general context. The remainder of the seminars is devoted to illustrating the idea within a simple multiple-scattering model for the case of pion scattering. An evaluation of the effectiveness of the self-consistency requirement to produce a solution to the model is made, and a few of the questions raised by recent accomplishments in the field of pion physics are addressed in the model. Finally, the results of the model calculation are compared to experimental data and the implications of the results are discussed. (orig./HSI)

  16. Quasiparticles and thermodynamical consistency

    International Nuclear Information System (INIS)

    Shanenko, A.A.; Biro, T.S.; Toneev, V.D.

    2003-01-01

    A brief and simple introduction into the problem of the thermodynamical consistency is given. The thermodynamical consistency relations, which should be taken into account under constructing a quasiparticle model, are found in a general manner from the finite-temperature extension of the Hellmann-Feynman theorem. Restrictions following from these relations are illustrated by simple physical examples. (author)

  17. Consistency relation for cosmic magnetic fields

    DEFF Research Database (Denmark)

    Jain, R. K.; Sloth, M. S.

    2012-01-01

    If cosmic magnetic fields are indeed produced during inflation, they are likely to be correlated with the scalar metric perturbations that are responsible for the cosmic microwave background anisotropies and large scale structure. Within an archetypical model of inflationary magnetogenesis, we show that there exists a new simple consistency relation for the non-Gaussian cross correlation function of the scalar metric perturbation with two powers of the magnetic field in the squeezed limit where the momentum of the metric perturbation vanishes. We emphasize that such a consistency relation turns out to be extremely useful to test some recent calculations in the literature. Apart from primordial non-Gaussianity induced by the curvature perturbations, such a cross correlation might provide a new observational probe of inflation and can in principle reveal the primordial nature of cosmic magnetic fields.

  18. Self-consistent model of the low-latitude boundary layer

    International Nuclear Information System (INIS)

    Phan, T.D.; Sonnerup, B.U.Oe.; Lotko, W.

    1989-01-01

    A simple two-dimensional, steady state, viscous model of the dawnside and duskside low-latitude boundary layer (LLBL) has been developed. It incorporates coupling to the ionosphere via field-aligned currents and associated field-aligned potential drops, governed by a simple conductance law, and it describes boundary layer currents, magnetic fields, and plasma flow in a self-consistent manner. The magnetic field induced by these currents leads to two effects: (1) a diamagnetic depression of the magnetic field in the equatorial region and (2) bending of the field lines into parabolas in the xz plane with their vertices in the equatorial plane, at z = 0, and pointing in the flow direction, i.e., tailward. Both effects are strongest at the magnetopause edge of the boundary layer and vanish at the magnetospheric edge. The diamagnetic depression corresponds to an excess of plasma pressure in the equatorial boundary layer near the magnetopause. The boundary layer structure is governed by a fourth-order, nonlinear, ordinary differential equation in which one nondimensional parameter, the Hartmann number M, appears. A second parameter, introduced via the boundary conditions, is a nondimensional flow velocity v0* at the magnetopause. Numerical results from the model are presented and the possible use of observations to determine the model parameters is discussed. The main new contribution of the study is to provide a better description of the field and plasma configuration in the LLBL itself and to clarify in quantitative terms the circumstances in which induced magnetic fields become important.

  19. Physically-consistent wall boundary conditions for the k-ω turbulence model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Dixen, Martin; Jacobsen, Niels Gjøl

    2010-01-01

    A model solving the Reynolds-averaged Navier–Stokes equations, coupled with k-ω turbulence closure, is used to simulate steady channel flow on both hydraulically smooth and rough beds. Novel experimental data are used as model validation, with k measured directly from all three components of the fluctuating velocity signal. Both conventional k = 0 and dk/dy = 0 wall boundary conditions are considered. Results indicate that either condition can provide accurate solutions for the bulk of the flow over both smooth and rough beds. It is argued that the zero-gradient condition is more consistent with the near-wall physics, however, as it allows direct integration through a viscous sublayer near smooth walls, while avoiding a viscous sublayer near rough walls. This is in contrast to the conventional k = 0 wall boundary condition, which forces resolution of a viscous sublayer in all circumstances.

  20. Using nudging to improve global-regional dynamic consistency in limited-area climate modeling: What should we nudge?

    Science.gov (United States)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2015-03-01

    Regional climate modelling sometimes requires that the regional model be nudged towards the large-scale driving data to avoid the development of inconsistencies between them. These inconsistencies are known to produce large surface temperature and rainfall artefacts. Therefore, it is essential to keep the synoptic circulation within the simulation domain consistent with the synoptic circulation at the domain boundaries. Nudging techniques, initially developed for data assimilation purposes, are increasingly used in regional climate modeling and offer a workaround to this issue. In this context, several questions on the "optimal" use of nudging are still open. In this study we focus on a specific question: which variables should be nudged in order to keep the regional model as consistent as possible with the driving fields? For that, a "Big Brother Experiment", where a reference atmospheric state is known, is conducted using the weather research and forecasting (WRF) model over the Euro-Mediterranean region. A set of 22 3-month simulations is performed with different sets of nudged variables and nudging options (no nudging, indiscriminate nudging, spectral nudging) for summer and winter. The results show that nudging clearly improves the model capacity to reproduce the reference fields. However, the skill scores depend on the set of variables used to nudge the regional climate simulations. The tropospheric horizontal wind is by far the key variable to nudge in order to correctly simulate surface temperature, surface wind, and rainfall. To a lesser extent, nudging tropospheric temperature also contributes to significantly improve the simulations. Indeed, nudging tropospheric wind or temperature directly impacts the simulation of the tropospheric geopotential height and thus the synoptic scale atmospheric circulation. Nudging moisture improves the precipitation, but the impact on the other fields (wind and temperature) is not significant.
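Nudging (Newtonian relaxation) amounts to adding a relaxation term (X_ref − X)/τ to the model tendency, pulling the simulated state towards the driving data on a timescale τ. A minimal sketch with an invented, deliberately biased one-variable "model" (all values illustrative, not WRF settings):

```python
import numpy as np

def integrate(f, x0, t_end, dt, x_ref=None, tau=None):
    """Forward-Euler integration, optionally with a Newtonian relaxation
    (nudging) term pulling the state towards a reference trajectory."""
    n = round(t_end / dt)
    x = x0
    xs = [x]
    for k in range(n):
        t = k * dt
        dxdt = f(x, t)
        if x_ref is not None:
            dxdt += (x_ref(t) - x) / tau  # nudging (relaxation) term
        x = x + dt * dxdt
        xs.append(x)
    return np.array(xs)

# Toy "regional model" with a constant bias: it drifts away from the driving state.
f = lambda x, t: 0.5      # biased tendency
x_ref = lambda t: 0.0     # large-scale driving field (constant here)

free = integrate(f, 0.0, 10.0, 0.01)                          # drifts to 5.0
nudged = integrate(f, 0.0, 10.0, 0.01, x_ref=x_ref, tau=0.5)  # stays near 0.25
print(free[-1], nudged[-1])
```

The nudged run settles where the bias balances the relaxation (0.5 = x/τ, i.e. x = 0.25), illustrating how the nudging strength τ trades off fidelity to the driving data against the model's own dynamics.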

  1. Bitcoin Meets Strong Consistency

    OpenAIRE

    Decker, Christian; Seidel, Jochen; Wattenhofer, Roger

    2014-01-01

    The Bitcoin system only provides eventual consistency. For everyday life, the time to confirm a Bitcoin transaction is prohibitively slow. In this paper we propose a new system, built on the Bitcoin blockchain, which enables strong consistency. Our system, PeerCensus, acts as a certification authority, manages peer identities in a peer-to-peer network, and ultimately enhances Bitcoin and similar systems with strong consistency. Our extensive analysis shows that PeerCensus is in a secure state...

  2. Self-consistent nonlinear transmission line model of standing wave effects in a capacitive discharge

    International Nuclear Information System (INIS)

    Chabert, P.; Raimbault, J.L.; Rax, J.M.; Lieberman, M.A.

    2004-01-01

    It has been shown previously [Lieberman et al., Plasma Sources Sci. Technol. 11, 283 (2002)], using a non-self-consistent model based on solutions of Maxwell's equations, that several electromagnetic effects may compromise capacitive discharge uniformity. Among these, the standing wave effect dominates at low and moderate electron densities when the driving frequency is significantly greater than the usual 13.56 MHz. In the present work, two different global discharge models have been coupled to a transmission line model and used to obtain the self-consistent characteristics of the standing wave effect. An analytical solution for the wavelength λ was derived for the lossless case and compared to the numerical results. For typical plasma etching conditions (pressure 10-100 mTorr), a good approximation of the wavelength is λ/λ0 ≅ 40 V0^(1/10) l^(-1/2) f^(-2/5), where λ0 is the wavelength in vacuum, V0 is the rf voltage magnitude in volts at the discharge center, l is the electrode spacing in meters, and f is the driving frequency in hertz.
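Plugging illustrative numbers into the quoted fit shows how strongly the in-plasma wavelength is shortened relative to vacuum (the operating point below is invented for illustration, not taken from the paper):

```python
import math

def plasma_wavelength_ratio(V0, l, f):
    """Approximate lambda/lambda_0 for the standing-wave effect in a
    capacitive discharge, per the fit quoted in the abstract:
    40 * V0^(1/10) * l^(-1/2) * f^(-2/5), with V0 in volts, l in metres,
    f in hertz."""
    return 40.0 * V0**0.1 * l**-0.5 * f**-0.4

# Illustrative operating point: 100 V, 3 cm electrode spacing, 60 MHz drive
ratio = plasma_wavelength_ratio(100.0, 0.03, 60e6)
lambda0 = 3.0e8 / 60e6            # vacuum wavelength in metres
print(ratio, ratio * lambda0)     # ratio well below 1: strong shortening
```

At these values the in-plasma wavelength is only a fraction of the 5 m vacuum wavelength, making it comparable to typical electrode dimensions, which is why the standing wave effect matters at VHF frequencies.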

  3. Consistent modelling of wind turbine noise propagation from source to receiver

    DEFF Research Database (Denmark)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong

    2017-01-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to the receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling, and the corresponding flow fields are used to simulate the sound propagation of a 5 MW wind turbine. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  4. Is ecological personality always consistent with low-carbon behavioral intention of urban residents?

    International Nuclear Information System (INIS)

    Wei, Jia; Chen, Hong; Long, Ruyin

    2016-01-01

    In the field of low-carbon economics, researchers have become interested in residential consumption as a potential means for reducing carbon emissions. By analyzing and expanding the fundamental concept of personality, a type of personality, namely ecological personality (EP), was defined and a structural model of EP was constructed based on a five-factor model. The study surveyed 890 urban residents to examine the relationship between EP and low-carbon behavioral intention (LCBI). Ecological personality is a five-dimensional concept comprising eco-neuroticism, eco-agreeableness, eco-openness, eco-extraversion, and eco-conscientiousness. Ecological personality traits were positively correlated with the LCBI. However, a quadrifid graph model showed that the EP is not always consistent with LCBI, and respondents fell into two groups: one group comprised ecological residents with consistent traits (positive EP and high LCBI) and non-ecological residents with consistent traits (negative EP and low LCBI), and their EP was consistent with LCBI; the other group comprised ecological residents with gap traits (positive EP and low LCBI) and non-ecological residents with gap traits (negative EP and high LCBI), and neither showed any consistency between personality and intentions. A policy to guide the conversion of different groups into ecological residents with consistent traits is discussed. - Highlights: • The structural model of ecological personality was constructed. • The relationship between personality and behavioral intention was examined. • Ecological personality and low-carbon behavioral intention do not always match up. • A policy urging residents to be ecological was discussed.

  5. Self-consistent electronic structure of a model stage-1 graphite acceptor intercalate

    International Nuclear Information System (INIS)

    Campagnoli, G.; Tosatti, E.

    1981-04-01

    A simple but self-consistent LCAO scheme is used to study the π-electronic structure of an idealized stage-1 ordered graphite acceptor intercalate, modeled approximately on C8AsF5. The resulting non-uniform charge population within the carbon plane, band structure, optical and energy loss properties are discussed and compared with available spectroscopic evidence. The calculated total energy is used to estimate migration energy barriers, and the intercalate vibration mode frequency. (author)

  6. A self-consistent model of the three-phase interstellar medium in disk galaxies

    International Nuclear Information System (INIS)

    Wang, Z.

    1989-01-01

    In the present study the author analyzes a number of physical processes concerning velocity and spatial distributions, ionization structure, pressure variation, mass and energy balance, and equation of state of the diffuse interstellar gas in a three phase model. He also considers the effects of this model on the formation of molecular clouds and the evolution of disk galaxies. The primary purpose is to incorporate self-consistently the interstellar conditions in a typical late-type galaxy, and to relate these to various observed large-scale phenomena. He models idealized situations both analytically and numerically, and compares the results with observational data of the Milky Way Galaxy and other nearby disk galaxies. Several main conclusions of this study are: (1) the highly ionized gas found in the lower Galactic halo is shown to be consistent with a model in which the gas is photoionized by the diffuse ultraviolet radiation; (2) in a quasi-static and self-regulatory configuration, the photoelectric effects of interstellar grains are primarily responsible for heating the cold (T ≅ 100K) gas; the warm (T ≅ 8,000K) gas may be heated by supernova remnants and other mechanisms; (3) the large-scale atomic and molecular gas distributions in a sample of 15 disk galaxies can be well explained if molecular cloud formation and star formation follow a modified Schmidt Law; a scaling law for the radial gas profiles is proposed based on this model, and it is shown to be applicable to the nearby late-type galaxies where radio mapping data is available; for disk galaxies of earlier type, the effect of their massive central bulges may have to be taken into account

  7. Developing consistent pronunciation models for phonemic variants

    CSIR Research Space (South Africa)

    Davel, M

    2006-09-01

    Full Text Available Pronunciation lexicons often contain pronunciation variants. This can create two problems: It can be difficult to define these variants in an internally consistent way and it can also be difficult to extract generalised grapheme-to-phoneme rule sets...

  8. Is cosmology consistent?

    International Nuclear Information System (INIS)

    Wang Xiaomin; Tegmark, Max; Zaldarriaga, Matias

    2002-01-01

    We perform a detailed analysis of the latest cosmic microwave background (CMB) measurements (including BOOMERaNG, DASI, Maxima and CBI), both alone and jointly with other cosmological data sets involving, e.g., galaxy clustering and the Lyman Alpha Forest. We first address the question of whether the CMB data are internally consistent once calibration and beam uncertainties are taken into account, performing a series of statistical tests. With a few minor caveats, our answer is yes, and we compress all data into a single set of 24 bandpowers with associated covariance matrix and window functions. We then compute joint constraints on the 11 parameters of the 'standard' adiabatic inflationary cosmological model. Our best fit model passes a series of physical consistency checks and agrees with essentially all currently available cosmological data. In addition to sharp constraints on the cosmic matter budget in good agreement with those of the BOOMERaNG, DASI and Maxima teams, we obtain a heaviest neutrino mass range 0.04-4.2 eV and the sharpest constraints to date on gravity waves which (together with preference for a slight red-tilt) favor 'small-field' inflation models

  9. Time Consistent Strategies for Mean-Variance Asset-Liability Management Problems

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2013-01-01

    Full Text Available This paper studies the optimal time consistent investment strategies in multiperiod asset-liability management problems under mean-variance criterion. By applying the time consistent model of Chen et al. (2013) and employing dynamic programming technique, we derive two time-consistent policies for asset-liability management problems in a market with and without a riskless asset, respectively. We show that the presence of liability does affect the optimal strategy. More specifically, liability leads to a parallel shift of the optimal time-consistent investment policy. Moreover, for an arbitrarily risk averse investor (under the variance criterion with liability, the time-diversification effects could be ignored in a market with a riskless asset; however, it should be considered in a market without any riskless asset.
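As a toy illustration of a time-consistent mean-variance rule (a deliberately simplified stand-in, not the asset-liability model of the paper): with i.i.d. returns and a riskless asset, the per-period myopic weight that maximizes E[return] − γ·Var[return] is also the equilibrium, time-consistent policy when repeated each period.

```python
def objective(w, mu, sigma2, r, gamma):
    """Single-period mean-variance objective for a weight w on one risky
    asset (mean mu, variance sigma2) and a riskless rate r."""
    return w * (mu - r) + r - gamma * w * w * sigma2

def tc_mean_variance_weight(mu, sigma2, r, gamma):
    """Myopic weight maximizing the objective above; derivative
    mu - r - 2*gamma*w*sigma2 = 0 gives the closed form."""
    return (mu - r) / (2.0 * gamma * sigma2)

# Illustrative parameters (not from the paper)
w_star = tc_mean_variance_weight(mu=0.08, sigma2=0.04, r=0.02, gamma=1.5)
print(w_star)
```

A liability stream would enter this picture as an extra hedging demand shifting w_star, consistent with the "parallel shift" the authors report, though the precise form depends on their model.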

  10. Quest for consistent modelling of statistical decay of the compound nucleus

    Science.gov (United States)

    Banerjee, Tathagata; Nath, S.; Pal, Santanu

    2018-01-01

    A statistical model description of heavy ion induced fusion-fission reactions is presented where shell effects, collective enhancement of level density, tilting away effect of compound nuclear spin and dissipation are included. It is shown that the inclusion of all these effects provides a consistent picture of fission where fission hindrance is required to explain the experimental values of both pre-scission neutron multiplicities and evaporation residue cross-sections in contrast to some of the earlier works where a fission hindrance is required for pre-scission neutrons but a fission enhancement for evaporation residue cross-sections.

  11. Consistent classical supergravity theories

    International Nuclear Information System (INIS)

    Muller, M.

    1989-01-01

    This book offers a presentation of both conformal and Poincare supergravity. The consistent four-dimensional supergravity theories are classified. The formulae needed for further modelling are included

  12. Self-consistent electrostatic potential due to trapped plasma in the magnetosphere

    International Nuclear Information System (INIS)

    Miller, R.H.; Khazanov, G.V.

    1993-01-01

    The authors address the problem of the steady state confinement of plasma in a magnetic flux tube. They construct a steady state distribution function, under the assumption of no waves or collisions, using the kinematic constants of the motion, total energy and magnetic moment. The local particle densities are shown to be integrals over the equatorial distribution function for the particle of concern. The electric potential is determined by the imposition of quasineutrality. The authors show that their self-consistent model produces potential drops which are consistent with the kinetic energy of the equatorially trapped particles. They comment on earlier work of Alfven and Faelthammar, and for a bi-Maxwellian distribution compare the results of the present model with the Alfven and Faelthammar model.
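The step "the electric potential is determined by the imposition of quasineutrality" can be sketched as a one-dimensional root-finding problem in the potential. The Boltzmann-like densities below are a deliberately simplified stand-in for the paper's bi-Maxwellian equatorial-distribution integrals (units with e = k_B = 1; all parameter values illustrative):

```python
import math

def quasineutral_potential(n_e0, n_i0, Te, Ti, lo=-50.0, hi=50.0, tol=1e-12):
    """Solve n_e(phi) = n_i(phi) by bisection for two Boltzmann-like species.
    Toy illustration of fixing the electrostatic potential by quasineutrality;
    not the bi-Maxwellian model of the paper."""
    ne = lambda phi: n_e0 * math.exp(phi / Te)    # electrons drawn in by phi > 0
    ni = lambda phi: n_i0 * math.exp(-phi / Ti)   # ions repelled by phi > 0
    f = lambda phi: ne(phi) - ni(phi)             # monotone in phi: unique root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

phi = quasineutral_potential(1.0, 2.0, 1.0, 2.0)
print(phi)   # analytic check: ln(n_i0/n_e0) * Te*Ti/(Te + Ti)
```

For these Boltzmann densities the root has the closed form φ = ln(n_i0/n_e0)·TeTi/(Te+Ti), which the bisection reproduces; the paper's model replaces the exponentials with integrals over the trapped equatorial distribution.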

  13. A thermodynamically consistent quasi-particle model without temperature-dependent infinity of the vacuum zero point energy

    International Nuclear Information System (INIS)

    Cao Jing; Jiang Yu; Sun Weimin; Zong Hongshi

    2012-01-01

    In this Letter, an improved quasi-particle model is presented. Unlike the previous approach of establishing quasi-particle model, we introduce a classical background field (it is allowed to depend on the temperature) to deal with the infinity of thermal vacuum energy which exists in previous quasi-particle models. After taking into account the effect of this classical background field, the partition function of quasi-particle system can be made well-defined. Based on this and following the standard ensemble theory, we construct a thermodynamically consistent quasi-particle model without the need of any reformulation of statistical mechanics or thermodynamical consistency relation. As an application of our model, we employ it to the case of (2+1) flavor QGP at zero chemical potential and finite temperature and obtain a good fit to the recent lattice simulation results of Borsányi et al. A comparison of the result of our model with early calculations using other models is also presented. It is shown that our method is general and can be generalized to the case where the effective mass depends not only on the temperature but also on the chemical potential.

  14. Three-dimensional self-consistent radiation transport model for the fluid simulation of plasma display panel cell

    International Nuclear Information System (INIS)

    Kim, H.C.; Yang, S.S.; Lee, J.K.

    2003-01-01

    In plasma display panels (PDPs), the resonance radiation trapping is one of the important processes. In order to incorporate this effect in a PDP cell, a three-dimensional radiation transport model is self-consistently coupled with a fluid simulation. This model is compared with the conventional trapping factor method in gas mixtures of neon and xenon. It shows the differences in the time evolutions of spatial profile and the total number of resonant excited states, especially in the afterglow. The generation rates of UV light are also compared for the two methods. The visible photon flux reaching the output window from the phosphor layers as well as the total UV photon flux arriving at the phosphor layer from the plasma region are calculated for resonant and nonresonant excited species. From these calculations, the time-averaged spatial profiles of the UV flux on the phosphor layers and the visible photon flux through the output window are obtained. Finally, the diagram of the energy efficiency and the contribution of each UV light are shown

  15. Self-consistent quark bags

    International Nuclear Information System (INIS)

    Rafelski, J.

    1979-01-01

    After an introductory overview of the bag model the author uses the self-consistent solution of the coupled Dirac-meson fields to represent a bound state of strongly interacting fermions. In this framework he discusses the trivial approach to classical field equations. After a short description of the numerical methods used, the properties of bound states of scalar self-consistent fields and the solutions of a self-coupled Dirac field are considered. (HSI)

  16. Model shows future cut in U.S. ozone levels

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    A joint U.S. auto-oil industry research program says modeling shows that changing gasoline composition can reduce ozone levels for Los Angeles in 2010 and for New York City and Dallas-Fort Worth in 2005. The air quality modeling was based on vehicle emissions research data released late last year (OGJ, Dec. 24, 1990, p. 20). The effort is sponsored by the big three auto manufacturers and 14 oil companies. Sponsors say the cars and small trucks account for about one third of ozone generated in the three cities studied but by 2005-10 will account for only 5-9%.

  17. Self consistent MHD modeling of the solar wind from coronal holes with distinct geometries

    Science.gov (United States)

    Stewart, G. A.; Bravo, S.

    1995-01-01

    Utilizing an iterative scheme, a self-consistent axisymmetric MHD model for the solar wind has been developed. We use this model to evaluate the properties of the solar wind issuing from the open polar coronal hole regions of the Sun, during solar minimum. We explore the variation of solar wind parameters across the extent of the hole and we investigate how these variations are affected by the geometry of the hole and the strength of the field at the coronal base.

  18. Consistency in the World Wide Web

    DEFF Research Database (Denmark)

    Thomsen, Jakob Grauenkjær

    Tim Berners-Lee envisioned that computers will behave as agents of humans on the World Wide Web, where they will retrieve, extract, and interact with information from the World Wide Web. A step towards this vision is to make computers capable of extracting this information in a reliable and consistent way. In this dissertation we study steps towards this vision by showing techniques for the specification, the verification and the evaluation of the consistency of information in the World Wide Web. We show how to detect certain classes of errors in a specification of information, and we show how to evaluate the consistency of information extracted from the World Wide Web, in order to help perform consistent evaluations of web extraction techniques. These contributions are steps towards having computers reliably and consistently extract information from the World Wide Web, which in turn are steps towards achieving Tim Berners-Lee's vision.

  19. Consistent and Conservative Model Selection with the Adaptive LASSO in Stationary and Nonstationary Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    2016-01-01

    We show that the adaptive Lasso is oracle efficient in stationary and nonstationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables at the same asymptotic efficiency as if only these variables had been included in the model from the outset.
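A minimal two-step adaptive Lasso sketch: an initial OLS fit supplies the adaptive weights, and a weighted L1 penalty is then minimized by coordinate descent. The data, tuning parameter, and iteration budget below are illustrative assumptions, not the paper's autoregressive setting or tuning rule:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_lasso(X, y, lam, gamma=1.0, n_iter=500):
    """Two-step adaptive Lasso sketch: weights w_j = 1/|beta_ols_j|^gamma,
    then coordinate descent on (1/2)||y - X b||^2 + lam * sum_j w_j |b_j|."""
    n, p = X.shape
    beta_init = np.linalg.lstsq(X, y, rcond=None)[0]
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-12)   # adaptive weights
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):                          # fixed budget for brevity
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z = X[:, j] @ r_j
            beta[j] = soft_threshold(z, lam * w[j]) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X @ np.array([2.0, 0.0, -1.5, 0.0, 0.0]) + 0.1 * rng.standard_normal(200)
print(adaptive_lasso(X, y, lam=5.0))
```

Because the weights blow up on coefficients whose initial estimate is near zero, the irrelevant variables are thresholded exactly to zero while the relevant ones are barely shrunk, which is the mechanism behind the oracle property the abstract refers to.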

  20. Self-consistent Maxwell-Bloch model of quantum-dot photonic-crystal-cavity lasers

    Science.gov (United States)

    Cartar, William; Mørk, Jesper; Hughes, Stephen

    2017-08-01

    We present a powerful computational approach to simulate the threshold behavior of photonic-crystal quantum-dot (QD) lasers. Using a finite-difference time-domain (FDTD) technique, Maxwell-Bloch equations representing a system of thousands of statistically independent and randomly positioned two-level emitters are solved numerically. Phenomenological pure dephasing and incoherent pumping is added to the optical Bloch equations to allow for a dynamical lasing regime, but the cavity-mediated radiative dynamics and gain coupling of each QD dipole (artificial atom) is contained self-consistently within the model. These Maxwell-Bloch equations are implemented by using Lumerical's flexible material plug-in tool, which allows a user to define additional equations of motion for the nonlinear polarization. We implement the gain ensemble within triangular-lattice photonic-crystal cavities of various length N (where N refers to the number of missing holes), and investigate the cavity mode characteristics and the threshold regime as a function of cavity length. We develop effective two-dimensional model simulations which are derived after studying the full three-dimensional passive material structures by matching the cavity quality factors and resonance properties. We also demonstrate how to obtain the correct point-dipole radiative decay rate from Fermi's golden rule, which is captured naturally by the FDTD method. Our numerical simulations predict that the pump threshold plateaus around cavity lengths greater than N =9 , which we identify as a consequence of the complex spatial dynamics and gain coupling from the inhomogeneous QD ensemble. This behavior is not expected from simple rate-equation analysis commonly adopted in the literature, but is in qualitative agreement with recent experiments. Single-mode to multimode lasing is also observed, depending on the spectral peak frequency of the QD ensemble. 
    Using a statistical modal analysis of the average decay rates, we also …
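The Maxwell-Bloch dynamics described above can be sketched at the single-emitter level. Below is a minimal, schematic Euler integration of two-level optical Bloch equations with radiative decay, pure dephasing, and incoherent pumping; the sign conventions and parameter values are illustrative assumptions, not the paper's FDTD-coupled implementation:

```python
import numpy as np

def bloch_step(rho_ee, coh, omega_r, delta, gamma, gamma_phi, pump, dt):
    """One Euler step of schematic two-level optical Bloch equations with
    radiative decay (gamma), pure dephasing (gamma_phi) and incoherent
    pumping (pump); omega_r is the Rabi frequency, delta the detuning."""
    inv = 2.0 * rho_ee - 1.0                            # population inversion
    d_rho = pump * (1.0 - rho_ee) - gamma * rho_ee \
        + omega_r * coh.imag                            # coherent drive term
    d_coh = (1j * delta - (gamma + pump) / 2.0 - gamma_phi) * coh \
        - 1j * (omega_r / 2.0) * inv
    return rho_ee + dt * d_rho, coh + dt * d_coh

# Pump-only sanity check: with omega_r = 0 the excited-state population
# relaxes to pump / (pump + gamma)
rho, coh = 0.0, 0.0 + 0.0j
for _ in range(200000):
    rho, coh = bloch_step(rho, coh, omega_r=0.0, delta=0.0,
                          gamma=1.0, gamma_phi=0.5, pump=2.0, dt=1e-3)
print(rho)
```

In the full model of the paper, thousands of such emitters are coupled self-consistently to the FDTD-propagated field through the polarization term, which is what produces the cavity-mediated gain coupling discussed in the abstract.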

  1. Context-specific metabolic networks are consistent with experiments.

    Directory of Open Access Journals (Sweden)

    Scott A Becker

    2008-05-01

    Full Text Available Reconstructions of cellular metabolism are publicly available for a variety of different microorganisms and some mammalian genomes. To date, these reconstructions are "genome-scale" and strive to include all reactions implied by the genome annotation, as well as those with direct experimental evidence. Clearly, many of the reactions in a genome-scale reconstruction will not be active under particular conditions or in a particular cell type. Methods to tailor these comprehensive genome-scale reconstructions into context-specific networks will aid predictive in silico modeling for a particular situation. We present a method called Gene Inactivity Moderated by Metabolism and Expression (GIMME) to achieve this goal. The GIMME algorithm uses quantitative gene expression data and one or more presupposed metabolic objectives to produce the context-specific reconstruction that is most consistent with the available data. Furthermore, the algorithm provides a quantitative inconsistency score indicating how consistent a set of gene expression data is with a particular metabolic objective. We show that this algorithm produces results consistent with biological experiments and intuition for adaptive evolution of bacteria, rational design of metabolic engineering strains, and human skeletal muscle cells. This work represents progress towards producing constraint-based models of metabolism that are specific to the conditions where the expression profiling data is available.
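The GIMME idea, penalizing flux through low-expression reactions while still enforcing a fraction of the maximal metabolic objective, can be sketched as a pair of linear programs on a toy network. The network, expression threshold, and penalty values below are invented for illustration and are not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (hypothetical, for illustration only -- not a published model):
#   R1: -> A     R2: A -> B     R3: A -> B (low expression)     R4: B ->
S = np.array([[1.0, -1.0, -1.0,  0.0],    # species A balance
              [0.0,  1.0,  1.0, -1.0]])   # species B balance
bounds = [(0, 10), (0, 5), (0, 10), (0, 10)]

# Step 1: maximize the objective flux v4 (linprog minimizes, so negate)
res_max = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
v_obj_max = -res_max.fun

# Step 2 (GIMME-like): require >= 90% of the optimum, then minimize flux
# through reactions whose expression falls below threshold (here only R3;
# penalty = threshold - expression, with assumed values)
penalty = [0.0, 0.0, 0.6, 0.0]
bounds2 = list(bounds)
bounds2[3] = (0.9 * v_obj_max, 10)        # enforce the metabolic objective
res = linprog(c=penalty, A_eq=S, b_eq=[0, 0], bounds=bounds2)
print(res.fun, res.x)                     # inconsistency score and fluxes
```

Because the high-expression route R2 is capacity-limited, some flux must pass through the low-expression reaction R3 to meet the objective, and the resulting penalized flux is exactly the kind of quantitative inconsistency score the abstract describes.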

  2. Comparison of squashing and self-consistent input-output models of quantum feedback

    Science.gov (United States)

    Peřinová, V.; Lukš, A.; Křepelka, J.

    2018-03-01

    The paper (Yanagisawa and Hope, 2010) opens with two ways of analysis of a measurement-based quantum feedback. The scheme of the feedback includes, along with the homodyne detector, a modulator and a beamsplitter, which does not enable one to extract the nonclassical field. In the present scheme, the beamsplitter is replaced by the quantum noise evader, which makes it possible to extract the nonclassical field. We re-approach the comparison of two models related to the same scheme. The first one admits that, in the feedback loop, unusual commutation relations hold between the photon annihilation and creation operators. As a consequence, squashing of the light occurs in the feedback loop. In the second one, the description arrives at the feedback loop via unitary transformations. But it is obvious that the unitary transformation which describes the modulator changes even the annihilation operator of the mode which passes by the modulator, which is not natural. The first model could be called the "squashing model" and the second one could be named the "self-consistent model". Although the predictions of the two models differ only a little and both ways of analysis have their advantages, they also have their drawbacks, and further investigation is possible.

  3. A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model

    DEFF Research Database (Denmark)

    Spann, Robert; Roca, Christophe; Kold, David

    2017-01-01

    Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is highly required. A first principles kinetic model was developed to describe and understand the biological, physical, and chemical mechanisms in a lactic acid bacteria fermentation. We present here a consistent approach for a methodology-based parameter estimation for a lactic acid fermentation. In the beginning, just an initial knowledge-based guess of parameters was available, and an initial parameter estimation of the complete set of parameters was performed in order to get a good model fit to the data. However, not all parameters are identifiable with the given data set and model structure. Sensitivity, identifiability, and uncertainty analysis were completed, and a relevant identifiable subset of parameters was determined for a new …

  4. Structural Consistency, Consistency, and Sequential Rationality.

    OpenAIRE

    Kreps, David M; Ramey, Garey

    1987-01-01

    Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of structural consistency …

  5. Are water simulation models consistent with steady-state and ultrafast vibrational spectroscopy experiments?

    International Nuclear Information System (INIS)

    Schmidt, J.R.; Roberts, S.T.; Loparo, J.J.; Tokmakoff, A.; Fayer, M.D.; Skinner, J.L.

    2007-01-01

    Vibrational spectroscopy can provide important information about structure and dynamics in liquids. In the case of liquid water, this is particularly true for isotopically dilute HOD/D2O and HOD/H2O systems. Infrared and Raman line shapes for these systems were measured some time ago. Very recently, ultrafast three-pulse vibrational echo experiments have been performed on these systems, which provide new, exciting, and important dynamical benchmarks for liquid water. There has been tremendous theoretical effort expended on the development of classical simulation models for liquid water. These models have been parameterized from experimental structural and thermodynamic measurements. The goal of this paper is to determine if representative simulation models are consistent with steady-state, and especially with these new ultrafast, experiments. Such a comparison provides information about the accuracy of the dynamics of these simulation models. We perform this comparison using theoretical methods developed in previous papers, and calculate the experimental observables directly, without making the Condon and cumulant approximations, and taking into account molecular rotation, vibrational relaxation, and finite excitation pulses. On the whole, the simulation models do remarkably well; perhaps the best overall agreement with experiment comes from the SPC/E model

  6. Toward a consistent RHA-RPA

    International Nuclear Information System (INIS)

    Shepard, J.R.

    1991-01-01

    The authors examine the RPA based on a relativistic Hartree approximation description for nuclear ground states. This model includes contributions from the negative energy sea at the 1-loop level. They emphasize consistency between the treatment of the ground state and the RPA. This consistency is important in the description of low-lying collective levels but less important for the longitudinal (e, e') quasi-elastic response. They also study the effect of imposing a 3-momentum cutoff on negative energy sea contributions. A cutoff of twice the nucleon mass improves agreement with observed spin orbit splittings in nuclei compared to the standard infinite cutoff results, an effect traceable to the fact that imposing the cutoff reduces m*/m. The cutoff is much less important than consistency in the description of low-lying collective levels. The cutoff model provides excellent agreement with quasi-elastic (e, e') data

  7. A Murine Model of Candida glabrata Vaginitis Shows No Evidence of an Inflammatory Immunopathogenic Response.

    Directory of Open Access Journals (Sweden)

    Evelyn E Nash

    Full Text Available Candida glabrata is the second most common organism isolated from women with vulvovaginal candidiasis (VVC), particularly in women with uncontrolled diabetes mellitus. However, mechanisms involved in the pathogenesis of C. glabrata-associated VVC are unknown and have not been studied at any depth in animal models. The objective of this study was to evaluate host responses to infection following efforts to optimize a murine model of C. glabrata VVC. For this, various designs were evaluated for consistent experimental vaginal colonization (i.e., type 1 and type 2 diabetic mice, exogenous estrogen, varying inocula, and co-infection with C. albicans). Upon model optimization, vaginal fungal burden and polymorphonuclear neutrophil (PMN) recruitment were assessed longitudinally over 21 days post-inoculation, together with vaginal concentrations of IL-1β, S100A8 alarmin, lactate dehydrogenase (LDH), and in vivo biofilm formation. Consistent and sustained vaginal colonization with C. glabrata was achieved in estrogenized streptozotocin-induced type 1 diabetic mice. Vaginal PMN infiltration was consistently low, with IL-1β, S100A8, and LDH concentrations similar to uninoculated mice. Biofilm formation was not detected in vivo, and co-infection with C. albicans did not induce synergistic immunopathogenic effects. These data suggest that experimental vaginal colonization of C. glabrata is not associated with an inflammatory immunopathogenic response or biofilm formation.

  8. A Murine Model of Candida glabrata Vaginitis Shows No Evidence of an Inflammatory Immunopathogenic Response.

    Science.gov (United States)

    Nash, Evelyn E; Peters, Brian M; Lilly, Elizabeth A; Noverr, Mairi C; Fidel, Paul L

    2016-01-01

    Candida glabrata is the second most common organism isolated from women with vulvovaginal candidiasis (VVC), particularly in women with uncontrolled diabetes mellitus. However, mechanisms involved in the pathogenesis of C. glabrata-associated VVC are unknown and have not been studied at any depth in animal models. The objective of this study was to evaluate host responses to infection following efforts to optimize a murine model of C. glabrata VVC. For this, various designs were evaluated for consistent experimental vaginal colonization (i.e., type 1 and type 2 diabetic mice, exogenous estrogen, varying inocula, and co-infection with C. albicans). Upon model optimization, vaginal fungal burden and polymorphonuclear neutrophil (PMN) recruitment were assessed longitudinally over 21 days post-inoculation, together with vaginal concentrations of IL-1β, S100A8 alarmin, lactate dehydrogenase (LDH), and in vivo biofilm formation. Consistent and sustained vaginal colonization with C. glabrata was achieved in estrogenized streptozotocin-induced type 1 diabetic mice. Vaginal PMN infiltration was consistently low, with IL-1β, S100A8, and LDH concentrations similar to uninoculated mice. Biofilm formation was not detected in vivo, and co-infection with C. albicans did not induce synergistic immunopathogenic effects. These data suggest that experimental vaginal colonization of C. glabrata is not associated with an inflammatory immunopathogenic response or biofilm formation.

  9. Self-consistent approximation for muffin-tin models of random substitutional alloys with environmental disorder

    International Nuclear Information System (INIS)

    Kaplan, T.; Gray, L.J.

    1984-01-01

    The self-consistent approximation of Kaplan, Leath, Gray, and Diehl is applied to models for substitutional random alloys with muffin-tin potentials. The particular advantage of this approximation is that, in addition to including cluster scattering, the muffin-tin potentials in the alloy can depend on the occupation of the surrounding sites (i.e., environmental disorder is included)

  10. Pedagogical Approaches Used by Faculty in Holland's Model Environments: The Role of Environmental Consistency

    Science.gov (United States)

    Smart, John C.; Ethington, Corinna A.; Umbach, Paul D.

    2009-01-01

    This study examines the extent to which faculty members in the disparate academic environments of Holland's theory devote different amounts of time in their classes to alternative pedagogical approaches and whether such differences are comparable for those in "consistent" and "inconsistent" environments. The findings show wide variations in the…

  11. Characterisation of poly(lactic acid): poly(ethyleneoxide) (PLA:PEG) nanoparticles using the self-consistent theory modelling approach

    NARCIS (Netherlands)

    Heald, C.R.; Stolnik, S.; Matteis, De C.; Garnett, M.C.; Illum, L.; Davis, S.S.; Leermakers, F.A.M.

    2003-01-01

    Self-consistent field (SCF) modelling studies can be used to predict the properties of poly(lactic acid):poly(ethyleneoxide) (PLA:PEG) nanoparticles using the theory developed by Scheutjens and Fleer. Good agreement in the results between experimental and modelled data has been observed previously

  12. Statistically Consistent k-mer Methods for Phylogenetic Tree Reconstruction.

    Science.gov (United States)

    Allman, Elizabeth S; Rhodes, John A; Sullivant, Seth

    2017-02-01

    Frequencies of k-mers in sequences are sometimes used as a basis for inferring phylogenetic trees without first obtaining a multiple sequence alignment. We show that a standard approach of using the squared Euclidean distance between k-mer vectors to approximate a tree metric can be statistically inconsistent. To remedy this, we derive model-based distance corrections for orthologous sequences without gaps, which lead to consistent tree inference. The identifiability of model parameters from k-mer frequencies is also studied. Finally, we report simulations showing that the corrected distance outperforms many other k-mer methods, even when sequences are generated with an insertion and deletion process. These results have implications for multiple sequence alignment as well since k-mer methods are usually the first step in constructing a guide tree for such algorithms.
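    The standard k-mer approach that the authors show can be statistically inconsistent is easy to sketch; the k-mer counting and normalization conventions below are assumptions for illustration, not the authors' code:

```python
from collections import Counter
from itertools import product

def kmer_frequencies(seq, k):
    """Normalized k-mer frequency vector over the DNA alphabet."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return [counts[''.join(w)] / total for w in product("ACGT", repeat=k)]

def squared_euclidean(u, v):
    """Squared Euclidean distance between two k-mer frequency vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

# Identical sequences are at distance zero; diverged sequences are not.
d0 = squared_euclidean(kmer_frequencies("ACGTACGTAC", 2),
                       kmer_frequencies("ACGTACGTAC", 2))
d1 = squared_euclidean(kmer_frequencies("ACGTACGTAC", 2),
                       kmer_frequencies("ACGTTTGTAC", 2))
```

    The paper's contribution is a model-based correction of such raw distances so that tree inference from them becomes statistically consistent.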

  13. Functional connectivity modeling of consistent cortico-striatal degeneration in Huntington's disease

    Directory of Open Access Journals (Sweden)

    Imis Dogan

    2015-01-01

    Full Text Available Huntington's disease (HD) is a progressive neurodegenerative disorder characterized by a complex neuropsychiatric phenotype. In a recent meta-analysis we identified core regions of consistent neurodegeneration in premanifest HD in the striatum and middle occipital gyrus (MOG). For early manifest HD convergent evidence of atrophy was most prominent in the striatum, motor cortex (M1) and inferior frontal junction (IFJ). The aim of the present study was to functionally characterize this topography of brain atrophy and to investigate differential connectivity patterns formed by consistent cortico-striatal atrophy regions in HD. Using areas of striatal and cortical atrophy at different disease stages as seeds, we performed task-free resting-state and task-based meta-analytic connectivity modeling (MACM). MACM utilizes the large data source of the BrainMap database and identifies significant areas of above-chance co-activation with the seed-region via the activation-likelihood-estimation approach. In order to delineate functional networks formed by cortical as well as striatal atrophy regions we computed the conjunction between the co-activation profiles of striatal and cortical seeds in the premanifest and manifest stages of HD, respectively. Functional characterization of the seeds was obtained using the behavioral meta-data of BrainMap. Cortico-striatal atrophy seeds of the premanifest stage of HD showed common co-activation with a rather cognitive network including the striatum, anterior insula, lateral prefrontal, premotor, supplementary motor and parietal regions. A similar but more pronounced co-activation pattern, additionally including the medial prefrontal cortex and thalamic nuclei, was found with striatal and IFJ seeds at the manifest HD stage. The striatum and M1 were functionally connected mainly to premotor and sensorimotor areas, posterior insula, putamen and thalamus. Behavioral characterization of the seeds confirmed that experiments

  14. Consistent post-reaction vibrational energy redistribution in DSMC simulations using TCE model

    Science.gov (United States)

    Borges Sebastião, Israel; Alexeenko, Alina

    2016-10-01

    The direct simulation Monte Carlo (DSMC) method has been widely applied to study shockwaves, hypersonic reentry flows, and other nonequilibrium flow phenomena. Although there is currently active research on high-fidelity models based on ab initio data, the total collision energy (TCE) and Larsen-Borgnakke (LB) models remain the most often used chemistry and relaxation models in DSMC simulations, respectively. The conventional implementation of the discrete LB model, however, may not satisfy detailed balance when recombination and exchange reactions play an important role in the flow energy balance. This issue can become even more critical in reacting mixtures involving polyatomic molecules, such as in combustion. In this work, this important shortcoming is addressed and an empirical approach to consistently specify the post-reaction vibrational states close to thermochemical equilibrium conditions is proposed within the TCE framework. Following Bird's quantum-kinetic (QK) methodology for populating post-reaction states, the new TCE-based approach involves two main steps. The state-specific TCE reaction probabilities for a forward reaction are first pre-computed from equilibrium 0-D simulations. These probabilities are then employed to populate the post-reaction vibrational states of the corresponding reverse reaction. The new approach is illustrated by application to exchange and recombination reactions relevant to H2-O2 combustion processes.
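    The two-step procedure described above (pre-compute state-specific forward-reaction probabilities from equilibrium 0-D simulations, then reuse them to populate post-reaction vibrational states of the corresponding reverse reaction) can be sketched as follows; the probability table and function names are illustrative assumptions, not the authors' implementation:

```python
import random

# Hypothetical state-specific forward-reaction probabilities, indexed by
# vibrational level, as would be pre-computed from equilibrium 0-D simulations.
forward_probs = [0.55, 0.25, 0.12, 0.06, 0.02]

def sample_post_reaction_level(probs, rng):
    """Draw a post-reaction vibrational level for the reverse reaction,
    weighted by the pre-computed forward-reaction probabilities."""
    r = rng.random() * sum(probs)
    cumulative = 0.0
    for level, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return level
    return len(probs) - 1

rng = random.Random(42)
levels = [sample_post_reaction_level(forward_probs, rng) for _ in range(10000)]
```

    Populating reverse-reaction states with forward-reaction probabilities in this way is what restores detailed balance near thermochemical equilibrium.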

  15. Study of impurity effects on CFETR steady-state scenario by self-consistent integrated modeling

    Science.gov (United States)

    Shi, Nan; Chan, Vincent S.; Jian, Xiang; Li, Guoqiang; Chen, Jiale; Gao, Xiang; Shi, Shengyu; Kong, Defeng; Liu, Xiaoju; Mao, Shifeng; Xu, Guoliang

    2017-12-01

    Impurity effects on fusion performance of the China Fusion Engineering Test Reactor (CFETR) due to extrinsic seeding are investigated. An integrated 1.5D modeling workflow evolves plasma equilibrium and all transport channels to steady state. The One Modeling Framework for Integrated Tasks (OMFIT) is used to couple the transport solver, MHD equilibrium solver, and source and sink calculations. A self-consistent impurity profile constructed using a steady-state background plasma, which satisfies quasi-neutrality and true steady state, is presented for the first time. Studies are performed based on an optimized fully non-inductive scenario with varying concentrations of argon (Ar) seeding. It is found that fusion performance improves before dropping off with increasing Z_eff, while the confinement remains at a high level. Further analysis of transport for these plasmas shows that low-k ion temperature gradient modes dominate the turbulence. The decrease in linear growth rate and resultant fluxes of all channels with increasing Z_eff can be traced to impurity profile change by transport. The improvement in confinement levels off at higher Z_eff. Over the regime of study there is a competition between the suppressed transport and increasing radiation that leads to a peak in the fusion performance at Z_eff ≈ 2.78 for CFETR. Extrinsic impurity seeding to control divertor heat load will need to be optimized around this value for best fusion performance.

  16. Bootstrap consistency for general semiparametric M-estimation

    KAUST Repository

    Cheng, Guang

    2010-10-01

    Consider M-estimation in a semiparametric model that is characterized by a Euclidean parameter of interest and an infinite-dimensional nuisance parameter. As a general purpose approach to statistical inferences, the bootstrap has found wide applications in semiparametric M-estimation and, because of its simplicity, provides an attractive alternative to the inference approach based on the asymptotic distribution theory. The purpose of this paper is to provide theoretical justifications for the use of bootstrap as a semiparametric inferential tool. We show that, under general conditions, the bootstrap is asymptotically consistent in estimating the distribution of the M-estimate of Euclidean parameter; that is, the bootstrap distribution asymptotically imitates the distribution of the M-estimate. We also show that the bootstrap confidence set has the asymptotically correct coverage probability. These general conclusions hold, in particular, when the nuisance parameter is not estimable at root-n rate, and apply to a broad class of bootstrap methods with exchangeable bootstrap weights. This paper provides a first general theoretical study of the bootstrap in semiparametric models. © Institute of Mathematical Statistics, 2010.
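    The resampling scheme whose validity the paper establishes can be illustrated in a toy setting: the ordinary nonparametric bootstrap for a simple M-estimate, the sample median (the data and estimator below are illustrative, not from the paper):

```python
import random
import statistics

def bootstrap_distribution(data, estimator, n_boot=2000, seed=0):
    """Resample the data with replacement and re-apply the estimator,
    imitating the sampling distribution of the estimate."""
    rng = random.Random(seed)
    n = len(data)
    return [estimator([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]

data = [2.1, 3.4, 1.9, 5.0, 4.2, 3.3, 2.8, 4.7, 3.9, 2.5]
boot = bootstrap_distribution(data, statistics.median)

# Percentile confidence interval read off the bootstrap distribution.
boot.sort()
ci = (boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))])
```

    The paper's point is that intervals built this way retain asymptotically correct coverage even when the model carries an infinite-dimensional nuisance parameter.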

  17. Self-consistent theory of hadron-nucleus scattering. Application to pion physics

    International Nuclear Information System (INIS)

    Johnson, M.B.

    1980-01-01

    The requirement of using self-consistent amplitudes to evaluate microscopically the scattering of strongly interacting particles from nuclei is developed. Application of the idea to a simple model of pion-nucleus scattering is made. Numerical results indicate that the expansion of the optical potential converges when evaluated in terms of fully self-consistent quantities. A comparison of the results to a recent determination of the spreading interaction in the phenomenological isobar-hole model shows that the theory accounts for the sign and magnitude of the real and imaginary part of the spreading interaction with no adjusted parameters. The self-consistent theory has a strong density dependence, and the consequences of this for pion-nucleus scattering are discussed. 18 figures, 1 table
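    The self-consistency requirement, namely that the amplitude fed into the calculation must reproduce itself in the output, is abstractly a fixed-point problem. A generic sketch (the map g below is a toy contraction, not the pion-nucleus dynamics):

```python
import math

def solve_self_consistently(g, x0, tol=1e-12, max_iter=10000):
    """Iterate x_{k+1} = g(x_k) until the input reproduces the output."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no self-consistent solution found")

# Toy contraction: the self-consistent solution of x = cos(x).
x_star = solve_self_consistently(math.cos, 1.0)
```

    In the physical problem the iterated quantity is an amplitude entering the optical potential rather than a scalar, but the convergence logic is the same.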

  18. The Functional Segregation and Integration Model: Mixture Model Representations of Consistent and Variable Group-Level Connectivity in fMRI

    DEFF Research Database (Denmark)

    Churchill, Nathan William; Madsen, Kristoffer Hougaard; Mørup, Morten

    2016-01-01

    The brain consists of specialized cortical regions that exchange information between each other, reflecting a combination of segregated (local) and integrated (distributed) processes that define brain function. Functional magnetic resonance imaging (fMRI) is widely used to characterize... flexibility: they only estimate segregated structure and do not model interregional functional connectivity, nor do they account for network variability across voxels or between subjects. To address these issues, this letter develops the functional segregation and integration model (FSIM). This extension... brain regions where network expression predicts subject age in the experimental data. Thus, the FSIM is effective at summarizing functional connectivity structure in group-level fMRI, with applications in modeling the relationships between network variability and behavioral/demographic variables.

  19. Process of Integrating Screening and Detailed Risk-based Modeling Analyses to Ensure Consistent and Scientifically Defensible Results

    International Nuclear Information System (INIS)

    Buck, John W.; McDonald, John P.; Taira, Randal Y.

    2002-01-01

    To support cleanup and closure of these tanks, modeling is performed to understand and predict potential impacts to human health and the environment. Pacific Northwest National Laboratory developed a screening tool for the United States Department of Energy, Office of River Protection that estimates the long-term human health risk, from a strategic planning perspective, posed by potential tank releases to the environment. This tool is being conditioned to more detailed model analyses to ensure consistency between studies and to provide scientific defensibility. Once the conditioning is complete, the system will be used to screen alternative cleanup and closure strategies. The integration of screening and detailed models provides consistent analyses, efficiencies in resources, and positive feedback between the various modeling groups. This approach of conditioning a screening methodology to more detailed analyses provides decision-makers with timely and defensible information and increases confidence in the results on the part of clients, regulators, and stakeholders

  20. Numerical investigation of degas performance on impeller of medium-consistency pump

    Directory of Open Access Journals (Sweden)

    Hong Li

    2015-12-01

    Full Text Available Medium-consistency technology is known as the process with high efficiency and low pollution. The gas distribution was simulated in the medium-consistency pump with different degas hole positions. Rheological behaviors of pulp suspension were obtained by experimental test. A modified Herschel–Bulkley model and the Eulerian gas–liquid two-phase flow model were utilized to approximately represent the behaviors of the medium-consistency pulp suspension. The results show that when the relative position is 0.53, the gas volume ratio is less than 0.1% at the pump outlet and 9.8% at the vacuum inlet, and the pump head is at the maximum. Because of the different numbers of the impeller blades and turbulence blades and the asymmetric volute structure, the gas is distributed unevenly in the impeller. In addition, the pump performance was tested in experiment and the results are used to validate computational fluid dynamics outcomes.
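    The Herschel–Bulkley law mentioned above relates shear stress to shear rate through a yield stress plus a power-law term. A minimal sketch of the standard (unmodified) model, with illustrative parameters rather than the paper's fitted pulp rheology:

```python
def herschel_bulkley_stress(gamma_dot, tau0, K, n):
    """Shear stress tau = tau0 + K * gamma_dot**n (valid for gamma_dot > 0)."""
    return tau0 + K * gamma_dot ** n

def apparent_viscosity(gamma_dot, tau0, K, n):
    """Apparent viscosity mu_app = tau / gamma_dot."""
    return herschel_bulkley_stress(gamma_dot, tau0, K, n) / gamma_dot

# With tau0 = 0 and n = 1 the model reduces to a Newtonian fluid.
newtonian = herschel_bulkley_stress(2.0, 0.0, 3.0, 1.0)
# Shear-thinning (n < 1) with a yield stress: apparent viscosity falls
# as the shear rate rises, as expected for a pulp suspension.
mu_low = apparent_viscosity(1.0, 5.0, 2.0, 0.5)
mu_high = apparent_viscosity(100.0, 5.0, 2.0, 0.5)
```

    In the paper a modified form of this law feeds the Eulerian gas–liquid two-phase flow model as the suspension's constitutive behavior.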

  1. Posterior consistency for Bayesian inverse problems through stability and regression results

    International Nuclear Information System (INIS)

    Vollmer, Sebastian J

    2013-01-01

    In the Bayesian approach, the a priori knowledge about the input of a mathematical model is described via a probability measure. The joint distribution of the unknown input and the data is then conditioned, using Bayes’ formula, giving rise to the posterior distribution on the unknown input. In this setting we prove posterior consistency for nonlinear inverse problems: a sequence of data is considered, with diminishing fluctuations around a single truth and it is then of interest to show that the resulting sequence of posterior measures arising from this sequence of data concentrates around the truth used to generate the data. Posterior consistency justifies the use of the Bayesian approach very much in the same way as error bounds and convergence results for regularization techniques do. As a guiding example, we consider the inverse problem of reconstructing the diffusion coefficient from noisy observations of the solution to an elliptic PDE in divergence form. This problem is approached by splitting the forward operator into the underlying continuum model and a simpler observation operator based on the output of the model. In general, these splittings allow us to conclude posterior consistency provided a deterministic stability result for the underlying inverse problem and a posterior consistency result for the Bayesian regression problem with the push-forward prior. Moreover, we prove posterior consistency for the Bayesian regression problem based on the regularity, the tail behaviour and the small ball probabilities of the prior. (paper)
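    The concentration phenomenon being proved can be seen in the simplest conjugate analogue: a Gaussian prior on an unknown mean with Gaussian observation noise, where the posterior collapses onto the truth as the fluctuations in the data diminish (a toy illustration, not the elliptic-PDE problem of the paper):

```python
def posterior_normal_mean(prior_mean, prior_var, obs, noise_var):
    """Conjugate posterior N(post_mean, post_var) for an unknown mean under a
    N(prior_mean, prior_var) prior and one N(mean, noise_var) observation."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    post_mean = post_var * (prior_mean / prior_var + obs / noise_var)
    return post_mean, post_var

truth = 2.0
# Diminishing noise: the posterior mean approaches the truth that generated
# the data, and the posterior variance shrinks toward zero.
means_vars = [posterior_normal_mean(0.0, 1.0, truth, nv)
              for nv in (1.0, 0.1, 0.001)]
```

    Posterior consistency generalizes exactly this behavior to nonlinear inverse problems, where it must be derived from stability and regression results rather than read off a conjugate formula.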

  2. Empirical phylogenies and species abundance distributions are consistent with pre-equilibrium dynamics of neutral community models with gene flow

    KAUST Repository

    Bonnet-Lebrun, Anne-Sophie

    2017-03-17

    Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modelled communities - i.e., with phylogenetic trees and species abundance distributions shaped similarly to typical empirical bird and mammal communities - from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point mutation speciation, unaffected by gene flow. The former produced more realistic communities (shape of phylogenetic tree and species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in pre-equilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of empirical communities that we have studied can, to a large extent, be explained through a purely neutral model under pre-equilibrium conditions. This article is protected by copyright. All rights reserved.

  3. Empirical phylogenies and species abundance distributions are consistent with pre-equilibrium dynamics of neutral community models with gene flow

    KAUST Repository

    Bonnet-Lebrun, Anne-Sophie; Manica, Andrea; Eriksson, Anders; Rodrigues, Ana S.L.

    2017-01-01

    Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modelled communities - i.e., with phylogenetic trees and species abundance distributions shaped similarly to typical empirical bird and mammal communities - from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point mutation speciation, unaffected by gene flow. The former produced more realistic communities (shape of phylogenetic tree and species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in pre-equilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of empirical communities that we have studied can, to a large extent, be explained through a purely neutral model under pre-equilibrium conditions. This article is protected by copyright. All rights reserved.

  4. Thermodynamically self-consistent integral equations and the structure of liquid metals

    International Nuclear Information System (INIS)

    Pastore, G.; Kahl, G.

    1987-01-01

    We discuss the application of the new thermodynamically self-consistent integral equations for the determination of the structural properties of liquid metals. We present a detailed comparison of the structure (S(q) and g(r)) for models of liquid alkali metals as obtained from two thermodynamically self-consistent integral equations and some published exact computer simulation results; the range of states extends from the triple point to the expanded metal. The theories which only impose thermodynamic self-consistency without any fitting of external data show an excellent agreement with the simulation results, thus demonstrating that this new type of integral equation is definitely superior to the conventional ones (hypernetted chain, Percus-Yevick, mean spherical approximation, etc). (author)

  5. Visualizing Three-dimensional Slab Geometries with ShowEarthModel

    Science.gov (United States)

    Chang, B.; Jadamec, M. A.; Fischer, K. M.; Kreylos, O.; Yikilmaz, M. B.

    2017-12-01

    Seismic data that characterize the morphology of modern subducted slabs on Earth suggest that a two-dimensional paradigm is no longer adequate to describe the subduction process. Here we demonstrate the effect of data exploration of three-dimensional (3D) global slab geometries with the open source program ShowEarthModel. ShowEarthModel was designed specifically to support data exploration, by focusing on interactivity and real-time response using the Vrui toolkit. Sixteen movies are presented that explore the 3D complexity of modern subduction zones on Earth. The first movie provides a guided tour through the Earth's major subduction zones, comparing the global slab geometry data sets of Gudmundsson and Sambridge (1998), Syracuse and Abers (2006), and Hayes et al. (2012). Fifteen regional movies explore the individual subduction zones and regions intersecting slabs, using the Hayes et al. (2012) slab geometry models where available and the Engdahl and Villasenor (2002) global earthquake data set. Viewing the subduction zones in this way provides an improved conceptualization of the 3D morphology within a given subduction zone as well as the 3D spatial relations between the intersecting slabs. This approach provides a powerful tool for rendering earth properties and broadening capabilities in both Earth Science research and education by allowing for whole earth visualization. The 3D characterization of global slab geometries is placed in the context of 3D slab-driven mantle flow and observations of shear wave splitting in subduction zones. These visualizations contribute to the paradigm shift from a 2D to 3D subduction framework by facilitating the conceptualization of the modern subduction system on Earth in 3D space.

  6. First results of GERDA Phase II and consistency with background models

    Science.gov (United States)

    Agostini, M.; Allardt, M.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Baudis, L.; Bauer, C.; Bellotti, E.; Belogurov, S.; Belyaev, S. T.; Benato, G.; Bettini, A.; Bezrukov, L.; Bode, T.; Borowicz, D.; Brudanin, V.; Brugnera, R.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; D'Andrea, V.; Demidova, E. V.; Di Marco, N.; Domula, A.; Doroshkevich, E.; Egorov, V.; Falkenstein, R.; Frodyma, N.; Gangapshev, A.; Garfagnini, A.; Gooch, C.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Hakenmüller, J.; Hegai, A.; Heisel, M.; Hemmer, S.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Janicskó Csáthy, J.; Jochum, J.; Junker, M.; Kazalov, V.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Kish, A.; Klimenko, A.; Kneißl, R.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lebedev, V. I.; Lehnert, B.; Liao, H. Y.; Lindner, M.; Lippi, I.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Majorovits, B.; Maneschg, W.; Medinaceli, E.; Miloradovic, M.; Mingazheva, R.; Misiaszek, M.; Moseev, P.; Nemchenok, I.; Palioselitis, D.; Panas, K.; Pandola, L.; Pelczar, K.; Pullia, A.; Riboldi, S.; Rumyantseva, N.; Sada, C.; Salamida, F.; Salathe, M.; Schmitt, C.; Schneider, B.; Schönert, S.; Schreiner, J.; Schulz, O.; Schütz, A.-K.; Schwingenheuer, B.; Selivanenko, O.; Shevzik, E.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Vanhoefer, L.; Vasenko, A. A.; Veresnikova, A.; von Sturm, K.; Wagner, V.; Wegmann, A.; Wester, T.; Wiesinger, C.; Wojcik, M.; Yanovich, E.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zuber, K.; Zuzel, G.

    2017-01-01

    The GERDA (GERmanium Detector Array) is an experiment for the search of neutrinoless double beta decay (0νββ) in 76Ge, located at Laboratori Nazionali del Gran Sasso of INFN (Italy). GERDA operates bare high purity germanium detectors submersed in liquid argon (LAr). Phase II of data-taking started in Dec 2015 and is currently ongoing. In Phase II, 35 kg of germanium detectors enriched in 76Ge, including thirty newly produced Broad Energy Germanium (BEGe) detectors, are operating to reach an exposure of 100 kg·yr within about 3 years of data taking. The design goal of Phase II is to reduce the background by one order of magnitude to reach a sensitivity of T_1/2^0ν = O(10^26) yr. To achieve the necessary background reduction, the setup was complemented with a LAr veto. Analysis of the background spectrum of Phase II demonstrates consistency with the background models. Furthermore, the 226Ra and 232Th contamination levels are consistent with screening results. In the first Phase II data release we found no hint of a 0νββ decay signal and place a limit on this process of T_1/2^0ν > 5.3·10^25 yr (90% C.L., sensitivity 4.0·10^25 yr). First results of GERDA Phase II will be presented.

  7. Single-field consistency relations of large scale structure part III: test of the equivalence principle

    Energy Technology Data Exchange (ETDEWEB)

    Creminelli, Paolo [Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, Trieste, 34151 (Italy); Gleyzes, Jérôme; Vernizzi, Filippo [CEA, Institut de Physique Théorique, Gif-sur-Yvette cédex, F-91191 France (France); Hui, Lam [Physics Department and Institute for Strings, Cosmology and Astroparticle Physics, Columbia University, New York, NY, 10027 (United States); Simonović, Marko, E-mail: creminel@ictp.it, E-mail: jerome.gleyzes@cea.fr, E-mail: lhui@astro.columbia.edu, E-mail: msimonov@sissa.it, E-mail: filippo.vernizzi@cea.fr [SISSA, via Bonomea 265, Trieste, 34136 (Italy)

    2014-06-01

    The recently derived consistency relations for Large Scale Structure do not hold if the Equivalence Principle (EP) is violated. We show it explicitly in a toy model with two fluids, one of which is coupled to a fifth force. We explore the constraints that galaxy surveys can set on EP violation looking at the squeezed limit of the 3-point function involving two populations of objects. We find that one can explore EP violations of order 10^−3–10^−4 on cosmological scales. Chameleon models are already very constrained by the requirement of screening within the Solar System and only a very tiny region of the parameter space can be explored with this method. We show that no violation of the consistency relations is expected in Galileon models.

  8. Consistent Alignment of Word Embedding Models

    Science.gov (United States)

    2017-03-02

    propose a solution that aligns variations of the same model (or different models) in a joint low-dimensional latent space leveraging carefully...representations of linguistic entities, most often referred to as embeddings. This includes techniques that rely on matrix factorization (Levy & Goldberg ...higher, the variation is much higher as well. As we increase the size of the neighborhood, or improve the quality of our sample by only picking the most

  9. Context-dependent individual behavioral consistency in Daphnia

    DEFF Research Database (Denmark)

    Heuschele, Jan; Ekvall, Mikael T.; Bianco, Giuseppe

    2017-01-01

    The understanding of consistent individual differences in behavior, often termed "personality," for adapting and coping with threats and novel environmental conditions has advanced considerably during the last decade. However, advancements are almost exclusively associated with higher-order animals......, whereas studies focusing on smaller aquatic organisms are still rare. Here, we show individual differences in the swimming behavior of Daphnia magna, a clonal freshwater invertebrate, before, during, and after being exposed to a lethal threat, ultraviolet radiation (UVR). We show consistency in swimming...... that of adults. Overall, we show that aquatic invertebrates are far from being identical robots, but instead they show considerable individual differences in behavior that can be attributed to both ontogenetic development and individual consistency. Our study also demonstrates, for the first time...

  10. Self consistent solution of the tJ model in the overdoped regime

    Science.gov (United States)

    Shastry, B. Sriram; Hansen, Daniel

    2013-03-01

    Detailed results from a recent microscopic theory of extremely correlated Fermi liquids, applied to the t-J model in two dimensions, are presented. The theory is to second order in a parameter λ and is valid in the overdoped regime of the t-J model. The solution reported here is from Ref, where relevant equations given in Ref are self-consistently solved for the square lattice. Thermodynamic variables and the resistivity are displayed at various densities and temperatures for two sets of band parameters. The momentum distribution function and the renormalized electronic dispersion, its width and asymmetry, are reported along principal directions of the zone. The optical conductivity is calculated. The electronic spectral function A(k, ω), probed in ARPES, is detailed with different elastic scattering parameters to account for the distinction between laser and synchrotron ARPES. A high (binding-)energy waterfall feature, sensitively dependent on the band hopping parameter t', is noted. This work was supported by DOE under Grant No. FG02-06ER46319.

  11. A Time-Dependent Λ and G Cosmological Model Consistent with Cosmological Constraints

    Directory of Open Access Journals (Sweden)

    L. Kantha

    2016-01-01

    The prevailing constant Λ-G cosmological model agrees with observational evidence including the observed redshift, Big Bang Nucleosynthesis (BBN), and the current rate of acceleration. It assumes that matter contributes 27% to the current density of the universe, with the rest (73%) coming from dark energy represented by the Einstein cosmological parameter Λ in the governing Friedmann-Robertson-Walker equations, derived from Einstein's equations of general relativity. However, the principal problem is the extremely small value of the cosmological parameter (~10^−52 m^−2). Moreover, the dark energy density represented by Λ is presumed to have remained unchanged as the universe expanded by 26 orders of magnitude. Attempts to overcome this deficiency often invoke a variable Λ-G model. Cosmic constraints from action principles require that either both G and Λ remain time-invariant or both vary in time. Here, we propose a variable Λ-G cosmological model consistent with the latest redshift data, the current acceleration rate, and BBN, provided the split between matter and dark energy is 18% and 82%. Λ decreases (Λ ~ τ^−2, where τ is the normalized cosmic time) and G increases (G ~ τ^n) with cosmic time. The model results depend only on the chosen value of Λ at present and in the far future, and not directly on G.

  12. Plasma and BIAS Modeling: Self-Consistent Electrostatic Particle-in-Cell with Low-Density Argon Plasma for TiC

    Directory of Open Access Journals (Sweden)

    Jürgen Geiser

    2011-01-01

    processes. In this paper we present a new model taking into account a self-consistent electrostatic particle-in-cell model with a low-density argon plasma. The collision model, based on Monte Carlo simulations, is discussed for DC sputtering in lower pressure regimes. In order to simulate transport phenomena within sputtering processes realistically, a spatial and temporal knowledge of the plasma density and electrostatic field configuration is needed. Due to relatively low plasma densities, continuum fluid equations are not applicable. We propose instead a particle-in-cell (PIC) method, which allows the study of plasma behavior by computing the trajectories of finite-size particles under the action of an external and self-consistent electric field defined on a grid of points.

  13. Consistency relation in power law G-inflation

    International Nuclear Information System (INIS)

    Unnikrishnan, Sanil; Shankaranarayanan, S.

    2014-01-01

    In the standard inflationary scenario based on a minimally coupled scalar field, canonical or non-canonical, the subluminal propagation speed of scalar perturbations ensures the following consistency relation: r ≤ −8n_T, where r is the tensor-to-scalar ratio and n_T is the spectral index for tensor perturbations. However, recently, it has been demonstrated that this consistency relation could be violated in Galilean inflation models even in the absence of superluminal propagation of scalar perturbations. It is therefore interesting to investigate whether the subluminal propagation of scalar field perturbations imposes any bound on the ratio r/|n_T| in G-inflation models. In this paper, we derive the consistency relation for a class of G-inflation models that lead to power law inflation. Within this class of models, it turns out that one can have r > −8n_T or r ≤ −8n_T depending on the model parameters. However, the subluminal propagation speed of scalar field perturbations, as required by causality, restricts r ≤ −(32/3)n_T
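    For orientation, the bound quoted in this abstract follows from the usual slow-roll expressions for single-field inflation with a scalar sound speed c_s; this is a standard textbook derivation, not material from the paper itself:

```latex
n_T = -2\epsilon, \qquad r = 16\,\epsilon\,c_s
\quad\Longrightarrow\quad
r = -8\,c_s\,n_T \;\le\; -8\,n_T
\qquad (n_T < 0,\; 0 < c_s \le 1).
```

    Subluminal propagation (c_s ≤ 1) is what turns the equality into the inequality r ≤ −8n_T; the abstract's point is that G-inflation models can evade this logic.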

  14. Consistency and Communication in Committees

    OpenAIRE

    Inga Deimen; Felix Ketelaar; Mark T. Le Quement

    2013-01-01

    This paper analyzes truth-telling incentives in pre-vote communication in heterogeneous committees. We generalize the classical Condorcet jury model by introducing a new informational structure that captures consistency of information. In contrast to the impossibility result shown by Coughlan (2000) for the classical model, full pooling of information followed by sincere voting is an equilibrium outcome of our model for a large set of parameter values, implying the possibility of ex post confli...

  15. Self-consistent Maxwell-Bloch model of quantum-dot photonic-crystal-cavity lasers

    DEFF Research Database (Denmark)

    Cartar, William; Mørk, Jesper; Hughes, Stephen

    2017-01-01

    -level emitters are solved numerically. Phenomenological pure dephasing and incoherent pumping is added to the optical Bloch equations to allow for a dynamical lasing regime, but the cavity-mediated radiative dynamics and gain coupling of each QD dipole (artificial atom) is contained self-consistently within......-mode to multimode lasing is also observed, depending on the spectral peak frequency of the QD ensemble. Using a statistical modal analysis of the average decay rates, we also show how the average radiative decay rate decreases as a function of cavity size. In addition, we investigate the role of structural disorder...

  16. Consistent framework data for modeling and formation of scenarios in the Federal Environment Office; Konsistente Rahmendaten fuer Modellierungen und Szenariobildung im Umweltbundesamt

    Energy Technology Data Exchange (ETDEWEB)

    Weimer-Jehle, Wolfgang; Wassermann, Sandra; Kosow, Hannah [Internationales Zentrum fuer Kultur- und Technikforschung an der Univ. Stuttgart (Germany). ZIRN Interdisziplinaerer Forschungsschwerpunkt Risiko und Nachhaltige Technikentwicklung

    2011-04-15

    Model-based environmental scenarios normally require multiple framework assumptions regarding future social, political and economic developments (external developments). In most cases these framework assumptions are highly uncertain. Furthermore, different external developments are not isolated from each other, and their interdependences can be described by qualitative judgments only. If the internal consistency of framework assumptions is not methodologically addressed, environmental models risk being based on inconsistent combinations of framework assumptions which do not reflect existing relations between the respective factors in an appropriate way. This report aims at demonstrating how consistent context scenarios can be developed with the help of cross-impact balance analysis (CIB). This method allows not only for the internal consistency of the framework assumptions of a single model but also for the overall consistency of the framework assumptions across modeling instruments, supporting the integrated interpretation of the results of different models. In order to demonstrate the method, in a first step, ten common framework assumptions were chosen and their possible future developments until 2030 were described. In a second step, a qualitative impact network was developed based on expert elicitation. The impact network provided the basis for a qualitative but systematic analysis of the internal consistency of combinations of framework assumptions. This analysis was carried out with the CIB method and resulted in a set of consistent context scenarios. These scenarios can be used as an informative background for defining framework assumptions for environmental models at the UBA. (orig.)
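    As an illustration of the CIB idea described above, here is a minimal sketch (not the UBA implementation; the descriptors, states, and impact values are invented) that enumerates all scenario combinations and keeps those in which every chosen state scores at least as high in its impact balance as any alternative state:

```python
import itertools

def consistent_scenarios(impacts, n_states):
    """Enumerate internally consistent scenarios under cross-impact balance.

    impacts[i][si][j][sj] is the judged impact of descriptor i being in
    state si on descriptor j being in state sj (self-impacts are zero).
    """
    n = len(n_states)
    found = []
    for scen in itertools.product(*(range(k) for k in n_states)):
        ok = True
        for j in range(n):
            # Impact balance of every candidate state of descriptor j,
            # given the states the other descriptors take in this scenario.
            scores = [sum(impacts[i][scen[i]][j][sj] for i in range(n) if i != j)
                      for sj in range(n_states[j])]
            if scores[scen[j]] < max(scores):
                ok = False  # a better-supported alternative state exists
                break
        if ok:
            found.append(scen)
    return found

# Toy example: two descriptors with two states each, where the "aligned"
# state pairs reinforce each other (all numbers invented for illustration).
Z = [0, 0]  # no self-impact
impacts = [
    [  # descriptor 0 in state 0 / state 1
        [Z, [1, -1]],
        [Z, [-1, 1]],
    ],
    [  # descriptor 1 in state 0 / state 1
        [[1, -1], Z],
        [[-1, 1], Z],
    ],
]
print(consistent_scenarios(impacts, [2, 2]))  # -> [(0, 0), (1, 1)]
```

    Only the two mutually reinforcing combinations survive; this is the kind of internal-consistency screening of framework assumptions that the report applies to its ten descriptors.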

  17. Self-consistent modeling of plasma response to impurity spreading from intense localized source

    International Nuclear Information System (INIS)

    Koltunov, Mikhail

    2012-07-01

    Non-hydrogen impurities unavoidably exist in the hot plasmas of present fusion devices. They enter the plasma intrinsically, through plasma interaction with the wall of the vacuum vessel, and are also deliberately seeded for various purposes. Normally, the spots where injected particles enter the plasma are much smaller than its total surface. Under such conditions one has to expect a significant modification of local plasma parameters through various physical mechanisms, which, in turn, affect the impurity spreading. Self-consistent modeling of the interaction between impurity and plasma is, therefore, not possible with linear approaches. A model based on the fluid description of electrons, main and impurity ions, and taking into account the plasma quasi-neutrality, Coulomb collisions of background and impurity charged particles, radiation losses, and particle transport to bounding surfaces, is elaborated in this work. To describe the impurity spreading and the plasma response self-consistently, fluid equations for the particle, momentum and energy balances of various plasma components are solved by reducing them to ordinary differential equations for the time evolution of several parameters characterizing the solution in principal details: the magnitudes of plasma density and plasma temperatures in the regions of impurity localization and the spatial scales of these regions. The results of calculations for plasma conditions typical in tokamak experiments with impurity injection are presented. A new mechanism for the condensation phenomenon and formation of cold dense plasma structures is proposed.

  18. Food pattern modeling shows that the 2010 Dietary Guidelines for sodium and potassium cannot be met simultaneously

    Science.gov (United States)

    Maillot, Matthieu; Monsivais, Pablo; Drewnowski, Adam

    2013-01-01

    The 2010 US Dietary Guidelines recommended limiting intake of sodium to 1500 mg/d for people older than 50 years, African Americans, and those suffering from chronic disease. The guidelines recommended that all other people consume less than 2300 mg sodium and 4700 mg of potassium per day. The theoretical feasibility of meeting the sodium and potassium guidelines while simultaneously maintaining nutritional adequacy of the diet was tested using food pattern modeling based on linear programming. Dietary data from the National Health and Nutrition Examination Survey 2001-2002 were used to create optimized food patterns for 6 age-sex groups. Linear programming models determined the boundary conditions for the potassium and sodium content of the modeled food patterns that would also be compatible with other nutrient goals. Linear programming models also sought to determine the amounts of sodium and potassium that both would be consistent with the ratio of Na to K of 0.49 and would cause the least deviation from the existing food habits. The 6 sets of food patterns were created before and after an across-the-board 10% reduction in sodium content of all foods in the Food and Nutrition Database for Dietary Studies. Modeling analyses showed that the 2010 Dietary Guidelines for sodium were incompatible with potassium guidelines and with nutritionally adequate diets, even after reducing the sodium content of all US foods by 10%. Feasibility studies should precede or accompany the issuing of dietary guidelines to the public. PMID:23507224
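    The linear-programming approach in this abstract can be sketched in a few lines. The food set and per-serving nutrient values below are invented for illustration (the study used the full Food and Nutrition Database for Dietary Studies); the point is only to show how a feasibility question about sodium and potassium bounds becomes an LP:

```python
from scipy.optimize import linprog

# Hypothetical per-serving values: (sodium mg, potassium mg, kcal).
foods = {
    "vegetables": (50, 500, 80),
    "dairy": (120, 380, 150),
    "grains": (200, 100, 250),
}
na, k, kcal = (list(col) for col in zip(*foods.values()))

# Feasibility LP: find servings x >= 0 such that
#   sodium <= 1500 mg, potassium >= 4700 mg, 1800 <= kcal <= 2200.
A_ub = [na, [-v for v in k], kcal, [-v for v in kcal]]
b_ub = [1500, -4700, 2200, -1800]
res = linprog(c=[0.0] * len(foods), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * len(foods), method="highs")
print("feasible:", res.status == 0)
```

    This toy instance happens to be feasible; the paper's finding is that with a realistic food database and the full set of nutrient-adequacy constraints, no feasible pattern exists at the 1500 mg sodium level.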

  19. Is the thermal-spike model consistent with experimentally determined electron temperature?

    International Nuclear Information System (INIS)

    Ajryan, Eh.A.; Fedorov, A.V.; Kostenko, B.F.

    2000-01-01

    Carbon K-Auger electron spectra from amorphous carbon foils induced by fast heavy ions are investigated theoretically. The high-energy tail of the Auger structure, showing a clear projectile-charge dependence, is analyzed within the thermal-spike model framework as well as within another model taking into account some kinetic features of the process. The poor agreement between theoretically and experimentally determined temperatures is suggested to be due to an improper account of double electron excitations, or due to shake-up processes which leave the system in a more energetic initial state than a statically screened core hole

  20. String consistency for unified model building

    International Nuclear Information System (INIS)

    Chaudhuri, S.; Chung, S.W.; Hockney, G.; Lykken, J.

    1995-01-01

    We explore the use of real fermionization as a test case for understanding how specific features of phenomenological interest in the low-energy effective superpotential are realized in exact solutions to heterotic superstring theory. We present pedagogic examples of models which realize SO(10) as a level two current algebra on the world-sheet, and discuss in general how higher level current algebras can be realized in the tensor product of simple constituent conformal field theories. We describe formal developments necessary to compute couplings in models built using real fermionization. This allows us to isolate cases of spin structures where the standard prescription for real fermionization may break down. (orig.)

  1. Model for ICRF fast wave current drive in self-consistent MHD equilibria

    International Nuclear Information System (INIS)

    Bonoli, P.T.; Englade, R.C.; Porkolab, M.; Fenstermacher, M.E.

    1993-01-01

    Recently, a model for fast wave current drive in the ion cyclotron radio frequency (ICRF) range was incorporated into the current drive and MHD equilibrium code ACCOME. The ACCOME model combines a free boundary solution of the Grad Shafranov equation with the calculation of driven currents due to neutral beam injection, lower hybrid (LH) waves, bootstrap effects, and ICRF fast waves. The equilibrium and current drive packages iterate between each other to obtain an MHD equilibrium which is consistent with the profiles of driven current density. The ICRF current drive package combines a toroidal full-wave code (FISIC) with a parameterization of the current drive efficiency obtained from an adjoint solution of the Fokker Planck equation. The electron absorption calculation in the full-wave code properly accounts for the combined effects of electron Landau damping (ELD) and transit time magnetic pumping (TTMP), assuming a Maxwellian (or bi-Maxwellian) electron distribution function. Furthermore, the current drive efficiency includes the effects of particle trapping, momentum conserving corrections to the background Fokker Planck collision operator, and toroidally induced variations in the parallel wavenumbers of the injected ICRF waves. This model has been used to carry out detailed studies of advanced physics scenarios in the proposed Tokamak Physics Experiment (TPX). Results are shown, for example, which demonstrate the possibility of achieving stable equilibria at high beta and high bootstrap current fraction in TPX. Model results are also shown for the proposed ITER device

  2. Student Effort, Consistency, and Online Performance

    Science.gov (United States)

    Patron, Hilde; Lopez, Salvador

    2011-01-01

    This paper examines how student effort, consistency, motivation, and marginal learning, influence student grades in an online course. We use data from eleven Microeconomics courses taught online for a total of 212 students. Our findings show that consistency, or less time variation, is a statistically significant explanatory variable, whereas…

  3. Direct detection of WIMPs: implications of a self-consistent truncated isothermal model of the Milky Way's dark matter halo

    Science.gov (United States)

    Chaudhury, Soumini; Bhattacharjee, Pijushpani; Cowsik, Ramanath

    2010-09-01

    Direct detection of Weakly Interacting Massive Particle (WIMP) candidates of Dark Matter (DM) is studied within the context of a self-consistent truncated isothermal model of the finite-size dark halo of the Galaxy. The halo model, based on the "King model" of the phase space distribution function of collisionless DM particles, takes into account the modifications of the phase-space structure of the halo due to the gravitational influence of the observed visible matter in a self-consistent manner. The parameters of the halo model are determined by a fit to a recently determined circular rotation curve of the Galaxy that extends up to ~ 60 kpc. Unlike in the Standard Halo Model (SHM) customarily used in the analysis of the results of WIMP direct detection experiments, the velocity distribution of the WIMPs in our model is non-Maxwellian, with a cut-off at a maximum velocity that is self-consistently determined by the model itself. For our halo model that provides the best fit to the rotation curve data, the 90% C.L. upper limit on the WIMP-nucleon spin-independent cross section from the recent results of the CDMS-II experiment, for example, is ~ 5.3 × 10^−8 pb at a WIMP mass of ~ 71 GeV. We also find, using the original 2-bin annual modulation amplitude data on the nuclear recoil event rate seen in the DAMA experiment, that there exists a range of small WIMP masses, typically ~ 2-16 GeV, within which the DAMA collaboration's claimed annual modulation signal, purportedly due to WIMPs, is compatible with the null results of other experiments. These results, based as they are on a self-consistent model of the dark matter halo of the Galaxy, strengthen the possibility of low-mass (≲ 10 GeV) WIMPs as a candidate for dark matter, as indicated by several earlier studies performed within the context of the SHM. A more rigorous analysis using DAMA bins over smaller intervals should be able to better constrain the "DAMA regions" in the WIMP parameter space within the context of
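    To illustrate why the velocity cut-off matters for direct detection, the sketch below compares the fraction of WIMPs above an illustrative threshold speed for a truncated, King-like speed distribution versus a plain Maxwellian. All numbers (v0, vesc, vmin) are illustrative placeholders, not the fitted values of the paper:

```python
import numpy as np

v0, vesc = 220.0, 544.0  # km/s, illustrative SHM-like parameters
vmin = 400.0             # km/s, illustrative minimum speed for a detectable recoil

v = np.linspace(0.0, vesc, 4000)
dv = v[1] - v[0]

# King-like distribution: a Maxwellian minus its value at the escape
# speed, so f vanishes smoothly at v = vesc (self-consistent cut-off).
f_king = v**2 * (np.exp(-v**2 / v0**2) - np.exp(-vesc**2 / v0**2))
# Plain Maxwellian, simply chopped at vesc for comparison.
f_maxw = v**2 * np.exp(-v**2 / v0**2)

f_king /= f_king.sum() * dv
f_maxw /= f_maxw.sum() * dv

frac_king = f_king[v >= vmin].sum() * dv
frac_maxw = f_maxw[v >= vmin].sum() * dv
print(f"high-speed fraction: king={frac_king:.4f}  maxwellian={frac_maxw:.4f}")
```

    The truncated model puts less weight in the high-speed tail, which is what shifts the inferred cross-section limits and low-mass sensitivity relative to the SHM.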

  4. Modeling of LH current drive in self-consistent elongated tokamak MHD equilibria

    International Nuclear Information System (INIS)

    Blackfield, D.T.; Devoto, R.S.; Fenstermacher, M.E.; Bonoli, P.T.; Porkolab, M.; Yugo, J.

    1989-01-01

    Calculations of non-inductive current drive typically have been used with model MHD equilibria which are independently generated from an assumed toroidal current profile or from a fit to an experiment. Such a method can lead to serious errors since the driven current can dramatically alter the equilibrium and changes in the equilibrium B-fields can dramatically alter the current drive. The latter effect is quite pronounced in LH current drive where the ray trajectories are sensitive to the local values of the magnetic shear and the density gradient. In order to overcome these problems, we have modified a LH simulation code to accommodate elongated plasmas with numerically generated equilibria. The new LH module has been added to the ACCOME code which solves for current drive by neutral beams, electric fields, and bootstrap effects in a self-consistent 2-D equilibrium. We briefly describe the model in the next section and then present results of a study of LH current drive in ITER. 2 refs., 6 figs., 2 tabs

  5. Consistent thermodynamic properties of lipids systems

    DEFF Research Database (Denmark)

    Cunico, Larissa; Ceriani, Roberta; Sarup, Bent

    Physical and thermodynamic properties of pure components and their mixtures are the basic requirement for process design, simulation, and optimization. In the case of lipids, our previous works [1-3] have indicated a lack of experimental data for pure components and also for their mixtures...different pressures, with azeotrope behavior observed. Available thermodynamic consistency tests for TPx data were applied before performing parameter regressions for the Wilson, NRTL, UNIQUAC and original UNIFAC models. The relevance of enlarging the experimental databank of lipids systems data in order to improve the performance of predictive thermodynamic models was confirmed in this work by analyzing the calculated values of the original UNIFAC model. For solid-liquid equilibrium (SLE) data, new consistency tests have been developed [2]. Some of the developed tests were based on the quality tests proposed for VLE data...

  6. Multi-Time Scale Model Order Reduction and Stability Consistency Certification of Inverter-Interfaced DG System in AC Microgrid

    Directory of Open Access Journals (Sweden)

    Xiaoxiao Meng

    2018-01-01

    An AC microgrid mainly comprises inverter-interfaced distributed generators (IIDGs), which are nonlinear complex systems with multiple time scales, including frequency control, time-delay measurements, and electromagnetic transients. The droop-control-based IIDG in an AC microgrid is selected as the research object in this study; it comprises a power droop controller, voltage- and current-loop controllers, and a filter and line. The multi-time-scale characteristics of the detailed IIDG model are divided based on singular perturbation theory. In addition, the IIDG model order is reduced by neglecting the system's fast dynamics. The static and transient stability consistency of the IIDG model order reduction is demonstrated by extracting features of the IIDG small-signal model and using the quadratic approximation method of the stability region boundary, respectively. The dynamic response consistency of the IIDG model order reduction is evaluated using the frequency, damping and amplitude features extracted by the Prony transformation. The results provide a simplified model for the dynamic characteristic analysis of IIDG systems in an AC microgrid. The accuracy of the proposed method is verified by eigenvalue comparison, transient stability index comparison and dynamic time-domain simulation.
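    The Prony analysis mentioned above fits a signal with damped exponentials, whose frequencies and damping rates come from the roots of a linear-prediction polynomial. A minimal numpy sketch (illustrative, not the authors' code) applied to a synthetic damped oscillation:

```python
import numpy as np

def prony_modes(x, p, dt):
    """Estimate (frequency in Hz, damping in 1/s) of p modes from samples x."""
    n = len(x)
    # Linear prediction: x[m] = a_1 x[m-1] + ... + a_p x[m-p]
    A = np.column_stack([x[p - 1 - k : n - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    # Roots of z^p - a_1 z^(p-1) - ... - a_p give the complex modes.
    z = np.roots(np.concatenate(([1.0], -a)))
    lam = np.log(z.astype(complex)) / dt
    return np.abs(lam.imag) / (2 * np.pi), lam.real

# Synthetic damped oscillation: 5 Hz, decay rate 0.5 1/s.
dt = 0.01
t = np.arange(200) * dt
x = np.exp(-0.5 * t) * np.cos(2 * np.pi * 5.0 * t)
freqs, damps = prony_modes(x, 2, dt)
print(freqs, damps)  # both modes: ~5 Hz, damping ~ -0.5
```

    On this noiseless series the estimated conjugate mode pair recovers the 5 Hz frequency and the −0.5 1/s damping; real microgrid waveforms would need more samples, a higher model order, and noise handling.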

  7. Investigating the consistency between proxy-based reconstructions and climate models using data assimilation: a mid-Holocene case study

    NARCIS (Netherlands)

    A. Mairesse; H. Goosse; P. Mathiot; H. Wanner; S. Dubinkina (Svetlana)

    2013-01-01

    The mid-Holocene (6 kyr BP; thousand years before present) is a key period to study the consistency between model results and proxy-based reconstruction data, as it corresponds to a standard test for models and a reasonable number of proxy-based records is available. Taking advantage of

  8. Consistent Valuation across Curves Using Pricing Kernels

    Directory of Open Access Journals (Sweden)

    Andrea Macrina

    2018-03-01

    The general problem of asset pricing when the discount rate differs from the rate at which an asset's cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets). As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised, and single-curve models are extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.

  9. Models of alien species richness show moderate predictive accuracy and poor transferability

    Directory of Open Access Journals (Sweden)

    César Capinha

    2018-06-01

    Robust predictions of alien species richness are useful to assess global biodiversity change. Nevertheless, the capacity to predict spatial patterns of alien species richness remains largely unassessed. Using 22 data sets of alien species richness from diverse taxonomic groups and covering various parts of the world, we evaluated whether different statistical models were able to provide useful predictions of absolute and relative alien species richness, as a function of explanatory variables representing geographical, environmental and socio-economic factors. Five state-of-the-art count-data modelling techniques were used and compared: Poisson and negative binomial generalised linear models (GLMs), multivariate adaptive regression splines (MARS), random forests (RF) and boosted regression trees (BRT). We found that predictions of absolute alien species richness had a low to moderate accuracy in the region where the models were developed and a consistently poor accuracy in new regions. Predictions of relative richness performed in a superior manner in both geographical settings, but still were not good. Flexible tree-ensemble techniques (RF and BRT) were shown to be significantly better at modelling alien species richness than parametric linear models (such as GLMs), despite the latter being more commonly applied for this purpose. Importantly, the poor spatial transferability of the models also warrants caution in assuming the generality of the relationships they identify, e.g. by applying projections under future scenario conditions. Ultimately, our results strongly suggest that the predictability of spatial variation in alien species richness is limited. The somewhat more robust ability to rank regions according to the number of aliens they have (i.e. relative richness) suggests that models of alien species richness may be useful for prioritising and comparing regions, but not for predicting exact species numbers.
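    One of the parametric baselines compared above, the Poisson GLM, is easy to sketch from scratch. Below is a numpy-only illustration fitted by Newton-Raphson; the simulated "richness" data and the single environmental covariate are invented for demonstration, not drawn from the study's data sets:

```python
import numpy as np

def fit_poisson_glm(X, y, iters=25):
    """Poisson regression with a log link, fitted by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)            # predicted mean counts
        grad = X.T @ (y - mu)            # score vector
        hess = X.T @ (X * mu[:, None])   # Fisher information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

# Synthetic species-richness counts driven by one environmental covariate.
rng = np.random.default_rng(0)
env = rng.uniform(-1.0, 1.0, 5000)
X = np.column_stack([np.ones_like(env), env])
y = rng.poisson(np.exp(0.5 + 0.3 * env))
print(fit_poisson_glm(X, y))  # roughly [0.5, 0.3]
```

    The transferability problem the abstract highlights would appear here as a β fitted in one region predicting poorly in a region whose environment-richness relationship differs.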

  10. Self-consistency and coherent effects in nonlinear resonances

    International Nuclear Information System (INIS)

    Hofmann, I.; Franchetti, G.; Qiang, J.; Ryne, R. D.

    2003-01-01

    The influence of space charge on emittance growth is studied in simulations of a coasting beam exposed to a strong octupolar perturbation in an otherwise linear lattice, and under stationary parameters. We explore the importance of self-consistency by comparing results with a non-self-consistent model, where the space charge electric field is kept 'frozen-in' to its initial values. For Gaussian distribution functions we find that the 'frozen-in' model results in a good approximation of the self-consistent model, hence coherent response is practically absent and the emittance growth is self-limiting due to space charge de-tuning. For KV or waterbag distributions, instead, strong coherent response is found, which we explain in terms of absence of Landau damping

  11. Modeling of the 3RS tau protein with self-consistent field method and Monte Carlo simulation

    NARCIS (Netherlands)

    Leermakers, F.A.M.; Jho, Y.S.; Zhulina, E.B.

    2010-01-01

    Using a model with amino acid resolution of the 196 aa N-terminus of the 3RS tau protein, we performed both a Monte Carlo study and a complementary self-consistent field (SCF) analysis to obtain detailed information on conformational properties of these moieties near a charged plane (mimicking the

  12. Toward a consistent model for strain accrual and release for the New Madrid Seismic Zone, central United States

    Science.gov (United States)

    Hough, S.E.; Page, M.

    2011-01-01

    At the heart of the conundrum of seismogenesis in the New Madrid Seismic Zone is the apparently substantial discrepancy between low strain rate and high recent seismic moment release. In this study we revisit the magnitudes of the four principal 1811–1812 earthquakes using intensity values determined from individual assessments from four experts. Using these values and the grid search method of Bakun and Wentworth (1997), we estimate magnitudes around 7.0 for all four events, values that are significantly lower than previously published magnitude estimates based on macroseismic intensities. We further show that the strain rate predicted from postglacial rebound is sufficient to produce a sequence with the moment release of one M_max 6.8 event every 500 years, a rate that is much lower than previous estimates of late Holocene moment release. However, M_w 6.8 is at the low end of the uncertainty range inferred from analysis of intensities for the largest 1811–1812 event. We show that M_w 6.8 is also a reasonable value for the largest main shock given a plausible rupture scenario. One can also construct a range of consistent models that permit a somewhat higher M_max, with a longer average recurrence rate. It is thus possible to reconcile predicted strain and seismic moment release rates with alternative models: one in which 1811–1812 sequences occur every 500 years, with the largest events being M_max ∼ 6.8, or one in which sequences occur, on average, less frequently, with M_max of ∼7.0. Both models predict that the late Holocene rate of activity will continue for the next few to ten thousand years.

  13. A Preliminary Comparison of Motor Learning Across Different Non-invasive Brain Stimulation Paradigms Shows No Consistent Modulations

    Directory of Open Access Journals (Sweden)

    Virginia Lopez-Alonso

    2018-04-01

    Non-invasive brain stimulation (NIBS) has been widely explored as a way to safely modulate brain activity and alter human performance for nearly three decades. Research using NIBS has grown exponentially within the last decade with promising results across a variety of clinical and healthy populations. However, recent work has shown high inter-individual variability and a lack of reproducibility of previous results. Here, we conducted a small preliminary study to explore the effects of three of the most commonly used excitatory NIBS paradigms over the primary motor cortex (M1) on motor learning (Sequential Visuomotor Isometric Pinch Force Tracking Task) and secondarily relate changes in motor learning to changes in cortical excitability (MEP amplitude and SICI). We compared anodal transcranial direct current stimulation (tDCS), paired associative stimulation (PAS25), and intermittent theta burst stimulation (iTBS), along with a sham tDCS control condition. Stimulation was applied prior to motor learning. Participants (n = 28) were randomized into one of the four groups and were trained on a skilled motor task. Motor learning was measured immediately after training (online), 1 day after training (consolidation), and 1 week after training (retention). We did not find consistent differential effects on motor learning or cortical excitability across groups. Within the boundaries of our small sample sizes, we then assessed effect sizes across the NIBS groups that could help power future studies. These results, which require replication with larger samples, are consistent with previous reports of small and variable effect sizes of these interventions on motor learning.

  14. A Preliminary Comparison of Motor Learning Across Different Non-invasive Brain Stimulation Paradigms Shows No Consistent Modulations

    Science.gov (United States)

    Lopez-Alonso, Virginia; Liew, Sook-Lei; Fernández del Olmo, Miguel; Cheeran, Binith; Sandrini, Marco; Abe, Mitsunari; Cohen, Leonardo G.

    2018-01-01

    Non-invasive brain stimulation (NIBS) has been widely explored as a way to safely modulate brain activity and alter human performance for nearly three decades. Research using NIBS has grown exponentially within the last decade with promising results across a variety of clinical and healthy populations. However, recent work has shown high inter-individual variability and a lack of reproducibility of previous results. Here, we conducted a small preliminary study to explore the effects of three of the most commonly used excitatory NIBS paradigms over the primary motor cortex (M1) on motor learning (Sequential Visuomotor Isometric Pinch Force Tracking Task) and secondarily relate changes in motor learning to changes in cortical excitability (MEP amplitude and SICI). We compared anodal transcranial direct current stimulation (tDCS), paired associative stimulation (PAS25), and intermittent theta burst stimulation (iTBS), along with a sham tDCS control condition. Stimulation was applied prior to motor learning. Participants (n = 28) were randomized into one of the four groups and were trained on a skilled motor task. Motor learning was measured immediately after training (online), 1 day after training (consolidation), and 1 week after training (retention). We did not find consistent differential effects on motor learning or cortical excitability across groups. Within the boundaries of our small sample sizes, we then assessed effect sizes across the NIBS groups that could help power future studies. These results, which require replication with larger samples, are consistent with previous reports of small and variable effect sizes of these interventions on motor learning. PMID:29740271
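    The effect sizes reported to help power future studies are typically standardized mean differences; a minimal sketch of Cohen's d for two independent groups follows, with made-up scores standing in for the study's data.

```python
import math
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, using the pooled sample SD."""
    na, nb = len(group_a), len(group_b)
    ma, mb = statistics.mean(group_a), statistics.mean(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)  # n-1 denominators
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Illustrative motor-learning scores for an active vs. sham group
# (hypothetical numbers, not taken from the study)
active = [0.62, 0.55, 0.70, 0.58, 0.66, 0.61, 0.59]
sham = [0.57, 0.52, 0.60, 0.54, 0.58, 0.56, 0.55]
d = cohens_d(active, sham)
```

Values of d near 0.2, 0.5, and 0.8 are conventionally read as small, medium, and large, which is why "small and variable effect sizes" directly constrains the sample size a replication would need.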

  15. Comprehensive and fully self-consistent modeling of modern semiconductor lasers

    International Nuclear Information System (INIS)

    Nakwaski, W.; Sarzał, R. P.

    2016-01-01

    A fully self-consistent model of modern semiconductor lasers, used to design their advanced structures and to understand their properties more deeply, is presented in this paper. Operation of semiconductor lasers depends not only on many optical, electrical, thermal, recombination, and sometimes mechanical phenomena taking place within their volumes but also on numerous mutual interactions between these phenomena. Their experimental investigation is quite complex, mostly because of miniature device sizes. Therefore, the most convenient and exact way to analyze expected laser operation and to determine optimal laser structures for various applications is to examine the details of their performance with the aid of a simulation of laser operation under the various conditions considered. Such a simulation of semiconductor laser operation is presented in this paper in the full complexity of all mutual interactions between the individual physical processes listed above. In particular, the hole-burning effect is discussed. The impacts on laser performance of oxide apertures (their sizes and localization) are analyzed in detail. Also, some important details concerning the operation of various types of semiconductor lasers are discussed, and results of some applications are shown for successive laser structures. (paper)

  16. Self-consistent Non-LTE Model of Infrared Molecular Emissions and Oxygen Dayglows in the Mesosphere and Lower Thermosphere

    Science.gov (United States)

    Feofilov, Artem G.; Yankovsky, Valentine A.; Pesnell, William D.; Kutepov, Alexander A.; Goldberg, Richard A.; Mauilova, Rada O.

    2007-01-01

    We present the new version of the ALI-ARMS (Accelerated Lambda Iterations for Atmospheric Radiation and Molecular Spectra) model. The model allows simultaneous, self-consistent calculation of the non-LTE populations of the electronic-vibrational levels of the O3 and O2 photolysis products and the vibrational level populations of CO2, N2, O2, O3, H2O, CO and other molecules, with detailed accounting of the variety of electronic-vibrational, vibrational-vibrational and vibrational-translational energy exchange processes. The model was used as the reference for modeling the O2 dayglows and infrared molecular emissions for self-consistent diagnostics of the multi-channel space observations of the MLT in the SABER experiment. It also allows reevaluating the thermalization efficiency of the absorbed solar ultraviolet energy and the infrared radiative cooling/heating of the MLT by detailed accounting of the electronic-vibrational relaxation of excited photolysis products via the complex chain of collisional energy conversion processes down to the vibrational energy of optically active trace gas molecules.

  17. Self-consistent radial sheath

    International Nuclear Information System (INIS)

    Hazeltine, R.D.

    1988-12-01

    The boundary layer arising in the radial vicinity of a tokamak limiter is examined, with special reference to the TEXT tokamak. It is shown that sheath structure depends upon the self-consistent effects of ion guiding-center orbit modification, as well as the radial variation of E × B-induced toroidal rotation. Reasonable agreement with experiment is obtained from an idealized model which, however simplified, preserves such self-consistent effects. It is argued that the radial sheath, which occurs whenever confining magnetic field-lines lie in the plasma boundary surface, is an object of some intrinsic interest. It differs from the more familiar axial sheath because magnetized charges respond very differently to parallel and perpendicular electric fields. 11 refs., 1 fig

  18. An Ice Model That is Consistent with Composite Rheology in GIA Modelling

    Science.gov (United States)

    Huang, P.; Patrick, W.

    2017-12-01

    There are several popular approaches to constructing ice history models. The first is based mainly on thermo-mechanical ice models with forcing or boundary conditions inferred from paleoclimate data. The second is based mainly on the observed response of the Earth to glacial loading and unloading, a process called Glacial Isostatic Adjustment or GIA. The third approach is a hybrid of the first and second. In this presentation, we follow the second approach, which also uses geological data such as ice flow and terminal moraine data and simple ice dynamics for the ice-sheet reconstruction (Peltier & Andrews 1976). The global ice model ICE-6G (Peltier et al. 2015) and all its predecessors (Tushingham & Peltier 1991, Peltier 1994, 1996, 2004, Lambeck et al. 2014) are constructed this way under the assumption that mantle rheology is linear. However, high-temperature creep experiments on mantle rocks show that non-linear creep laws can also operate in the mantle. Since both linear (e.g. diffusion) and non-linear (e.g. dislocation) creep laws can operate simultaneously in the mantle, mantle rheology is likely composite, where the total creep is the sum of the linear and non-linear creep rates. Preliminary GIA studies found that composite rheology can fit regional RSL observations better than linear rheology (e.g. van der Wal et al. 2010). The aim of this paper is to construct ice models in Laurentia and Fennoscandia using this second approach, but with composite rheology, so that the predictions fit GIA observations such as global RSL data, land uplift rates and g-dot simultaneously, in addition to geological data and simple ice dynamics. The g-dot, or gravity-rate-of-change, data are from the GRACE gravity mission with the effects of hydrology removed. Our GIA model is based on the Coupled Laplace-Finite Element method as described in Wu (2004) and van der Wal et al. (2010). It is found that composite rheology generally supports a thicker
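    The composite rheology described above, where linear (diffusion) and power-law (dislocation) creep act in parallel, can be sketched directly; the parameter values below are illustrative only, not fitted mantle values.

```python
def composite_strain_rate(sigma, a_diff, a_disl, n=3.0):
    """Composite rheology: total creep rate is the sum of a linear
    (diffusion) term and a power-law (dislocation) term.
    a_diff and a_disl are pre-factors; n ~ 3 is typical for dislocation creep."""
    return a_diff * sigma + a_disl * sigma ** n

def effective_viscosity(sigma, a_diff, a_disl, n=3.0):
    # eta_eff = sigma / (2 * strain_rate); it decreases as stress grows,
    # because the power-law term increasingly dominates (stress weakening)
    return sigma / (2.0 * composite_strain_rate(sigma, a_diff, a_disl, n))

# At higher stress the material is effectively weaker:
eta_low = effective_viscosity(1.0, a_diff=1.0, a_disl=0.5)
eta_high = effective_viscosity(2.0, a_diff=1.0, a_disl=0.5)
```

This stress dependence is what lets a composite model respond differently to the large stresses under a former ice sheet than to the small stresses far from it, which is why it can fit regional RSL data where a single linear viscosity struggles.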

  19. Childhood central adiposity at ages 5 and 9 shows consistent relationship with that of the maternal grandmother but not other grandparents.

    Science.gov (United States)

    Somerville, R; Khalil, H; Segurado, R; Mehegan, J; Viljoen, K; Heinen, M; Murrin, C; Kelleher, C C

    2018-05-09

    The importance of a life course approach to childhood obesity has been emphasized; however, few studies can prospectively investigate relationships in three-generation families. To prospectively investigate the relationship between grandparental and grandchild waist circumference (WC) at ages 5 and 9 down maternal and paternal lines. At baseline in the Lifeways Cross-Generation Cohort, 1094 children were born to 1082 mothers; 585 were examined at age 5 and 298 at age 9. Of the total 589 children with measured WC, data were also available from 745 grandparents. Child WC was standardized for age and sex, and theory-based hierarchical linear regression was used. Maternal grandmother (MGM) WC was predictive of grandchild WC at both time points. At age 5, grandchild's standardized birth weight (B = 0.266, p = 0.001), mother's means tested eligibility for free medical care (B = 1.029, p = 0.001) and grandchild seeing maternal grandparents daily (B = 0.312, p = 0.048) were significant alongside MGM WC (B = 0.015, p = 0.019). At age 9, only MGM WC (B = 0.022, p = 0.033) and mother's WC (B = 0.032, p = 0.005) were significant. Mediation analysis with mother's WC showed significant direct relationship of MGM and grandchild WC. This prospective cross-generational cohort shows consistent patterns of association between MGM and grandchild WC, not seen in other grandparental lineages. © 2018 World Obesity Federation.

  20. Net Rotation of the Lithosphere in Mantle Convection Models with Self-consistent Plate Generation

    Science.gov (United States)

    Gerault, M.; Coltice, N.

    2017-12-01

    Lateral variations in the viscosity structure of the lithosphere and the mantle give rise to a discordant motion between the two. In a deep mantle reference frame, this motion is called the net rotation of the lithosphere. Plate motion reconstructions, mantle flow computations, and inferences from seismic anisotropy all indicate some amount of net rotation using different mantle reference frames. While the direction of rotation is somewhat consistent across studies, the predicted amplitudes range from 0.1 deg/Myr to 0.3 deg/Myr at the present-day. How net rotation rates could have differed in the past is also a subject of debate and strong geodynamic arguments are missing from the discussion. This study provides the first net rotation calculations in 3-D spherical mantle convection models with self-consistent plate generation. We run the computations for billions of years of numerical integration. We look into how sensitive the net rotation is to major tectonic events, such as subduction initiation, continental breakup and plate reorganisations, and whether some governing principles from the models could guide plate motion reconstructions. The mantle convection problem is solved with the finite volume code StagYY using a visco-pseudo-plastic rheology. Mantle flow velocities are solely driven by buoyancy forces internal to the system, with free slip upper and lower boundary conditions. We investigate how the yield stress, the mantle viscosity structure and the properties of continents affect the net rotation over time. Models with large lateral viscosity variations from continents predict net rotations that are at least threefold faster than those without continents. Models where continents cover a third of the surface produce net rotation rates that vary from nearly zero to over 0.3 deg/Myr with rapid increase during continental breakup. The pole of rotation appears to migrate along no particular path. For all models, regardless of the yield stress and the
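    The net rotation itself is the degree-1 toroidal component of the surface velocity field, ω = 3/(8πR⁴) ∫_S (r × v) dS; a minimal numerical-quadrature sketch on a latitude-longitude grid follows (a sanity check on a prescribed rigid rotation, not the StagYY diagnostic itself).

```python
import math

def net_rotation(velocity, radius=1.0, nlat=90, nlon=180):
    """Net rotation vector of a surface velocity field on a sphere:
    omega = 3/(8*pi*R^4) * integral_S (r x v) dS,
    evaluated by midpoint quadrature on an nlat x nlon grid.
    `velocity(r)` returns the velocity vector at surface point r."""
    integral = [0.0, 0.0, 0.0]
    dphi = math.pi / nlat
    dlam = 2.0 * math.pi / nlon
    for i in range(nlat):
        phi = -0.5 * math.pi + (i + 0.5) * dphi  # latitude of cell centre
        for j in range(nlon):
            lam = (j + 0.5) * dlam
            r = (radius * math.cos(phi) * math.cos(lam),
                 radius * math.cos(phi) * math.sin(lam),
                 radius * math.sin(phi))
            v = velocity(r)
            dS = radius ** 2 * math.cos(phi) * dphi * dlam
            integral[0] += (r[1] * v[2] - r[2] * v[1]) * dS  # (r x v)_x
            integral[1] += (r[2] * v[0] - r[0] * v[2]) * dS  # (r x v)_y
            integral[2] += (r[0] * v[1] - r[1] * v[0]) * dS  # (r x v)_z
    c = 3.0 / (8.0 * math.pi * radius ** 4)
    return tuple(c * comp for comp in integral)

# Sanity check: a rigid rotation about z at 0.2 (arbitrary units) is recovered
omega_true = (0.0, 0.0, 0.2)
rigid = lambda r: (omega_true[1] * r[2] - omega_true[2] * r[1],
                   omega_true[2] * r[0] - omega_true[0] * r[2],
                   omega_true[0] * r[1] - omega_true[1] * r[0])
omega_est = net_rotation(rigid)
```

In a convection model the same integral is applied to the computed surface velocities at each output time, giving the net rotation rate and pole whose evolution the abstract discusses.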

  1. A time consistent risk averse three-stage stochastic mixed integer optimization model for power generation capacity expansion

    International Nuclear Information System (INIS)

    Pisciella, P.; Vespucci, M.T.; Bertocchi, M.; Zigrino, S.

    2016-01-01

    We propose a multi-stage stochastic optimization model for the generation capacity expansion problem of a price-taker power producer. Uncertainties regarding the evolution of electricity prices and fuel costs play a major role in long term investment decisions, therefore the objective function represents a trade-off between expected profit and risk. The Conditional Value at Risk is the risk measure used and is defined by a nested formulation that guarantees time consistency in the multi-stage model. The proposed model allows one to determine a long term expansion plan which takes into account uncertainty, while the LCoE approach, currently used by decision makers, only allows one to determine which technology should be chosen for the next power plant to be built. A sensitivity analysis is performed with respect to the risk weighting factor and budget amount. - Highlights: • We propose a time consistent risk averse multi-stage model for capacity expansion. • We introduce a case study with uncertainty on electricity prices and fuel costs. • Increased budget moves the investment from gas towards renewables and then coal. • Increased risk aversion moves the investment from coal towards renewables. • Time inconsistency leads to a profit gap between planned and implemented policies.
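    The nested, time-consistent formulation builds on the single-stage Conditional Value at Risk of a discrete loss distribution; a minimal sketch of that building block (the Rockafellar-Uryasev tail-average form, with made-up numbers) is:

```python
def cvar(losses, probs, alpha):
    """Conditional Value at Risk at level alpha for a discrete loss
    distribution: the expected loss within the worst (1 - alpha) tail.
    In the paper's multi-stage model this one-stage measure is nested
    stage by stage, which is what guarantees time consistency."""
    pairs = sorted(zip(losses, probs), reverse=True)  # worst losses first
    tail = 1.0 - alpha
    acc, remaining = 0.0, tail
    for loss, p in pairs:
        take = min(p, remaining)
        acc += take * loss
        remaining -= take
        if remaining <= 1e-12:
            break
    return acc / tail

# Illustrative three-scenario loss distribution (hypothetical numbers)
losses = [10.0, 5.0, 1.0]
probs = [0.1, 0.2, 0.7]
tail_risk = cvar(losses, probs, alpha=0.8)  # mean of the worst 20%
```

Nesting means the stage-t objective applies CVaR to (stage-t cost + next stage's risk-adjusted value) rather than to the total cost once at the root; evaluating risk once over whole scenario paths is what produces the time inconsistency, and the planned-vs-implemented profit gap, noted in the highlights.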

  2. Electron beam charging of insulators: A self-consistent flight-drift model

    International Nuclear Information System (INIS)

    Touzin, M.; Goeuriot, D.; Guerret-Piecourt, C.; Juve, D.; Treheux, D.; Fitting, H.-J.

    2006-01-01

    Electron beam irradiation and the self-consistent charge transport in bulk insulating samples are described by means of a new flight-drift model and an iterative computer simulation. Ballistic secondary electron and hole transport is followed by electron and hole drifts, their possible recombination and/or trapping in shallow and deep traps. The trap capture cross sections are of the Poole-Frenkel type, i.e. temperature and field dependent. As a main result the spatial distributions of the currents j(x,t), charges ρ(x,t), the field F(x,t), and the potential slope V(x,t) are obtained in a self-consistent procedure, as well as the time-dependent secondary electron emission rate σ(t) and the surface potential V0(t). For bulk insulating samples the time-dependent distributions approach the final stationary state with j(x,t) = const = 0 and σ = 1. Especially for low electron beam energies E0 G of a vacuum grid in front of the target surface. For high beam energies E0 = 10, 20, and 30 keV high negative surface potentials V0 = -4, -14, and -24 kV are obtained, respectively. Besides open nonconductive samples, also positive ion-covered samples and targets with a conducting and grounded layer (metal or carbon) on the surface have been considered, as used in environmental scanning electron microscopy and common SEM in order to prevent charging. Indeed, the potential distributions V(x) are considerably small in magnitude and affect the incident electron beam neither by retarding field effects in front of the surface nor within the bulk insulating sample. Thus the spatial scattering and excitation distributions are almost not affected

  3. Human Commercial Models' Eye Colour Shows Negative Frequency-Dependent Selection.

    Directory of Open Access Journals (Sweden)

    Isabela Rodrigues Nogueira Forti

    Full Text Available In this study we investigated the eye colour of human commercial models registered in the UK (400 female and 400 male) and Brazil (400 female and 400 male) to test the hypothesis that model eye colour frequency was the result of negative frequency-dependent selection. The eye colours of the models were classified as blue, brown or intermediate. Chi-square analyses of data for countries separated by sex showed that in the United Kingdom brown eyes and intermediate colours were significantly more frequent than expected in comparison to the general United Kingdom population (P<0.001). In Brazil, the most frequent eye colour, brown, was significantly less frequent than expected in comparison to the general Brazilian population. These results support the hypothesis that model eye colour is the result of negative frequency-dependent selection. This could be the result of people using eye colour as a marker of genetic diversity and finding rarer eye colours more attractive because of the potential advantage of the more genetically diverse offspring that could result from such a choice. Eye colour may be important because, in comparison to many other physical traits (e.g., hair colour), it is hard to modify, hide or disguise, and it is highly polymorphic.
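    The chi-square analysis used here is a goodness-of-fit test of observed category counts against population proportions; a minimal sketch with hypothetical counts and proportions (not the study's data) is:

```python
def chi_square_stat(observed, expected_props):
    """Pearson chi-square goodness-of-fit statistic comparing observed
    category counts with the counts expected under given proportions."""
    n = sum(observed)
    stat = 0.0
    for obs, p in zip(observed, expected_props):
        exp = n * p
        stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts of blue/intermediate/brown among n = 400 models,
# against hypothetical general-population proportions
observed = [150, 100, 150]
population = [0.48, 0.22, 0.30]
stat = chi_square_stat(observed, population)
# With 3 categories there are 2 degrees of freedom; the 5% critical
# value of the chi-square distribution is 5.991
significant = stat > 5.991
```

A statistic above the critical value rejects the hypothesis that the models' eye-colour frequencies match the general population, which is the comparison the abstract reports for each country and sex.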

  4. Personality consistency analysis in cloned quarantine dog candidates

    Directory of Open Access Journals (Sweden)

    Jin Choi

    2017-01-01

    Full Text Available In recent research, personality consistency has become an important characteristic. Diverse traits and human-animal interactions, in particular, are studied in the field of personality consistency in dogs. Here, we investigated the consistency of dominant behaviours in cloned and control groups followed by the modified Puppy Aptitude Test, which consists of ten subtests to ascertain the influence of genetic identity. In this test, puppies are exposed to a stranger, restraint, a prey-like object, noise, a startling object, etc. Six cloned and four control puppies participated and the consistency of responses at ages 7–10 and 16 weeks in the two groups was compared. The two groups showed different consistencies in the subtests. While the average scores of the cloned group were consistent (P = 0.7991), those of the control group were not (P = 0.0089). Scores of Pack Drive and Fight or Flight Drive were consistent in the cloned group; however, those of the control group were not. Scores of Prey Drive were not consistent in either the cloned or the control group. Therefore, it is suggested that consistency of dominant behaviour is affected by genetic identity and that some behaviours can be influenced more than others. Our results suggest that cloned dogs could show more consistent traits than non-cloned dogs. This study implies that personality consistency could be one of the ways to analyse traits of puppies.

  5. LIDT-DD: A New Self-Consistent Debris Disc Model Including Radiation Pressure and Coupling Dynamical and Collisional Evolution

    Science.gov (United States)

    Kral, Q.; Thebault, P.; Charnoz, S.

    2014-01-01

    The first attempt at developing a fully self-consistent code coupling dynamics and collisions to study debris discs (Kral et al. 2013) is presented. So far, these two crucial mechanisms were studied separately, with N-body and statistical collisional codes respectively, because of stringent computational constraints. We present a new model named LIDT-DD which is able to follow over long timescales the coupled evolution of dynamics (including radiation forces) and collisions in a self-consistent way.

  6. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles

    Science.gov (United States)

    Roth, Jenny; Steffens, Melanie C.; Vignoles, Vivian L.

    2018-01-01

    The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance–congruity and imbalance–dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias. PMID:29681878

  7. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles

    Directory of Open Access Journals (Sweden)

    Jenny Roth

    2018-04-01

    Full Text Available The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance–congruity and imbalance–dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.

  8. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles.

    Science.gov (United States)

    Roth, Jenny; Steffens, Melanie C; Vignoles, Vivian L

    2018-01-01

    The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance-congruity and imbalance-dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.

  9. Conformal consistency relations for single-field inflation

    International Nuclear Information System (INIS)

    Creminelli, Paolo; Noreña, Jorge; Simonović, Marko

    2012-01-01

    We generalize the single-field consistency relations to capture not only the leading term in the squeezed limit — going as 1/q^3, where q is the small wavevector — but also the subleading one, going as 1/q^2. This term, for an (n+1)-point function, is fixed in terms of the variation of the n-point function under a special conformal transformation; this parallels the fact that the 1/q^3 term is related to the scale dependence of the n-point function. For the squeezed limit of the 3-point function, this conformal consistency relation implies that there are no terms going as 1/q^2. We verify that the squeezed limit of the 4-point function is related to the conformal variation of the 3-point function both in the case of canonical slow-roll inflation and in models with reduced speed of sound. In the second case the conformal consistency conditions capture, at the level of observables, the relation among operators induced by the non-linear realization of Lorentz invariance in the Lagrangian. These results mean that, in any single-field model, primordial correlation functions of ζ are endowed with an SO(4,1) symmetry, with dilations and special conformal transformations non-linearly realized by ζ. We also verify the conformal consistency relations for any n-point function in models with a modulation of the inflaton potential, where the scale dependence is not negligible. Finally, we generalize (some of) the consistency relations involving tensors and soft internal momenta
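    For reference, the leading 1/q^3 relation that this abstract generalizes is the standard single-field (Maldacena) consistency condition; in one common convention (sign and normalization conventions vary across the literature) it reads, for the 3-point function:

```latex
\lim_{\vec q \to 0}
\langle \zeta_{\vec q}\, \zeta_{\vec k_1}\, \zeta_{\vec k_2} \rangle'
  = -\,P_\zeta(q)\,
    \frac{\partial \ln\!\big(k^3 P_\zeta(k)\big)}{\partial \ln k}\,
    P_\zeta(k)
  = (1 - n_s)\, P_\zeta(q)\, P_\zeta(k),
```

where primes denote correlators with the momentum-conserving delta function stripped off and P_ζ is the power spectrum. The paper's new statement constrains the next order in q, going as 1/q^2, in terms of a special conformal variation of the n-point function, in the same way the formula above ties the 1/q^3 term to its scale dependence.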

  10. Replica consistency in a Data Grid

    International Nuclear Information System (INIS)

    Domenici, Andrea; Donno, Flavia; Pucciani, Gianni; Stockinger, Heinz; Stockinger, Kurt

    2004-01-01

    A Data Grid is a wide area computing infrastructure that employs Grid technologies to provide storage capacity and processing power to applications that handle very large quantities of data. Data Grids rely on data replication to achieve better performance and reliability by storing copies of data sets on different Grid nodes. When a data set can be modified by applications, the problem of maintaining consistency among existing copies arises. The consistency problem also concerns metadata, i.e., additional information about application data sets such as indices, directories, or catalogues. This kind of metadata is used both by the applications and by the Grid middleware to manage the data. For instance, the Replica Management Service (the Grid middleware component that controls data replication) uses catalogues to find the replicas of each data set. Such catalogues can also be replicated and their consistency is crucial to the correct operation of the Grid. Therefore, metadata consistency generally poses stricter requirements than data consistency. In this paper we report on the development of a Replica Consistency Service based on the middleware mainly developed by the European Data Grid Project. The paper summarises the main issues in the replica consistency problem, and lays out a high-level architectural design for a Replica Consistency Service. Finally, results from simulations of different consistency models are presented

  11. Sticky continuous processes have consistent price systems

    DEFF Research Database (Denmark)

    Bender, Christian; Pakkanen, Mikko; Sayit, Hasanjan

    Under proportional transaction costs, a price process is said to have a consistent price system, if there is a semimartingale with an equivalent martingale measure that evolves within the bid-ask spread. We show that a continuous, multi-asset price process has a consistent price system, under...

  12. Coupled Dyson-Schwinger equations and effects of self-consistency

    International Nuclear Information System (INIS)

    Wu, S.S.; Zhang, H.X.; Yao, Y.J.

    2001-01-01

    Using the σ-ω model as an effective tool, the effects of self-consistency are studied in some detail. A coupled set of Dyson-Schwinger equations for the renormalized baryon and meson propagators in the σ-ω model is solved self-consistently according to the dressed Hartree-Fock scheme, where the hadron propagators in both the baryon and meson self-energies are required to also satisfy this coupled set of equations. It is found that the self-consistency affects the baryon spectral function noticeably, if only the interaction with σ mesons is considered. However, there is a cancellation between the effects due to the σ and ω mesons and the additional contribution of ω mesons makes the above effect insignificant. In both the σ and σ-ω cases the effects of self-consistency on meson spectral function are perceptible, but they can nevertheless be taken account of without a self-consistent calculation. Our study indicates that to include the meson propagators in the self-consistency requirement is unnecessary and one can stop at an early step of an iteration procedure to obtain a good approximation to the fully self-consistent results of all the hadron propagators in the model, if an appropriate initial input is chosen. Vertex corrections and their effects on ghost poles are also studied
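    The self-consistency scheme discussed above is, structurally, a fixed-point iteration: the dressed propagators produced by one step are fed back into the self-energies until input and output agree. A schematic one-variable sketch follows; the toy "dressed mass" equation is invented for illustration and does not represent the actual σ-ω equations.

```python
def self_consistent_solve(update, x0, tol=1e-10, max_iter=1000):
    """Generic fixed-point iteration: apply `update` (one dressing step)
    until successive iterates agree to within `tol` (self-consistency).
    The real sigma-omega calculation updates baryon and meson propagators
    jointly; this sketch shows only the iteration skeleton."""
    x = x0
    for _ in range(max_iter):
        x_new = update(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence after max_iter iterations")

# Toy 'dressed mass' equation m = m0 + g2 / (m + 1): the converged fixed
# point is independent of the starting input, echoing the abstract's point
# that a good initial choice lets one stop the iteration early
m0, g2 = 1.0, 0.5
m = self_consistent_solve(lambda m: m0 + g2 / (m + 1.0), x0=m0)
```

For this toy equation the fixed point satisfies m² = m0 + g2 + (m0 − 1)·m, i.e. m = √1.5 for the parameters above, so convergence is easy to verify.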

  13. Model of the synthesis of trisporic acid in Mucorales showing bistability.

    Science.gov (United States)

    Werner, S; Schroeter, A; Schimek, C; Vlaic, S; Wöstemeyer, J; Schuster, S

    2012-12-01

    An important substance in the signalling between individuals of Mucor-like fungi is trisporic acid (TA). This compound, together with some of its precursors, serves as a pheromone in mating between (+)- and (-)-mating types. Moreover, intermediates of the TA pathway are exchanged between the two mating partners. Based on differential equations, mathematical models of the synthesis pathways of TA in the two mating types of an idealised Mucor-fungus are here presented. These models include the positive feedback of TA on its own synthesis. The authors compare three sub-models in view of bistability, robustness and the reversibility of transitions. The proposed modelling study showed that, in a system where intermediates are exchanged, a reversible transition between the two stable steady states occurs, whereas an exchange of the end product leads to an irreversible transition. The reversible transition is physiologically favoured, because the high-production state of TA must come to an end eventually. Moreover, the exchange of intermediates and TA is compared with the 3-way handshake widely used by computers linked in a network.
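    The bistability the authors analyse arises from the positive feedback of TA on its own synthesis; a minimal one-variable caricature (sigmoidal autocatalysis plus linear degradation, with illustrative parameters rather than the paper's fitted model) already shows two stable steady states:

```python
def simulate_ta(x0, basal=0.05, vmax=1.0, k=0.5, deg=1.0, dt=0.01, steps=20000):
    """Euler integration of a minimal TA caricature with positive feedback:
        dx/dt = basal + vmax * x^2 / (k^2 + x^2) - deg * x
    The Hill-type (n = 2) autocatalysis plus linear degradation can yield
    two stable steady states separated by an unstable one."""
    x = x0
    for _ in range(steps):
        dx = basal + vmax * x * x / (k * k + x * x) - deg * x
        x += dt * dx
    return x

low = simulate_ta(0.0)   # settles at the low-TA state
high = simulate_ta(2.0)  # settles at the high-TA state
```

Which state the system reaches depends on the initial condition, which is the hallmark of bistability; whether the transition between the states is reversible or irreversible is exactly what distinguishes the exchange-of-intermediates and exchange-of-end-product sub-models compared in the abstract.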

  14. Exotic nuclei in self-consistent mean-field models

    International Nuclear Information System (INIS)

    Bender, M.; Rutz, K.; Buervenich, T.; Reinhard, P.-G.; Maruhn, J. A.; Greiner, W.

    1999-01-01

    We discuss two widely used nuclear mean-field models, the relativistic mean-field model and the (nonrelativistic) Skyrme-Hartree-Fock model, and their capability to describe exotic nuclei with emphasis on neutron-rich tin isotopes and superheavy nuclei. (c) 1999 American Institute of Physics

  15. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Full Text Available Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a

  16. Two Impossibility Results on the Converse Consistency Principle in Bargaining

    OpenAIRE

    Youngsub Chun

    1999-01-01

    We present two impossibility results on the converse consistency principle in the context of bargaining. First, we show that there is no solution satisfying Pareto optimality, contraction independence, and converse consistency. Next, we show that there is no solution satisfying Pareto optimality, strong individual rationality, individual monotonicity, and converse consistency.

  17. Comparison of bootstrap current and plasma conductivity models applied in a self-consistent equilibrium calculation for Tokamak plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Maria Celia Ramos; Ludwig, Gerson Otto [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Lab. Associado de Plasma]. E-mail: mcr@plasma.inpe.br

    2004-07-01

    Different bootstrap current formulations are implemented in a self-consistent equilibrium calculation obtained from a direct variational technique in fixed boundary tokamak plasmas. The total plasma current profile is supposed to have contributions of the diamagnetic, Pfirsch-Schlueter, and the neoclassical Ohmic and bootstrap currents. The Ohmic component is calculated in terms of the neoclassical conductivity, compared here among different expressions, and the loop voltage determined consistently in order to give the prescribed value of the total plasma current. A comparison among several bootstrap current models for different viscosity coefficient calculations and distinct forms for the Coulomb collision operator is performed for a variety of plasma parameters of the small aspect ratio tokamak ETE (Experimento Tokamak Esferico) at the Associated Plasma Laboratory of INPE, in Brazil. We have performed this comparison for the ETE tokamak so that the differences among all the models reported here, mainly regarding plasma collisionality, can be better illustrated. The dependence of the bootstrap current ratio upon some plasma parameters in the frame of the self-consistent calculation is also analysed. We emphasize in this paper what we call the Hirshman-Sigmar/Shaing model, valid for all collisionality regimes and aspect ratios, and a fitted formulation proposed by Sauter, which has the same range of validity but is faster to compute than the previous one. The advantages or possible limitations of all these different formulations for the bootstrap current estimate are analysed throughout this work. (author)

  18. A SELF-CONSISTENT MODEL OF THE CIRCUMSTELLAR DEBRIS CREATED BY A GIANT HYPERVELOCITY IMPACT IN THE HD 172555 SYSTEM

    International Nuclear Information System (INIS)

    Johnson, B. C.; Melosh, H. J.; Lisse, C. M.; Chen, C. H.; Wyatt, M. C.; Thebault, P.; Henning, W. G.; Gaidos, E.; Elkins-Tanton, L. T.; Bridges, J. C.; Morlok, A.

    2012-01-01

    Spectral modeling of the large infrared excess in the Spitzer IRS spectra of HD 172555 suggests that there is more than 10^19 kg of submicron dust in the system. Using physical arguments and constraints from observations, we rule out the possibility of the infrared excess being created by a magma ocean planet or a circumplanetary disk or torus. We show that the infrared excess is consistent with a circumstellar debris disk or torus, located at ∼6 AU, that was created by a planetary scale hypervelocity impact. We find that radiation pressure should remove submicron dust from the debris disk in less than one year. However, the system's mid-infrared photometric flux, dominated by submicron grains, has been stable within 4% over the last 27 years, from the Infrared Astronomical Satellite (1983) to WISE (2010). Our new spectral modeling work and calculations of the radiation pressure on fine dust in HD 172555 provide a self-consistent explanation for this apparent contradiction. We also explore the unconfirmed claim that ∼10^47 molecules of SiO vapor are needed to explain an emission feature at ∼8 μm in the Spitzer IRS spectrum of HD 172555. We find that unless there are ∼10^48 atoms or 0.05 M⊕ of atomic Si and O vapor in the system, SiO vapor should be destroyed by photo-dissociation in less than 0.2 years. We argue that a second plausible explanation for the ∼8 μm feature can be emission from solid SiO, which naturally occurs in submicron silicate ''smokes'' created by quickly condensing vaporized silicate.

  19. A relativistic self-consistent model for studying enhancement of space charge limited emission due to counter-streaming ions

    Science.gov (United States)

    Lin, M. C.; Verboncoeur, J.

    2016-10-01

    The maximum electron current transmitted through a planar diode gap is limited by the space charge of electrons dwelling across the gap region, the so-called space charge limited (SCL) emission. By introducing a counter-streaming ion flow to neutralize the electron charge density, the SCL emission can be raised dramatically, enhancing electron current transmission. In this work, we have developed a relativistic self-consistent model for studying the enhancement of the maximum transmission by a counter-streaming ion current. The maximum enhancement is found when the ion effect is saturated, as shown analytically. The solutions in the non-relativistic, intermediate, and ultra-relativistic regimes are obtained and verified with 1-D particle-in-cell simulations. This self-consistent model is general and can also serve as a comparison for verification of simulation codes, as well as a basis for extension to higher dimensions.
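For scale, the non-relativistic baseline that the counter-streaming ions allow the diode to exceed is the classical Child–Langmuir limit. A minimal Python sketch (not from the paper; the CODATA constants and the 1-D planar formula are standard, but the function name is illustrative):

```python
import math

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron rest mass, kg

def child_langmuir_j(voltage, gap):
    """Classical 1-D space-charge-limited current density for a planar
    diode: J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage ** 1.5 / gap ** 2

# e.g. a 1 kV potential across a 1 mm gap gives J on the order of 7e4 A/m^2
print(child_langmuir_j(1.0e3, 1.0e-3))
```

Note the characteristic V^(3/2) scaling: quadrupling the voltage raises the limiting current density by a factor of eight.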

  20. Consistency between Constructivist Profiles and Instructional Practices of Prospective Physics Teachers

    Directory of Open Access Journals (Sweden)

    Ozlem Ates

    2018-04-01

    Full Text Available This study aims to explain the extent to which prospective physics teachers’ views and practices are consistent with the constructivist framework. A case study design was employed as the research approach. The study was conducted with 11 prospective physics teachers attending a state university in Turkey. Data was collected through semi-structured interviews, observation notes and lesson plans. The interview guide consisted of questions which allowed the interviewer to probe participants’ views of constructivism based on 5E learning model. Such questions as “how do you plan your teaching?” (introducing new topics, continuing the lecture, types of questions to ask, evaluating students’ understanding etc. were included in the interview. Following the analysis of the interview data, participants’ profiles were classified into three categories: traditional, transition and constructivist under the dimensions “beginning of a lesson,” “learning process,” “learning environment” and “assessment.” Observations were carried out using an observation checklist consisting of 24 items based on 5E learning model. Another checklist developed by the researchers was used to evaluate participants’ teaching qualifications. Interview results showed that seven participants had transitional, three had constructivist and one had traditional views. However, none of the participants were observed to exhibit constructivist teaching styles. Moreover, observation and interview results were consistent only for six participants, indicating that almost half of the participants had difficulty putting their views into practice.

  1. An algebraic method for constructing stable and consistent autoregressive filters

    International Nuclear Information System (INIS)

    Harlim, John; Hong, Hoon; Robbins, Jacob L.

    2015-01-01

    In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order-two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR-models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern
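As a minimal illustration of the two conditions the abstract refers to, the sketch below (illustrative, not the authors' algebraic construction) checks the classical stability condition for an AR(2) model and builds the AR(2) coefficients that an order-two Adams–Bashforth discretization of a linear damped mode would produce:

```python
import cmath

def ar2_is_stable(a1, a2):
    """Classical stability condition for an AR(2) model
    x_{n+1} = a1*x_n + a2*x_{n-1} (+ noise): both roots of the
    characteristic polynomial z^2 - a1*z - a2 must lie strictly
    inside the unit circle."""
    disc = cmath.sqrt(a1 * a1 + 4.0 * a2)
    return abs((a1 + disc) / 2.0) < 1.0 and abs((a1 - disc) / 2.0) < 1.0

def ab2_ar2_coeffs(lam, h):
    """AR(2) coefficients produced by applying the order-two
    Adams-Bashforth scheme x_{n+1} = x_n + h*(1.5*f_n - 0.5*f_{n-1})
    to the linear test problem dx/dt = lam*x."""
    return 1.0 + 1.5 * h * lam, -0.5 * h * lam

a1, a2 = ab2_ar2_coeffs(lam=-1.0, h=0.1)   # damped mode, modest step size
print(ar2_is_stable(a1, a2))               # True: both roots inside unit circle
```

The paper's contribution is essentially the converse direction: finding a discretization time step for which stability and order-two consistency hold simultaneously, using only long-time average statistics rather than training data.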

  2. Consistent and robust determination of border ownership based on asymmetric surrounding contrast.

    Science.gov (United States)

    Sakai, Ko; Nishimura, Haruka; Shimizu, Ryohei; Kondo, Keiichi

    2012-09-01

    Determination of the figure region in an image is a fundamental step toward surface construction, shape coding, and object representation. Localized, asymmetric surround modulation, reported neurophysiologically in early-to-intermediate-level visual areas, has been proposed as a mechanism for figure-ground segregation. We investigated, computationally, whether such surround modulation is capable of yielding consistent and robust determination of figure side for various stimuli. Our surround modulation model showed a surprisingly high consistency among pseudorandom block stimuli, with greater consistency for stimuli that yielded higher accuracy of, and shorter reaction times in, human perception. Our analyses revealed that the localized, asymmetric organization of surrounds is crucial in the detection of the contrast imbalance that leads to the determination of the direction of figure with respect to the border. The model also exhibited robustness for gray-scaled natural images, with a mean correct rate of 67%, which was similar to that of figure-side determination in human perception through a small window and of machine-vision algorithms based on local processing. These results suggest a crucial role of surround modulation in the local processing of figure-ground segregation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Quasi-Particle Self-Consistent GW for Molecules.

    Science.gov (United States)

    Kaplan, F; Harding, M E; Seiler, C; Weigend, F; Evers, F; van Setten, M J

    2016-06-14

    We present the formalism and implementation of quasi-particle self-consistent GW (qsGW) and eigenvalue only quasi-particle self-consistent GW (evGW) adapted to standard quantum chemistry packages. Our implementation is benchmarked against high-level quantum chemistry computations (coupled-cluster theory) and experimental results using a representative set of molecules. Furthermore, we compare the qsGW approach for five molecules relevant for organic photovoltaics to self-consistent GW results (scGW) and analyze the effects of the self-consistency on the ground state density by comparing calculated dipole moments to their experimental values. We show that qsGW makes a significant improvement over conventional G0W0 and that partially self-consistent flavors (in particular evGW) can be excellent alternatives.

  4. A consistent multigroup model for radiative transfer and its underlying mean opacities

    International Nuclear Information System (INIS)

    Turpault, Rodolphe

    2005-01-01

    In some regimes, such as in plasma physics or in super-orbital atmospheric entry of space objects, the effects of radiation are crucial and can tremendously modify the hydrodynamics of the gas. In such cases, it is therefore important to have a good prediction of the radiative variables. However, full transport solutions of these multi-dimensional, time-dependent problems are too expensive to be used in a coupled configuration. It is hence necessary to develop other models for radiation that are cheap, yet accurate enough to give good predictions of the radiative effects. We will herein introduce the multigroup-M1 model and look at its characteristics, in particular trying to separate the angular error from the frequential one, since these two approximations play very different roles. The angular behaviour of the model is tested on a case proposed by Su and Olson and used by Olson et al. to compare various moment and (flux-limited) diffusion models. For the frequency behaviour, we use a simplified flame test case and show the importance of choosing good mean opacities.

  5. Self consistent MHD modeling of the solar wind from polar coronal holes

    International Nuclear Information System (INIS)

    Stewart, G. A.; Bravo, S.

    1996-01-01

    We have developed a 2D self-consistent MHD model for solar wind flow from antisymmetric magnetic geometries. We present results in the case of a photospheric magnetic field which has a dipolar configuration, in order to investigate some of the general characteristics of the wind at solar minimum. As in previous studies, we find that the magnetic configuration is that of a closed field region (a coronal helmet belt) around the solar equator, extending up to about 1.6 R☉, and two large open field regions centred over the poles (polar coronal holes), whose magnetic and plasma fluxes expand to fill both hemispheres in interplanetary space. In addition, we find that the different geometries of the magnetic field lines across each hole (from the almost radial central polar lines to the highly curved border equatorial lines) cause the solar wind to have greatly different properties depending on which region it flows from. We find that, even though our simplified model cannot produce realistic wind values, we can obtain a polar wind that is faster, less dense and hotter than equatorial wind, and that, close to the Sun, there exists a sharp transition between the two wind types. As these characteristics coincide with observations, we conclude that both fast and slow solar wind can originate from coronal holes: fast wind from the centre, slow wind from the border.

  6. Consistent estimation of Gibbs energy using component contributions.

    Directory of Open Access Journals (Sweden)

    Elad Noor

    Full Text Available Standard Gibbs energies of reactions are increasingly being used in metabolic modeling for applying thermodynamic constraints on reaction rates, metabolite concentrations and kinetic parameters. The increasing scope and diversity of metabolic models has led scientists to look for genome-scale solutions that can estimate the standard Gibbs energy of all the reactions in metabolism. Group contribution methods greatly increase coverage, albeit at the price of decreased precision. We present here a way to combine the estimations of group contribution with the more accurate reactant contributions by decomposing each reaction into two parts and applying one of the methods on each of them. This method gives priority to the reactant contributions over group contributions while guaranteeing that all estimations will be consistent, i.e. will not violate the first law of thermodynamics. We show that there is a significant increase in the accuracy of our estimations compared to standard group contribution. Specifically, our cross-validation results show an 80% reduction in the median absolute residual for reactions that can be derived by reactant contributions only. We provide the full framework and source code for deriving estimates of standard reaction Gibbs energy, as well as confidence intervals, and believe this will facilitate the wide use of thermodynamic data for a better understanding of metabolism.
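The decomposition described above can be caricatured as a fallback rule: use a compound's measured reactant contribution when it exists, otherwise estimate it from group contributions, so that coverage increases while the more accurate data keep priority. A hypothetical Python sketch (the real method solves a combined regression problem; all names and data here are illustrative):

```python
def estimate_reaction_dg(reaction, reactant_dgf, group_dg, decompose):
    """Toy version of the component-contribution idea: per compound,
    prefer the (more accurate) reactant contribution and fall back to
    a sum of group contributions.  `reaction` maps compound name to
    stoichiometric coefficient (negative for substrates)."""
    total = 0.0
    for compound, coeff in reaction.items():
        if compound in reactant_dgf:
            dgf = reactant_dgf[compound]              # reactant contribution
        else:
            dgf = sum(count * group_dg[group]         # group contribution
                      for group, count in decompose(compound).items())
        total += coeff * dgf
    return total

# Toy data: A + B -> C, where C has no measured formation energy
reactant_dgf = {"A": -10.0, "B": -20.0}
group_dg = {"g1": -5.0}
decompose = lambda c: {"g1": 3}                       # C = three g1 groups
print(estimate_reaction_dg({"A": -1, "B": -1, "C": 1},
                           reactant_dgf, group_dg, decompose))  # 15.0
```

Because every estimate is ultimately a linear combination of one shared set of compound-level values, estimates made this way cannot contradict each other around a cycle of reactions, which is the consistency property the abstract emphasizes.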

  7. Glass consistency and glass performance

    International Nuclear Information System (INIS)

    Plodinec, M.J.; Ramsey, W.G.

    1994-01-01

    Glass produced by the Defense Waste Processing Facility (DWPF) will have to consistently be more durable than a benchmark glass (evaluated using a short-term leach test), with high confidence. The DWPF has developed a Glass Product Control Program to comply with this specification. However, it is not clear what relevance product consistency has on long-term glass performance. In this report, the authors show that DWPF glass, produced in compliance with this specification, can be expected to effectively limit the release of soluble radionuclides to natural environments. However, the release of insoluble radionuclides to the environment will be limited by their solubility, and not glass durability

  8. Development of a 3D consistent 1D neutronics model for reactor core simulation

    International Nuclear Information System (INIS)

    Lee, Ki Bog; Joo, Han Gyu; Cho, Byung Oh; Zee, Sung Quun

    2001-02-01

    In this report a 3D consistent 1D model based on the nonlinear analytic nodal method is developed to reproduce 3D results. During the derivation, the current conservation factor (CCF) is introduced, which guarantees that the axial neutron currents obtained from the 1D equation equal the 3D reference values. Furthermore, in order to properly use 1D group constants, a new 1D group constant representation scheme employing tables for fuel temperature, moderator density and boron concentration is developed and functionalized for the control rod tip position. To test the 1D kinetics model with CCF, several steady-state and transient calculations were performed and compared with 3D reference values. The errors in k-eff were reduced to about one tenth when using CCF, without significant computational overhead, and the errors in the power distribution were reduced to between one fifth and one tenth in steady-state calculations. The 1D kinetics model with CCF and the 1D group constant functionalization employing tables as a function of control rod tip position provide more precise results in steady-state and transient calculations. It is thus expected that the 1D kinetics model derived in this report can be used in safety analysis, real-time reactor simulation coupled with a system analysis code, operator support systems, etc.

  9. Using Trait-State Models to Evaluate the Longitudinal Consistency of Global Self-Esteem From Adolescence to Adulthood

    Science.gov (United States)

    Donnellan, M. Brent; Kenny, David A.; Trzesniewski, Kali H.; Lucas, Richard E.; Conger, Rand D.

    2012-01-01

    The present research used a latent variable trait-state model to evaluate the longitudinal consistency of self-esteem during the transition from adolescence to adulthood. Analyses were based on ten administrations of the Rosenberg Self-Esteem scale (Rosenberg, 1965) spanning the ages of approximately 13 to 32 for a sample of 451 participants. Results indicated that a completely stable trait factor and an autoregressive trait factor accounted for the majority of the variance in latent self-esteem assessments, whereas state factors accounted for about 16% of the variance in repeated assessments of latent self-esteem. The stability of individual differences in self-esteem increased with age consistent with the cumulative continuity principle of personality development. PMID:23180899

  10. Using Trait-State Models to Evaluate the Longitudinal Consistency of Global Self-Esteem From Adolescence to Adulthood.

    Science.gov (United States)

    Donnellan, M Brent; Kenny, David A; Trzesniewski, Kali H; Lucas, Richard E; Conger, Rand D

    2012-12-01

    The present research used a latent variable trait-state model to evaluate the longitudinal consistency of self-esteem during the transition from adolescence to adulthood. Analyses were based on ten administrations of the Rosenberg Self-Esteem scale (Rosenberg, 1965) spanning the ages of approximately 13 to 32 for a sample of 451 participants. Results indicated that a completely stable trait factor and an autoregressive trait factor accounted for the majority of the variance in latent self-esteem assessments, whereas state factors accounted for about 16% of the variance in repeated assessments of latent self-esteem. The stability of individual differences in self-esteem increased with age consistent with the cumulative continuity principle of personality development.

  11. Generalized contexts and consistent histories in quantum mechanics

    International Nuclear Information System (INIS)

    Losada, Marcelo; Laura, Roberto

    2014-01-01

    We analyze a restriction of the theory of consistent histories by imposing that a valid description of a physical system must include quantum histories which satisfy the consistency conditions for all states. We prove that these conditions are equivalent to imposing the compatibility conditions of our formalism of generalized contexts. Moreover, we show that the theory of consistent histories with the consistency conditions for all states and the formalism of generalized context are equally useful representing expressions which involve properties at different times

  12. Self-consistent calculation of atomic structure for mixture

    International Nuclear Information System (INIS)

    Meng Xujun; Bai Yun; Sun Yongsheng; Zhang Jinglin; Zong Xiaoping

    2000-01-01

    Based on relativistic Hartree-Fock-Slater self-consistent average atomic model, atomic structure for mixture is studied by summing up component volumes in mixture. Algorithmic procedure for solving both the group of Thomas-Fermi equations and the self-consistent atomic structure is presented in detail, and, some numerical results are discussed

  13. Geometry and time scales of self-consistent orbits in a modified SU(2) model

    International Nuclear Information System (INIS)

    Jezek, D.M.; Hernandez, E.S.; Solari, H.G.

    1986-01-01

    We investigate the time-dependent Hartree-Fock flow pattern of a two-level many fermion system interacting via a two-body interaction which does not preserve the parity symmetry of standard SU(2) models. The geometrical features of the time-dependent Hartree-Fock energy surface are analyzed and a phase instability is clearly recognized. The time evolution of one-body observables along self-consistent and exact trajectories are examined together with the overlaps between both orbits. Typical time scales for the determinantal motion can be set and the validity of the time-dependent Hartree-Fock approach in the various regions of quasispin phase space is discussed

  14. Colonic stem cell data are consistent with the immortal model of stem cell division under non-random strand segregation.

    Science.gov (United States)

    Walters, K

    2009-06-01

    Colonic stem cells are thought to reside towards the base of crypts of the colon, but their numbers and proliferation mechanisms are not well characterized. A defining property of stem cells is that they are able to divide asymmetrically, but it is not known whether they always divide asymmetrically (immortal model) or whether there are occasional symmetrical divisions (stochastic model). By measuring the diversity of methylation patterns in colon crypt samples, a recent study found evidence in favour of the stochastic model, assuming random segregation of stem cell DNA strands during cell division. Here, we consider the effect of preferential segregation of the template strand, consistent with the 'immortal strand hypothesis', and explore its effect on the conclusions of previously published results. For a sample of crypts, we show how, under the immortal model, to calculate the mean and variance of the number of unique methylation patterns, allowing for non-random strand segregation, and compare them with those observed. The calculated mean and variance are consistent with an immortal model that incorporates non-random strand segregation for a range of stem cell numbers and levels of preferential strand segregation. Allowing for preferential strand segregation considerably alters previously published conclusions relating to stem cell numbers and turnover mechanisms. Evidence in favour of the stochastic model may not be as strong as previously thought.

  15. Domain Adaptation for Pedestrian Detection Based on Prediction Consistency

    Directory of Open Access Journals (Sweden)

    Yu Li-ping

    2014-01-01

    Full Text Available Pedestrian detection is an active area of research in computer vision. It remains a quite challenging problem in many applications where many factors cause a mismatch between the source dataset used to train the pedestrian detector and samples in the target scene. In this paper, we propose a novel domain adaptation model for merging plentiful source domain samples with scarce target domain samples to create a scene-specific pedestrian detector that performs as well as if rich target domain samples were present. Our approach combines a boosting-based learning algorithm with an entropy-based transferability measure, derived from prediction consistency with the source classifications, to selectively choose the source domain samples that show positive transferability to the target domain. Experimental results show that our approach can improve the detection rate, especially when labeled data in the target scene are insufficient.

  16. Towards three-dimensional continuum models of self-consistent along-strike megathrust segmentation

    Science.gov (United States)

    Pranger, Casper; van Dinther, Ylona; May, Dave; Le Pourhiet, Laetitia; Gerya, Taras

    2016-04-01

    into one algorithm. We are working towards presenting the first benchmarked 3D dynamic rupture models as an important step towards seismic cycle modelling of megathrust segmentation in a three-dimensional subduction setting with slow tectonic loading, self consistent fault development, and spontaneous seismicity.

  17. A relativistic self-consistent model for studying enhancement of space charge limited field emission due to counter-streaming ions

    International Nuclear Information System (INIS)

    Lin, M. C.; Lu, P. S.; Chang, P. C.; Ragan-Kelley, B.; Verboncoeur, J. P.

    2014-01-01

    Recently, field emission has attracted increasing attention despite the practical limitation that field emitters operate below the Child-Langmuir space charge limit. By introducing counter-streaming ion flow to neutralize the electron charge density, the space charge limited field emission (SCLFE) current can be dramatically enhanced. In this work, we have developed a relativistic self-consistent model for studying the enhancement of SCLFE by a counter-streaming ion current. The maximum enhancement is found when the ion effect is saturated, as shown analytically. The solutions in non-relativistic, intermediate, and ultra-relativistic regimes are obtained and verified with 1-D particle-in-cell simulations. This self-consistent model is general and can also serve as a benchmark or comparison for verification of simulation codes, as well as extension to higher dimensions

  18. A Simulation Model for Drift Resistive Ballooning Turbulence Examining the Influence of Self-consistent Zonal Flows

    Science.gov (United States)

    Cohen, Bruce; Umansky, Maxim; Joseph, Ilon

    2015-11-01

    Progress is reported on including self-consistent zonal flows in simulations of drift-resistive ballooning turbulence using the BOUT++ framework. Previous published work addressed the simulation of L-mode edge turbulence in realistic single-null tokamak geometry using the BOUT three-dimensional fluid code that solves Braginskii-based fluid equations. The effects of imposed sheared E×B poloidal rotation were included, with a static radial electric field fitted to experimental data. In new work our goal is to include the self-consistent effects on the radial electric field driven by the microturbulence, which contributes to the sheared E×B poloidal rotation (zonal flow generation). We describe a model for including self-consistent zonal flows and an algorithm for maintaining underlying plasma profiles to enable the simulation of steady-state turbulence. We examine the role of Braginskii viscous forces in providing necessary dissipation when including axisymmetric perturbations. We also report on some of the numerical difficulties associated with including the axisymmetric component of the fluctuating fields. This work was performed under the auspices of the U.S. Department of Energy under contract DE-AC52-07NA27344 at the Lawrence Livermore National Laboratory (LLNL-ABS-674950).

  19. Geometrically Consistent Mesh Modification

    KAUST Repository

    Bonito, A.

    2010-01-01

    A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.

  20. Incorporating rapid neocortical learning of new schema-consistent information into complementary learning systems theory.

    Science.gov (United States)

    McClelland, James L

    2013-11-01

    The complementary learning systems theory of the roles of hippocampus and neocortex (McClelland, McNaughton, & O'Reilly, 1995) holds that the rapid integration of arbitrary new information into neocortical structures is avoided to prevent catastrophic interference with structured knowledge representations stored in synaptic connections among neocortical neurons. Recent studies (Tse et al., 2007, 2011) showed that neocortical circuits can rapidly acquire new associations that are consistent with prior knowledge. The findings challenge the complementary learning systems theory as previously presented. However, new simulations extending those reported in McClelland et al. (1995) show that new information that is consistent with knowledge previously acquired by a putatively cortexlike artificial neural network can be learned rapidly and without interfering with existing knowledge; it is when inconsistent new knowledge is acquired quickly that catastrophic interference ensues. Several important features of the findings of Tse et al. (2007, 2011) are captured in these simulations, indicating that the neural network model used in McClelland et al. has characteristics in common with neocortical learning mechanisms. An additional simulation generalizes beyond the network model previously used, showing how the rate of change of cortical connections can depend on prior knowledge in an arguably more biologically plausible network architecture. In sum, the findings of Tse et al. are fully consistent with the idea that hippocampus and neocortex are complementary learning systems. Taken together, these findings and the simulations reported here advance our knowledge by bringing out the role of consistency of new experience with existing knowledge and demonstrating that the rate of change of connections in real and artificial neural networks can be strongly prior-knowledge dependent. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  1. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    Science.gov (United States)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve the full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to rainfall time series from real world is shown as a proof of concept.
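The final adjustment step, which rescales the generated lower-level series so that it reproduces the parent coarse-scale totals, can be sketched as a simple proportional adjusting procedure (illustrative only; the paper's procedure additionally preserves the marginal distribution and dependence structure, which this naive version does not guarantee):

```python
def proportional_adjust(fine, coarse_totals, k):
    """Rescale each block of k fine-scale values so it sums exactly to
    the corresponding coarse-scale total.  An all-zero (dry) block is
    spread uniformly, which matters only if its coarse total is nonzero."""
    out = []
    for i, total in enumerate(coarse_totals):
        block = fine[i * k:(i + 1) * k]
        s = sum(block)
        if s > 0:
            out.extend(v * total / s for v in block)
        else:
            out.extend([total / k] * k)
    return out

# Two coarse intervals of 3 fine steps each; totals of 8 and 0 are enforced,
# and dry fine-scale steps (zeros) stay dry under the rescaling
print(proportional_adjust([1, 1, 2, 0, 0, 0], [8, 0], 3))
```

Because the correction is multiplicative, wet/dry intermittency at the fine scale is preserved while full consistency with the coarse totals is restored.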

  2. A SELF-CONSISTENT MODEL OF THE CIRCUMSTELLAR DEBRIS CREATED BY A GIANT HYPERVELOCITY IMPACT IN THE HD 172555 SYSTEM

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, B. C.; Melosh, H. J. [Department of Physics, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907 (United States); Lisse, C. M. [JHU-APL, 11100 Johns Hopkins Road, Laurel, MD 20723 (United States); Chen, C. H. [STScI, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Wyatt, M. C. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Thebault, P. [LESIA, Observatoire de Paris, F-92195 Meudon Principal Cedex (France); Henning, W. G. [NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771 (United States); Gaidos, E. [Department of Geology and Geophysics, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); Elkins-Tanton, L. T. [Department of Terrestrial Magnetism, Carnegie Institution for Science, Washington, DC 20015 (United States); Bridges, J. C. [Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH (United Kingdom); Morlok, A., E-mail: johns477@purdue.edu [Department of Physical Sciences, Open University, Walton Hall, Milton Keynes MK7 6AA (United Kingdom)

    2012-12-10

    Spectral modeling of the large infrared excess in the Spitzer IRS spectra of HD 172555 suggests that there is more than 10^19 kg of submicron dust in the system. Using physical arguments and constraints from observations, we rule out the possibility of the infrared excess being created by a magma ocean planet or a circumplanetary disk or torus. We show that the infrared excess is consistent with a circumstellar debris disk or torus, located at ~6 AU, that was created by a planetary scale hypervelocity impact. We find that radiation pressure should remove submicron dust from the debris disk in less than one year. However, the system's mid-infrared photometric flux, dominated by submicron grains, has been stable within 4% over the last 27 years, from the Infrared Astronomical Satellite (1983) to WISE (2010). Our new spectral modeling work and calculations of the radiation pressure on fine dust in HD 172555 provide a self-consistent explanation for this apparent contradiction. We also explore the unconfirmed claim that ~10^47 molecules of SiO vapor are needed to explain an emission feature at ~8 μm in the Spitzer IRS spectrum of HD 172555. We find that unless there are ~10^48 atoms or 0.05 M⊕ of atomic Si and O vapor in the system, SiO vapor should be destroyed by photo-dissociation in less than 0.2 years. We argue that a second plausible explanation for the ~8 μm feature can be emission from solid SiO, which naturally occurs in submicron silicate "smokes" created by quickly condensing vaporized silicate.

  3. Consistency of the Planck CMB data and ΛCDM cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Shafieloo, Arman [Korea Astronomy and Space Science Institute, Daejeon, 34055 (Korea, Republic of); Hazra, Dhiraj Kumar, E-mail: shafieloo@kasi.re.kr, E-mail: dhiraj.kumar.hazra@apc.univ-paris7.fr [AstroParticule et Cosmologie (APC)/Paris Centre for Cosmological Physics, Université Paris Diderot, CNRS/IN2P3, CEA/lrfu, Observatoire de Paris, Sorbonne Paris Cité, 10, rue Alice Domon et Leonie Duquet, Paris Cedex 13, 75205 France (France)

    2017-04-01

    We test the consistency between Planck temperature and polarization power spectra and the concordance model of Λ cold dark matter cosmology (ΛCDM) within the framework of Crossing statistics. We find that the Planck TT best-fit ΛCDM power spectrum is completely consistent with the EE power spectrum data, while the EE best-fit ΛCDM power spectrum is not consistent with the TT data. However, this does not point to any systematic or model-data discrepancy, since uncertainties in the Planck EE data are much larger than in the TT data. We also investigate the possibility of any deviation from the ΛCDM model by analyzing the Planck 2015 data. Results from the TT, TE and EE data analysis indicate that no deviation is required beyond the flexibility of the concordance ΛCDM model. Our analysis thus rules out any strong evidence for physics beyond the concordance model in the Planck spectra. We also report a mild amplitude difference between temperature and polarization data, with the temperature data having a slightly lower amplitude than expected (consistently at all multipoles), assuming both temperature and polarization data are realizations of the same underlying cosmology.

  4. Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.

    Science.gov (United States)

    Werner, Tomás

    2015-07-01

    Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message-passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point in which the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. Meanwhile, an upper bound on the partition function monotonically decreases. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
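    The iteration just described can be sketched concretely in the max-sum semiring (where "product" is addition and "marginal" is maximization). The two factors and their values below are invented for illustration.

```python
# Two factors f(x, y) and g(y, z) share the variable y. One diffusion
# step shifts value between them so their sum is unchanged for every
# joint assignment while their max-marginals over y become equal.

def max_marginal_f(f):        # m_f(y) = max_x f[x][y]
    return [max(f[x][y] for x in range(len(f))) for y in range(len(f[0]))]

def max_marginal_g(g):        # m_g(y) = max_z g[y][z]
    return [max(row) for row in g]

def diffusion_step(f, g):
    mf, mg = max_marginal_f(f), max_marginal_g(g)
    for y in range(len(mf)):
        shift = (mf[y] - mg[y]) / 2.0
        for x in range(len(f)):
            f[x][y] -= shift  # lower f's marginal at y ...
        for z in range(len(g[y])):
            g[y][z] += shift  # ... and raise g's by the same amount

f = [[1.0, 4.0], [0.0, 2.0]]  # f(x, y), illustrative values
g = [[3.0, 0.0], [1.0, 1.0]]  # g(y, z)
diffusion_step(f, g)
```

After the step the overlapping max-marginals of `f` and `g` coincide while f(x,y) + g(y,z) is unchanged for every joint assignment; iterating over all factor pairs drives the network toward the marginal consistency state the abstract defines.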

  5. Road Service Performance Based On Integrated Road Design Consistency (IC) Along Federal Road F0023

    Directory of Open Access Journals (Sweden)

    Zainal Zaffan Farhana

    2017-01-01

    Full Text Available Road accidents are one of the world's largest public health and injury prevention problems. In Malaysia, the west coast area records the highest motorcycle fatalities, and road accidents are one of the leading causes of death and injury in the country. The most common fatal accident is between a motorcycle and a passenger car, and most fatal accidents happen on Federal roads, with 44 fatal accidents reported, equal to 29%. A lack of road geometric design consistency, whereby drivers make errors due to road geometric features, has caused the accident toll to keep rising in Malaysia. Hence, models based on operating speed are used to calculate the design consistency of a road. Speed profiles were obtained continuously using GPS data, and continuous operating speed profile models were plotted based on the 85th-percentile operating speed model. The study was conducted at F0023 from km 16 to km 20. The purpose of design consistency analysis is to establish the relationship between operating speed and the geometric design elements of the road. As a result of the integrated design consistency analysis for motorcycles and cars along a segment of F0023, the threshold shows poor design quality for both.

  6. Self-consistent tight-binding model of B and N doping in graphene

    DEFF Research Database (Denmark)

    Pedersen, Thomas Garm; Pedersen, Jesper Goor

    2013-01-01

    Boron and nitrogen substitutional impurities in graphene are analyzed using a self-consistent tight-binding approach. An analytical result for the impurity Green's function is derived taking broken electron-hole symmetry into account and validated by comparison to numerical diagonalization. The impurity potential depends sensitively on the impurity occupancy, leading to a self-consistency requirement. We solve this problem using the impurity Green's function and determine the self-consistent local density of states at the impurity site and, thereby, identify acceptor and donor energy resonances.

  7. RPA method based on the self-consistent cranking model for 168Er and 158Dy

    International Nuclear Information System (INIS)

    Kvasil, J.; Cwiok, S.; Chariev, M.M.; Choriev, B.

    1983-01-01

    The low-lying nuclear states in 168Er and 158Dy are analysed within the random phase approximation (RPA) method based on the self-consistent cranking model (SCCM). The moment of inertia, the value of the chemical potential, and the strength constant k1 have been obtained from the symmetry condition. The pairing strength constants Gτ have been determined from the experimental values of neutron and proton pairing energies for nonrotating nuclei. Quite good agreement with experimental energies of positive-parity states was obtained without introducing two-phonon vibrational states

  8. Self-consistent gyrokinetic modeling of neoclassical and turbulent impurity transport

    Science.gov (United States)

    Estève, D.; Sarazin, Y.; Garbet, X.; Grandgirard, V.; Breton, S.; Donnel, P.; Asahi, Y.; Bourdelle, C.; Dif-Pradalier, G.; Ehrlacher, C.; Emeriau, C.; Ghendrih, Ph.; Gillot, C.; Latu, G.; Passeron, C.

    2018-03-01

    Trace impurity transport is studied with the flux-driven gyrokinetic GYSELA code (Grandgirard et al 2016 Comput. Phys. Commun. 207 35). A reduced and linearized multi-species collision operator has been recently implemented, so that both neoclassical and turbulent transport channels can be treated self-consistently on an equal footing. In the Pfirsch-Schlüter regime that is probably relevant for tungsten, the standard expression for the neoclassical impurity flux is shown to be recovered from gyrokinetics with the employed collision operator. Purely neoclassical simulations of deuterium plasma with trace impurities of helium, carbon and tungsten lead to impurity diffusion coefficients, inward pinch velocities due to density peaking, and thermo-diffusion terms which quantitatively agree with neoclassical predictions and NEO simulations (Belli et al 2012 Plasma Phys. Control. Fusion 54 015015). The thermal screening factor appears to be less than predicted analytically in the Pfirsch-Schlüter regime, which can be detrimental to fusion performance. Finally, self-consistent nonlinear simulations have revealed that the tungsten impurity flux is not the sum of turbulent and neoclassical fluxes computed separately, as is usually assumed. The synergy partly results from the turbulence-driven in-out poloidal asymmetry of tungsten density. This result suggests the need for self-consistent simulations of impurity transport, i.e. including both turbulence and neoclassical physics, in view of quantitative predictions for ITER.

  9. Methodology and consistency of slant and vertical assessments for ionospheric electron content models

    Science.gov (United States)

    Hernández-Pajares, Manuel; Roma-Dollase, David; Krankowski, Andrzej; García-Rigo, Alberto; Orús-Pérez, Raül

    2017-12-01

    A summary of the main concepts on global ionospheric maps [hereinafter GIM(s)] of vertical total electron content (VTEC), with special emphasis on their assessment, is presented in this paper. It is based on the experience accumulated during almost two decades of collaborative work in the context of the international global navigation satellite systems (GNSS) service (IGS) ionosphere working group. A representative comparison of the two main assessments of ionospheric electron content models (VTEC-altimeter and difference of slant TEC based on independent global positioning system (GPS) data, dSTEC-GPS) is performed. It is based on 26 GPS receivers distributed worldwide and mostly placed on islands, from the last quarter of 2010 to the end of 2016. The consistency between dSTEC-GPS and VTEC-altimeter assessments for one of the most accurate IGS GIMs (the tomographic-kriging GIM `UQRG' computed by UPC) is shown. Typical RMS error values of 2 TECU for VTEC-altimeter and 0.5 TECU for dSTEC-GPS assessments are found. And, as expected from a simple random model, there is a significant correlation between both RMS and especially relative errors, mainly evident when a large enough number of observations per pass is considered. The authors expect that this manuscript will be useful for new analysis contributor centres and in general for the scientific and technical community interested in simple and truly external ways of validating electron content models of the ionosphere.

  10. Self-consistency corrections in effective-interaction calculations

    International Nuclear Information System (INIS)

    Starkand, Y.; Kirson, M.W.

    1975-01-01

    Large-matrix extended-shell-model calculations are used to compute self-consistency corrections to the effective interaction and to the linked-cluster effective interaction. The corrections are found to be numerically significant and to affect the rate of convergence of the corresponding perturbation series. The influence of various partial corrections is tested. It is concluded that self-consistency is an important effect in determining the effective interaction and improving the rate of convergence. (author)

  11. Physical inversion of the full IASI spectra: Assessment of atmospheric parameters retrievals, consistency of spectroscopy and forward modelling

    International Nuclear Information System (INIS)

    Liuzzi, G.; Masiello, G.; Serio, C.; Venafra, S.; Camy-Peyret, C.

    2016-01-01

    Spectra observed by the Infrared Atmospheric Sounder Interferometer (IASI) have been used to assess both retrievals and the spectral quality and consistency of current forward models and spectroscopic databases for atmospheric gas line and continuum absorption. The analysis has been performed with thousands of observed spectra over sea surface in the Pacific Ocean close to the Mauna Loa (Hawaii) validation station. A simultaneous retrieval for surface temperature, atmospheric temperature, H2O, HDO, O3 profiles and gas average column abundance of CO2, CO, CH4, SO2, N2O, HNO3, NH3, OCS and CF4 has been performed and compared to in situ observations. The retrieval system considers the full IASI spectrum (all 8461 spectral channels in the range 645-2760 cm-1). We have found that the average column amount of atmospheric greenhouse gases can be retrieved with a precision better than 1% in most cases. The analysis of spectral residuals shows that, after inversion, they are generally reduced to within the IASI radiometric noise. However, larger residuals still appear for many of the most abundant gases, namely H2O, CH4 and CO2. The H2O ν2 spectral region is in general warmer (higher radiance) than observations. The CO2 ν2 and N2O/CO2 ν3 spectral regions now show a consistent behavior for channels which are probing the troposphere. Updates in CH4 spectroscopy do not seem to improve the residuals. The effect of isotopic fractionation of HDO is evident in the 2500-2760 cm-1 region and in the atmospheric window around 1200 cm-1. - Highlights: • This is the first work that uses the full IASI spectrum. This aspect is new and unique. • Simultaneous retrieval of the average amount of CO2, N2O, CO, CH4, SO2, HNO3, NH3, OCS and CF4, T, H2O, HDO, O3 profiles, and Ts. • Assessment of spectroscopy consistency over the full IASI spectrum (645 to 2760 cm-1). • Two-year record of IASI retrievals are available on request, compared

  12. Application of the adiabatic self-consistent collective coordinate method to a solvable model of prolate-oblate shape coexistence

    International Nuclear Information System (INIS)

    Kobayasi, Masato; Matsuyanagi, Kenichi; Nakatsukasa, Takashi; Matsuo, Masayuki

    2003-01-01

    The adiabatic self-consistent collective coordinate method is applied to an exactly solvable multi-O(4) model that is designed to describe nuclear shape coexistence phenomena. The collective mass and dynamics of large amplitude collective motion in this model system are analyzed, and it is shown that the method yields a faithful description of tunneling motion through a barrier between the prolate and oblate local minima in the collective potential. The emergence of the doublet pattern is clearly described. (author)

  13. On the internal consistency of holographic dark energy models

    International Nuclear Information System (INIS)

    Horvat, R

    2008-01-01

    Holographic dark energy (HDE) models, underpinned by an effective quantum field theory (QFT) with a manifest UV/IR connection, have become convincing candidates for providing an explanation of the dark energy in the universe. On the other hand, the maximum number of quantum states that a conventional QFT for a box of size L is capable of describing relates to those boxes which are on the brink of experiencing a sudden collapse to a black hole. Another restriction on the underlying QFT is that the UV cut-off, which cannot be chosen independently of the IR cut-off and therefore becomes a function of time in a cosmological setting, should stay the largest energy scale even in the standard cosmological epochs preceding a dark energy dominated one. We show that, irrespective of whether one deals with the saturated form of HDE or takes a certain degree of non-saturation in the past, the above restrictions cannot be met in a radiation dominated universe, an epoch in the history of the universe which is expected to be perfectly describable within conventional QFT

  14. REPFLO model evaluation, physical and numerical consistency

    International Nuclear Information System (INIS)

    Wilson, R.N.; Holland, D.H.

    1978-11-01

    This report contains a description of some suggested changes and an evaluation of the REPFLO computer code, which models ground-water flow and nuclear-waste migration in and about a nuclear-waste repository. The discussion contained in the main body of the report is supplemented by a flow chart, presented in the Appendix of this report. The suggested changes are of four kinds: (1) technical changes to make the code compatible with a wider variety of digital computer systems; (2) changes to fill gaps in the computer code, due to missing proprietary subroutines; (3) changes to (a) correct programming errors, (b) correct logical flaws, and (c) remove unnecessary complexity; and (4) changes in the computer code logical structure to make REPFLO a more viable model from the physical point of view

  15. Self-consistent electron transport in collisional plasmas

    International Nuclear Information System (INIS)

    Mason, R.J.

    1982-01-01

    A self-consistent scheme has been developed to model electron transport in evolving plasmas of arbitrary classical collisionality. The electrons and ions are treated as either multiple donor-cell fluids, or collisional particles-in-cell. Particle suprathermal electrons scatter off ions, and drag against fluid background thermal electrons. The background electrons undergo ion friction, thermal coupling, and bremsstrahlung. The components move in self-consistent advanced E-fields, obtained by the Implicit Moment Method, which permits Δt >> ω_p^-1 and Δx >> λ_D, offering a 10^2-10^3-fold speed-up over older explicit techniques. The fluid description for the background plasma components permits the modeling of transport in systems spanning more than a 10^7-fold change in density, and encompassing contiguous collisional and collisionless regions. Results are presented from application of the scheme to the modeling of CO2 laser-generated suprathermal electron transport in expanding thin foils, and in multi-foil target configurations.

  16. ICFD modeling of final settlers - developing consistent and effective simulation model structures

    DEFF Research Database (Denmark)

    Plósz, Benedek G.; Guyonvarch, Estelle; Ramin, Elham

    CFD concept. The case of secondary settling tanks (SSTs) is used to demonstrate the methodological steps using the validated CFD model with the hindered-transient-compression settling velocity model by (10). Factor screening and Latin hypercube sampling (LHS) are used to degenerate a 2-D axi-symmetrical CFD...... of (i) assessing different density current sub-models; (ii) implementation of a combined flocculation, hindered, transient and compression settling velocity function; and (iii) assessment of modelling the onset of transient and compression settling. Results suggest that the iCFD model developed...... the feed-layer. These scenarios were inspired by literature (1; 2; 9). As for the D0--iCFD model, values of SSRE obtained are below 1, with an average SSRE = 0.206. The simulation model thus can predict the solids distribution inside the tank with satisfactory accuracy. Averaged relative errors of 8.1 %, 3

  17. Measuring consistency of autobiographical memory recall in depression.

    LENUS (Irish Health Repository)

    Semkovska, Maria

    2012-05-15

    Autobiographical amnesia assessments in depression need to account for normal changes in consistency over time, contribution of mood and type of memories measured. We report herein validation studies of the Columbia Autobiographical Memory Interview - Short Form (CAMI-SF), exclusively used in depressed patients receiving electroconvulsive therapy (ECT) but without previous published report of normative data. The CAMI-SF was administered twice with a 6-month interval to 44 healthy volunteers to obtain normative data for retrieval consistency of its Semantic, Episodic-Extended and Episodic-Specific components and assess their reliability and validity. Healthy volunteers showed significant large decreases in retrieval consistency on all components. The Semantic and Episodic-Specific components demonstrated substantial construct validity. We then assessed CAMI-SF retrieval consistencies over a 2-month interval in 30 severely depressed patients never treated with ECT compared with healthy controls (n=19). On initial assessment, depressed patients produced less episodic-specific memories than controls. Both groups showed equivalent amounts of consistency loss over a 2-month interval on all components. At reassessment, only patients with persisting depressive symptoms were distinguishable from controls on episodic-specific memories retrieved. Research quantifying retrograde amnesia following ECT for depression needs to control for normal loss in consistency over time and contribution of persisting depressive symptoms.

  18. Inconsistency effects in source memory and compensatory schema-consistent guessing.

    Science.gov (United States)

    Küppers, Viviane; Bayen, Ute J

    2014-10-01

    The attention-elaboration hypothesis of memory for schematically unexpected information predicts better source memory for unexpected than expected sources. In three source-monitoring experiments, the authors tested the occurrence of an inconsistency effect in source memory. Participants were presented with items that were schematically either very expected or very unexpected for their source. Multinomial processing tree models were used to separate source memory, item memory, and guessing bias. Results show an inconsistency effect in source memory accompanied by a compensatory schema-consistent guessing bias when expectancy strength is high, that is, when items are very expected or very unexpected for their source.
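    The multinomial-processing-tree machinery used in such analyses can be sketched with the two-high-threshold source-monitoring model, a standard MPT in this literature: each response category's probability is a sum of products of branch probabilities. Parameter names follow common usage (D item detection, d source discrimination, a and g source guessing, b old/new guessing); the exact model variant used in the experiments may differ in detail.

```python
# Predicted response probabilities for a test item that came from source A,
# under a two-high-threshold source-monitoring tree (illustrative form).

def source_a_item_probs(D, d, a, b, g):
    """P(respond 'source A'), P('source B'), P('new') for a source-A item."""
    p_a = D * d + D * (1 - d) * a + (1 - D) * b * g      # detect+discriminate, or guess A
    p_b = D * (1 - d) * (1 - a) + (1 - D) * b * (1 - g)  # guess B instead
    p_new = (1 - D) * (1 - b)                            # undetected, guessed new
    return p_a, p_b, p_new

probs = source_a_item_probs(D=0.7, d=0.4, a=0.5, b=0.6, g=0.8)
```

Schema-consistent guessing shows up as the guessing parameters a and g being pulled toward the expected source even when source memory d is unchanged; for any parameter setting the three branch probabilities sum to one, which is what lets the model separate memory from bias.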

  19. Uranyl adsorption and surface speciation at the imogolite-water interface: Self-consistent spectroscopic and surface complexation models

    Science.gov (United States)

    Arai, Y.; McBeath, M.; Bargar, J.R.; Joye, J.; Davis, J.A.

    2006-01-01

    Macro- and molecular-scale knowledge of uranyl (U(VI)) partitioning reactions with soil/sediment mineral components is important in predicting U(VI) transport processes in the vadose zone and aquifers. In this study, U(VI) reactivity and surface speciation on a poorly crystalline aluminosilicate mineral, synthetic imogolite, were investigated using batch adsorption experiments, X-ray absorption spectroscopy (XAS), and surface complexation modeling. U(VI) uptake on imogolite surfaces was greatest at pH ~7-8 (I = 0.1 M NaNO3 solution, suspension density = 0.4 g/L, [U(VI)]i = 0.01-30 μM, equilibration with air). Uranyl uptake decreased with increasing sodium nitrate concentration in the range from 0.02 to 0.5 M. XAS analyses show that two U(VI) inner-sphere (bidentate mononuclear coordination on outer-wall aluminol groups) and one outer-sphere surface species are present on the imogolite surface, and the distribution of the surface species is pH dependent. At pH 8.8, bis-carbonato inner-sphere and tris-carbonato outer-sphere surface species are present. At pH 7, bis- and non-carbonato inner-sphere surface species co-exist, and the fraction of bis-carbonato species increases slightly with increasing I (0.1-0.5 M). At pH 5.3, U(VI) non-carbonato bidentate mononuclear surface species predominate (69%). A triple layer surface complexation model was developed with surface species that are consistent with the XAS analyses and macroscopic adsorption data. The proton stoichiometry of surface reactions was determined from both the pH dependence of U(VI) adsorption data in pH regions of surface species predominance and from bond-valence calculations. The bis-carbonato species required a distribution of surface charge between the surface and β charge planes in order to be consistent with both the spectroscopic and macroscopic adsorption data.
This research indicates that U(VI)-carbonato ternary species on poorly crystalline aluminosilicate mineral surfaces may be important in

  20. Multi-model comparison highlights consistency in predicted effect of warming on a semi-arid shrub

    Science.gov (United States)

    Renwick, Katherine M.; Curtis, Caroline; Kleinhesselink, Andrew R.; Schlaepfer, Daniel R.; Bradley, Bethany A.; Aldridge, Cameron L.; Poulter, Benjamin; Adler, Peter B.

    2018-01-01

    A number of modeling approaches have been developed to predict the impacts of climate change on species distributions, performance, and abundance. The stronger the agreement from models that represent different processes and are based on distinct and independent sources of information, the greater the confidence we can have in their predictions. Evaluating the level of confidence is particularly important when predictions are used to guide conservation or restoration decisions. We used a multi-model approach to predict climate change impacts on big sagebrush (Artemisia tridentata), the dominant plant species on roughly 43 million hectares in the western United States and a key resource for many endemic wildlife species. To evaluate the climate sensitivity of A. tridentata, we developed four predictive models, two based on empirically derived spatial and temporal relationships, and two that applied mechanistic approaches to simulate sagebrush recruitment and growth. This approach enabled us to produce an aggregate index of climate change vulnerability and uncertainty based on the level of agreement between models. Despite large differences in model structure, predictions of sagebrush response to climate change were largely consistent. Performance, as measured by change in cover, growth, or recruitment, was predicted to decrease at the warmest sites, but increase throughout the cooler portions of sagebrush's range. A sensitivity analysis indicated that sagebrush performance responds more strongly to changes in temperature than precipitation. Most of the uncertainty in model predictions reflected variation among the ecological models, raising questions about the reliability of forecasts based on a single modeling approach. Our results highlight the value of a multi-model approach in forecasting climate change impacts and uncertainties and should help land managers to maximize the value of conservation investments.
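    The aggregation step can be sketched as follows: per-site predicted changes from several models are combined into a mean effect plus an agreement score. The index used in the study itself may be defined differently; the numbers below are invented.

```python
# Combine per-model predicted changes at a site into an ensemble mean
# and a simple agreement score (fraction of models matching the sign
# of the ensemble mean). High agreement = high confidence.

def ensemble_summary(changes):
    """changes: one predicted change in performance per model."""
    mean = sum(changes) / len(changes)
    agree = sum(1 for c in changes if (c >= 0) == (mean >= 0)) / len(changes)
    return mean, agree

warm_site = [-0.30, -0.12, -0.25, -0.08]  # all four models predict decline
cool_site = [0.15, 0.05, -0.02, 0.22]     # mostly increases, one dissenter

m1, a1 = ensemble_summary(warm_site)      # full agreement on decline
m2, a2 = ensemble_summary(cool_site)      # three of four agree on increase
```

This mirrors the study's logic: the warm site gets a confident negative prediction, while the cool site's positive prediction carries more across-model uncertainty.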

  1. Integration and consistency testing of groundwater flow models with hydro-geochemistry in site investigations in Finland

    International Nuclear Information System (INIS)

    Pitkaenen, P.; Loefman, J.; Korkealaakso, J.; Koskinen, L.; Ruotsalainen, P.; Hautojaervi, A.; Aeikaes, T.

    1999-01-01

    In the assessment of the suitability and safety of a geological repository for radioactive waste the understanding of the fluid flow at a site is essential. In order to build confidence in the assessment of the hydrogeological performance of a site in various conditions, integration of hydrological and hydrogeochemical methods and studies provides the primary method for investigating the evolution that has taken place in the past, and for predicting future conditions at the potential disposal site. A systematic geochemical sampling campaign was started since the beginning of 1990's in the Finnish site investigation programme. This enabled the initiating of integration and evaluation of site scale hydrogeochemical and groundwater flow models. Hydrogeochemical information has been used to screen relevant external processes and variables for definition of the initial and boundary conditions in hydrological simulations. The results obtained from interpretation and modelling hydrogeochemical evolution have been employed in testing the hydrogeochemical consistency of conceptual flow models. Integration and testing of flow models with hydrogeochemical information are considered to improve significantly the hydrogeological understanding of a site and increases confidence in conceptual hydrogeological models. (author)

  2. Assessing atmospheric bias correction for dynamical consistency using potential vorticity

    International Nuclear Information System (INIS)

    Rocheta, Eytan; Sharma, Ashish; Evans, Jason P

    2014-01-01

    Correcting biases in atmospheric variables prior to impact studies or dynamical downscaling can lead to new biases, as dynamical consistency between the 'corrected' fields is not maintained. Use of these bias-corrected fields for subsequent impact studies and dynamical downscaling provides input conditions that do not appropriately represent intervariable relationships in atmospheric fields. Here we investigate the consequences of the lack of dynamical consistency in bias correction using a measure of model consistency—the potential vorticity (PV). This paper presents an assessment of the biases present in PV using two alternative correction techniques—an approach where bias correction is performed individually on each atmospheric variable, thereby ignoring the physical relationships that exist between the multiple variables that are corrected, and a second approach where bias correction is performed directly on the PV field, thereby keeping the system dynamically coherent throughout the correction process. In this paper we show that bias correcting variables independently results in increased errors above the tropopause in the mean and standard deviation of the PV field, which are improved when using the proposed alternative. Furthermore, patterns of spatial variability are improved over nearly all vertical levels when applying the alternative approach. Results point to a need for a dynamically consistent atmospheric bias correction technique which results in fields that can be used as dynamically consistent lateral boundaries in follow-up downscaling applications. (letter)
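    As a concrete reference point, the PV diagnostic behind such consistency checks can be sketched with the isobaric form of Ertel potential vorticity, PV = -g (f + ζ) ∂θ/∂p. The single-point finite-difference evaluation below uses invented values, not data from the study.

```python
# Isobaric Ertel potential vorticity from a single vertical finite
# difference of potential temperature; illustrative values only.

G = 9.81  # gravitational acceleration, m s^-2

def potential_vorticity(f, zeta, theta_upper, theta_lower, p_upper, p_lower):
    """PV in SI units (K m^2 kg^-1 s^-1); 1 PVU = 1e-6 SI units."""
    dtheta_dp = (theta_upper - theta_lower) / (p_upper - p_lower)  # K/Pa
    return -G * (f + zeta) * dtheta_dp

pv = potential_vorticity(f=1.0e-4, zeta=5.0e-5,
                         theta_upper=320.0, theta_lower=310.0,
                         p_upper=25000.0, p_lower=30000.0)
# pv / 1e-6 is roughly 2.9 PVU, a plausible tropopause-level magnitude
```

Because PV couples wind (via ζ) and temperature (via θ) in one scalar, independent bias correction of the two fields perturbs it, which is exactly why the abstract uses it as a consistency measure.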

  3. A self-consistent model for the Galactic cosmic ray, antiproton and positron spectra

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    In this talk I will present the escape model of Galactic cosmic rays. This model explains the measured cosmic ray spectra of individual groups of nuclei from TeV to EeV energies. It predicts an early transition to extragalactic cosmic rays, in agreement with recent Auger data. The escape model also explains the soft neutrino spectrum ∝ 1/E^2.5 found by IceCube, in concordance with Fermi gamma-ray data. I will show that, within the same model, one can explain the excess of positrons and antiprotons above 20 GeV found by PAMELA and AMS-02, the discrepancy in the slopes of the spectra of cosmic ray protons and heavier nuclei in the TeV-PeV energy range, and the plateau in the cosmic ray dipole anisotropy in the 2-50 TeV energy range by adding the effects of a two-million-year-old nearby supernova.

  4. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    Science.gov (United States)

    Shapiro, B.; Jin, Q.

    2015-12-01

    Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and the COBRA Toolbox. We applied the method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and reproduced the observations of previous experiments well. In comparison, traditional dynamic-FBA methods constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
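The revised Monod equation itself is not spelled out in the abstract; the sketch below shows one common thermodynamically consistent form — a Monod kinetic factor multiplied by a free-energy limiting factor, in the spirit of the rate laws of Jin and Bethke (an assumption here), with all parameter names and values hypothetical.

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol K)

def revised_monod(S, X, k, Ks, dG_cat, m_atp, dG_atp, chi=2, T=298.15):
    """Respiration rate = k * X * (Monod kinetic factor) * (thermodynamic
    factor F_T); the rate goes to zero when catabolism no longer yields
    enough free energy to cover ATP synthesis."""
    kinetic = S / (Ks + S)
    f = dG_cat + m_atp * dG_atp  # net free energy per reaction turnover (kJ/mol)
    thermo = max(0.0, 1.0 - math.exp(f / (chi * R * T)))
    return k * X * kinetic * thermo
```

The thermodynamic factor is what distinguishes this constraint from a purely enzymatic one: with an energetically favorable catabolism (f well below zero) the rate reduces to classical Monod kinetics, while near equilibrium it vanishes.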

  5. Ensuring consistency and persistence to the Quality Information Model - The role of the GeoViQua Broker

    Science.gov (United States)

    Bigagli, Lorenzo; Papeschi, Fabrizio; Nativi, Stefano; Bastin, Lucy; Masó, Joan

    2013-04-01

    a few products are annotated with their PID; recent studies show that of a total of about 100,000 Clearinghouse products, only 37 carry a Product Identifier. Furthermore, the association should be persistent within the GeoViQua scope. The GeoViQua architecture is built on the brokering approach successfully tested within the EuroGEOSS project and realized by the GEO DAB (Discovery and Access Broker). Part of the GEOSS Common Infrastructure (GCI), the GEO DAB allows for harmonization and distribution in a way that is transparent for both users and data providers. In this way, GeoViQua can effectively complement and extend the GEO DAB, obtaining a quality-augmentation broker (GeoViQua Broker) which plays a central role in ensuring the consistency of the Producer and User quality models. This work focuses on the typical use case in which the GeoViQua Broker performs data discovery from different data providers and then integrates, in the Quality Information Model, the producer quality report with the feedback given by users. In particular, this work highlights the problems faced by the GeoViQua Broker and the techniques adopted to ensure consistency and persistence even for quality reports whose target products are not annotated with a PID. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 265178.

  6. A Microelectrode Array with Reproducible Performance Shows Loss of Consistency Following Functionalization with a Self-Assembled 6-Mercapto-1-hexanol Layer

    Directory of Open Access Journals (Sweden)

    Damion K. Corrigan

    2018-06-01

    For analytical applications involving label-free biosensors and multiple measurements, i.e., across an electrode array, it is essential to develop complete sensor systems capable of functionalization and of producing highly consistent responses. To achieve this, a multi-microelectrode device bearing twenty-four equivalent 50 µm diameter Pt disc microelectrodes was designed in an integrated 3-electrode system configuration and then fabricated. Cyclic voltammetry and electrochemical impedance spectroscopy were used for initial electrochemical characterization of the individual working electrodes. These confirmed the expected consistency of performance, with a high degree of measurement reproducibility for each microelectrode across the array. With the aim of assessing the potential for production of an enhanced multi-electrode sensor for biomedical use, the working electrodes were then functionalized with 6-mercapto-1-hexanol (MCH). This is a well-known and commonly employed surface modification process, which involves the same principles of thiol attachment chemistry and self-assembled monolayer (SAM) formation commonly employed in the functionalization of electrodes and the formation of biosensors. Following this SAM formation, the reproducibility of the observed electrochemical signal between electrodes was seen to decrease markedly, compromising the ability to achieve consistent analytical measurements from the sensor array following this relatively simple and well-established surface modification. To successfully and consistently functionalize the sensors, it was necessary to dilute the constituent molecules by a factor of ten thousand to support adequate SAM formation on the microelectrodes. The use of this multi-electrode device therefore demonstrates, in a high-throughput manner, irreproducibility in the SAM formation process at the higher concentration, even though these electrodes are apparently functionalized simultaneously in the same film

  7. Is the island universe model consistent with observations?

    OpenAIRE

    Piao, Yun-Song

    2005-01-01

    We study the island universe model, in which the universe is initially in a cosmological constant sea; local quantum fluctuations violating the null energy condition then create islands of matter, some of which might correspond to our observable universe. We examine the possibility that the island universe model can be regarded as an alternative scenario for the origin of the observable universe.

  8. Consistence of Network Filtering Rules

    Institute of Scientific and Technical Information of China (English)

    SHE Kun; WU Yuancheng; HUANG Juncai; ZHOU Mingtian

    2004-01-01

    Inconsistency among firewall/VPN (Virtual Private Network) rules imposes a large maintenance cost. With the growth of multinational companies, SOHO offices, and e-government, the number of firewalls and VPNs will increase rapidly, and rule tables on stand-alone devices and across networks will grow geometrically as a result. Checking the consistency of rule tables manually is inadequate. A formal approach can define semantic consistency and provide a theoretical foundation for the intelligent management of rule tables. In this paper, a formalization of host and network rules for automatic rule validation, based on set theory, is proposed and a rule-validation scheme is defined. The analysis results show the good performance of the method and demonstrate its potential for rule-table-based intelligent management.
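The paper's set-theoretic formalization is not reproduced in the abstract; as an illustration of the kind of semantic check such a formalization enables, the sketch below detects shadowing — a rule fully covered by an earlier rule with a different action, one classic rule-table inconsistency — using plain Python sets as a toy stand-in for address/port match sets. All names and the rule representation are illustrative.

```python
def covers(r1, r2):
    """True if rule r1 matches every packet that rule r2 matches."""
    (src1, dst1, _), (src2, dst2, _) = r1, r2
    return src2 <= src1 and dst2 <= dst1

def find_shadowed(rules):
    """Indices of rules fully covered by an earlier rule with a
    different action -- such rules can never take effect."""
    shadowed = []
    for j, rule in enumerate(rules):
        if any(covers(earlier, rule) and earlier[2] != rule[2]
               for earlier in rules[:j]):
            shadowed.append(j)
    return shadowed

# Toy rule table: (source address set, destination address set, action)
rules = [
    (set(range(0, 100)), set(range(0, 100)), "deny"),
    (set(range(10, 20)), set(range(10, 20)), "allow"),   # shadowed by rule 0
    (set(range(200, 300)), set(range(0, 50)), "allow"),
]
```

Here `find_shadowed(rules)` flags rule 1, which can never fire; a production checker would operate on CIDR prefixes and port ranges rather than explicit address sets.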

  9. Self-consistent treatment of spin and magnetization dynamic effect in spin transfer switching

    International Nuclear Information System (INIS)

    Guo Jie; Tan, Seng Ghee; Jalil, Mansoor Bin Abdul; Koh, Dax Enshan; Han, Guchang; Meng, Hao

    2011-01-01

    The effect of itinerant spin moment (m) dynamics in spin transfer switching has been ignored in most previous theoretical studies of the magnetization (M) dynamics. In this paper we therefore propose a more refined micromagnetic model of spin transfer switching that takes into account, in a self-consistent manner, the coupled m and M dynamics. The numerical results obtained from this model shed further light on the switching profiles of m and M, both of which show particular sensitivity to parameters such as the anisotropy field, the spin torque field, and the initial deviation between m and M.

  10. Measuring consistency of autobiographical memory recall in depression.

    Science.gov (United States)

    Semkovska, Maria; Noone, Martha; Carton, Mary; McLoughlin, Declan M

    2012-05-15

    Autobiographical amnesia assessments in depression need to account for normal changes in consistency over time, the contribution of mood, and the type of memories measured. We report herein validation studies of the Columbia Autobiographical Memory Interview - Short Form (CAMI-SF), used exclusively in depressed patients receiving electroconvulsive therapy (ECT) but without previously published normative data. The CAMI-SF was administered twice with a 6-month interval to 44 healthy volunteers to obtain normative data for retrieval consistency of its Semantic, Episodic-Extended and Episodic-Specific components and to assess their reliability and validity. Healthy volunteers showed significant large decreases in retrieval consistency on all components. The Semantic and Episodic-Specific components demonstrated substantial construct validity. We then assessed CAMI-SF retrieval consistencies over a 2-month interval in 30 severely depressed patients never treated with ECT, compared with healthy controls (n=19). On initial assessment, depressed patients produced fewer episodic-specific memories than controls. Both groups showed equivalent amounts of consistency loss over a 2-month interval on all components. At reassessment, only patients with persisting depressive symptoms were distinguishable from controls in the number of episodic-specific memories retrieved. Research quantifying retrograde amnesia following ECT for depression needs to control for normal loss in consistency over time and the contribution of persisting depressive symptoms. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  11. Towards a Self-Consistent Physical Framework for Modeling Coupled Human and Physical Activities during the Anthropocene

    Science.gov (United States)

    Garrett, T. J.

    2014-12-01

    Studies of the response of global climate to anthropogenic activities rely upon scenarios for future human activity to provide a range of possible trajectories for greenhouse gas emissions over the coming century. Sophisticated integrated models are used to explore not only what will happen, but what should happen in order to optimize societal well-being. Hundreds of equations might be used to account for the interplay between human decisions, technological change, and macroeconomic principles. In contrast, the model equations used to describe geophysical phenomena look very different because they are (a) purely deterministic and (b) consistent with basic thermodynamic laws. This inconsistency between macroeconomics and physics suggests a rather unhappy marriage. During the Anthropocene the evolution of humanity and our environment will become increasingly intertwined. Representing such a coupling suggests a need for a common theoretical basis. To this end, the approach described here is to treat civilization like any other physical process, that is, as an open, non-equilibrium thermodynamic system that dissipates energy and diffuses matter in order to sustain existing circulations and to further its material growth. Theoretical arguments and over 40 years of measurements show that a very general representation of global economic wealth (not GDP) has been tied to rates of global primary energy consumption through a constant 7.1 ± 0.1 mW per year-2005 US dollar. This link between physics and economics leads to very simple expressions for how fast civilization and its rate of energy consumption grow. These are expressible as a function of rates of energy and material resource discovery and depletion, and of the magnitude of externally imposed decay.
The equations are validated through hindcasts that show, for example, that economic conditions in the 1950s can be invoked to make remarkably accurate forecasts of present rates of global GDP growth and primary energy
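Reading the reported constant as roughly 7.1 milliwatts of primary power per year-2005 US dollar of accumulated wealth, the relationship can be inverted to estimate the wealth implied by a given rate of energy consumption. The sketch below does exactly that; the 18 TW figure is an illustrative order of magnitude for global primary power, not a number from the talk.

```python
LAMBDA = 7.1e-3  # W of primary power per year-2005 US dollar (reported constant)

def implied_wealth(primary_power_watts):
    """Global wealth (year-2005 USD) implied by a primary energy
    consumption rate, under the fixed wealth-power ratio."""
    return primary_power_watts / LAMBDA

# Illustrative: ~18 TW of global primary power implies wealth of
# order 2.5e15 year-2005 USD.
wealth = implied_wealth(18e12)
```

The point of the constant is that wealth here is a cumulative, physically grounded quantity tied to current power demand, not an annual flow like GDP.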

  12. The Consistent Kinetics Porosity (CKP) Model: A Theory for the Mechanical Behavior of Moderately Porous Solids

    Energy Technology Data Exchange (ETDEWEB)

    BRANNON,REBECCA M.

    2000-11-01

    A theory is developed for the response of moderately porous solids (no more than ~20% void space) to high-strain-rate deformations. The model is consistent because each feature is incorporated in a manner that is mathematically compatible with the other features. Unlike simple p-α models, the onset of pore collapse depends on the amount of shear present. The user-specifiable yield function depends on pressure, effective shear stress, and porosity. The elastic part of the strain rate is linearly related to the stress rate, with nonlinear corrections from changes in the elastic moduli due to pore collapse. Plastically incompressible flow of the matrix material allows pore collapse and an associated macroscopic plastic volume change. The plastic strain rate due to pore collapse/growth is taken normal to the yield surface. If phase transformation and/or pore nucleation are simultaneously occurring, the inelastic strain rate will be non-normal to the yield surface. To permit hardening, the yield stress of the matrix material is treated as an internal state variable. Changes in porosity and matrix yield stress naturally cause the yield surface to evolve. The stress, porosity, and all other state variables vary in a consistent manner so that the stress remains on the yield surface throughout any quasistatic interval of plastic deformation. Dynamic loading allows the stress to exceed the yield surface via an overstress ordinary differential equation that is solved in closed form for better numerical accuracy. The part of the stress rate that causes no plastic work (i.e., the part that has a zero inner product with the stress deviator and the identity tensor) is given by the projection of the elastic stress rate orthogonal to the span of the stress deviator and the identity tensor. The model, which has been numerically implemented in MIG format, has been exercised under a wide array of extremal loading and unloading paths.
As will be discussed in a companion

  13. A self-consistent kinetic modeling of a 1-D, bounded, plasma in ...

    Indian Academy of Sciences (India)

    ions, consistent with the idea of scattering off a random collection of stationary scattering points, while it yields a constant for slow ions, consistent with the idea of collisions experienced by a stationary particle in an ideal gas. For this treatment, o has been assumed independent of position. Pramana – J. Phys., Vol. 55, Nos 5 ...

  14. Multinational consistency of a discrete choice model in quantifying health states for the extended 5-level EQ-5D

    NARCIS (Netherlands)

    Krabbe, P.F.M.; Devlin, N.J.; Stolk, E.A.; Shah, K.K.; Oppe, M.; Van Hout, B.; Quik, E.H.; Pickard, A.S.; Xie, F.

    2013-01-01

    Objectives: To investigate the feasibility of choice experiments for EQ-5D-5L states using computer-based data collection, and to examine the consistency of the estimated parameter values derived after modeling the stated preference data across countries in a multinational study. Methods: Similar

  15. Consistency of different tropospheric models and mapping functions for precise GNSS processing

    Science.gov (United States)

    Graffigna, Victoria; Hernández-Pajares, Manuel; García-Rigo, Alberto; Gende, Mauricio

    2017-04-01

    The TOmographic Model of the IONospheric electron content (TOMION) software implements simultaneous precise geodetic and ionospheric modeling, which can be used to test new approaches for real-time precise GNSS modeling (positioning, ionospheric and tropospheric delays, clock errors, among others). In this work, the software is used to estimate the Zenith Tropospheric Delay (ZTD) emulating real time, and its performance is evaluated through a comparative analysis with a built-in GIPSY estimation and the IGS final troposphere product, exemplified in a two-day experiment performed in East Australia. Furthermore, the troposphere mapping function was upgraded from the Niell to the Vienna approach. In the first scenario, only forward processing was activated and the coordinates of the wide-area GNSS network were loosely constrained, without fixing the carrier phase ambiguities, for both reference and rover receivers. In the second, precise point positioning (PPP) was implemented, iterating with a fixed coordinate set for the second day. Comparisons between the TOMION, IGS and GIPSY estimates have been performed, with IGS clocks and orbits used for the TOMION case. The agreement with the GIPSY results proves to be some 10 times better than with the IGS final ZTD product, despite IGS products having been used in the computations. Hence, the subsequent analysis was carried out with respect to the GIPSY computations. The estimates show a typical bias of 2 cm for the first strategy and of 7 mm for PPP, in the worst cases. Moreover, the Vienna mapping function in general showed somewhat better agreement than the Niell one for both strategies. The RMS values were found to be around 1 cm in all studied situations, with a slightly better performance for the Niell one. Further improvement could be achieved in such estimations with coefficients for the Vienna mapping function calculated from ray tracing, as well as by integrating comparative meteorological parameters.

  16. Self-consistent model of the Rayleigh--Taylor instability in ablatively accelerated laser plasma

    International Nuclear Information System (INIS)

    Bychkov, V.V.; Golberg, S.M.; Liberman, M.A.

    1994-01-01

    A self-consistent approach to the problem of the growth rate of the Rayleigh-Taylor instability in laser-accelerated targets is developed. The analytical solution of the problem is obtained by solving the complete system of hydrodynamical equations, which include both thermal conductivity and energy release due to absorption of the laser light. The developed theory provides a rigorous justification for the supplementary boundary condition in the limiting case of the discontinuity model. An analysis of the suppression of the Rayleigh-Taylor instability by the ablation flow is carried out, and good agreement is found between the obtained solution and the approximate formula σ = 0.9√(gk) − 3u₁k, where g is the acceleration and u₁ is the ablation velocity. This paper discusses different regimes of ablative stabilization and compares them with previous analytical and numerical works.
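The approximate growth-rate formula quoted in the abstract can be evaluated directly, and setting σ = 0 gives the cutoff wavenumber beyond which ablation stabilizes the mode. A minimal sketch, with parameter values in the comments chosen for illustration only (not taken from the paper):

```python
import math

def rt_growth_rate(k, g, u1):
    """Approximate ablative Rayleigh-Taylor growth rate,
    sigma = 0.9*sqrt(g*k) - 3*u1*k, for wavenumber k,
    acceleration g, and ablation velocity u1."""
    return 0.9 * math.sqrt(g * k) - 3.0 * u1 * k

def cutoff_wavenumber(g, u1):
    """Wavenumber where sigma = 0: k_c = (0.9 / (3*u1))**2 * g.
    Modes with k > k_c are ablatively stabilized."""
    return (0.9 / (3.0 * u1)) ** 2 * g
```

For illustrative ICF-like values g = 1e14 cm/s² and u1 = 1e5 cm/s, the cutoff is k_c = 900 cm⁻¹: shorter wavelengths decay while longer ones still grow.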

  17. Personalized recommendation based on unbiased consistence

    Science.gov (United States)

    Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao

    2015-08-01

    Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite networks have provided an efficient solution by automatically pushing possibly relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms focus only on unidirectional mass diffusion from objects that have been collected to those which should be recommended, resulting in a biased causal similarity estimation and mediocre performance. In this letter, we argue that in many cases a user's interests are stable, and thus bidirectional mass diffusion abilities, whether originating from objects already collected or from those to be recommended, should be consistently powerful, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, outperforming state-of-the-art recommendation algorithms on disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
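The unidirectional baseline that the letter improves on can be sketched on a tiny user-item network: resource placed on a user's collected items spreads to the users holding them and then back to items, and uncollected items with the most returned resource are recommended. The bidirectional, consistence-based variant proposed by the authors is not reproduced here; the matrix and names are illustrative.

```python
import numpy as np

def mass_diffusion_scores(A, user):
    """Unidirectional mass diffusion (ProbS-style) on a user-item
    adjacency matrix A (rows: users, columns: items)."""
    ku = A.sum(axis=1).astype(float)   # user degrees
    ki = A.sum(axis=0).astype(float)   # item degrees
    f0 = A[user].astype(float)                        # resource on collected items
    to_users = A @ (f0 / np.maximum(ki, 1.0))         # items -> users
    scores = A.T @ (to_users / np.maximum(ku, 1.0))   # users -> items
    scores[A[user] == 1] = 0.0         # never re-recommend collected items
    return scores

# 3 users x 3 items toy network
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])
```

For user 0 the only candidate is item 2, which ends up with a score of 0.5; in a real system the top-scoring uncollected items form the recommendation list.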

  18. Self-consistent approach for neutral community models with speciation

    NARCIS (Netherlands)

    Haegeman, Bart; Etienne, Rampal S.

    Hubbell's neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is

  19. Consistency test of the standard model

    International Nuclear Information System (INIS)

    Pawlowski, M.; Raczka, R.

    1997-01-01

    If the 'Higgs mass' is not the physical mass of a real particle but rather an effective ultraviolet cutoff, then a process-energy dependence of this cutoff must be admitted. Precision data from at least two energy-scale experimental points are necessary to test this hypothesis. The first set of precision data is provided by the Z-boson peak experiments. We argue that the second set can be given by 10-20 GeV e⁺e⁻ colliders. We pay attention to the special role of tau polarization experiments, which can be sensitive to the 'Higgs mass' for a sample of ~10⁸ produced tau pairs. We argue that such a study may be regarded as a negative self-consistency test of the Standard Model and of most of its extensions

  20. Large tensor mode, field range bound and consistency in generalized G-inflation

    International Nuclear Information System (INIS)

    Kunimitsu, Taro; Suyama, Teruaki; Watanabe, Yuki; Yokoyama, Jun'ichi

    2015-01-01

    We systematically show that in potential driven generalized G-inflation models, quantum corrections coming from new physics at the strong coupling scale can be avoided, while producing observable tensor modes. The effective action can be approximated by the tree level action, and as a result, these models are internally consistent, despite the fact that we introduced new mass scales below the energy scale of inflation. Although observable tensor modes are produced with sub-strong coupling scale field excursions, this is not an evasion of the Lyth bound, since the models include higher-derivative non-canonical kinetic terms, and effective rescaling of the field would result in super-Planckian field excursions. We argue that the enhanced kinetic term of the inflaton screens the interactions with other fields, keeping the system weakly coupled during inflation

  1. Large tensor mode, field range bound and consistency in generalized G-inflation

    Energy Technology Data Exchange (ETDEWEB)

    Kunimitsu, Taro; Suyama, Teruaki; Watanabe, Yuki; Yokoyama, Jun'ichi, E-mail: kunimitsu@resceu.s.u-tokyo.ac.jp, E-mail: suyama@resceu.s.u-tokyo.ac.jp, E-mail: watanabe@resceu.s.u-tokyo.ac.jp, E-mail: yokoyama@resceu.s.u-tokyo.ac.jp [Research Center for the Early Universe, Graduate School of Science, The University of Tokyo, Tokyo 113-0033 (Japan)

    2015-08-01

    We systematically show that in potential driven generalized G-inflation models, quantum corrections coming from new physics at the strong coupling scale can be avoided, while producing observable tensor modes. The effective action can be approximated by the tree level action, and as a result, these models are internally consistent, despite the fact that we introduced new mass scales below the energy scale of inflation. Although observable tensor modes are produced with sub-strong coupling scale field excursions, this is not an evasion of the Lyth bound, since the models include higher-derivative non-canonical kinetic terms, and effective rescaling of the field would result in super-Planckian field excursions. We argue that the enhanced kinetic term of the inflaton screens the interactions with other fields, keeping the system weakly coupled during inflation.

  2. Dynamically consistent oil import tariffs

    International Nuclear Information System (INIS)

    Karp, L.; Newbery, D.M.

    1992-01-01

    The standard theory of optimal tariffs considers tariffs on perishable goods produced abroad under static conditions, in which tariffs affect prices only in that period. Oil and other exhaustible resources do not fit this model, for current tariffs affect the amount of oil imported, which will affect the remaining stock and hence its future price. The problem of choosing a dynamically consistent oil import tariff when suppliers are competitive but importers have market power is considered. The open-loop Nash tariff is solved for the standard competitive case in which the oil price is arbitraged, and the resulting tariff is found to rise at the rate of interest. This tariff turns out to have an equilibrium that in general is dynamically inconsistent. Nevertheless, it is shown that necessary and sufficient conditions exist under which the tariff satisfies the weaker condition of time consistency. A dynamically consistent tariff is obtained by assuming that all agents condition their current decisions on the remaining stock of the resource, in contrast to open-loop strategies. For the natural case in which all agents choose their actions simultaneously in each period, the dynamically consistent tariff is characterized, and found to differ markedly from the time-inconsistent open-loop tariff. It is shown that if importers do not have overwhelming market power, then the time path of the world price is insensitive to the ability to commit, as is the level of wealth achieved by the importer. 26 refs., 4 figs

  3. Consistency in the description of diffusion in compacted bentonite

    International Nuclear Information System (INIS)

    Lehikoinen, J.; Muurinen, A.

    2009-01-01

    A macro-level diffusion model is presented which aims to provide a unifying framework for explaining, in a consistent fashion, the experimentally observed co-ion exclusion and the highly controversial counter-ion surface diffusion. It is explained in detail why a term accounting for the non-zero mobility of the counter-ion surface excess is required in the mathematical form of the macroscopic diffusion flux. The prerequisites for the consistency of the model, and the problems associated with the interpretation of diffusion in such complex pore geometries as compacted smectite clays, are discussed. (author)

  4. The self-consistent field model for Fermi systems with account of three-body interactions

    Directory of Open Access Journals (Sweden)

    Yu.M. Poluektov

    2015-12-01

    On the basis of a microscopic self-consistent field model, the thermodynamics of a many-particle Fermi system at finite temperature, including three-body interactions, is constructed and the quasiparticle equations of motion are obtained. It is shown that a delta-like three-body interaction makes no contribution to the self-consistent field, so that describing three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion effective mass and the system's equation of state, including the contribution from three-body forces. The effective mass and pressure are numerically calculated for a "semi-transparent sphere" potential at zero temperature, and expansions of the effective mass and pressure in powers of density are obtained. It is shown that, when only pair forces are taken into account, a repulsive interaction reduces the quasiparticle effective mass relative to the free-particle mass, while an attractive interaction raises it. The question of thermodynamic stability of the Fermi system is considered, and the three-body repulsive interaction is shown to extend the region of stability of a system with interparticle pair attraction. The quasiparticle energy spectrum is calculated with three-body forces taken into account.

  5. The cluster bootstrap consistency in generalized estimating equations

    KAUST Repository

    Cheng, Guang

    2013-03-01

    The cluster bootstrap resamples clusters or subjects instead of individual observations in order to preserve the dependence within each cluster or subject. In this paper, we provide a theoretical justification for using the cluster bootstrap for inference with generalized estimating equations (GEE) for clustered/longitudinal data. Under general exchangeable bootstrap weights, we show that the cluster bootstrap yields a consistent approximation of the distribution of the regression estimate, and a consistent approximation of the confidence sets. We also show that a computationally more efficient one-step version of the cluster bootstrap provides asymptotically equivalent inference. © 2012.
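The resampling scheme can be sketched without the GEE machinery: whole clusters, not individual observations, are drawn with replacement, so every bootstrap replicate preserves the within-cluster dependence. In the toy sketch below the statistic is a simple pooled mean rather than a GEE regression estimate, and all names are illustrative.

```python
import random
import statistics

def cluster_bootstrap_se(clusters, stat, n_boot=500, seed=42):
    """Bootstrap standard error of `stat`, resampling whole clusters
    with replacement to preserve within-cluster dependence."""
    rng = random.Random(seed)
    replicates = []
    for _ in range(n_boot):
        resample = [rng.choice(clusters) for _ in clusters]   # clusters, not points
        pooled = [x for cluster in resample for x in cluster]
        replicates.append(stat(pooled))
    return statistics.stdev(replicates)
```

For example, `cluster_bootstrap_se([[1, 2], [2, 3], [10, 11], [0, 1]], statistics.mean)` yields a standard error reflecting the large between-cluster variation, which an observation-level bootstrap would understate.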

  6. Transport simulations of ohmic TFTR experiments with profile-consistent microinstability-based models for χ_e and χ_i

    International Nuclear Information System (INIS)

    Redi, M.H.; Tang, W.M.; Efthimion, P.C.; Mikkelsen, D.R.; Schmidt, G.L.

    1987-03-01

    Transport simulations of ohmically heated TFTR experiments with recently developed profile-consistent microinstability models for the anomalous thermal diffusivities, χ_e and χ_i, give good agreement with experimental data. The steady-state temperature profiles and the total energy confinement times, τ_E, were found to agree for each of the ohmic TFTR experiments simulated, including three high-radiation cases and two plasmas fueled by pellet injection. Both collisional and collisionless models are tested. The trapped-electron drift wave microinstability model results are consistent with the thermal confinement of large-plasma ohmic experiments on TFTR. We also find that transport due to the toroidal ion temperature gradient (η_i) modes can cause saturation in τ_E at the highest densities comparable to that observed on TFTR and equivalent to a neoclassical anomaly factor of 3. Predictions based on stabilized η_i-mode-driven ion transport are found to be in agreement with the enhanced global energy confinement times for pellet-fueled plasmas. 33 refs., 26 figs., 4 tabs

  7. Self-consistent clustering analysis: an efficient multiscale scheme for inelastic heterogeneous materials

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Z.; Bessa, M. A.; Liu, W.K.

    2017-10-25

    A predictive computational theory is presented for modeling complex, hierarchical materials ranging from metal alloys to polymer nanocomposites. The theory can capture complex mechanisms such as plasticity and failure that span multiple length scales. This general multiscale material modeling theory relies on sound principles of mathematics and mechanics, and on a cutting-edge reduced-order modeling method named self-consistent clustering analysis (SCA) [Zeliang Liu, M.A. Bessa, Wing Kam Liu, “Self-consistent clustering analysis: An efficient multi-scale scheme for inelastic heterogeneous materials,” Comput. Methods Appl. Mech. Engrg. 306 (2016) 319–341]. SCA reduces by several orders of magnitude the computational cost of micromechanical and concurrent multiscale simulations, while retaining the microstructure information. This remarkable increase in efficiency is achieved with a data-driven clustering method. Computationally expensive operations are performed in the so-called offline stage, where degrees of freedom (DOFs) are agglomerated into clusters and the interaction tensor of these clusters is computed. In the online or predictive stage, the Lippmann-Schwinger integral equation is solved cluster-wise using a self-consistent scheme to ensure solution accuracy and avoid path dependence. To construct a concurrent multiscale model, this scheme is applied at each material point in a macroscale structure, replacing a conventional constitutive model with the average response computed from the microscale model using just the SCA online stage. A regularized damage theory is incorporated at the microscale that avoids the mesh and RVE size dependence that commonly plagues microscale damage calculations. The SCA method is illustrated with two cases: a carbon fiber reinforced polymer (CFRP) structure with the concurrent multiscale model, and an application to fatigue prediction for additively manufactured metals. For the CFRP problem, a speed up estimated to be about
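The offline stage's data-driven clustering can be illustrated with a toy one-dimensional k-means: material points whose (here, scalar) responses are similar are agglomerated into clusters that then share a single reduced degree of freedom. The real SCA method clusters strain concentration tensors and precomputes cluster interaction tensors, which this sketch does not attempt; all names and values are illustrative.

```python
def kmeans_1d(values, k, iters=20):
    """Toy 1-D k-means: agglomerate scalar responses into k clusters,
    mimicking the DOF-agglomeration step of the SCA offline stage."""
    step = max(1, len(values) // k)
    centers = sorted(values)[::step][:k]          # spread initial centers
    groups = [[] for _ in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:                          # assign to nearest center
            nearest = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else centers[j]   # update centers
                   for j, g in enumerate(groups)]
    return centers, groups

# Two populations of "strain concentration" values -> two clusters
values = [0.09, 0.10, 0.11, 0.12, 0.95, 1.00, 1.05, 1.10]
```

After clustering, the expensive online solve operates on the handful of cluster averages instead of every material point, which is where the orders-of-magnitude speedup comes from.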

  8. Time-consistent and market-consistent evaluations

    NARCIS (Netherlands)

    Pelsser, A.; Stadje, M.A.

    2014-01-01

    We consider evaluation methods for payoffs with an inherent financial risk, as encountered for instance in portfolios held by pension funds and insurance companies. Pricing such payoffs in a way consistent with market prices typically involves combining actuarial techniques with methods from

  9. Self-consistent chaos in the beam-plasma instability

    International Nuclear Information System (INIS)

    Tennyson, J.L.; Meiss, J.D.

    1993-01-01

    The effect of self-consistency on Hamiltonian systems with a large number of degrees-of-freedom is investigated for the beam-plasma instability using the single-wave model of O'Neil, Winfrey, and Malmberg. The single-wave model is reviewed and then rederived within the Hamiltonian context, which leads naturally to canonical action-angle variables. Simulations are performed with a large (10^4) number of beam particles interacting with the single wave. It is observed that the system relaxes into a time asymptotic periodic state where only a few collective degrees are active; namely, a clump of trapped particles oscillating in a modulated wave, within a uniform chaotic sea with oscillating phase space boundaries. Thus self-consistency is seen to effectively reduce the number of degrees-of-freedom. A simple low degree-of-freedom model is derived that treats the clump as a single macroparticle, interacting with the wave and chaotic sea. The uniform chaotic sea is modeled by a fluid waterbag, where the waterbag boundaries correspond approximately to invariant tori. This low degree-of-freedom model is seen to compare well with the simulation

  10. Consistency of students’ conceptions of wave propagation: Findings from a conceptual survey in mechanical waves

    Directory of Open Access Journals (Sweden)

    Apisit Tongchai

    2011-07-01

    have different advantages and disadvantages. Our findings show that model analysis can be used in more diverse ways, provides flexibility in analyzing multiple-choice questions, and provides more information about consistency and inconsistency of student conceptions. An unexpected finding is that studying waves in other contexts (for example, quantum mechanics or electromagnetism) leads to more consistent answers about mechanical waves. The suggestion is that studying more abstract topics may solidify students’ understanding of more concrete waves. While this might be considered to be intuitive, we have not actually found direct empirical studies supporting this conjecture.

  11. Consistency of students’ conceptions of wave propagation: Findings from a conceptual survey in mechanical waves

    Directory of Open Access Journals (Sweden)

    Chernchok Soankwan

    2011-07-01

    We recently developed a multiple-choice conceptual survey in mechanical waves. The development, evaluation, and demonstration of the use of the survey were reported elsewhere [A. Tongchai et al., Int. J. Sci. Educ. 31, 2437 (2009)]. We administered the survey to 902 students from seven different groups ranging from high school to second year university. As an outcome of that analysis we were able to identify several conceptual models which the students seemed to be using when answering the questions in the survey. In this paper we attempt to investigate the strength with which the students were committed to these conceptual models, as evidenced by the consistency with which they answered the questions. For this purpose we focus on the patterns of student responses to questions in one particular subtopic, wave propagation. This study has three main purposes: (1) to investigate the consistency of student conceptions, (2) to explore the relative usefulness of different analysis techniques, and (3) to determine what extra information a study of consistency can give about student understanding of basic concepts. We used two techniques: first, categorizing and counting, which is widely used in the science education community, and second, model analysis, recently introduced into physics education research. The manner in which categorizing and counting is used is very diverse, while model analysis has been employed only in prescriptive ways. Research studies have reported that students often use their conceptual models inconsistently when solving a series of questions that test the same idea. Our results support their conclusions. Moreover, our findings suggest that students who have had more experience in physics learning seem to use the scientifically accepted models more consistently. Further, the two analysis techniques have different advantages and disadvantages. Our findings show that model analysis can be used in more diverse ways, provides
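    Model analysis, in the sense used in physics education research (Bao and Redish), represents each student's mix of answers as a unit "model state" vector and diagonalizes the class average of their outer products; a dominant eigenvalue signals consistent model use across the class. The sketch below is a minimal illustration with hypothetical response data, not the authors' analysis code:

```python
import numpy as np

def class_density_matrix(responses, n_models):
    """responses: (n_students, n_questions) array of model labels 0..n_models-1."""
    N, m = responses.shape
    D = np.zeros((n_models, n_models))
    for row in responses:
        counts = np.bincount(row, minlength=n_models)
        u = np.sqrt(counts / m)      # student model-state vector (unit norm)
        D += np.outer(u, u)
    return D / N

# hypothetical class: most students answer consistently with model 0
resp = np.array([[0, 0, 0, 0], [0, 0, 0, 1], [1, 1, 1, 1], [0, 0, 0, 0]])
D = class_density_matrix(resp, n_models=2)
eigvals = np.linalg.eigvalsh(D)
# a dominant largest eigenvalue indicates consistent model use in the class
```

    The trace of D is always 1; the closer the largest eigenvalue is to 1, the more uniformly and consistently the class deploys a single conceptual model.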

  12. Providing comprehensive and consistent access to astronomical observatory archive data: the NASA archive model

    Science.gov (United States)

    McGlynn, Thomas; Fabbiano, Giuseppina; Accomazzi, Alberto; Smale, Alan; White, Richard L.; Donaldson, Thomas; Aloisi, Alessandra; Dower, Theresa; Mazzerella, Joseph M.; Ebert, Rick; Pevunova, Olga; Imel, David; Berriman, Graham B.; Teplitz, Harry I.; Groom, Steve L.; Desai, Vandana R.; Landry, Walter

    2016-07-01

    Since the turn of the millennium, astronomical archives have begun providing data to the public through standardized protocols, unifying data from disparate physical sources and wavebands across the electromagnetic spectrum into an astronomical virtual observatory (VO). In October 2014, NASA began support for the NASA Astronomical Virtual Observatories (NAVO) program to coordinate the efforts of NASA astronomy archives in providing data to users through implementation of protocols agreed within the International Virtual Observatory Alliance (IVOA). A major goal of the NAVO collaboration has been to step back from a piecemeal implementation of IVOA standards and define what the appropriate presence for the US and NASA astronomy archives in the VO should be. This includes evaluating what optional capabilities in the standards need to be supported, the specific versions of standards that should be used, and returning feedback to the IVOA, to support modifications as needed. We discuss a standard archive model developed by the NAVO for data archive presence in the virtual observatory, built upon a consistent framework of standards defined by the IVOA. Our standard model provides for discovery of resources through the VO registries, access to observation and object data, downloads of image and spectral data and general access to archival datasets. It defines specific protocol versions, minimum capabilities, and all dependencies. The model will evolve as the capabilities of the virtual observatory and needs of the community change.

  13. Using Trait-State Models to Evaluate the Longitudinal Consistency of Global Self-Esteem From Adolescence to Adulthood

    OpenAIRE

    Donnellan, M. Brent; Kenny, David A.; Trzesniewski, Kali H.; Lucas, Richard E.; Conger, Rand D.

    2012-01-01

    The present research used a latent variable trait-state model to evaluate the longitudinal consistency of self-esteem during the transition from adolescence to adulthood. Analyses were based on ten administrations of the Rosenberg Self-Esteem scale (Rosenberg, 1965) spanning the ages of approximately 13 to 32 for a sample of 451 participants. Results indicated that a completely stable trait factor and an autoregressive trait factor accounted for the majority of the variance in latent self-est...

  14. A stock-flow consistent input-output model with applications to energy price shocks, interest rates, and heat emissions

    Science.gov (United States)

    Berg, Matthew; Hartley, Brian; Richters, Oliver

    2015-01-01

    By synthesizing stock-flow consistent models, input-output models, and aspects of ecological macroeconomics, a method is developed to simultaneously model monetary flows through the financial system, flows of produced goods and services through the real economy, and flows of physical materials through the natural environment. This paper highlights the linkages between the physical environment and the economic system by emphasizing the role of the energy industry. A conceptual model is developed in general form with an arbitrary number of sectors, while emphasizing connections with the agent-based, econophysics, and complexity economics literature. First, we use the model to challenge claims that 0% interest rates are a necessary condition for a stationary economy and conduct a stability analysis within the parameter space of interest rates and consumption parameters of an economy in stock-flow equilibrium. Second, we analyze the role of energy price shocks in contributing to recessions, incorporating several propagation and amplification mechanisms. Third, implied heat emissions from energy conversion and the effect of anthropogenic heat flux on climate change are considered in light of a minimal single-layer atmosphere climate model, although the model is only implicitly, not explicitly, linked to the economic model.
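    The input-output core of such a model can be pictured with the standard Leontief relation: gross output x must cover both intermediate inputs A x and final demand d. This is only the bare accounting skeleton of the paper's much richer stock-flow consistent framework; the sectors and coefficients below are hypothetical:

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A[i, j]: units of input
# from sector i needed per unit of output of sector j (say energy, goods, services).
A = np.array([[0.1, 0.2, 0.1],
              [0.3, 0.1, 0.2],
              [0.1, 0.1, 0.1]])
d = np.array([10.0, 20.0, 15.0])  # final demand by sector

# Gross output must satisfy x = A x + d, i.e. x = (I - A)^(-1) d.
x = np.linalg.solve(np.eye(3) - A, d)

# An energy price shock could then be explored by perturbing the energy row of A
# and tracing the induced change in x.
```

    Because the coefficient matrix is productive (spectral radius below one), each sector's gross output strictly exceeds its final demand, reflecting intermediate use.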

  15. A consistent and verifiable macroscopic model for the dissolution of liquid CO2 in water under hydrate forming conditions

    International Nuclear Information System (INIS)

    Radhakrishnan, R.; Demurov, A.; Trout, B.L.; Herzog, H.

    2003-01-01

    Direct injection of liquid CO2 into the ocean has been proposed as one method to reduce the emission levels of CO2 into the atmosphere. When liquid CO2 is injected (normally as droplets) at ocean depths >500 m, a solid interfacial region between the CO2 and the water is observed to form. This region consists of hydrate clathrates and hinders the rate of dissolution of CO2. It is, therefore, expected to have a significant impact on the injection of liquid CO2 into the ocean. Up until now, no consistent and predictive model for the shrinking of droplets of CO2 under hydrate forming conditions has been proposed. This is because all models proposed to date have had too many unknowns. By computing rates of the physical and chemical processes in hydrates via molecular dynamics simulations, we have been able to determine independently some of these unknowns. We then propose the most reasonable model and use it to make independent predictions of the rates of mass transfer and thickness of the hydrate region. These predictions are compared to measurements, and implications for the rates of shrinkage of CO2 droplets under varying flow conditions are discussed. (author)

  16. Quantifying sources of elemental carbon over the Guanzhong Basin of China: A consistent network of measurements and WRF-Chem modeling

    International Nuclear Information System (INIS)

    Li, Nan; He, Qingyang; Tie, Xuexi; Cao, Junji; Liu, Suixin; Wang, Qiyuan; Li, Guohui; Huang, Rujin; Zhang, Qiang

    2016-01-01

    We conducted a year-long WRF-Chem (Weather Research and Forecasting Chemical) model simulation of elemental carbon (EC) aerosol and compared the modeling results to the surface EC measurements in the Guanzhong (GZ) Basin of China. The main goals of this study were to quantify the individual contributions of different EC sources to EC pollution, and to find the major cause of the EC pollution in this region. The EC measurements were simultaneously conducted at 10 urban, rural, and background sites over the GZ Basin from May 2013 to April 2014, and provided a good base against which to evaluate the model simulation. The model evaluation showed that the calculated annual mean EC concentration was 5.1 μgC m⁻³, which was consistent with the observed value of 5.3 μgC m⁻³. Moreover, the model result also reproduced the magnitude of measured EC in all seasons (regression slope = 0.98–1.03), as well as the spatial and temporal variations (r = 0.55–0.78). We conducted several sensitivity studies to quantify the individual contributions of EC sources to EC pollution. The sensitivity simulations showed that local and outside sources contributed about 60% and 40% of the annual mean EC concentration, respectively, implying that local sources were the major EC pollution contributors in the GZ Basin. Among the local sources, residential sources contributed the most, followed by industry and transportation sources. A further analysis suggested that a 50% reduction of industry or transportation emissions only caused a 6% decrease in the annual mean EC concentration, while a 50% reduction of residential emissions reduced the winter surface EC concentration by up to 25%. Given the serious air pollution problems (including EC pollution) in the GZ Basin, our findings can provide an insightful view of local air pollution control strategies. - Highlights: • A yearlong WRF-Chem simulation is conducted to identify sources of the EC pollution. • A network of

  17. Lagrangian multiforms and multidimensional consistency

    Energy Technology Data Exchange (ETDEWEB)

    Lobb, Sarah; Nijhoff, Frank [Department of Applied Mathematics, University of Leeds, Leeds LS2 9JT (United Kingdom)

    2009-10-30

    We show that well-chosen Lagrangians for a class of two-dimensional integrable lattice equations obey a closure relation when embedded in a higher dimensional lattice. On the basis of this property we formulate a Lagrangian description for such systems in terms of Lagrangian multiforms. We discuss the connection of this formalism with the notion of multidimensional consistency, and the role of the lattice from the point of view of the relevant variational principle.

  18. Repeatability and consistency of individual behaviour in juvenile and adult Eurasian harvest mice

    Science.gov (United States)

    Schuster, Andrea C.; Carl, Teresa; Foerster, Katharina

    2017-04-01

    Knowledge on animal personality has provided new insights into evolutionary biology and animal ecology, as behavioural types have been shown to affect fitness. Animal personality is characterized by repeatable and consistent between-individual behavioural differences throughout time and across different situations. Behavioural repeatability within life history stages and consistency between life history stages should be checked for independence of sex and age, as recent data have shown that males and females in some species may differ in the repeatability of behavioural traits, as well as in their consistency. We measured the repeatability and consistency of three behavioural traits and one cognitive trait in juvenile and adult Eurasian harvest mice (Micromys minutus). We found that exploration, activity and boldness were repeatable in juveniles and adults. Spatial recognition measured in a Y-maze was only repeatable in adult mice. Exploration, activity and boldness were consistent before and after maturation, as well as before and after first sexual contact. Data on spatial recognition provided little evidence for consistency. Further, we found some evidence for a litter effect on behaviours by comparing different linear mixed models. We concluded that harvest mice express animal personality traits, as behaviours were repeatable across sexes and consistent across life history stages. The tested cognitive trait showed low repeatability and was less consistent across life history stages. Given the rising interest in individual variation in cognitive performance, and in its relationship to animal personality, we suggest that it is important to gather more data on the repeatability and consistency of cognitive traits.
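    Repeatability in this literature is conventionally the intraclass correlation: the share of total variance attributable to differences among individuals rather than within them. A minimal ANOVA-based sketch for a balanced design (with hypothetical boldness scores, not the study's data) is:

```python
import numpy as np

def repeatability(measurements):
    """ANOVA-based repeatability R = s2_among / (s2_among + s2_within)
    for a balanced design; measurements: (n_individuals, n_trials)."""
    a, n = measurements.shape
    grand = measurements.mean()
    group_means = measurements.mean(axis=1)
    ms_among = n * ((group_means - grand) ** 2).sum() / (a - 1)
    ms_within = ((measurements - group_means[:, None]) ** 2).sum() / (a * (n - 1))
    s2_among = (ms_among - ms_within) / n
    return s2_among / (s2_among + ms_within)

# hypothetical boldness scores: 5 mice scored twice; individuals differ strongly,
# repeated scores of the same individual are close, so R should be near 1
scores = np.array([[1.0, 1.2], [3.0, 2.8], [5.1, 4.9], [7.0, 7.2], [9.0, 9.1]])
R = repeatability(scores)
```

    The linear mixed models mentioned in the abstract generalize this by allowing unbalanced data and covariates such as sex, age, and litter.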

  19. Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling

    International Nuclear Information System (INIS)

    Pera, H.; Kleijn, J. M.; Leermakers, F. A. M.

    2014-01-01

    To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli k_c and k̄ and the preferred monolayer curvature J_0^m, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k_c and the area compression modulus k_A are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k̄ and J_0^m can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k̄ and J_0^m change sign with relevant parameter changes. Although typically k̄ < 0, J_0^m can become ≫ 0, especially at low ionic strengths. We anticipate that these changes lead to unstable membranes as these become vulnerable to pore formation or disintegration into lipid disks

  20. Amazon Forests Maintain Consistent Canopy Structure and Greenness During the Dry Season

    Science.gov (United States)

    Morton, Douglas C.; Nagol, Jyoteshwar; Carabajal, Claudia C.; Rosette, Jacqueline; Palace, Michael; Cook, Bruce D.; Vermote, Eric F.; Harding, David J.; North, Peter R. J.

    2014-01-01

    The seasonality of sunlight and rainfall regulates net primary production in tropical forests. Previous studies have suggested that light is more limiting than water for tropical forest productivity, consistent with greening of Amazon forests during the dry season in satellite data. We evaluated four potential mechanisms for the seasonal green-up phenomenon, including increases in leaf area or leaf reflectance, using a sophisticated radiative transfer model and independent satellite observations from lidar and optical sensors. Here we show that the apparent green-up of Amazon forests in optical remote sensing data resulted from seasonal changes in near-infrared reflectance, an artefact of variations in sun-sensor geometry. Correcting this bidirectional reflectance effect eliminated seasonal changes in surface reflectance, consistent with independent lidar observations and model simulations with unchanging canopy properties. The stability of Amazon forest structure and reflectance over seasonal timescales challenges the paradigm of light-limited net primary production in Amazon forests and enhanced forest growth during drought conditions. Correcting optical remote sensing data for artefacts of sun-sensor geometry is essential to isolate the response of global vegetation to seasonal and interannual climate variability.

  1. Image-based multiscale mechanical modeling shows the importance of structural heterogeneity in the human lumbar facet capsular ligament.

    Science.gov (United States)

    Zarei, Vahhab; Liu, Chao J; Claeson, Amy A; Akkin, Taner; Barocas, Victor H

    2017-08-01

    The lumbar facet capsular ligament (FCL) primarily consists of aligned type I collagen fibers that are mainly oriented across the joint. The aim of this study was to characterize and incorporate in-plane local fiber structure into a multiscale finite element model to predict the mechanical response of the FCL during in vitro mechanical tests, accounting for the heterogeneity in different scales. Characterization was accomplished by using entire-domain polarization-sensitive optical coherence tomography to measure the fiber structure of cadaveric lumbar FCLs ([Formula: see text]). Our imaging results showed that fibers in the lumbar FCL have a highly heterogeneous distribution and are neither isotropic nor completely aligned. The averaged fiber orientation was [Formula: see text] ([Formula: see text] in the inferior region and [Formula: see text] in the middle and superior regions), with respect to lateral-medial direction (superior-medial to inferior-lateral). These imaging data were used to construct heterogeneous structural models, which were then used to predict experimental gross force-strain behavior and the strain distribution during equibiaxial and strip biaxial tests. For equibiaxial loading, the structural model fit the experimental data well but underestimated the lateral-medial forces by [Formula: see text]16% on average. We also observed pronounced heterogeneity in the strain field, with stretch ratios for different elements along the lateral-medial axis of sample typically ranging from about 0.95 to 1.25 during a 12% strip biaxial stretch in the lateral-medial direction. This work highlights the multiscale structural and mechanical heterogeneity of the lumbar FCL, which is significant both in terms of injury prediction and microstructural constituents' (e.g., neurons) behavior.

  2. Factorial validity and internal consistency of the motivational climate in physical education scale.

    Science.gov (United States)

    Soini, Markus; Liukkonen, Jarmo; Watt, Anthony; Yli-Piipari, Sami; Jaakkola, Timo

    2014-01-01

    The aim of the study was to examine the construct validity and internal consistency of the Motivational Climate in Physical Education Scale (MCPES). A key element of the development process of the scale was establishing a theoretical framework that integrated the dimensions of task- and ego-involving climates in conjunction with autonomy- and social-relatedness-supporting climates. These constructs were adopted from the self-determination and achievement goal theories. A sample of Finnish Grade 9 students, comprising 2,594 girls and 1,803 boys, completed the 18-item MCPES during one physical education class. The results of the study demonstrated that participants had the highest mean in task-involving climate and the lowest in autonomy climate and ego-involving climate. Additionally, autonomy, social relatedness, and task-involving climates were significantly and strongly correlated with each other, whereas the ego-involving climate had low or negligible correlations with the other climate dimensions. The construct validity of the MCPES was analyzed using confirmatory factor analysis. The statistical fit of the four-factor model consisting of motivational climate factors supporting perceived autonomy, social relatedness, task-involvement, and ego-involvement was satisfactory. The results of the reliability analysis showed acceptable internal consistencies for all four dimensions. The Motivational Climate in Physical Education Scale can be considered a psychometrically valid tool to measure motivational climate in Finnish Grade 9 students. Key Points: This study developed the Motivational Climate in School Physical Education Scale (MCPES). During the development process of the scale, a theoretical framework using the dimensions of task- and ego-involving as well as autonomy- and social-relatedness-supporting climates was constructed. These constructs were adopted from the self-determination and achievement goal theories. The statistical fit of the four-factor model of the
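    Internal consistency of the kind reported here is conventionally summarized with Cronbach's alpha, computed from the item variances and the variance of the summed scale. A minimal sketch (with hypothetical 5-point responses, not the study's data) is:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of scores for one subscale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# hypothetical responses to a 4-item "task-involving climate" subscale:
# respondents answer all items similarly, so alpha should be high
resp = np.array([[4, 5, 4, 5],
                 [2, 2, 3, 2],
                 [5, 4, 5, 5],
                 [3, 3, 3, 4],
                 [1, 2, 1, 2]])
alpha = cronbach_alpha(resp)
```

    Values above roughly 0.7 are usually read as acceptable internal consistency, though the threshold is a convention rather than a statistical test.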

  3. Consistency of the Hamiltonian formulation of the lowest-order effective action of the complete Horava theory

    International Nuclear Information System (INIS)

    Bellorin, Jorge; Restuccia, Alvaro

    2011-01-01

    We perform the Hamiltonian analysis for the lowest-order effective action, up to second order in derivatives, of the complete Horava theory. The model includes the invariant terms that depend on ∂_i ln N proposed by Blas, Pujolas, and Sibiryakov. We show that the algebra of constraints closes. The Hamiltonian constraint is of second-class behavior and it can be regarded as an elliptic partial differential equation for N. The linearized version of this equation is a Poisson equation for N that can be solved consistently. The preservation in time of the Hamiltonian constraint yields an equation that can be consistently solved for a Lagrange multiplier of the theory. The model has six propagating degrees of freedom in the phase space, corresponding to three even physical modes. When compared with the λR model studied by us in a previous paper, it lacks two second-class constraints, which leads to the extra even mode.

  4. THERMODYNAMIC DEPRESSION OF IONIZATION POTENTIALS IN NONIDEAL PLASMAS: GENERALIZED SELF-CONSISTENCY CRITERION AND A BACKWARD SCHEME FOR DERIVING THE EXCESS FREE ENERGY

    International Nuclear Information System (INIS)

    Zaghloul, Mofreh R.

    2009-01-01

    Accurate and consistent prediction of thermodynamic properties is of great importance in high-energy density physics and in modeling stellar atmospheres and interiors as well. Modern descriptions of thermodynamic properties of such nonideal plasma systems are sophisticated and/or full of pitfalls that make them difficult, if not impossible, to reproduce. The use of the Saha equation, modified at high densities by incorporating simple expressions for the depression of ionization potentials, is very convenient in that context. However, as is commonly known, the incorporation of ad hoc or empirical expressions for the depression of ionization potentials in the Saha equation leads to thermodynamic inconsistencies. The problem of thermodynamic consistency of ionization potential depression in nonideal plasmas is investigated and a criterion is derived which shows immediately whether a particular model for the ionization potential depression is self-consistent, that is, whether it can be directly related to a modification of the free-energy function, or not. A backward scheme is introduced which can be utilized to derive nonideality corrections to the free-energy function from formulas for ionization potential depression derived from plasma microfields or in an ad hoc or empirical fashion, provided that the aforementioned self-consistency criterion is satisfied. The value and usefulness of such a backward method are pointed out and discussed. The above-mentioned criterion is applied to investigate the thermodynamic consistency of some historic models in the literature and an optional routine is introduced to recover their thermodynamic consistency while maintaining the same functional dependence on the species densities as in the original models. Sample computational problems showing the effect of the proposed modifications on the computed plasma composition are worked out and presented.
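    To see why an ionization-potential depression (IPD) term inserted into the Saha equation changes the computed composition, consider the simplest case of a pure hydrogen plasma, where the ionization balance reduces to a quadratic in the ionization fraction. This toy sketch (not the paper's model, which concerns the free-energy consistency of such corrections) treats the depression dchi_ev as a given ad hoc number:

```python
import numpy as np

ME, KB, H, EV = 9.109e-31, 1.381e-23, 6.626e-34, 1.602e-19  # SI units

def saha_ionization_fraction(T, n_tot, chi_ev=13.6, dchi_ev=0.0):
    """Ionization fraction x of a hydrogen plasma from the Saha equation.

    dchi_ev is an (ad hoc) ionization-potential depression in eV; for
    hydrogen the statistical-weight factor 2*g_i/g_0 equals 1.
    """
    S = (2.0 * np.pi * ME * KB * T / H**2) ** 1.5 \
        * np.exp(-(chi_ev - dchi_ev) * EV / (KB * T))
    r = S / n_tot
    # x^2 / (1 - x) = r  ->  x^2 + r*x - r = 0, positive root:
    return (-r + np.sqrt(r * r + 4.0 * r)) / 2.0

x0 = saha_ionization_fraction(T=15000.0, n_tot=1e26)
x1 = saha_ionization_fraction(T=15000.0, n_tot=1e26, dchi_ev=1.0)
# lowering the effective ionization potential raises the ionization fraction
```

    The paper's point is that such a dchi_ev is thermodynamically consistent only if it can be derived from a modification of the free-energy function; the criterion tests exactly that.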

  5. The standard lateral gene transfer model is statistically consistent for pectinate four-taxon trees

    DEFF Research Database (Denmark)

    Sand, Andreas; Steel, Mike

    2013-01-01

    Evolutionary events such as incomplete lineage sorting and lateral gene transfers constitute major problems for inferring species trees from gene trees, as they can sometimes lead to gene trees which conflict with the underlying species tree. One particularly simple and efficient way to infer species trees from gene trees under such conditions is to combine three-taxon analyses for several genes using a majority vote approach. For incomplete lineage sorting this method is known to be statistically consistent; however, for lateral gene transfers it was recently shown that a zone of inconsistency exists for a specific four-taxon tree topology, and it was posed as an open question whether inconsistencies could exist for other four-taxon tree topologies. In this letter we analyze all remaining four-taxon topologies and show that no other inconsistencies exist.

  6. Self-consistent description of the isospin mixing

    International Nuclear Information System (INIS)

    Gabrakov, S.I.; Pyatov, N.I.; Baznat, M.I.; Salamov, D.I.

    1978-03-01

    The properties of collective 0+ states built of unlike particle-hole excitations in spherical nuclei have been investigated in a self-consistent microscopic approach. These states arise when the broken isospin symmetry of the nuclear shell model Hamiltonian is restored. The numerical calculations were performed with Woods-Saxon wave functions

  7. Why Different Drought Indexes Show Distinct Future Drought Risk Outcomes in the U.S. Great Plains?

    Science.gov (United States)

    Feng, S.; Hayes, M. J.; Trnka, M.

    2015-12-01

    Vigorous discussions and disagreements about future changes in drought intensity in the US Great Plains have been taking place recently within the literature. These discussions have involved widely varying estimates based on drought indices and model-based projections of the future. To investigate and understand the causes of such a disparity between these previous estimates, we analyzed 10 commonly-used drought indexes using the output from 26 state-of-the-art climate models. These drought indices were computed using potential evapotranspiration estimated by the physically-based Penman-Monteith method (PE_pm) and the empirically-based Thornthwaite method (PE_th). The results showed that the short-term drought indicators are similar to modeled surface soil moisture and show a small but consistent drying trend in the future. The long-term drought indicators and the total column soil moisture, however, are consistent in projecting more intense future drought. When normalized, the drought indices with PE_th all show unprecedented and possibly unrealistic future drying, while the drought indices with PE_pm show dryness comparable to the modeled soil moisture. Additionally, the drought indices with PE_pm are closely related to soil moisture during both the 20th and 21st Centuries. Overall, the drought indices with PE_pm, as well as the modeled total column soil moisture, suggest a widespread and very significant drying of the Great Plains region toward the end of the Century. Our results suggest that the sharp contrasts in future drought risk for the Great Plains discussed in previous studies are caused by 1) comparing the projected changes in short-term droughts with those of the long-term droughts, and/or 2) computing the atmospheric evaporative demand using the empirically-based method (e.g., PE_th). Our analysis may be applied for drought projections in other regions across the globe.
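    The empirical Thornthwaite method referenced above estimates potential evapotranspiration from mean monthly temperature alone, which is precisely why it overreacts to warming compared with the physically-based Penman-Monteith method (which also uses radiation, humidity, and wind). A sketch of the standard temperature-only formula, omitting the day-length correction for brevity and using a hypothetical climatology, is:

```python
import numpy as np

def thornthwaite_pet(monthly_temps_c):
    """Empirical Thornthwaite PET (mm/month) from mean monthly temperature only.

    Ignores the day-length/latitude correction factor for brevity.
    """
    t = np.maximum(np.asarray(monthly_temps_c, dtype=float), 0.0)
    I = ((t / 5.0) ** 1.514).sum()                       # annual heat index
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return 16.0 * (10.0 * t / I) ** a

# hypothetical mid-latitude continental climatology (deg C, Jan..Dec)
temps = [2, 4, 8, 13, 18, 23, 26, 25, 20, 14, 8, 3]
pet = thornthwaite_pet(temps)
```

    Because PET here depends only on temperature, any warming trend inflates the estimated evaporative demand, which is the mechanism the abstract identifies behind the unrealistically severe PE_th-based drying projections.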

  8. Thermodynamically self-consistent theory for the Blume-Capel model.

    Science.gov (United States)

    Grollau, S; Kierlik, E; Rosinberg, M L; Tarjus, G

    2001-04-01

    We use a self-consistent Ornstein-Zernike approximation to study the Blume-Capel ferromagnet on three-dimensional lattices. The correlation functions and the thermodynamics are obtained from the solution of two coupled partial differential equations. The theory provides a comprehensive and accurate description of the phase diagram in all regions, including the wing boundaries in a nonzero magnetic field. In particular, the coordinates of the tricritical point are in very good agreement with the best estimates from simulation or series expansion. Numerical and analytical analysis strongly suggest that the theory predicts a universal Ising-like critical behavior along the lambda line and the wing critical lines, and a tricritical behavior governed by mean-field exponents.

  9. Do we really use rainfall observations consistent with reality in hydrological modelling?

    Science.gov (United States)

    Ciampalini, Rossano; Follain, Stéphane; Raclot, Damien; Crabit, Armand; Pastor, Amandine; Moussa, Roger; Le Bissonnais, Yves

    2017-04-01

    Spatial and temporal patterns in rainfall control how water reaches the soil surface and interacts with soil properties (i.e., soil wetting, infiltration, saturation). Once a hydrological event is defined by a rainfall with its spatiotemporal variability and by environmental parameters such as soil properties (including land use, topographic and anthropic features), the evidence shows that each parameter variation produces different, specific outputs (e.g., runoff, flooding, etc.). In this study, we focus on the effect of rainfall patterns because, owing to the difficulty of obtaining detailed data, their influence in modelling is frequently underestimated or neglected. A rainfall event affects a catchment non-uniformly: it is spatially localized and its pattern moves in space and time. How and when the water reaches the soil and saturates it, relative to the geometry of the catchment, deeply influences soil saturation, runoff, and hence sediment delivery. This research, approaching a hypothetical, simple case, aims to stimulate the debate on the reliability of the rainfall data used in hydrological / soil erosion modelling. We test, on a small catchment of the south of France (Roujan, Languedoc-Roussillon), the influence of rainfall variability with the use of an HD hybrid hydrological - soil erosion model, combining a kinematic wave with the St. Venant equation and a simplified "bucket" conceptual model for groundwater, able to quantify the effect of different spatiotemporal patterns of a very-high-definition synthetic rainfall. Results indicate that rainfall spatiotemporal patterns are crucial when simulating an erosive event: differences between spatially uniform rainfalls, as frequently adopted in simulations, and some hypothetical rainfall patterns applied here reveal that the outcome of a simulated event can be highly underestimated.

  10. Self-consistent gyrokinetic modeling of neoclassical and turbulent impurity transport

    OpenAIRE

    Estève , D. ,; Sarazin , Y.; Garbet , X.; Grandgirard , V.; Breton , S. ,; Donnel , P. ,; Asahi , Y. ,; Bourdelle , C.; Dif-Pradalier , G; Ehrlacher , C.; Emeriau , C.; Ghendrih , Ph; Gillot , C.; Latu , G.; Passeron , C.

    2018-01-01

    International audience; Trace impurity transport is studied with the flux-driven gyrokinetic GYSELA code [V. Grandgirard et al., Comp. Phys. Commun. 207, 35 (2016)]. A reduced and linearized multi-species collision operator has been recently implemented, so that both neoclassical and turbulent transport channels can be treated self-consistently on an equal footing. In the Pfirsch-Schlüter regime likely relevant for tungsten, the standard expression of the neoclassical impurity flux is shown t...

  11. Student Effort, Consistency and Online Performance

    Directory of Open Access Journals (Sweden)

    Hilde Patron

    2011-07-01

    Full Text Available This paper examines how student effort, consistency, motivation, and marginal learning influence student grades in an online course. We use data from eleven Microeconomics courses taught online to a total of 212 students. Our findings show that consistency, or less time variation, is a statistically significant explanatory variable, whereas effort, or total minutes spent online, is not. Other independent variables include GPA and the difference between a pre-test and a post-test. GPA is used as a measure of motivation, and the post-test/pre-test difference as a measure of marginal learning. As expected, the level of motivation is statistically significant at the 99% confidence level, and marginal learning is also significant at the 95% level.
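A regression of this design can be sketched with ordinary least squares on synthetic data. Everything below (variable names, effect sizes, noise levels) is invented for illustration and is not the paper's data; it only mirrors the qualitative pattern the abstract reports.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 212  # sample size matching the study; the data themselves are synthetic

gpa = rng.uniform(2.0, 4.0, n)               # proxy for motivation
marginal = rng.normal(10.0, 5.0, n)          # post-test minus pre-test score
total_minutes = rng.normal(600.0, 120.0, n)  # "effort": total minutes online
time_sd = rng.normal(30.0, 8.0, n)           # study-time variation (low = consistent)

# Simulated grades depend on motivation, marginal learning, and consistency,
# but not on raw effort -- mirroring the paper's qualitative finding.
grade = 40 + 10 * gpa + 0.5 * marginal - 0.3 * time_sd + rng.normal(0, 3, n)

X = np.column_stack([np.ones(n), gpa, marginal, total_minutes, time_sd])
beta, *_ = np.linalg.lstsq(X, grade, rcond=None)
print(dict(zip(["const", "gpa", "marginal", "minutes", "time_sd"], beta.round(2))))
```

In this sketch the fitted coefficient on `total_minutes` comes out near zero while the coefficient on `time_sd` is reliably negative, which is the effort-versus-consistency pattern the study describes.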

  12. Overview of the Special Issue: A Multi-Model Framework to Achieve Consistent Evaluation of Climate Change Impacts in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Waldhoff, Stephanie T.; Martinich, Jeremy; Sarofim, Marcus; DeAngelo, B. J.; McFarland, Jim; Jantarasami, Lesley; Shouse, Kate C.; Crimmins, Allison; Ohrel, Sara; Li, Jia

    2015-07-01

    The Climate Change Impacts and Risk Analysis (CIRA) modeling exercise is a unique contribution to the scientific literature on climate change impacts, economic damages, and risk analysis that brings together multiple, national-scale models of impacts and damages in an integrated and consistent fashion to estimate climate change impacts, damages, and the benefits of greenhouse gas (GHG) mitigation actions in the United States. The CIRA project uses three consistent socioeconomic, emissions, and climate scenarios across all models to estimate the benefits of GHG mitigation policies: a Business-As-Usual (BAU) scenario and two policy scenarios with radiative forcing (RF) stabilization targets of 4.5 W/m2 and 3.7 W/m2 in 2100. CIRA was also designed to specifically examine the sensitivity of results to uncertainties around climate sensitivity and differences in model structure. The goals of the CIRA project are to 1) build a multi-model framework to produce estimates of multiple risks and impacts in the U.S., 2) determine to what degree risks and damages across sectors may be lowered from the BAU to the policy scenarios, 3) evaluate key sources of uncertainty along the causal chain, and 4) provide information for multiple audiences and clearly communicate the risks and damages of climate change and the potential benefits of mitigation. This paper describes the motivations, goals, and design of the CIRA modeling exercise and introduces the subsequent papers in this special issue.

  13. Self-consistent calculation of steady-state creep and growth in textured zirconium

    International Nuclear Information System (INIS)

    Tome, C.N.; So, C.B.; Woo, C.H.

    1993-01-01

    Irradiation creep and growth in zirconium alloys result in anisotropic dimensional changes relative to the crystallographic axes of each individual grain. Several methods have been proposed to model such dimensional changes, taking into account the development of intergranular stresses. In this paper, we compare the predictions of several such models, namely the upper-bound, the lower-bound, the isotropic K* self-consistent (analytical) and the fully self-consistent (numerical) models. For given single-crystal creep compliances and growth factors, the polycrystal compliances predicted by the upper- and lower-bound models are unreliable. The predictions of the two self-consistent approaches are usually similar. The analytical isotropic K* approach is simple to implement and can be used to estimate the creep and growth rates of the polycrystal in many cases. The numerical fully self-consistent approach should be used when an accurate prediction of polycrystal creep is required, particularly for the important case of a closed-end internally pressurized tube. In most cases, variations in grain shape introduce only minor corrections to the behaviour of polycrystalline materials. (author)

  14. The Devil in the Dark: A Fully Self-Consistent Seismic Model for Venus

    Science.gov (United States)

    Unterborn, C. T.; Schmerr, N. C.; Irving, J. C. E.

    2017-12-01

    The bulk composition and structure of Venus is unknown despite accounting for 40% of the mass of all the terrestrial planets in our Solar System. As we expand the scope of planetary science to include those planets around other stars, the lack of measurements of basic planetary properties such as moment of inertia, core-size and thermal profile for Venus hinders our ability to compare the potential uniqueness of the Earth and our Solar System to other planetary systems. Here we present fully self-consistent, whole-planet density and seismic velocity profiles calculated using the ExoPlex and BurnMan software packages for various potential Venusian compositions. Using these models, we explore the seismological implications of the different thermal and compositional initial conditions, taking into account phase transitions due to changes in pressure, temperature as well as composition. Using mass-radius constraints, we examine both the centre frequencies of normal mode oscillations and the waveforms and travel times of body waves. Seismic phases which interact with the core, phase transitions in the mantle, and shallower parts of Venus are considered. We also consider the detectability and transmission of these seismic waves from within the dense atmosphere of Venus. Our work provides coupled compositional-seismological reference models for the terrestrial planet in our Solar System of which we know the least. Furthermore, these results point to the potential wealth of fundamental scientific insights into Venus and Earth, as well as exoplanets, which could be gained by including a seismometer on future planetary exploration missions to Venus, the devil in the dark.

  15. Protective Factors, Risk Indicators, and Contraceptive Consistency Among College Women.

    Science.gov (United States)

    Morrison, Leslie F; Sieving, Renee E; Pettingell, Sandra L; Hellerstedt, Wendy L; McMorris, Barbara J; Bearinger, Linda H

    2016-01-01

    To explore risk and protective factors associated with consistent contraceptive use among emerging adult female college students and whether effects of risk indicators were moderated by protective factors. Secondary analysis of National Longitudinal Study of Adolescent to Adult Health Wave III data. Data collected through in-home interviews in 2001 and 2002. National sample of 18- to 25-year-old women (N = 842) attending 4-year colleges. We examined relationships between protective factors, risk indicators, and consistent contraceptive use. Consistent contraceptive use was defined as use all of the time during intercourse in the past 12 months. Protective factors included external supports of parental closeness and relationship with caring nonparental adult and internal assets of self-esteem, confidence, independence, and life satisfaction. Risk indicators included heavy episodic drinking, marijuana use, and depression symptoms. Multivariable logistic regression models were used to evaluate relationships between protective factors and consistent contraceptive use and between risk indicators and contraceptive use. Self-esteem, confidence, independence, and life satisfaction were significantly associated with more consistent contraceptive use. In a final model including all internal assets, life satisfaction was significantly related to consistent contraceptive use. Marijuana use and depression symptoms were significantly associated with less consistent use. With one exception, protective factors did not moderate relationships between risk indicators and consistent use. Based on our findings, we suggest that risk and protective factors may have largely independent influences on consistent contraceptive use among college women. A focus on risk and protective factors may improve contraceptive use rates and thereby reduce unintended pregnancy among college students.
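As a sketch of the multivariable logistic models described above, here is a minimal logistic regression fit by gradient ascent on synthetic stand-in data. The predictors, effect sizes, and prevalences are invented for the sketch and are not the study's variables or estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 842  # sample size from the study; the records below are synthetic

life_satisfaction = rng.normal(0.0, 1.0, n)  # protective factor (standardized)
marijuana_use = rng.binomial(1, 0.25, n)     # risk indicator
# Simulated outcome: satisfaction raises, and marijuana use lowers, the odds
# of consistent contraceptive use (coefficients chosen for the sketch).
logit = -0.2 + 0.8 * life_satisfaction - 0.9 * marijuana_use
consistent = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Fit by plain gradient ascent on the average log-likelihood
X = np.column_stack([np.ones(n), life_satisfaction, marijuana_use])
beta = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (consistent - p) / n
print(beta.round(2))  # [intercept, protective effect, risk effect]
```

The recovered coefficients have the signs the abstract reports: positive for the protective factor, negative for the risk indicator.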

  16. Self-consistent Random Phase Approximation applied to a schematic model of the field theory; Approximation des phases aleatoires self-consistante appliquee a un modele schematique de la theorie des champs

    Energy Technology Data Exchange (ETDEWEB)

    Bertrand, Thierry [Inst. de Physique Nucleaire, Lyon-1 Univ., 69 - Villeurbanne (France)

    1998-12-11

    The self-consistent Random Phase Approximation (SCRPA) is a method that allows correlations to be included in the ground and excited states within a mean-field framework. It has the advantage of not violating the Pauli principle, in contrast to RPA, which is based on the quasi-bosonic approximation; in addition, numerous applications in different domains of physics suggest a possible variational character, although the latter remains to be formally demonstrated. The first model studied with SCRPA is the anharmonic oscillator in the region where one of its symmetries is spontaneously broken. The ground-state energy is reproduced more accurately by SCRPA than by RPA, with no violation of the Ritz variational principle, which is not the case for the latter approximation. SCRPA is equally successful for the ground-state energy of a model mixing bosons and fermions. At the transition point SCRPA corrects RPA drastically, but far from this region the correction becomes negligible, both methods being of similar precision. In the deformed region a spurious mode appears in RPA due to the microscopic character of the model; SCRPA reproduces this mode very accurately, and it actually coincides with an excitation in the exact spectrum. 40 refs., 33 figs., 14 tabs.

  17. Calculation of the self-consistent current distribution and coupling of an RF antenna array

    International Nuclear Information System (INIS)

    Ballico, M.; Puri, S.

    1993-10-01

    A self-consistent calculation of the antenna current distribution and fields in an axisymmetric cylindrical geometry for the ICRH antenna-plasma coupling problem is presented. Several features distinguish this calculation from other codes presently available. 1. Variational form: formulating the self-consistent antenna current problem in variational form gives good convergence and stability of the algorithm. 2. Multiple straps: this allows modelling of (a) the current distribution across the width of the strap (by dividing it into sub-straps), (b) side limiters and the septum, and (c) antenna cross-coupling. 3. Analytic calculation of the antenna field, together with calculation of the self-consistent antenna current distribution (given the surface impedance matrix), makes the computation rapid. 4. The code is framed for parallel computation on several different parallel architectures (as well as serial), giving a large speed improvement to the user. Results are presented for both Alfven wave heating and current drive antenna arrays, showing that optimal coupling is achieved for toroidal mode numbers 8 < n < 10 for typical ASDEX Upgrade plasmas. Simulations of the ASDEX Upgrade antenna show the importance of the current distribution across the antenna and of image currents flowing in the side limiters, and an analysis of a proposed asymmetric ITER antenna is presented. (orig.)

  18. Consistent Evolution of Software Artifacts and Non-Functional Models

    Science.gov (United States)

    2014-11-14

    Subject terms: Model-Driven Engineering (MDE), Software Performance Engineering (SPE), Change Propagation, Performance Antipatterns. Contact: Vittorio Cortellessa, Università degli Studi dell'Aquila, Via Vetoio, 67100 L'Aquila, Italy; vittorio.cortellessa@univaq.it; http://www.di.univaq.it/cortelle/

  19. Phase models of galaxies consisting of disk and halo

    International Nuclear Information System (INIS)

    Osipkov, L.P.; Kutuzov, S.A.

    1987-01-01

    A method of finding the phase density of a two-component model of mass distribution is developed. The equipotential surfaces and the potential law are given. The equipotentials are lens-like surfaces with a sharp edge in the equatorial plane, which provides for a thin disk embedded in the halo. The equidensity surfaces of the halo coincide with the equipotentials. Phase models for the halo and the disk are constructed separately on the basis of spatial and surface mass densities by solving the corresponding integral equations. In particular, models for a halo of finite dimensions can be constructed. Only the even part of the phase density with respect to velocities is found.

  20. A self-consistent theory of the magnetic polaron

    International Nuclear Information System (INIS)

    Marvakov, D.I.; Kuzemsky, A.L.; Vlahov, J.P.

    1984-10-01

    A finite-temperature self-consistent theory of the magnetic polaron in the s-f model of ferromagnetic semiconductors is developed. The calculations are based on a novel approach to the thermodynamic two-time Green function method. This approach consists of introducing ''irreducible'' Green functions (IGF) and deriving the exact Dyson equation and exact self-energy operator. It is shown that the IGF method gives a unified and natural approach for calculating the magnetic polaron states, taking explicitly into account damping effects and finite lifetime. (author)

  1. Self-consistent approach to x-ray reflection from rough surfaces

    International Nuclear Information System (INIS)

    Feranchuk, I. D.; Feranchuk, S. I.; Ulyanenkov, A. P.

    2007-01-01

    A self-consistent analytical approach for specular x-ray reflection from interfaces with transition layers [I. D. Feranchuk et al., Phys. Rev. B 67, 235417 (2003)] based on the distorted-wave Born approximation (DWBA) is used for the description of coherent and incoherent x-ray scattering from rough surfaces and interfaces. This approach takes into account the transformation of the model transition-layer profile at the interface, which is caused by roughness correlations. The reflection coefficients for each DWBA order are calculated directly, without phenomenological assumptions on their exponential decay at large scattering angles. Various regions of scattering angles are discussed, which show qualitatively different dependences of the reflection coefficient on the scattering angle. The experimental data are analyzed using the method developed.

  2. Using open sidewalls for modelling self-consistent lithosphere subduction dynamics

    Directory of Open Access Journals (Sweden)

    M. V. Chertova

    2012-10-01

    Full Text Available Subduction modelling in regional model domains, in 2-D or 3-D, is commonly performed using closed (impermeable) vertical boundaries. Here we investigate the merits of using open boundaries for 2-D modelling of lithosphere subduction. Our experiments focus on using open and closed (free-slip) sidewalls while comparing results for two model aspect ratios, 3:1 and 6:1. Slab-buoyancy-driven subduction with open boundaries and free plates immediately develops into strong rollback with high trench-retreat velocities and predominantly laminar asthenospheric flow. In contrast, free-slip sidewalls prove highly restrictive on subduction rollback evolution, unless the lithosphere plates are allowed to move away from the sidewalls. This initiates return flows that push both plates toward the subduction zone, speeding up subduction. Increasing the aspect ratio to 6:1 does not change the overall flow pattern when using open sidewalls, but only the flow magnitude. In contrast, for free-slip boundaries the slab evolution does change with respect to the 3:1 aspect-ratio model and does not resemble the evolution obtained with open boundaries at 6:1 aspect ratio. For models with open side boundaries, we could develop a flow-speed scaling based on energy-dissipation arguments to convert between flow fields of different model aspect ratios. We have also investigated incorporating the effect of far-field generated lithosphere stress in our open-boundary models. By applying realistic normal stress conditions to the strong part of the overriding plate at the sidewalls, we can transfer intraplate stress to influence subduction dynamics, varying from slab rollback, through stationary subduction, to advancing subduction. The relative independence of the flow field of model aspect ratio allows for a smaller modelling domain. Open boundaries allow subduction to evolve freely and avoid the adverse effects (e.g. forced return flows) of free-slip boundaries.

  3. Occlusion-Aware Fragment-Based Tracking With Spatial-Temporal Consistency.

    Science.gov (United States)

    Sun, Chong; Wang, Dong; Lu, Huchuan

    2016-08-01

    In this paper, we present a robust tracking method by exploiting a fragment-based appearance model with consideration of both temporal continuity and discontinuity information. From the perspective of probability theory, the proposed tracking algorithm can be viewed as a two-stage optimization problem. In the first stage, by adopting the estimated occlusion state as a prior, the optimal state of the tracked object can be obtained by solving an optimization problem, where the objective function is designed based on the classification score, occlusion prior, and temporal continuity information. In the second stage, we propose a discriminative occlusion model, which exploits both foreground and background information to detect the possible occlusion, and also models the consistency of occlusion labels among different frames. In addition, a simple yet effective training strategy is introduced during the model training (and updating) process, with which the effects of spatial-temporal consistency are properly weighted. The proposed tracker is evaluated by using the recent benchmark data set, on which the results demonstrate that our tracker performs favorably against other state-of-the-art tracking algorithms.

  4. Interfacial tension and wettability in water-carbon dioxide systems: Experiments and self-consistent field modeling

    NARCIS (Netherlands)

    Banerjee, S.; Hassenklover, E.; Kleijn, J.M.; Cohen Stuart, M.A.; Leermakers, F.A.M.

    2013-01-01

    This paper presents experimental and modeling results on water–CO2 interfacial tension (IFT) together with wettability studies of water on both hydrophilic and hydrophobic surfaces immersed in CO2. The CO2–water IFT measurements showed that the IFT decreased with increasing

  5. Homogenization of linearly anisotropic scattering cross sections in a consistent B1 heterogeneous leakage model

    International Nuclear Information System (INIS)

    Marleau, G.; Debos, E.

    1998-01-01

    One of the main problems encountered in cell calculations is that of spatial homogenization, where one associates with a heterogeneous cell a homogeneous set of cross sections. The homogenization process is in fact trivial when a totally reflected cell without leakage is fully homogenized, since it involves only a flux-volume weighting of the isotropic cross sections. When anisotropic leakage models are considered, in addition to homogenizing the isotropic cross sections, the anisotropic scattering cross section must also be considered. The simple option, which consists of using the same homogenization procedure for both the isotropic and anisotropic components of the scattering cross section, leads to inconsistencies between the homogeneous and homogenized transport equations. Here we present a method for homogenizing the anisotropic scattering cross sections that resolves these inconsistencies. (author)
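The flux-volume weighting that the abstract calls trivial for the isotropic case takes only a few lines. The region volumes, fluxes, and cross sections below are illustrative numbers, not values from the paper, whose point is precisely that this simple recipe fails for the anisotropic scattering component.

```python
import numpy as np

# Hypothetical three-region cell: volumes, region-averaged scalar fluxes,
# and isotropic total cross sections (illustrative values only)
volume = np.array([1.0, 2.0, 0.5])    # cm^3
flux = np.array([1.2, 0.9, 0.6])      # relative scalar flux
sigma = np.array([0.30, 0.55, 1.10])  # cm^-1

# Flux-volume weighting: Sigma_hom = sum(phi*V*Sigma) / sum(phi*V)
weight = flux * volume
sigma_hom = np.sum(weight * sigma) / np.sum(weight)
print(round(sigma_hom, 4))  # 0.5091
```

The homogenized value lands between the region cross sections, pulled toward the regions with the largest flux-volume product.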

  6. Hunted woolly monkeys (Lagothrix poeppigii) show threat-sensitive responses to human presence.

    Directory of Open Access Journals (Sweden)

    Sarah Papworth

    Full Text Available Responding only to individuals of a predator species that display threatening behaviour allows prey species to minimise energy expenditure and other costs of predator avoidance, such as disruption of feeding. The threat sensitivity hypothesis predicts such behaviour in prey species. If hunted animals are unable to distinguish dangerous humans from non-dangerous humans, human hunting is likely to have a greater effect on prey populations, as all human encounters should lead to predator avoidance, increasing stress and creating opportunity costs for exploited populations. We test the threat sensitivity hypothesis in wild Poeppigi's woolly monkeys (Lagothrix poeppigii) in Yasuní National Park, Ecuador, by presenting human models engaging in one of three behaviours: "hunting", "gathering" or "researching". These experiments were conducted at two sites with differing hunting pressures. Visibility, movement and vocalisations were recorded, and results from the two sites showed that groups changed their behaviours after being exposed to humans, and did so in different ways depending on the behaviour of the human model. Results at the site with higher hunting pressure were consistent with predictions based on the threat sensitivity hypothesis. Although results at the site with lower hunting pressure were not consistent with those at the site with higher hunting pressure, groups at this site also showed differential responses to different human behaviours. These results provide evidence of threat-sensitive predator avoidance in hunted primates, which may allow them to conserve both time and energy when encountering humans who pose no threat.

  7. Self-consistent perturbed equilibrium with neoclassical toroidal torque in tokamaks

    International Nuclear Information System (INIS)

    Park, Jong-Kyu; Logan, Nikolas C.

    2017-01-01

    Toroidal torque is one of the most important consequences of non-axisymmetric fields in tokamaks. The well-known neoclassical toroidal viscosity (NTV) is due to the second-order toroidal force from anisotropic pressure tensor in the presence of these asymmetries. This work shows that the first-order toroidal force originating from the same anisotropic pressure tensor, despite having no flux surface average, can significantly modify the local perturbed force balance and thus must be included in perturbed equilibrium self-consistent with NTV. The force operator with an anisotropic pressure tensor is not self-adjoint when the NTV torque is finite and thus is solved directly for each component. This approach yields a modified, non-self-adjoint Euler-Lagrange equation that can be solved using a variety of common drift-kinetic models in generalized tokamak geometry. The resulting energy and torque integral provides a unique way to construct a torque response matrix, which contains all the information of self-consistent NTV torque profiles obtainable by applying non-axisymmetric fields to the plasma. This torque response matrix can then be used to systematically optimize non-axisymmetric field distributions for desired NTV profiles. Published by AIP Publishing.
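Once a torque response matrix is in hand, the optimization step the abstract mentions reduces to linear algebra: for a torque of the quadratic form T = b†Mb with Hermitian M, the applied-field spectrum maximizing torque per unit field amplitude is the leading eigenvector. The 3x3 matrix below is random and purely illustrative, not a real plasma response.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented Hermitian "torque response matrix" M for 3 coil degrees of freedom:
# the torque from an applied field spectrum b is T = b^H M b.
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M = (A + A.conj().T) / 2  # Hermitian by construction

# Rayleigh-quotient argument: the maximum of b^H M b over |b| = 1 is the
# largest eigenvalue, attained at the corresponding eigenvector.
w, V = np.linalg.eigh(M)  # eigenvalues in ascending order
b_opt = V[:, -1]
T = float(np.real(b_opt.conj() @ M @ b_opt))
print(round(T, 3))
```

The same machinery generalizes to shaping a desired torque profile: stack one response matrix per flux surface and optimize the field spectrum against the stacked quadratic forms.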

  8. On the existence of consistent price systems

    DEFF Research Database (Denmark)

    Bayraktar, Erhan; Pakkanen, Mikko S.; Sayit, Hasanjan

    2014-01-01

    We formulate a sufficient condition for the existence of a consistent price system (CPS), which is weaker than the conditional full support condition (CFS). We use the new condition to show the existence of CPSs for certain processes that fail to have the CFS property. In particular this condition...

  9. Observable Signatures of Wind-driven Chemistry with a Fully Consistent Three-dimensional Radiative Hydrodynamics Model of HD 209458b

    Science.gov (United States)

    Drummond, B.; Mayne, N. J.; Manners, J.; Carter, A. L.; Boutle, I. A.; Baraffe, I.; Hébrard, É.; Tremblin, P.; Sing, D. K.; Amundsen, D. S.; Acreman, D.

    2018-03-01

    We present a study of the effect of wind-driven advection on the chemical composition of hot-Jupiter atmospheres using a fully consistent 3D hydrodynamics, chemistry, and radiative transfer code, the Met Office Unified Model (UM). Chemical modeling of exoplanet atmospheres has primarily been restricted to 1D models that cannot account for 3D dynamical processes. In this work, we couple a chemical relaxation scheme to the UM to account for the chemical interconversion of methane and carbon monoxide. This is done consistently with the radiative transfer, meaning that departures from chemical equilibrium are included in the heating rates (and emission) and hence complete the feedback between the dynamics, thermal structure, and chemical composition. In this Letter, we simulate the well-studied atmosphere of HD 209458b. We find that the combined effect of horizontal and vertical advection leads to an increase in the methane abundance by several orders of magnitude, which is directly opposite to the trend found in previous works. Our results demonstrate the need to include 3D effects when considering the chemistry of hot-Jupiter atmospheres. We calculate transmission and emission spectra, as well as the emission phase curve, from our simulations. We conclude that gas-phase nonequilibrium chemistry is unlikely to explain the model–observation discrepancy in the 4.5 μm Spitzer/IRAC channel. However, we highlight other spectral regions, observable with the James Webb Space Telescope, where signatures of wind-driven chemistry are more prominent.
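The chemical relaxation scheme referred to above drives an advected abundance toward its local equilibrium value on a prescribed chemical timescale. A toy integration of that idea follows; all numbers are invented and are not the UM's CH4/CO scheme parameters.

```python
# Relaxation toward chemical equilibrium: dx/dt = -(x - x_eq) / tau
tau = 1.0e4    # chemical relaxation timescale (s), hypothetical
dt = 1.0e2     # dynamical time step (s), hypothetical
x_eq = 1.0e-6  # local equilibrium CH4 mole fraction, hypothetical
x = 1.0e-4     # current CH4 mole fraction after advection, hypothetical

for _ in range(1000):  # integrate forward 10 relaxation times
    x += -dt * (x - x_eq) / tau

print(f"{x:.3e}")  # essentially back at the equilibrium value
```

Where the dynamical timescale of the winds is shorter than tau, advection wins and the abundance is quenched away from equilibrium; where tau is shorter, chemistry wins. That competition is what produces the wind-driven methane enhancement the abstract describes.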

  10. Self-consistency of a heterogeneous continuum porous medium representation of a fractured medium

    International Nuclear Information System (INIS)

    Hoch, A.R.; Jackson, C.P.; Todman, S.

    1998-01-01

    For many of the rocks that are, or have been, under investigation as potential host rocks for a radioactive waste repository, groundwater flow is considered to take place predominantly through discontinuities such as fractures. Although models of networks of discrete features (DFN models) would be the most realistic models for such rocks, calculations on large length scales would not be computationally practicable. A possible approach would be to use heterogeneous continuum porous-medium (CPM) models in which each block has an effective permeability appropriate to represent the network of features within the block. In order to build confidence in this approach, it is necessary to demonstrate that the approach is self-consistent, in the sense that if the effective permeability on a large length scale is derived using the CPM model, the result is close to the value derived directly from the underlying network model. It is also desirable to demonstrate self-consistency for the use of stochastic heterogeneous CPM models that are built as follows. The correlation structure of the effective permeability on the scale of the blocks is inferred by analysis of the effective permeabilities obtained from the underlying DFN model. Then realizations of the effective permeability within the domain of interest are generated on the basis of the correlation structure, rather than being obtained directly from the underlying DFN model. A study of self-consistency is presented for two very different underlying DFN models: one based on the properties of the Borrowdale Volcanic Group at Sellafield, and one based on the properties of the granite at Aespoe in Sweden. It is shown that, in both cases, the use of heterogeneous CPM models based directly on the DFN model is self-consistent, provided that care is taken in the evaluation of the effective permeability for the DFN models. It is also shown that the use of stochastic heterogeneous CPM models based on the correlation structure of the

  11. Brain Stimulation Reward Supports More Consistent and Accurate Rodent Decision-Making than Food Reward.

    Science.gov (United States)

    McMurray, Matthew S; Conway, Sineadh M; Roitman, Jamie D

    2017-01-01

    Animal models of decision-making rely on an animal's motivation to decide and its ability to detect differences among various alternatives. Food reinforcement, although commonly used, is associated with problematic confounds, especially satiety. Here, we examined the use of brain stimulation reward (BSR) as an alternative reinforcer in rodent models of decision-making and compared it with the effectiveness of sugar pellets. The discriminability of various BSR frequencies was compared to differing numbers of sugar pellets in separate free-choice tasks. We found that BSR was more discriminable and motivated greater task engagement and more consistent preference for the larger reward. We then investigated whether rats prefer BSR of varying frequencies over sugar pellets. We found that animals showed either a clear preference for sugar reward or no preference between reward modalities, depending on the frequency of the BSR alternative and the size of the sugar reward. Overall, these results suggest that BSR is an effective reinforcer in rodent decision-making tasks, removing food-related confounds and resulting in more accurate, consistent, and reliable metrics of choice.

  12. Laboratory simulations show diabatic heating drives cumulus-cloud evolution and entrainment

    Science.gov (United States)

    Narasimha, Roddam; Diwan, Sourabh Suhas; Duvvuri, Subrahmanyam; Sreenivas, K. R.; Bhat, G. S.

    2011-01-01

    Clouds are the largest source of uncertainty in climate science, and remain a weak link in modeling tropical circulation. A major challenge is to establish connections between particulate microphysics and macroscale turbulent dynamics in cumulus clouds. Here we address the issue from the latter standpoint. First we show how to create bench-scale flows that reproduce a variety of cumulus-cloud forms (including two genera and three species), and track complete cloud life cycles—e.g., from a “cauliflower” congestus to a dissipating fractus. The flow model used is a transient plume with volumetric diabatic heating scaled dynamically to simulate latent-heat release from phase changes in clouds. Laser-based diagnostics of steady plumes reveal Riehl–Malkus type protected cores. They also show that, unlike the constancy implied by early self-similar plume models, the diabatic heating raises the Taylor entrainment coefficient just above cloud base, depressing it at higher levels. This behavior is consistent with cloud-dilution rates found in recent numerical simulations of steady deep convection, and with aircraft-based observations of homogeneous mixing in clouds. In-cloud diabatic heating thus emerges as the key driver in cloud development, and could well provide a major link between microphysics and cloud-scale dynamics. PMID:21918112

  13. Hyperspectral band selection based on consistency-measure of neighborhood rough set theory

    International Nuclear Information System (INIS)

    Liu, Yao; Xie, Hong; Wang, Liguo; Tan, Kezhu; Chen, Yuehua; Xu, Zhen

    2016-01-01

    Band selection is a well-known approach for reducing dimensionality in hyperspectral imaging. In this paper, a band selection method based on consistency-measure of neighborhood rough set theory (CMNRS) was proposed to select informative bands from hyperspectral images. A decision-making information system was established by the reflection spectrum of soybeans’ hyperspectral data between 400 nm and 1000 nm wavelengths. The neighborhood consistency-measure, which reflects not only the size of the decision positive region, but also the sample distribution in the boundary region, was used as the evaluation function of band significance. The optimal band subset was selected by a forward greedy search algorithm. A post-pruning strategy was employed to overcome the over-fitting problem and find the minimum subset. To assess the effectiveness of the proposed band selection technique, two classification models (extreme learning machine (ELM) and random forests (RF)) were built. The experimental results showed that the proposed algorithm can effectively select key bands and obtain satisfactory classification accuracy. (paper)
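A forward greedy search driven by a consistency-style evaluation function, as described above, can be sketched as follows. This is an illustrative stand-in: the toy `consistency_score` below is a simple neighborhood label-purity surrogate, not the paper's CMNRS measure, and the data in the usage example are synthetic.

```python
import numpy as np

def consistency_score(X, y, bands, radius=0.1):
    """Toy neighborhood-consistency surrogate: fraction of samples whose
    neighbors (within `radius` in the selected-band subspace) all share
    the sample's label. Isolated samples count as consistent."""
    Xs = X[:, bands]
    n = len(y)
    consistent = 0
    for i in range(n):
        d = np.linalg.norm(Xs - Xs[i], axis=1)
        nbrs = (d <= radius) & (np.arange(n) != i)
        if not nbrs.any() or np.all(y[nbrs] == y[i]):
            consistent += 1
    return consistent / n

def forward_greedy_select(X, y, max_bands=5):
    """Greedily add, one at a time, the band that most improves the score;
    stop when no strict improvement is possible (a crude pruning step)."""
    selected, remaining = [], list(range(X.shape[1]))
    best = -1.0
    while remaining and len(selected) < max_bands:
        score, band = max((consistency_score(X, y, selected + [b]), b)
                          for b in remaining)
        if score <= best:
            break
        best = score
        selected.append(band)
        remaining.remove(band)
    return selected, best
```

On synthetic data where a single band separates the classes, the loop picks that band and stops once the score saturates.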

  14. Using open sidewalls for modelling self-consistent lithosphere subduction dynamics

    NARCIS (Netherlands)

    Chertova, M.V.; Geenen, T.; van den Berg, A.; Spakman, W.

    2012-01-01

    Subduction modelling in regional model domains, in 2-D or 3-D, is commonly performed using closed (impermeable) vertical boundaries. Here we investigate the merits of using open boundaries for 2-D modelling of lithosphere subduction. Our experiments are focused on using open and closed (free

  15. Consistency Analysis of Nearest Subspace Classifier

    OpenAIRE

    Wang, Yi

    2015-01-01

    The Nearest subspace classifier (NSS) finds an estimation of the underlying subspace within each class and assigns data points to the class that corresponds to its nearest subspace. This paper mainly studies how well NSS can be generalized to new samples. It is proved that NSS is strongly consistent under certain assumptions. For completeness, NSS is evaluated through experiments on various simulated and real data sets, in comparison with some other linear model based classifiers. It is also ...

  16. Stereotypes help people connect with others in the community: a situated functional analysis of the stereotype consistency bias in communication.

    Science.gov (United States)

    Clark, Anna E; Kashima, Yoshihisa

    2007-12-01

    Communicators tend to share more stereotype-consistent than stereotype-inconsistent information. The authors propose and test a situated functional model of this stereotype consistency bias: stereotype-consistent and inconsistent information differentially serve 2 central functions of communication--sharing information and regulating relationships; depending on the communication context, information seen to serve these different functions better is more likely communicated. Results showed that stereotype-consistent information is perceived as more socially connective but less informative than inconsistent information, and when the stereotype is perceived to be highly shared in the community, more stereotype-consistent than inconsistent information is communicated due to its greater social connectivity function. These results highlight the need to examine communication as a dynamic and situated social activity. (c) 2007 APA, all rights reserved.

  17. A consistent meson-theoretic description of the NN-interaction

    International Nuclear Information System (INIS)

    Machleidt, R.

    1985-01-01

    In this paper, the meson theory of the NN interaction is developed consistently. All irreducible diagrams up to a total exchanged mass of about 1 GeV (i.e. up to the cutoff region) are taken into account. These diagrams contain, in particular, an explicit field-theoretic model for the 2π exchange, taking into account virtual Δ-excitation and the direct ππ interaction. This part of the model agrees quantitatively with results obtained from dispersion theory, which in turn are based on the analysis of πN and ππ scattering data. A detailed description of the lower partial-wave phase shifts of NN scattering requires the introduction of irreducible diagrams that also contain heavy-boson exchange, in particular the combination of π and ρ. In the framework of this consistent meson theory, an accurate description of the NN scattering data below 300 MeV laboratory energy as well as of the deuteron data is achieved; the numerical results are superior to those of simplified boson-exchange models

  18. Self-consistent calculation of 208Pb spectrum

    International Nuclear Information System (INIS)

    Pal'chik, V.V.; Pyatov, N.I.; Fayans, S.A.

    1981-01-01

    The self-consistent model with exact accounting of the one-particle continuum is applied to calculate all discrete particle-hole natural-parity states of the 208Pb nucleus up to the neutron emission threshold, 7.4 MeV. Contributions to the energy-weighted sum rules S(EL) of the first collective levels, and the total contributions of all discrete levels, are evaluated. The collectivization is manifested most strongly for octupole states. With growing multipolarity L, the contributions of discrete levels are sharply reduced. The results are compared with other models and with experimental data obtained in (e, e') and (p, p') reactions

  19. A thermodynamically consistent model of shape-memory alloys

    Czech Academy of Sciences Publication Activity Database

    Benešová, Barbora

    2011-01-01

    Roč. 11, č. 1 (2011), s. 355-356 ISSN 1617-7061 R&D Projects: GA ČR GAP201/10/0357 Institutional research plan: CEZ:AV0Z20760514 Keywords: shape memory alloys * model based on relaxation * thermomechanical coupling Subject RIV: BA - General Mathematics http://onlinelibrary.wiley.com/doi/10.1002/pamm.201110169/abstract

  20. Assessing the reliability of predictive activity coefficient models for molecules consisting of several functional groups

    Directory of Open Access Journals (Sweden)

    R. P. Gerber

    2013-03-01

    Currently, the most successful predictive models for activity coefficients are those based on functional groups, such as UNIFAC. However, these models require a large amount of experimental data for the determination of their parameter matrix. A more recent alternative is the class of models based on COSMO, for which only a small set of universal parameters must be calibrated. In this work, a recalibrated COSMO-SAC model was compared with the UNIFAC (Do) model, employing experimental infinite-dilution activity coefficient data for 2236 non-hydrogen-bonding binary mixtures at different temperatures. As expected, UNIFAC (Do) presented better overall performance, with a mean absolute error of 0.12 ln-units against 0.22 for our COSMO-SAC implementation. However, in cases involving molecules with several functional groups, or when functional groups appear in an unusual way, the deviation for UNIFAC was 0.44 as opposed to 0.20 for COSMO-SAC. These results show that COSMO-SAC provides more reliable predictions for multi-functional or more complex molecules, reaffirming its future prospects.

  1. Consistency of hand preference: predictions to intelligence and school achievement.

    Science.gov (United States)

    Kee, D W; Gottfried, A; Bathurst, K

    1991-05-01

    Gottfried and Bathurst (1983) reported that hand preference consistency measured over time during infancy and early childhood predicts intellectual precocity for females, but not for males. In the present study longitudinal assessments of children previously classified by Gottfried and Bathurst as consistent or nonconsistent in cross-time hand preference were conducted during middle childhood (ages 5 to 9). Findings show that (a) early measurement of hand preference consistency for females predicts school-age intellectual precocity, (b) the locus of the difference between consistent vs. nonconsistent females is in verbal intelligence, and (c) the precocity of the consistent females was also revealed on tests of school achievement, particularly tests of reading and mathematics.

  2. Description of nucleon scattering on 208Pb by a fully Lane-consistent dispersive spherical optical model potential

    Science.gov (United States)

    Sun, W. L.; Wang, J.; Soukhovitskii, E. Sh.; Capote, R.; Quesada, J. M.

    2017-09-01

    A fully Lane-consistent dispersive spherical optical potential is proposed to describe nucleon scattering from the doubly magic nucleus 208Pb up to 200 MeV. Experimental neutron total cross sections, elastically scattered nucleon angular distributions, and (p,n) data were used to determine the potential parameters. Good agreement between experiment and the calculations with this potential is observed. The application of the determined optical potential, with the same parameters, to the neighbouring near-magic Pb-Bi isotopes is also examined, to show the predictive power of this potential.

  3. The Effectiveness of the Show Not Tell and Mind Map Models in Teaching Exposition-Text Writing Based on the Interests of Grade X Vocational School (SMK) Students

    Directory of Open Access Journals (Sweden)

    Wiwit Lili Sokhipah

    2015-03-01

    The aims of this study were (1) to determine the effectiveness of the show not tell model for teaching exposition-text writing skills based on the interests of grade X vocational school (SMK) students, (2) to determine the effectiveness of the mind map model for teaching exposition-text writing skills based on the interests of grade X SMK students, and (3) to determine the effectiveness of the interaction of show not tell and mind map in teaching exposition-text writing skills based on the interests of grade X SMK students. The study used a quasi-experimental design (pretest-posttest control group design) with two experimental groups: the show not tell model applied to exposition-text writing instruction for students with high interest, and the mind map model applied to exposition-text writing instruction for students with low interest. The results were that (1) the show not tell model is effective for teaching exposition-text writing to students with high interest, (2) the mind map model is effective for teaching exposition-text writing to students with low interest, and (3) the show not tell model is more effective for teaching exposition-text writing to students with high interest, whereas the mind map model is effective for students with low interest.

  4. Autonomous Navigation with Constrained Consistency for C-Ranger

    Directory of Open Access Journals (Sweden)

    Shujing Zhang

    2014-06-01

    Autonomous underwater vehicles (AUVs) have become the most widely used tools for undertaking complex exploration tasks in marine environments. The ability to localize autonomously while concurrently building an environmental map, in other words simultaneous localization and mapping (SLAM), is considered a pivotal requirement for truly autonomous navigation. However, the consistency problem of SLAM systems has been largely ignored during the past decades. In this paper, a consistency-constrained extended Kalman filter (EKF) SLAM algorithm, applying the idea of local consistency, is proposed and applied to the autonomous navigation of the C-Ranger AUV, which was developed as our experimental platform. The concept of local consistency (LC) is introduced after an explicit theoretical derivation of the EKF-SLAM system. We then present a locally consistency-constrained EKF-SLAM design, LC-EKF, in which the landmark estimates used for linearization are fixed at the beginning of each local time period, rather than evaluated at the latest landmark estimates. Finally, the proposed LC-EKF algorithm is verified experimentally, both in simulations and in sea trials. The experimental results show that the LC-EKF performs well with regard to consistency, accuracy and computational efficiency.
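The core LC idea, linearizing about a landmark estimate frozen at the start of the local period rather than the latest one, can be sketched for a single range measurement. This is an illustrative sketch under assumed conventions (a flat state vector `[x, y, m1x, m1y, ...]` and scalar range noise `R`), not the authors' implementation.

```python
import numpy as np

def ekf_update_lc(state, P, z, lm_idx, lm_lin, R=0.01):
    """One EKF range-measurement update with a frozen linearization point.

    `lm_lin` is the landmark estimate fixed at the start of the local
    period; it is used ONLY to evaluate the Jacobian, while the predicted
    measurement uses the current estimate."""
    p = state[:2]
    m = state[2 + 2 * lm_idx : 4 + 2 * lm_idx]
    z_hat = np.linalg.norm(m - p)           # prediction: current estimate
    d = lm_lin - p                          # Jacobian: frozen landmark
    r = np.linalg.norm(d)
    H = np.zeros((1, len(state)))
    H[0, :2] = -d / r
    H[0, 2 + 2 * lm_idx : 4 + 2 * lm_idx] = d / r
    S = H @ P @ H.T + R                     # innovation covariance (1x1)
    K = P @ H.T / S                         # Kalman gain (n x 1)
    state = state + (K * (z - z_hat)).ravel()
    P = (np.eye(len(state)) - K @ H) @ P
    return state, P
```

With a perfect measurement the state is untouched while the covariance still contracts, the standard EKF behaviour that the LC variant preserves.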

  5. Producing physically consistent and bias free extreme precipitation events over the Switzerland: Bridging gaps between meteorology and impact models

    Science.gov (United States)

    José Gómez-Navarro, Juan; Raible, Christoph C.; Blumer, Sandro; Martius, Olivia; Felder, Guido

    2016-04-01

    Extreme precipitation episodes, although rare, are natural phenomena that can threaten human activities, especially in densely populated areas such as Switzerland. Their relevance demands the design of public policies that protect public assets and private property. Therefore, the current understanding of such exceptional situations must be improved, i.e. the climatic characterisation of their triggering circumstances, severity, frequency, and spatial distribution. Such increased knowledge should eventually lead to more reliable projections of the behaviour of these events under ongoing climate change. Unfortunately, the study of extreme situations is hampered by the short instrumental record, which precludes a proper characterization of events with return periods exceeding a few decades. This study proposes a new approach that allows storms to be studied on the basis of a synthetic, but physically consistent, database of weather situations obtained from a long climate simulation. Our starting point is a 500-yr control simulation carried out with the Community Earth System Model (CESM). In a second step, this dataset is dynamically downscaled with the Weather Research and Forecasting model (WRF) to a final resolution of 2 km over the Alpine area. However, downscaling the full CESM simulation at such high resolution is currently infeasible. Hence, a number of case studies are selected first. This selection is carried out by examining the precipitation averaged over an area encompassing Switzerland in the ESM. Using a hydrological criterion, precipitation is accumulated over several temporal windows: 1 day, 2 days, 3 days, 5 days and 10 days. The 4 most extreme events in each category and season are selected, leading to a total of 336 days to be simulated. The simulated events are affected by systematic biases that have to be accounted for before this data set can be used as input to hydrological models. Thus, quantile mapping is used to remove such biases. 
For this task
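Empirical quantile mapping of the kind mentioned above can be sketched in a few lines. This is a generic illustration, not the study's code, assuming the model and observed climatologies are given as plain samples.

```python
import numpy as np

def quantile_map(model_clim, obs_clim, values):
    """Bias-correct `values`: each value keeps its quantile rank in the
    model climatology but takes the magnitude of the same quantile in
    the observed climatology."""
    model_s, obs_s = np.sort(model_clim), np.sort(obs_clim)
    ranks = np.interp(values, model_s, np.linspace(0.0, 1.0, len(model_s)))
    return np.interp(ranks, np.linspace(0.0, 1.0, len(obs_s)), obs_s)
```

For a model climatology that is uniformly dry-biased by a factor of two, a simulated value of 25 maps back to the observed 50 at the same quantile.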

  6. Tokyo Motor Show 2003; Tokyo Motor Show 2003

    Energy Technology Data Exchange (ETDEWEB)

    Joly, E.

    2004-01-01

    The text which follows presents the different technologies exhibited at the 37th Tokyo Motor Show. The report points out the main development trends of the Japanese automobile industry. Hybrid electric vehicles and vehicles equipped with fuel cells were highlighted by the Japanese manufacturers, which devote considerable budgets to research on less-polluting vehicles. The exhibited models, although all different according to the manufacturer, always use a hybrid system: fuel cell/battery. The manufacturers also stressed intelligent systems for navigation and safety, as well as design and comfort. (O.M.)

  7. Self-consistent semi-analytic models of the first stars

    Science.gov (United States)

    Visbal, Eli; Haiman, Zoltán; Bryan, Greg L.

    2018-04-01

    We have developed a semi-analytic framework to model the large-scale evolution of the first Population III (Pop III) stars and the transition to metal-enriched star formation. Our model follows dark matter haloes from cosmological N-body simulations, utilizing their individual merger histories and three-dimensional positions, and applies physically motivated prescriptions for star formation and feedback from Lyman-Werner (LW) radiation, hydrogen ionizing radiation, and external metal enrichment due to supernovae winds. This method is intended to complement analytic studies, which do not include clustering or individual merger histories, and hydrodynamical cosmological simulations, which include detailed physics, but are computationally expensive and have limited dynamic range. Utilizing this technique, we compute the cumulative Pop III and metal-enriched star formation rate density (SFRD) as a function of redshift at z ≥ 20. We find that varying the model parameters leads to significant qualitative changes in the global star formation history. The Pop III star formation efficiency and the delay time between Pop III and subsequent metal-enriched star formation are found to have the largest impact. The effect of clustering (i.e. including the three-dimensional positions of individual haloes) on various feedback mechanisms is also investigated. The impact of clustering on LW and ionization feedback is found to be relatively mild in our fiducial model, but can be larger if external metal enrichment can promote metal-enriched star formation over large distances.

  8. Self-consistent RPA and the time-dependent density matrix approach

    Energy Technology Data Exchange (ETDEWEB)

    Schuck, P. [Institut de Physique Nucleaire, Orsay (France); CNRS et Universite Joseph Fourier, Laboratoire de Physique et Modelisation des Milieux Condenses, Grenoble (France); Tohyama, M. [Kyorin University School of Medicine, Mitaka, Tokyo (Japan)

    2016-10-15

    The time-dependent density matrix (TDDM) or BBGKY (Bogoliubov, Born, Green, Kirkwood, Yvon) hierarchy is decoupled and closed at the three-body level by finding a natural representation of the three-body correlations in terms of a quadratic form of two-body correlation functions. In the small-amplitude limit, an extended RPA coupled to an also-extended second RPA is obtained. Since including two-body correlations means that the ground state cannot be a Hartree-Fock state, the corresponding RPA is naturally upgraded to Self-Consistent RPA (SCRPA), which was introduced independently earlier and which is built on a correlated ground state. SCRPA conserves all the properties of standard RPA. Applications to the exactly solvable Lipkin and 1D Hubbard models show good performance of SCRPA and TDDM. (orig.)

  9. Consistent interpretation of neutron-induced charged-particle emission in silicon

    International Nuclear Information System (INIS)

    Hermsdorf, D.

    1982-06-01

    Users requesting gas-production cross sections for silicon are confronted with serious discrepancies, in evaluated data as well as in experimental ones. To clarify the accuracies achieved at present in experiments and evaluations, this paper carries out an intercomparison of different evaluated nuclear data files, resulting in recommendations for improvements of these files. The analysis of the experimental data base also shows contradictory measurements or, in most cases, a lack of data. An interpretation of reliable measured data in terms of nuclear reaction theories has therefore been performed, using statistical and direct-reaction-mechanism models. This study results in a consistent and comprehensive evaluated data set for neutron-induced charged-particle production in silicon, which will be incorporated in file 2015 of the SOKRATOR library. (author)

  10. Comparing the strength of behavioural plasticity and consistency across situations: animal personalities in the hermit crab Pagurus bernhardus.

    Science.gov (United States)

    Briffa, Mark; Rundle, Simon D; Fryer, Adam

    2008-06-07

    Many phenotypic traits show plasticity but behaviour is often considered the 'most plastic' aspect of phenotype as it is likely to show the quickest response to temporal changes in conditions or 'situation'. However, it has also been noted that constraints on sensory acuity, cognitive structure and physiological capacities place limits on behavioural plasticity. Such limits to plasticity may generate consistent differences in behaviour between individuals from the same population. It has recently been suggested that these consistent differences in individual behaviour may be adaptive and the term 'animal personalities' has been used to describe them. In many cases, however, a degree of both behavioural plasticity and relative consistency is probable. To understand the possible functions of animal personalities, it is necessary to determine the relative strength of each tendency and this may be achieved by comparison of statistical effect sizes for tests of difference and concordance. Here, we describe a new statistical framework for making such comparisons and investigate cross-situational plasticity and consistency in the duration of startle responses in the European hermit crab Pagurus bernhardus, in the field and the laboratory. The effect sizes of tests for behavioural consistency were greater than for tests of behavioural plasticity, indicating for the first time the presence of animal personalities in a crustacean model.
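The comparison of effect sizes for plasticity (a mean shift between situations) versus consistency (rank-order concordance of individuals across situations) can be sketched with two standard statistics. This is a generic illustration of the idea, not the authors' exact statistical framework.

```python
import numpy as np

def plasticity_d(a, b):
    """Paired Cohen's d: standardized mean change between two situations."""
    diff = b - a
    return diff.mean() / diff.std(ddof=1)

def consistency_rho(a, b):
    """Spearman's rho via Pearson correlation of ranks: do individuals
    keep their rank order across the two situations?"""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]
```

A population whose startle durations all lengthen between laboratory and field, but whose individual rank order is preserved, would show both a large d (plasticity) and a rho near 1 (personality).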

  11. Consistency conditions for data base systems: a new problem of systems analysis

    International Nuclear Information System (INIS)

    Schlageter, G.

    1976-01-01

    A data base can be seen as a model of a system in the real world. During the systems analysis conditions must be derived which guarantee a close correspondence between the real system and the data base. These conditions are called consistency constraints. The notion of consistency is analyzed; different types of consistency constraints are presented. (orig.) [de

  12. Two-particle irreducible effective actions versus resummation: Analytic properties and self-consistency

    Directory of Open Access Journals (Sweden)

    Michael Brown

    2015-11-01

    Approximations based on two-particle irreducible (2PI) effective actions (also known as Φ-derivable, Cornwall–Jackiw–Tomboulis or Luttinger–Ward functionals, depending on context) have been widely used in condensed matter and non-equilibrium quantum/statistical field theory, because this formalism gives a robust, self-consistent, non-perturbative and systematically improvable approach which avoids problems with secular time evolution. The strengths of 2PI approximations are often described in terms of a selective resummation of Feynman diagrams to infinite order. However, the Feynman diagram series is asymptotic, and summation is at best a dangerous procedure. Here we show that, at least in the context of a toy model where exact results are available, the true strength of 2PI approximations derives from their self-consistency rather than from any resummation. This self-consistency allows truncated 2PI approximations to capture the branch points of physical amplitudes where adjustments of coupling constants can trigger an instability of the vacuum. This, in effect, turns Dyson's argument for the failure of perturbation theory on its head. As a result we find that 2PI approximations perform better than Padé approximation and are competitive with Borel–Padé resummation. Finally, we introduce a hybrid 2PI–Padé method.
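The distinction between solving an equation self-consistently and summing its perturbative series can be illustrated with a toy implicit equation (not the paper's model): x = 1 + g·x² has an exact self-consistent solution with a branch point at g = 1/4, while its perturbative series (whose coefficients are the Catalan numbers) recovers it only order by order inside the radius of convergence.

```python
import numpy as np

def perturbative(g, order):
    """Truncated power series for x = 1 + g*x^2; the coefficients are
    the Catalan numbers, generated by their convolution recurrence."""
    c = [1]
    for n in range(1, order + 1):
        c.append(sum(c[i] * c[n - 1 - i] for i in range(n)))
    return sum(cn * g**n for n, cn in enumerate(c))

def self_consistent(g):
    """Exact solution of x = 1 + g*x^2 (branch point at g = 1/4)."""
    return (1.0 - np.sqrt(1.0 - 4.0 * g)) / (2.0 * g)
```

At g = 0.2, below the branch point, higher truncation orders approach the self-consistent answer monotonically; beyond g = 1/4 the self-consistent form signals the instability directly, while every finite truncation remains a smooth real polynomial.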

  13. Where do golf driver swings go wrong? Factors influencing driver swing consistency.

    Science.gov (United States)

    Zhang, X; Shan, G

    2014-10-01

    One of the most challenging skills in golf is the driver swing. A large number of studies have characterized golf swings, yielding insightful instruction on how to swing well. As a result, achieving a sub-18 handicap is no longer the top problem for golfers. Instead, players are now most troubled by a lack of consistency in swing execution. The goal of this study was to determine how to consistently execute good golf swings. Using 3D motion capture and full-body biomechanical modelling, 22 experienced golfers were analysed. For characterizing both successful and failed swings, 19 selected parameters (13 angles, 4 time parameters, and 2 distances) were used. The results showed that 14 parameters are highly sensitive and/or prone to motor-control variations. These parameters localized variation to five distinct areas of the swing: (a) ball positioning, (b) transverse club angle, (c) transition, (d) wrist control, and (e) posture migration between takeaway and impact. Suggestions were provided for how to address these five problem areas. We hope that our findings on achieving consistency in golf swings will benefit all levels of golf pedagogy and help maintain and develop interest in golf and physical activity for a healthy lifestyle. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Consistency of the postulates of special relativity

    International Nuclear Information System (INIS)

    Gron, O.; Nicola, M.

    1976-01-01

    In a recent article in this journal, Kingsley has tried to show that the postulates of special relativity contradict each other. It is shown that the arguments of Kingsley are invalid because of an erroneous appeal to symmetry in a nonsymmetric situation. The consistency of the postulates of special relativity and the relativistic kinematics deduced from them is restated

  15. A NEW ALGORITHM FOR SELF-CONSISTENT THREE-DIMENSIONAL MODELING OF COLLISIONS IN DUSTY DEBRIS DISKS

    International Nuclear Information System (INIS)

    Stark, Christopher C.; Kuchner, Marc J.

    2009-01-01

    We present a new 'collisional grooming' algorithm that enables us to model images of debris disks where the collision time is less than the Poynting-Robertson (PR) time for the dominant grain size. Our algorithm uses the output of a collisionless disk simulation to iteratively solve the mass flux equation for the density distribution of a collisional disk containing planets in three dimensions. The algorithm can be run on a single processor in ∼1 hr. Our preliminary models of disks with resonant ring structures caused by terrestrial mass planets show that the collision rate for background particles in a ring structure is enhanced by a factor of a few compared to the rest of the disk, and that dust grains in or near resonance have even higher collision rates. We show how collisions can alter the morphology of a resonant ring structure by reducing the sharpness of a resonant ring's inner edge and by smearing out azimuthal structure. We implement a simple prescription for particle fragmentation and show how PR drag and fragmentation sort particles by size, producing smaller dust grains at smaller circumstellar distances. This mechanism could cause a disk to look different at different wavelengths, and may explain the warm component of dust interior to Fomalhaut's outer dust ring seen in the resolved 24 μm Spitzer image of this system.

  16. Cognitive consistency and math-gender stereotypes in Singaporean children.

    Science.gov (United States)

    Cvencek, Dario; Meltzoff, Andrew N; Kapur, Manu

    2014-01-01

    In social psychology, cognitive consistency is a powerful principle for organizing psychological concepts. There have been few tests of cognitive consistency in children and no research about cognitive consistency in children from Asian cultures, who pose an interesting developmental case. A sample of 172 Singaporean elementary school children completed implicit and explicit measures of math-gender stereotype (male=math), gender identity (me=male), and math self-concept (me=math). Results showed strong evidence for cognitive consistency; the strength of children's math-gender stereotypes, together with their gender identity, significantly predicted their math self-concepts. Cognitive consistency may be culturally universal and a key mechanism for developmental change in social cognition. We also discovered that Singaporean children's math-gender stereotypes increased as a function of age and that boys identified with math more strongly than did girls despite Singaporean girls' excelling in math. The results reveal both cultural universals and cultural variation in developing social cognition. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Predicting knee replacement damage in a simulator machine using a computational model with a consistent wear factor.

    Science.gov (United States)

    Zhao, Dong; Sakoda, Hideyuki; Sawyer, W Gregory; Banks, Scott A; Fregly, Benjamin J

    2008-02-01

    Wear of ultrahigh molecular weight polyethylene remains a primary factor limiting the longevity of total knee replacements (TKRs). However, wear testing on a simulator machine is time consuming and expensive, making it impractical for iterative design purposes. The objectives of this paper were first, to evaluate whether a computational model using a wear factor consistent with the TKR material pair can predict accurate TKR damage measured in a simulator machine, and second, to investigate how choice of surface evolution method (fixed or variable step) and material model (linear or nonlinear) affect the prediction. An iterative computational damage model was constructed for a commercial knee implant in an AMTI simulator machine. The damage model combined a dynamic contact model with a surface evolution model to predict how wear plus creep progressively alter tibial insert geometry over multiple simulations. The computational framework was validated by predicting wear in a cylinder-on-plate system for which an analytical solution was derived. The implant damage model was evaluated for 5 million cycles of simulated gait using damage measurements made on the same implant in an AMTI machine. Using a pin-on-plate wear factor for the same material pair as the implant, the model predicted tibial insert wear volume to within 2% error and damage depths and areas to within 18% and 10% error, respectively. Choice of material model had little influence, while inclusion of surface evolution affected damage depth and area but not wear volume predictions. Surface evolution method was important only during the initial cycles, where variable step was needed to capture rapid geometry changes due to the creep. Overall, our results indicate that accurate TKR damage predictions can be made with a computational model using a constant wear factor obtained from pin-on-plate tests for the same material pair, and furthermore, that surface evolution method matters only during the initial

  18. An energetically consistent vertical mixing parameterization in CCSM4

    DEFF Research Database (Denmark)

    Nielsen, Søren Borg; Jochum, Markus; Eden, Carsten

    2018-01-01

    An energetically consistent stratification-dependent vertical mixing parameterization is implemented in the Community Climate System Model 4 and forced with energy conversion from the barotropic tides to internal waves. The structures of the resulting dissipation and diffusivity fields are compared [...]. The resulting ocean state, however, depends greatly on the details of the vertical mixing parameterizations, where the new energetically consistent parameterization results in low thermocline diffusivities and a sharper and shallower thermocline. It is also investigated whether the ocean state is more sensitive to a change in forcing [...]

  19. ACHIEVING CONSISTENT DOPPLER MEASUREMENTS FROM SDO /HMI VECTOR FIELD INVERSIONS

    International Nuclear Information System (INIS)

    Schuck, Peter W.; Antiochos, S. K.; Leka, K. D.; Barnes, Graham

    2016-01-01

    NASA’s Solar Dynamics Observatory is delivering vector magnetic field observations of the full solar disk with unprecedented temporal and spatial resolution; however, the satellite is in a highly inclined geosynchronous orbit. The relative spacecraft–Sun velocity varies by ±3 km s −1 over a day, which introduces major orbital artifacts in the Helioseismic Magnetic Imager (HMI) data. We demonstrate that the orbital artifacts contaminate all spatial and temporal scales in the data. We describe a newly developed three-stage procedure for mitigating these artifacts in the Doppler data obtained from the Milne–Eddington inversions in the HMI pipeline. The procedure ultimately uses 32 velocity-dependent coefficients to adjust 10 million pixels—a remarkably sparse correction model given the complexity of the orbital artifacts. This procedure was applied to full-disk images of AR 11084 to produce consistent Dopplergrams. The data adjustments reduce the power in the orbital artifacts by 31 dB. Furthermore, we analyze in detail the corrected images and show that our procedure greatly improves the temporal and spectral properties of the data without adding any new artifacts. We conclude that this new procedure makes a dramatic improvement in the consistency of the HMI data and in its usefulness for precision scientific studies.
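The idea of a sparse velocity-dependent correction can be sketched as a least-squares fit of a shared polynomial-in-velocity artifact that is then subtracted from every pixel. This is a simplified illustration of the concept, not the HMI pipeline's actual three-stage procedure; the function names and the synthetic numbers are made up.

```python
import numpy as np

def fit_velocity_correction(v_sc, dopplergrams, order=2):
    """Fit one polynomial-in-velocity artifact shared by all pixels.

    v_sc: (T,) spacecraft line-of-sight velocities.
    dopplergrams: (T, N) pixel time series.
    Returns polynomial coefficients (highest power first, as np.polyval)."""
    resid = dopplergrams - dopplergrams.mean(axis=0)   # per-pixel anomaly
    common = resid.mean(axis=1)                        # shared orbital signal
    A = np.vander(v_sc, order + 1)                     # columns v^2, v, 1
    coeffs, *_ = np.linalg.lstsq(A, common, rcond=None)
    return coeffs

def remove_artifact(v_sc, dopplergrams, coeffs):
    """Subtract the fitted velocity-dependent artifact from each frame."""
    return dopplergrams - np.polyval(coeffs, v_sc)[:, None]
```

On synthetic data with a quadratic-in-velocity artifact riding on constant pixel signals, the corrected series is constant in time to machine precision.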

  1. Consistency relations in effective field theory

    Energy Technology Data Exchange (ETDEWEB)

    Munshi, Dipak; Regan, Donough, E-mail: D.Munshi@sussex.ac.uk, E-mail: D.Regan@sussex.ac.uk [Astronomy Centre, School of Mathematical and Physical Sciences, University of Sussex, Brighton BN1 9QH (United Kingdom)

    2017-06-01

    The consistency relations in large scale structure relate the lower-order correlation functions to their higher-order counterparts. They are a direct outcome of the underlying symmetries of a dynamical system and can be tested using data from future surveys such as Euclid. Using techniques from standard perturbation theory (SPT), previous studies of consistency relations have concentrated on the continuity-momentum (Euler)-Poisson system of an ideal fluid. We investigate the consistency relations in effective field theory (EFT), which adjusts the SPT predictions to account for the departure from the ideal fluid description on small scales. We provide detailed results for the 3D density contrast δ as well as the scaled divergence of velocity θ̄. Assuming a ΛCDM background cosmology, we find that the correction to SPT results becomes important at k ≳ 0.05 h/Mpc and that the suppression of the EFT results relative to SPT, which scales as the square of the wavenumber k, can reach 40% of the total at k ≈ 0.25 h/Mpc at z = 0. We have also investigated whether effective field theory corrections to models of primordial non-Gaussianity can alter the squeezed-limit behaviour, finding the results to be rather insensitive to these counterterms. In addition, we present the EFT corrections to the squeezed limit of the bispectrum in redshift space, which may be of interest for tests of theories of modified gravity.
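The quoted numbers imply a simple scaling rule. A toy function pinned to the stated 40% suppression at k = 0.25 h/Mpc (an illustrative reading of the abstract, not the paper's actual fit) gives the size of the correction at other scales:

```python
def eft_suppression(k, k_ref=0.25, s_ref=0.40):
    """Toy k**2 scaling of the EFT-vs-SPT suppression, normalized to the
    quoted 40% at k = 0.25 h/Mpc (illustrative only)."""
    return s_ref * (k / k_ref) ** 2

# At k = 0.05 h/Mpc, where corrections start to matter, the same scaling
# gives a ~1.6% effect
print(f"{eft_suppression(0.05):.3f}")
```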

  2. Large rainfall changes consistently projected over substantial areas of tropical land

    Science.gov (United States)

    Chadwick, Robin; Good, Peter; Martin, Gill; Rowell, David P.

    2016-02-01

    Many tropical countries are exceptionally vulnerable to changes in rainfall patterns, with floods or droughts often severely affecting human life and health, food and water supplies, ecosystems and infrastructure. There is widespread disagreement among climate model projections of how and where rainfall will change over tropical land at the regional scales relevant to impacts, with different models predicting the position of current tropical wet and dry regions to shift in different ways. Here we show that despite uncertainty in the location of future rainfall shifts, climate models consistently project that large rainfall changes will occur for a considerable proportion of tropical land over the twenty-first century. The area of semi-arid land affected by large changes under a higher emissions scenario is likely to be greater than during even the most extreme regional wet or dry periods of the twentieth century, such as the Sahel drought of the late 1960s to 1990s. Substantial changes are projected to occur by mid-century--earlier than previously expected--and to intensify in line with global temperature rise. Therefore, current climate projections contain quantitative, decision-relevant information on future regional rainfall changes, particularly with regard to climate change mitigation policy.

  3. Modal Bin Hybrid Model: A surface area consistent, triple-moment sectional method for use in process-oriented modeling of atmospheric aerosols

    Science.gov (United States)

    Kajino, Mizuo; Easter, Richard C.; Ghan, Steven J.

    2013-09-01

    A triple-moment sectional (TMS) aerosol dynamics model, the Modal Bin Hybrid Model (MBHM), has been developed. In addition to number and mass (volume), surface area is predicted (and preserved), which is important for aerosol processes and properties such as gas-to-particle mass transfer, heterogeneous reaction, and light extinction cross section. The performance of MBHM was evaluated against double-moment sectional (DMS) models with coarse (BIN4) to very fine (BIN256) size resolutions for simulating the evolution of particles under simultaneously occurring nucleation, condensation, and coagulation processes (BINx resolution uses x sections to cover the 1 nm to 1 µm size range). Because MBHM gives a physically consistent form of the intrasectional distributions, errors and biases of MBHM at BIN4-8 resolution were almost equivalent to those of DMS at BIN16-32 resolution for various important variables such as the moments Mk (k: 0, 2, 3), dMk/dt, and the number and volume of particles larger than a certain diameter. Another important feature of MBHM is that only a single bin is adequate to simulate full aerosol dynamics for particles whose size distribution can be approximated by a single lognormal mode. This flexibility is useful for process-oriented (multicategory and/or mixing state) modeling: primary aerosols whose size parameters would not differ substantially in time and space can be expressed by a single or a small number of modes, whereas secondary aerosols whose size changes drastically from 1 to several hundred nanometers can be expressed by a number of modes. Added dimensions can be applied to MBHM to represent mixing state or photochemical age for aerosol mixing state studies.
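The benefit of carrying a third (surface-area) moment can be illustrated with a lognormal intrasectional distribution: the moments M0, M2, and M3 together determine both the geometric mean diameter and the distribution width, which a two-moment scheme would have to assume. A minimal sketch (function names are ours, not the MBHM code):

```python
import math

def lognormal_moments(N, Dg, sigma_g):
    """Moments M0, M2, M3 of N particles with a lognormal size distribution
    of geometric mean diameter Dg and geometric standard deviation sigma_g.
    Uses Mk = N * Dg**k * exp(k**2 * ln(sigma_g)**2 / 2)."""
    y = math.log(sigma_g) ** 2
    return (N,
            N * Dg**2 * math.exp(2.0 * y),
            N * Dg**3 * math.exp(4.5 * y))

def recover_parameters(M0, M2, M3):
    """Invert the three moments back to (Dg, sigma_g): the extra surface-area
    moment fixes the intrasectional width instead of assuming it."""
    a = math.log(M2 / M0)          # a = 2*ln(Dg) + 2*ln(sigma_g)**2
    b = math.log(M3 / M0)          # b = 3*ln(Dg) + 4.5*ln(sigma_g)**2
    y = (b - 1.5 * a) / 1.5        # ln(sigma_g)**2
    x = (a - 2.0 * y) / 2.0        # ln(Dg)
    return math.exp(x), math.exp(math.sqrt(y))

print(recover_parameters(*lognormal_moments(100.0, 0.05, 2.0)))
```

The inversion is exact for a lognormal mode, which is why a single MBHM bin can represent a full mode that a two-moment scheme would need many sections to resolve.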

  4. Internal Branding and Employee Brand Consistent Behaviours

    DEFF Research Database (Denmark)

    Mazzei, Alessandra; Ravazzani, Silvia

    2017-01-01

    Employee behaviours conveying brand values, named brand consistent behaviours, affect the overall brand evaluation. Internal branding literature highlights a knowledge gap in terms of communication practices intended to sustain such behaviours. This study contributes to the development of a non...... constitutive processes. In particular, the paper places emphasis on the role and kinds of communication practices as a central part of the nonnormative and constitutive internal branding process. The paper also discusses an empirical study based on interviews with 32 Italian and American communication managers and 2 focus groups with Italian communication managers. Findings show that, in order to enhance employee brand consistent behaviours, the most effective communication practices are those characterised as enablement-oriented. Such a communication creates the organizational conditions adequate to sustain......

  5. Computational study of depth completion consistent with human bi-stable perception for ambiguous figures.

    Science.gov (United States)

    Mitsukura, Eiichi; Satoh, Shunji

    2018-03-01

    We propose a computational model that is consistent with human perception of depth in "ambiguous regions," in which no binocular disparity exists. Results obtained from our model reveal a new characteristic of depth perception. Random dot stereograms (RDS) are often used as examples because RDS provides sufficient disparity for depth calculation. A simple question confronts us: "How can we estimate the depth of a no-texture image region, such as one on white paper?" In such ambiguous regions, mathematical solutions related to binocular disparities are not unique or indefinite. We examine a mathematical description of depth completion that is consistent with human perception of depth for ambiguous regions. Using computer simulation, we demonstrate that resultant depth-maps qualitatively reproduce human depth perception of two kinds. The resultant depth maps produced using our model depend on the initial depth in the ambiguous region. Considering this dependence from psychological viewpoints, we conjecture that humans perceive completed surfaces that are affected by prior-stimuli corresponding to the initial condition of depth. We conducted psychological experiments to verify the model prediction. An ambiguous stimulus was presented after a prior stimulus removed ambiguity. The inter-stimulus interval (ISI) was inserted between the prior stimulus and post-stimulus. Results show that correlation of perception between the prior stimulus and post-stimulus depends on the ISI duration. Correlation is positive, negative, and nearly zero in the respective cases of short (0-200 ms), medium (200-400 ms), and long ISI (>400 ms). Furthermore, based on our model, we propose a computational model that can explain the dependence. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Comparison of the EIA, EETA and ETWA obtained with the GSM TIP model using the self-consistent approach and using the MSIS-90 model

    Science.gov (United States)

    Klimenko, M. V.; Klimenko, V. V.; Bryukhanov, V. V.

    On the basis of the Global Self-consistent Model of the Thermosphere, Ionosphere and Protonosphere (GSM TIP) developed at WD IZMIRAN, calculations were carried out for quiet geomagnetic equinox conditions at the minimum of solar activity. The calculations used a new block for computing ionospheric electric fields, briefly described in COSPAR2006-A-00108. Two variants of the calculations were performed, each accounting only for the dynamo field generated by thermospheric winds: one fully self-consistent, and one using the MSIS-90 model to calculate the composition and temperature of the neutral atmosphere. The results of the two calculations are compared with each other. We present the global distributions of foF2; the latitudinal behavior of Ne and Te on the near-midnight meridian at two height levels, 233 and 626 km; latitude-altitude sections of Te on the near-midnight meridian; the time development in UT of the zonal component of the thermospheric wind and the ion temperature at a height of ~300 km; and foF2 and hmF2 for three longitudinal chains of stations (Brazilian, Pacific and Indian) in the vicinity of the geomagnetic equator (COSPAR2006-A-00109), calculated in both variants. It is shown that in the self-consistent approach the maxima of the crests of the equatorial ionization anomaly (EIA) in foF2 are shifted toward later evening hours relative to the MSIS-based calculation. The equatorial electron temperature anomaly (EETA) is formed in both variants of the calculations. However, at the

  7. Harmonic oscillations, chaos and synchronization in systems consisting of Van der Pol oscillator coupled to a linear oscillator

    International Nuclear Information System (INIS)

    Woafo, P.

    1999-12-01

    This paper deals with the dynamics of a model describing systems consisting of the classical Van der Pol oscillator coupled gyroscopically to a linear oscillator. Both the forced and autonomous cases are considered. The harmonic response is investigated along with its stability boundaries, and the condition for quenching phenomena in the autonomous case is derived. A Neimark bifurcation is observed, and it is found that the model shows period doubling and period-m sudden transitions to chaos. Synchronization of two and more systems in their chaotic regime is presented. (author)
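A minimal numerical sketch of such a system, with an assumed velocity (gyroscopic-type) coupling and illustrative parameter values, can be integrated directly. This is not the paper's exact model; equations and constants below are assumptions for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, lam, w2 = 1.0, 0.1, 2.0  # VdP damping, coupling strength, linear frequency^2 (assumed)

def rhs(t, s):
    # s = [x, dx, y, dy]; Van der Pol oscillator x coupled to linear oscillator y
    # through the velocities (an assumed gyroscopic-type coupling)
    x, dx, y, dy = s
    return [dx,
            mu * (1.0 - x**2) * dx - x - lam * dy,
            dy,
            -w2 * y + lam * dx]

sol = solve_ivp(rhs, (0.0, 100.0), [0.1, 0.0, 0.0, 0.0], max_step=0.05)
amp = np.abs(sol.y[0, sol.t > 50]).max()
print(f"late-time Van der Pol amplitude ~ {amp:.2f}")
```

For weak coupling the x oscillator settles onto a limit cycle close to the uncoupled Van der Pol one; increasing the coupling and tuning w2 toward resonance is where quenching and more complex behaviour would be sought.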

  8. Structures, profile consistency, and transport scaling in electrostatic convection

    DEFF Research Database (Denmark)

    Bian, N.H.; Garcia, O.E.

    2005-01-01

    Two mechanisms at the origin of profile consistency in models of electrostatic turbulence in magnetized plasmas are considered. One involves turbulent diffusion in collisionless plasmas and the subsequent turbulent equipartition of Lagrangian invariants. By the very nature of its definition...

  9. A three-dimensional sharp interface model for self-consistent keyhole and weld pool dynamics in deep penetration laser welding

    International Nuclear Information System (INIS)

    Pang Shengyong; Chen Liliang; Zhou Jianxin; Yin Yajun; Chen Tao

    2011-01-01

    A three-dimensional sharp interface model is proposed to investigate the self-consistent keyhole and weld pool dynamics in deep penetration laser welding. The coupling of three-dimensional heat transfer, fluid flow and keyhole free surface evolutions in the welding process is simulated. It is theoretically confirmed that under certain low heat input welding conditions deep penetration laser welding with a collapsing free keyhole could be obtained and the flow directions near the keyhole wall are upwards and approximately parallel to the keyhole wall. However, significantly different weld pool dynamics in a welding process with an unstable keyhole are numerically found. Many flow patterns in the welding process with an unstable keyhole, verified by x-ray transmission experiments, were successfully simulated and analysed. Periodical keyhole collapsing and bubble formation processes are also successfully simulated and believed to be in good agreement with experiments. The mechanisms of keyhole instability are found to be closely associated with the behaviour of humps on the keyhole wall, and it is found that the welding speed and surface tension are closely related to the formation of humps on the keyhole wall. It is also shown that the weld pool dynamics in laser welding with an unstable keyhole are closely associated with the transient keyhole instability and therefore modelling keyhole and weld pool in a self-consistent way is significant to understand the physics of laser welding.

  10. Self-consistent RPA based on a many-body vacuum

    International Nuclear Information System (INIS)

    Jemaï, M.; Schuck, P.

    2011-01-01

    Self-Consistent RPA is extended in a way so that it is compatible with a variational ansatz for the ground-state wave function as a fermionic many-body vacuum. Employing the usual equation-of-motion technique, we arrive at extended RPA equations of the Self-Consistent RPA structure. In principle the Pauli principle is, therefore, fully respected. However, the correlation functions entering the RPA matrix can only be obtained from a systematic expansion in powers of some combinations of RPA amplitudes. We demonstrate for a model case that this expansion may converge rapidly.

  11. The consistency assessment of topological relations in cartographic generalization

    Science.gov (United States)

    Zheng, Chunyan; Guo, Qingsheng; Du, Xiaochu

    2006-10-01

    The field of generalization assessment has been studied less than the generalization process itself, yet maintaining topological relation consistency is essential to generalization quality. This paper proposes a methodology to assess the quality of a generalized map in terms of topological relation consistency. Taking roads (including railways) and residential areas as examples, some issues concerning topological consistency at different map scales are analyzed from the viewpoint of spatial cognition. Statistical information about inconsistent topological relations can be obtained by comparing two matrices: one holding the topological relations in the generalized map, the other the theoretical matrix of topological relations that should be maintained after generalization. Based on fuzzy set theory and the classification of map object types, a consistency evaluation model for topological relations is established. Finally, the paper demonstrates the feasibility of the method through an example of evaluating the local topological relations between simple roads and a residential area.
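The matrix comparison described above can be sketched as follows, with hypothetical integer codes for topological relations (the coding scheme and the example values are illustrative, not the paper's):

```python
import numpy as np

# Hypothetical coded topological relations between three map objects
# (e.g. 0 = disjoint, 1 = touches, 2 = crosses)
theoretical = np.array([[0, 1, 2],
                        [1, 0, 1],
                        [2, 1, 0]])   # relations that should survive generalization
generalized = np.array([[0, 1, 1],
                        [1, 0, 1],
                        [1, 1, 0]])   # relations measured on the generalized map

# Count mismatching off-diagonal object pairs (the diagonal is self-relations)
mismatch = theoretical != generalized
n_pairs = theoretical.size - len(theoretical)
consistency = 1.0 - mismatch.sum() / n_pairs
print(f"topological consistency: {consistency:.2f}")
```

Here one road/area pair changed from "crosses" to "touches" under generalization, so 2 of the 6 directed pairs are inconsistent; the full methodology would additionally weight such mismatches by object type and fuzzy severity.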

  12. Developmental changes in consistency of autobiographical memories: adolescents' and young adults' repeated recall of recent and distant events.

    Science.gov (United States)

    Larkina, Marina; Merrill, Natalie A; Bauer, Patricia J

    2017-09-01

    Autobiographical memories contribute continuity and stability to one's self yet they also are subject to change: they can be forgotten or be inconsistently remembered and reported. In the present research, we compared the consistency of two reports of recent and distant personal events in adolescents (12- to 14-year-olds) and young adults (18- to 23-year-olds). In line with expectations of greater mnemonic consistency among young adults relative to adolescents, adolescents reported the same events 80% of the time compared with 90% consistency among young adults; the significant difference disappeared after taking into consideration narrative characteristics of individual memories. Neither age group showed high levels of content consistency (30% vs. 36%); young adults were more consistent than adolescents even after controlling for other potential predictors of content consistency. Adolescents and young adults did not differ in consistency of estimating when their past experiences occurred. Multilevel modelling indicated that the level of thematic coherence of the initial memory report and ratings of event valence significantly predicted memory consistency at the level of the event. Thematic coherence was a significant negative predictor of content consistency. The findings suggest a developmental progression in the robustness and stability of personal memories between adolescence and young adulthood.

  13. Self-consistent relativistic Boltzmann-Uehling-Uhlenbeck equation for the Δ distribution function

    International Nuclear Information System (INIS)

    Mao, G.; Li, Z.; Zhuo, Y.

    1996-01-01

    We derive the self-consistent relativistic Boltzmann-Uehling-Uhlenbeck (RBUU) equation for the delta distribution function within the framework that we have previously developed for nucleons. In our approach, the Δ isobars are treated in essentially the same way as nucleons. Both the mean field and collision terms of the Δ's RBUU equation are derived from the same effective Lagrangian and presented analytically. We calculate the in-medium NΔ elastic and inelastic scattering cross sections up to twice nuclear matter density and the results show that the in-medium cross sections deviate substantially from Cugnon's parametrization that is commonly used in the transport model. copyright 1996 The American Physical Society

  14. The Bioenvironmental modeling of Bahar city based on Climate-consistent Architecture

    OpenAIRE

    Parna Kazemian

    2014-01-01

    The identification of the climate of a particular place and the analysis of the climatic needs in terms of human comfort and the use of construction materials is one of the prerequisites of a climate-consistent design. In studies on climate and weather, using illustrative reports, first a picture of the state of the climate is offered. Then, based on the obtained results, the range of changes is determined, and the cause-effect relationships at different scales are identified. Finally, by a general exam...

  15. Thermodynamics of a Compressible Maier-Saupe Model Based on the Self-Consistent Field Theory of Wormlike Polymer

    Directory of Open Access Journals (Sweden)

    Ying Jiang

    2017-02-01

    This paper presents a theoretical formalism for describing systems of semiflexible polymers, which can have density variations due to finite compressibility and exhibit an isotropic-nematic transition. The molecular architecture of the semiflexible polymers is described by a continuum wormlike-chain model. The non-bonded interactions are described through a functional of two collective variables, the local density and local segmental orientation tensor. In particular, the functional depends quadratically on local density-variations and includes a Maier–Saupe-type term to deal with the orientational ordering. The specified density-dependence stems from a free energy expansion, where the free energy of an isotropic and homogeneous homopolymer melt at some fixed density serves as a reference state. Using this framework, a self-consistent field theory is developed, which produces a Helmholtz free energy that can be used for the calculation of the thermodynamics of the system. The thermodynamic properties are analysed as functions of the compressibility of the model, for values of the compressibility realizable in mesoscopic simulations with soft interactions and in actual polymeric materials.

  16. Consistency assessment of rating curve data in various locations using Bidirectional Reach (BReach)

    Science.gov (United States)

    Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Coxon, Gemma; Freer, Jim; Verhoest, Niko E. C.

    2017-10-01

    When estimating discharges through rating curves, temporal data consistency is a critical issue. In this research, consistency in stage-discharge data is investigated using a methodology called Bidirectional Reach (BReach), which departs from a definition of consistency commonly used in operational hydrology. A period is considered to be consistent if no consecutive and systematic deviations from a current situation occur that exceed observational uncertainty. Therefore, the capability of a rating curve model to describe a subset of the (chronologically sorted) data is assessed in each observation by indicating the outermost data points for which the rating curve model behaves satisfactorily. These points are called the maximum left or right reach, depending on the direction of the investigation. This temporal reach should not be confused with a spatial reach (indicating a part of a river). Changes in these reaches throughout the data series indicate possible changes in data consistency and, if not resolved, could introduce additional errors and biases. In this research, various measurement stations in the UK, New Zealand and Belgium are selected based on their significant historical ratings information and their specific characteristics related to data consistency. For each country, regional information is used as fully as possible to estimate observational uncertainty. Based on this uncertainty, a BReach analysis is performed and, subsequently, results are validated against available knowledge about the history and behavior of the site. For all investigated cases, the methodology provides results that appear to be consistent with this knowledge of historical changes and thus facilitates a reliable assessment of (in)consistent periods in stage-discharge measurements. This assessment is not only useful for the analysis and determination of discharge time series, but also to enhance applications based on these data (e.g., by informing hydrological and hydraulic model
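A much-simplified sketch of the reach idea scans forward from each observation until the rating-model residuals exceed the observational tolerance; the actual BReach methodology is considerably more elaborate (bidirectional, with uncertainty-aware acceptance criteria), and the residual series here is hypothetical:

```python
def right_reach(residuals, tol):
    """For each observation, return the outermost later index up to which all
    model residuals stay within observational tolerance (a simplified,
    one-directional BReach-style scan)."""
    n = len(residuals)
    reach = []
    for i in range(n):
        j = i
        while j + 1 < n and abs(residuals[j + 1]) <= tol:
            j += 1
        reach.append(j)
    return reach

# A residual series with a regime shift at indices 4-5 (e.g. the rating changed)
res = [0.1, -0.2, 0.15, 0.05, 1.4, 1.5, 0.1]
print(right_reach(res, tol=0.5))
```

For these numbers the reaches are [3, 3, 3, 3, 4, 6, 6]: every early observation's reach stops just before the shift, which is exactly the kind of signature used to flag an inconsistent period.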

  17. Consistent lattice Boltzmann modeling of low-speed isothermal flows at finite Knudsen numbers in slip-flow regime: Application to plane boundaries.

    Science.gov (United States)

    Silva, Goncalo; Semiao, Viriato

    2017-07-01

    The first nonequilibrium effect experienced by gaseous flows in contact with solid surfaces is the slip-flow regime. While the classical hydrodynamic description holds valid in bulk, at boundaries the fluid-wall interactions must consider slip. In comparison to the standard no-slip Dirichlet condition, the case of slip formulates as a Robin-type condition for the fluid tangential velocity. This makes its numerical modeling a challenging task, particularly in complex geometries. In this work, this issue is handled with the lattice Boltzmann method (LBM), motivated by the similarities between the closure relations of the reflection-type boundary schemes equipping the LBM equation and the slip velocity condition established by slip-flow theory. Based on this analogy, we derive, as central result, the structure of the LBM boundary closure relation that is consistent with the second-order slip velocity condition, applicable to planar walls. Subsequently, three tasks are performed. First, we clarify the limitations of existing slip velocity LBM schemes, based on discrete analogs of kinetic theory fluid-wall interaction models. Second, we present improved slip velocity LBM boundary schemes, constructed directly at discrete level, by extending the multireflection framework to the slip-flow regime. Here, two classes of slip velocity LBM boundary schemes are considered: (i) linear slip schemes, which are local but retain some calibration requirements and/or operation limitations, (ii) parabolic slip schemes, which use a two-point implementation but guarantee the consistent prescription of the intended slip velocity condition, at arbitrary plane wall discretizations, further dispensing any numerical calibration procedure. Third and final, we verify the improvements of our proposed slip velocity LBM boundary schemes against existing ones. The numerical tests evaluate the ability of the slip schemes to exactly accommodate the steady Poiseuille channel flow solution, over
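The Robin-type slip condition the abstract refers to can be written, in a generic second-order slip-flow form (the coefficient symbols here are conventional placeholders, not the paper's notation):

```latex
u_s \;=\; \sigma_1 \,\lambda \left.\frac{\partial u_t}{\partial n}\right|_{\mathrm{wall}}
      \;-\; \sigma_2 \,\lambda^{2} \left.\frac{\partial^{2} u_t}{\partial n^{2}}\right|_{\mathrm{wall}}
```

where u_s is the tangential slip velocity at the wall, u_t the tangential fluid velocity, λ the molecular mean free path, n the wall-normal coordinate, and σ₁, σ₂ the first- and second-order slip coefficients. The paper's central result is matching the closure relations of LBM reflection-type boundary schemes to this form.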

  19. Measuring process and knowledge consistency

    DEFF Research Database (Denmark)

    Edwards, Kasper; Jensen, Klaes Ladeby; Haug, Anders

    2007-01-01

    When implementing configuration systems, knowledge about products and processes is documented and replicated in the configuration system. This practice assumes that products are specified consistently, i.e. on the same rule base, and likewise for processes. However, consistency cannot be taken for granted; rather the contrary, and attempting to implement a configuration system may easily ignite a political battle. This is because stakes are high in the sense that the rules and processes chosen may only reflect one part of the practice, ignoring a majority of the employees. To avoid this situation, this paper presents a methodology for measuring product and process consistency prior to implementing a configuration system. The methodology consists of two parts: 1) measuring knowledge consistency and 2) measuring process consistency. Knowledge consistency is measured by developing a questionnaire...

  20. Linear augmented plane wave method for self-consistent calculations

    International Nuclear Information System (INIS)

    Takeda, T.; Kuebler, J.

    1979-01-01

    O.K. Andersen has recently introduced a linear augmented plane wave method (LAPW) for the calculation of electronic structure that was shown to be computationally fast. A more general formulation of an LAPW method is presented here. It makes use of a freely disposable number of eigenfunctions of the radial Schroedinger equation. These eigenfunctions can be selected in a self-consistent way. The present formulation also results in a computationally fast method. It is shown that Andersen's LAPW is obtained in a special limit from the present formulation. Self-consistent test calculations for copper show the present method to be remarkably accurate. As an application, scalar-relativistic self-consistent calculations are presented for the band structure of FCC lanthanum. (author)

  1. MultiSIMNRA: A computational tool for self-consistent ion beam analysis using SIMNRA

    International Nuclear Information System (INIS)

    Silva, T.F.; Rodrigues, C.L.; Mayer, M.; Moro, M.V.; Trindade, G.F.; Aguirre, F.R.; Added, N.; Rizzutto, M.A.; Tabacniks, M.H.

    2016-01-01

    Highlights: • MultiSIMNRA enables the self-consistent analysis of multiple ion beam techniques. • Self-consistent analysis enables unequivocal and reliable modeling of the sample. • Four different computational algorithms available for model optimizations. • Definition of constraints enables to include prior knowledge into the analysis. - Abstract: SIMNRA is widely adopted by the scientific community of ion beam analysis for the simulation and interpretation of nuclear scattering techniques for material characterization. Taking advantage of its recognized reliability and quality of the simulations, we developed a computer program that uses multiple parallel sessions of SIMNRA to perform self-consistent analysis of data obtained by different ion beam techniques or in different experimental conditions of a given sample. In this paper, we present a result using MultiSIMNRA for a self-consistent multi-elemental analysis of a thin film produced by magnetron sputtering. The results demonstrate the potentialities of the self-consistent analysis and its feasibility using MultiSIMNRA.
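The self-consistency principle here — one sample description fitted simultaneously against several techniques by minimizing a joint chi-square — can be sketched with toy forward models. SIMNRA itself is not called; every function and number below is a stand-in:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-ins for two ion beam techniques probing the same sample
# model, reduced here to a single parameter x (e.g. an areal density)
def simulate_rbs(x):   return 2.0 * x + 1.0    # toy forward model, technique A
def simulate_erda(x):  return 0.5 * x ** 2     # toy forward model, technique B

data_a, sigma_a = 7.0, 0.2    # technique A measurement and its uncertainty
data_b, sigma_b = 4.6, 0.3    # technique B measurement and its uncertainty

def joint_chi2(params):
    # One shared sample description must satisfy both techniques at once;
    # this is what makes the analysis self-consistent rather than per-technique
    x = params[0]
    return (((simulate_rbs(x) - data_a) / sigma_a) ** 2 +
            ((simulate_erda(x) - data_b) / sigma_b) ** 2)

fit = minimize(joint_chi2, x0=[1.0])
print(f"self-consistent sample parameter: {fit.x[0]:.3f}")
```

Each technique alone would give a slightly different answer (x = 3.00 from A, x ≈ 3.03 from B); the joint minimization settles on a compromise weighted by the uncertainties, which is the essence of resolving ambiguities that a single technique leaves open.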

  2. Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling

    Energy Technology Data Exchange (ETDEWEB)

    Pera, H.; Kleijn, J. M.; Leermakers, F. A. M., E-mail: Frans.leermakers@wur.nl [Laboratory of Physical Chemistry and Colloid Science, Wageningen University, Dreijenplein 6, 6307 HB Wageningen (Netherlands)

    2014-02-14

    To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli k{sub c} and k{sup ¯} and the preferred monolayer curvature J{sub 0}{sup m}, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k{sub c} and the area compression modulus k{sub A} are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k{sup ¯} and J{sub 0}{sup m} can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k{sup ¯} and J{sub 0}{sup m} change sign with relevant parameter changes. Although typically k{sup ¯}<0, membranes can form stable cubic phases when the Gaussian bending modulus becomes positive, which occurs with membranes composed of PC lipids with long tails. Similarly, negative monolayer curvatures appear when a small head group such as PE is combined with long lipid tails, which hints towards the stability of inverse hexagonal phases at the cost of the bilayer topology. To prevent the destabilisation of bilayers, PG lipids can be mixed into these PC or PE lipid membranes. Progressive loading of bilayers with PG lipids leads to highly charged membranes, resulting in J{sub 0}{sup m}≫0, especially at low ionic
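
    Israelachvili's packing parameter invoked above has a one-line definition, and the classification thresholds below are the textbook ones (function names are ours, added for illustration):

    ```python
    def packing_parameter(v_tail, a_head, l_tail):
        """Israelachvili surfactant packing parameter p = v / (a0 * lc),
        with v the tail volume, a0 the optimal head-group area and
        lc the critical tail length."""
        return v_tail / (a_head * l_tail)

    def preferred_aggregate(p):
        """Textbook mapping from packing parameter to preferred geometry."""
        if p < 1.0 / 3.0:
            return "spherical micelles"
        if p < 0.5:
            return "cylindrical micelles"
        if p <= 1.0:
            return "bilayers / vesicles"
        return "inverted (e.g. hexagonal) phases"
    ```

    Long tails combined with a small PE head group push p above 1, consistent with the inverse-hexagonal tendency described in the abstract.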

  3. Microscopic theory of the superconducting gap in the quasi-one-dimensional organic conductor (TMTSF) 2ClO4 : Model derivation and two-particle self-consistent analysis

    Science.gov (United States)

    Aizawa, Hirohito; Kuroki, Kazuhiko

    2018-03-01

    We present a first-principles band calculation for the quasi-one-dimensional (Q1D) organic superconductor (TMTSF)2ClO4. An effective tight-binding model, with the TMTSF molecule regarded as a site, is derived from a calculation based on maximally localized Wannier orbitals. We apply a two-particle self-consistent (TPSC) analysis using a four-site Hubbard model, composed of the tight-binding model and an onsite (intramolecular) repulsive interaction that serves as a variable parameter. We assume that the pairing mechanism is mediated by the spin fluctuation, and the sign of the superconducting gap changes between the inner and outer Fermi surfaces, which corresponds to a d-wave gap function in a simplified Q1D model. With the parameters we adopt, the critical temperature for superconductivity estimated by the TPSC approach is approximately 1 K, which is consistent with experiment.
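
    As a sketch of the single-band TPSC scheme that the four-site analysis generalizes (the standard Vilk-Tremblay form, not taken verbatim from this paper), the spin vertex U_sp is renormalized self-consistently through a local sum rule:

    ```latex
    \chi_{\mathrm{sp}}(q) \;=\; \frac{\chi_{0}(q)}{1 - \tfrac{U_{\mathrm{sp}}}{2}\,\chi_{0}(q)},
    \qquad
    \frac{T}{N}\sum_{q}\chi_{\mathrm{sp}}(q) \;=\; n - 2\langle n_{\uparrow} n_{\downarrow}\rangle,
    \qquad
    U_{\mathrm{sp}} \;=\; U\,\frac{\langle n_{\uparrow} n_{\downarrow}\rangle}{\langle n_{\uparrow}\rangle \langle n_{\downarrow}\rangle}.
    ```

    Here χ₀ is the noninteracting susceptibility; solving these equations together fixes U_sp and the double occupancy without adjustable parameters beyond the bare U.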

  4. A self-consistent model of rich clusters of galaxies. I. The galactic component of a cluster

    International Nuclear Information System (INIS)

    Konyukov, M.V.

    1985-01-01

    It is shown that obtaining the distribution function for the galactic component of a cluster reduces, in the final analysis, to solving the boundary-value problem for the gravitational potential of a self-consistent field. The distribution function is determined by two main parameters. An algorithm is constructed for the solution of the problem, and a program is set up to solve it. It is used to establish the region of parameter values for which solutions exist. The proposed scheme is extended to the case where the cluster contains a separate central body with a known density distribution (for example, a cD galaxy). A method is indicated for estimating the parameters of the model from the results of observations of clusters of galaxies in the optical range.
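
    Schematically, the boundary-value problem referred to above is the coupled Poisson-Vlasov system for a steady-state, isotropic stellar component (a standard sketch; the paper's specific two-parameter distribution function and boundary conditions are not reproduced here):

    ```latex
    \nabla^{2}\Phi(\mathbf{r}) \;=\; 4\pi G\,\rho(\mathbf{r}),
    \qquad
    \rho(\mathbf{r}) \;=\; \int f\!\left(\tfrac{1}{2}v^{2} + \Phi(\mathbf{r})\right)\,\mathrm{d}^{3}v .
    ```

    The potential Φ determines the density through f, and the density sources Φ through Poisson's equation, which is what makes the field self-consistent.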

  5. Self-consistent equilibria in cylindrical reversed-field pinch

    International Nuclear Information System (INIS)

    Lo Surdo, C.; Paccagnella, R.; Guo, S.

    1995-03-01

    The object of this work is to study the self-consistent magnetofluidstatic equilibria of a two-region (plasma + gas) reversed-field pinch (RFP) in the cylindrical approximation (namely, with vanishing inverse aspect ratio). Unlike in a tokamak, in an RFP a significant part of the plasma current is driven by a dynamo electric field (DEF), in turn mainly due to plasma turbulence. A mathematical model of these self-consistent equilibria is therefore worked out under the following main assumptions: a) to lowest order, and according to a standard ansatz, the turbulent DEF, say ε t , is expressed as a homogeneous transform of the magnetic field B of degree 1, ε t = α(B), with α a given second-rank tensor, homogeneous of degree 0 in B and generally depending on the plasma state; b) ε t does not explicitly appear in the plasma energy balance, as if it were produced by a Maxwell demon able to extract the corresponding Joule power from the plasma. In particular, it is shown that, if both α and the resistivity tensor η are isotropic and constant, the magnetic field is force-free with abnormality equal to αη 0 /η in the limit of vanishing β; that is, the well-known J.B. Taylor result is recovered, under these particular conditions, starting from ideas quite different from the usual ones (minimization of total magnetic energy under constrained total helicity). Finally, the general problem is solved numerically under circular (besides cylindrical) symmetry, for simplicity neglecting the gas region (i.e., assuming the plasma in direct contact with the external wall)

  6. Thermodynamically consistent description of criticality in models of correlated electrons

    Czech Academy of Sciences Publication Activity Database

    Janiš, Václav; Kauch, Anna; Pokorný, Vladislav

    2017-01-01

    Roč. 95, č. 4 (2017), s. 1-14, č. článku 045108. ISSN 2469-9950 R&D Projects: GA ČR GA15-14259S Institutional support: RVO:68378271 Keywords : conserving approximations * Anderson model * Hubbard model * parquet equations Subject RIV: BM - Solid Matter Physics ; Magnetism OBOR OECD: Condensed matter physics (including formerly solid state physics, supercond.) Impact factor: 3.836, year: 2016

  7. Self-consistent treatment of nuclear collective motion with an application to the giant-dipole resonance

    International Nuclear Information System (INIS)

    Liran, S.; Technion-Israel Inst. of Tech., Haifa. Dept. of Physics)

    1977-01-01

    This paper extends the recent theory of Liran, Scheefer, Scheid and Greiner on non-adiabatic cranking and nuclear collective motion. In the present work we show that the self-consistency conditions for the collective motion, which are enforced by appropriate time-dependent Lagrange multipliers, can be treated explicitly. The energy conservation and the self-consistency condition in the case of one collective degree of freedom are expressed in differential form. This leads to a set of coupled differential equations in time for the many-body wave function, for the collective variable and for the Lagrange multiplier. An iteration procedure similar to that of the previous work is also presented. As an illustrative example, we investigate Brink's single-particle description of the giant-dipole resonance. In this case, the important role played by non-adiabaticity and self-consistency in determining the collective motion is demonstrated and discussed. We also consider the fact that in this example of a fast collective motion, the adiabatic cranking model of Inglis gives the correct mass parameter. (orig.) [de

  8. Thermodynamic consistency of viscoplastic material models involving external variable rates in the evolution equations for the internal variables

    International Nuclear Information System (INIS)

    Malmberg, T.

    1993-09-01

    The objective of this study is to derive and investigate thermodynamic restrictions for a particular class of internal variable models. Their evolution equations consist of two contributions: the usual irreversible part, depending only on the present state, and a reversible but path dependent part, linear in the rates of the external variables (evolution equations of ''mixed type''). In the first instance the thermodynamic analysis is based on the classical Clausius-Duhem entropy inequality and the Coleman-Noll argument. The analysis is restricted to infinitesimal strains and rotations. The results are specialized and transferred to a general class of elastic-viscoplastic material models. Subsequently, they are applied to several viscoplastic models of ''mixed type'', proposed or discussed in the literature (Robinson et al., Krempl et al., Freed et al.), and it is shown that some of these models are thermodynamically inconsistent. The study is closed with the evaluation of the extended Clausius-Duhem entropy inequality (concept of Mueller) where the entropy flux is governed by an assumed constitutive equation in its own right; also the constraining balance equations are explicitly accounted for by the method of Lagrange multipliers (Liu's approach). This analysis is done for a viscoplastic material model with evolution equations of the ''mixed type''. It is shown that this approach is much more involved than the evaluation of the classical Clausius-Duhem entropy inequality with the Coleman-Noll argument. (orig.) [de
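
    For reference, the classical Clausius-Duhem inequality on which the first part of the analysis rests can be written, in the small-strain setting, as (with ψ the Helmholtz free energy, s the specific entropy, θ the temperature, σ the stress, ε the strain and q the heat flux; the notation here is ours, not necessarily the report's):

    ```latex
    \boldsymbol{\sigma}:\dot{\boldsymbol{\varepsilon}}
    \;-\; \rho\left(\dot{\psi} + s\,\dot{\theta}\right)
    \;-\; \frac{\mathbf{q}\cdot\nabla\theta}{\theta} \;\ge\; 0 .
    ```

    The Coleman-Noll argument then demands that this dissipation be non-negative for every admissible process, which is what constrains the evolution equations of "mixed type".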

  9. Self-Consistent Numerical Modeling of E-Cloud Driven Instability of a Bunch Train in the CERN SPS

    International Nuclear Information System (INIS)

    Vay, J.-L.; Furman, M.A.; Secondo, R.; Venturini, M.; Fox, J.D.; Rivetta, C.H.

    2010-01-01

    The simulation package WARP-POSINST was recently upgraded for handling multiple bunches and modeling concurrently the electron cloud buildup and its effect on the beam, allowing for direct self-consistent simulation of bunch trains generating, and interacting with, electron clouds. We have used the WARP-POSINST package on massively parallel supercomputers to study the growth rate and frequency patterns in space-time of the electron cloud driven transverse instability for a proton bunch train in the CERN SPS accelerator. Results suggest that a positive feedback mechanism exists between the electron buildup and the e-cloud driven transverse instability, leading to a net increase in predicted electron density. Comparisons to selected experimental data are also given. Electron clouds have been shown to trigger fast growing instabilities on proton beams circulating in the SPS and other accelerators. So far, simulations of electron cloud buildup and their effects on beam dynamics have been performed separately. This is a consequence of the large computational cost of the combined calculation due to large space and time scale disparities between the two processes. We have presented the latest improvements of the simulation package WARP-POSINST for the simulation of self-consistent e-cloud effects, including mesh refinement, and generation of electrons from gas ionization and impact at the pipe walls. We also presented simulations of two consecutive bunches interacting with electron clouds in the SPS, which included generation of secondary electrons. The distribution of electrons in front of the first beam was initialized from a dump taken from a preceding buildup calculation using the POSINST code. In this paper, we present an extension of this work where one full batch of 72 bunches is simulated in the SPS, including the entire buildup calculation and the self-consistent interaction between the bunches and the electrons. Comparisons to experimental data are also given.

  10. Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines

    Science.gov (United States)

    Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.

    2011-01-01

    Summary Background Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications. PMID:21915433
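
    As a toy illustration of the nearest-neighbor idea (not the authors' R implementations; function and variable names are ours), a k-nearest-neighbor "probability machine" estimates the individual probability as the fraction of positive labels among the k nearest training points:

    ```python
    import math

    def knn_probability(train, query, k):
        """Estimate P(y = 1 | x = query) as the fraction of positive
        labels among the k nearest training points (Euclidean metric).
        `train` is a list of (feature_tuple, label) pairs, label in {0, 1}."""
        nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
        return sum(label for _, label in nearest) / k
    ```

    Consistency in the paper's sense means that as the sample grows (with k → ∞ and k/n → 0), this estimate converges to the true conditional probability rather than merely to the majority class.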

  11. Dark energy scenario consistent with GW170817 in theories beyond Horndeski gravity

    Science.gov (United States)

    Kase, Ryotaro; Tsujikawa, Shinji

    2018-05-01

    The Gleyzes-Langlois-Piazza-Vernizzi (GLPV) theories up to quartic order are the general scheme of scalar-tensor theories allowing the possibility of realizing a tensor propagation speed ct equal to 1 on the isotropic cosmological background. We propose a dark energy model in which the late-time cosmic acceleration occurs through a simple k-essence Lagrangian analogous to the ghost condensate, with cubic and quartic Galileons, in the framework of GLPV theories. We show that a wide variety of variations of the dark energy equation of state wDE, including entry into the region wDE < -1, can be realized. The approach to the equation of state wDE = -2 during the matter era, which is disfavored by observational data, can be avoided by the existence of a quadratic k-essence Lagrangian X^2. We study the evolution of nonrelativistic matter perturbations for the model with ct^2 = 1 and show that the two quantities μ and Σ, which are related to the Newtonian and weak-lensing gravitational potentials respectively, are practically equivalent to each other, such that μ ≃ Σ > 1. For the case in which the deviation of wDE from -1 is significant at a later cosmological epoch, the values of μ and Σ tend to be larger at low redshifts. We also find that our dark energy model can be consistent with the bounds on the deviation parameter αH from Horndeski theories arising from the modification of gravitational law inside massive objects.

  12. Microarray profiling shows distinct differences between primary tumors and commonly used preclinical models in hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Wang, Weining; Iyer, N. Gopalakrishna; Tay, Hsien Ts’ung; Wu, Yonghui; Lim, Tony K. H.; Zheng, Lin; Song, In Chin; Kwoh, Chee Keong; Huynh, Hung; Tan, Patrick O. B.; Chow, Pierce K. H.

    2015-01-01

    Despite advances in therapeutics, outcomes for hepatocellular carcinoma (HCC) remain poor and there is an urgent need for efficacious systemic therapy. Unfortunately, drugs that are successful in preclinical studies often fail in the clinical setting, and we hypothesize that this is due to functional differences between primary tumors and commonly used preclinical models. In this study, we attempt to answer this question by comparing tumor morphology and gene expression profiles between primary tumors, xenografts and HCC cell lines. HepG2 cell lines and tumor cells from patient tumor explants were subcutaneously (ectopically) injected into the flank and orthotopically into liver parenchyma of Mus musculus SCID mice. The mice were euthanized after two weeks. RNA was extracted from the tumors, and gene expression profiling was performed using the GeneChip Human Genome U133 Plus 2.0. Principal component analyses (PCA) and construction of dendrograms were conducted using the Partek genomics suite. PCA showed that the commonly used HepG2 cell line model and its xenograft counterparts were vastly different from all fresh primary tumors. Expression profiles of primary tumors were also significantly divergent from their counterpart patient-derived xenograft (PDX) models, regardless of the site of implantation. Xenografts from the same primary tumors were more likely to cluster together regardless of site of implantation, although heat maps showed distinct differences in gene expression profiles between orthotopic and ectopic models. The data presented here challenge the utility of routinely used preclinical models. Models using HepG2 were vastly different from primary tumors and PDXs, suggesting that this is not clinically representative. Surprisingly, site of implantation (orthotopic versus ectopic) resulted in limited impact on gene expression profiles, and in both scenarios xenografts differed significantly from the original primary tumors, challenging the long

  13. CPR in medical TV shows: non-health care student perspective.

    Science.gov (United States)

    Alismail, Abdullah; Meyer, Nicole C; Almutairi, Waleed; Daher, Noha S

    2018-01-01

    There are over a dozen medical shows airing on television, many of which air during prime time. Researchers have recently become more interested in the role of these shows in public awareness of cardiopulmonary resuscitation. Several cases have been reported in which a lay person resuscitated a family member using medical TV shows as a reference. The purpose of this study is to examine and evaluate college students' perception of cardiopulmonary resuscitation, and of when to shock using an automated external defibrillator, based on their experience of watching medical TV shows. A total of 170 students (nonmedical majors) were surveyed at four different colleges in the United States. The survey consisted of questions that reflect the perception and knowledge acquired from watching medical TV shows. A stepwise regression was used to determine the significant predictors of "How often do you watch medical drama TV shows", in addition to chi-square analysis for nominal variables. The regression model showed a significant effect: TV shows changed students' perceptions positively (p < 0.001), and students were more likely to select shock on asystole as the frequency of watching increased (p = 0.023). The findings of this study show that a high percentage of nonmedical college students are influenced significantly by medical shows. One particular influence is the false belief about when a shock using the automated external defibrillator (AED) is appropriate, as it is portrayed falsely in most medical shows. This finding raises a concern about how these shows portray basic life support, especially when not following American Heart Association (AHA) guidelines. We recommend that medical advisors on these shows follow AHA guidelines, and that the AHA expand its expenditures to include medical shows, so as to educate the public on the appropriate actions to rescue an out-of-hospital cardiac arrest patient.

  14. Existence and uniqueness of consistent conjectural variation equilibrium in electricity markets

    International Nuclear Information System (INIS)

    Liu, Youfei; Cai, Bin; Ni, Y.X.; Wu, Felix F.

    2007-01-01

    Game-theory-based methods are widely applied to analyze market equilibrium and to study strategic behavior in oligopolistic electricity markets. Recently, the conjectural variation approach, one of the well-studied methods in game theory, has been used to model strategic behavior in deregulated electricity markets. Unfortunately, conjectural variation models have been criticized for logical inconsistency and the possibility of multiple equilibria. To address this, this paper investigates the existence and uniqueness of the consistent conjectural variation equilibrium in electricity markets. Using several good characteristics of the electricity market and an infinite-horizon optimization model, it is shown that the consistent conjectural variations satisfy a set of coupled nonlinear equations and that there is only one equilibrium. This result provides the foundation for further applications of the conjectural variation approach. (author)

  15. Attitude-action consistency and social policy related to nuclear technology

    International Nuclear Information System (INIS)

    Lindell, M.K.; Perry, R.W.; Greene, M.

    1980-06-01

    This study reports the results of a further analysis of questionnaire data--parts of which have been previously reported by Lindell, Earle, Hebert and Perry (1978)--related to the issue of consistency of attitudes and behavior toward nuclear power and nuclear waste management. Three factors are considered that might be expected to have a significant bearing on attitude-action consistency: social support, attitude object importance and past activism. Analysis of the data indicated that pronuclear respondents were more likely to show consistency of attitudes and actions (66%) than were antinuclear respondents (51%), although the difference in proportions is not statistically significant. Further analyses showed a strong positive relation between attitude-action consistency and perceived social support, measured by the degree to which the respondent believed that close friends and work associates agreed with his attitude. This relationship held up even when controls for attitude object importance and past activism were introduced. Attitude object importance--the salience of the issue of energy shortage--had a statistically significant effect only when perceived social support was low. Past activism had no significant relation to attitude-action consistency. These data suggest that the level of active support for or opposition to nuclear technology will be affected by the distribution of favorable and unfavorable attitudes among residents of an area. Situations in which pro- and antinuclear attitudes are concentrated among members of interacting groups, rather than distributed randomly, are more likely to produce high levels of polarization.

  16. Consistency with synchrotron emission in the bright GRB 160625B observed by Fermi

    Science.gov (United States)

    Ravasio, M. E.; Oganesyan, G.; Ghirlanda, G.; Nava, L.; Ghisellini, G.; Pescalli, A.; Celotti, A.

    2018-05-01

    We present time-resolved spectral analysis of prompt emission from GRB 160625B, one of the brightest bursts ever detected by Fermi in its nine years of operations. Standard empirical functions fail to provide an acceptable fit to the GBM spectral data, which instead require the addition of a low-energy break to the fitting function. We introduce a new fitting function, called 2SBPL, consisting of three smoothly connected power laws. Fitting this model to the data, the goodness of the fits significantly improves and the spectral parameters are well constrained. We also test a spectral model that combines non-thermal and thermal (black body) components, but find that the 2SBPL model is systematically favoured. The spectral evolution shows that the spectral break is located around Ebreak ∼ 100 keV, while the usual νFν peak energy Epeak evolves in the 0.5-6 MeV energy range. The slopes below and above Ebreak are consistent with the values -0.67 and -1.5, respectively, expected from synchrotron emission produced by a relativistic electron population with a low-energy cut-off. If Ebreak is interpreted as the synchrotron cooling frequency, the implied magnetic field in the emitting region is ∼10 Gauss, i.e. orders of magnitude smaller than the value expected for a dissipation region located at 10^13-10^14 cm from the central engine. The low ratio between Epeak and Ebreak implies that the radiative cooling is incomplete, contrary to what is expected in strongly magnetized and compact emitting regions.
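
    A minimal sketch of a smoothly broken power law of the kind the 2SBPL fitting function chains together (a generic textbook parametrization with our own parameter names, not the authors' exact functional form):

    ```python
    def sbpl(x, x_break, a_low, a_high, smooth=2.0):
        """Smoothly broken power law: behaves as x**a_low for x << x_break
        and as x**a_high for x >> x_break; `smooth` sets the sharpness of
        the transition. The paper's 2SBPL joins three such segments with
        two breaks (E_break and E_peak)."""
        r = x / x_break
        return (r ** (-a_low * smooth) + r ** (-a_high * smooth)) ** (-1.0 / smooth)
    ```

    With a_low = -0.67 and a_high = -1.5, the asymptotic logarithmic slopes match the synchrotron values quoted above.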

  17. Phase models of galaxies consisting of a disk and halo

    International Nuclear Information System (INIS)

    Osipkov, L.P.; Kutuzov, S.A.

    1988-01-01

    A method is developed for finding the phase density of a two-component model of a distribution of masses. The equipotential surfaces and potential law are given. The equipotentials are lenslike surfaces with a sharp edge in the equatorial plane, thus ensuring the existence of a vanishingly thin embedded disk. The equidensity surfaces of the halo coincide with the equipotentials. Phase models are constructed separately for the halo and for the disk on the basis of the spatial and surface mass densities by the solution of the corresponding integral equations. In particular, models with a halo of finite dimensions can be constructed. For both components, the part of the phase density even with respect to the velocities is found. For the halo, it depends only on the energy integral. Two examples, for which exact solutions are found, are considered.

  18. Multiscale Modeling at Nanointerfaces: Polymer Thin Film Materials Discovery via Thermomechanically Consistent Coarse Graining

    Science.gov (United States)

    Hsu, David D.

    Due to the high nanointerfacial area-to-volume ratio, the properties of "nanoconfined" polymer thin films, blends, and composites become highly altered compared to their bulk homopolymer analogues. Understanding the structure-property mechanisms underlying this effect is an active area of research. However, despite extensive work, a fundamental framework for predicting the local and system-averaged thermomechanical properties as a function of configuration and polymer species has yet to be established. Towards bridging this gap, here, we present a novel, systematic coarse-graining (CG) method which is able to capture quantitatively the thermomechanical properties of real polymer systems in bulk and in nanoconfined geometries. This method, which we call thermomechanically consistent coarse-graining (TCCG), is a two-bead-per-monomer CG hybrid approach in which bonded interactions are optimized to match the atomistic structure via the Iterative Boltzmann Inversion (IBI) method, and nonbonded interactions are tuned to macroscopic targets through parametric studies. We validate the TCCG method by systematically developing coarse-grain models for a group of five specialized methacrylate-based polymers including poly(methyl methacrylate) (PMMA). Good correlation with bulk all-atom (AA) simulations and experiments is found for the temperature-dependent glass transition temperature (Tg), Flory-Fox scaling relationships, self-diffusion coefficients of liquid monomers, and modulus of elasticity. We apply this TCCG method also to bulk polystyrene (PS) using a comparable CG bead mapping strategy. The model demonstrates chain stiffness commensurate with experiments, and we utilize a density-correction term to improve the transferability of the elastic modulus over a 500 K range. Additionally, the PS and PMMA models capture the unexplained, characteristically dissimilar scaling of Tg with the thickness of free-standing films seen in experiments. Using vibrational

  19. An NDVI-Based Vegetation Phenology Is Improved to be More Consistent with Photosynthesis Dynamics through Applying a Light Use Efficiency Model over Boreal High-Latitude Forests

    Directory of Open Access Journals (Sweden)

    Siheng Wang

    2017-07-01

    Remote sensing of high-latitude forest phenology is essential for understanding the global carbon cycle and the response of vegetation to climate change. The normalized difference vegetation index (NDVI) has long been used to study boreal evergreen needleleaf forests (ENF) and deciduous broadleaf forests. However, the NDVI-based growing season is generally reported to be longer than that based on gross primary production (GPP), which can be attributed to the difference between greenness and photosynthesis. Instead of introducing environmental factors such as land surface or air temperature, as in previous studies, this study attempts to make VI-based phenology more consistent with photosynthesis dynamics by applying a light use efficiency model. NDVI (MOD13C2) was used as a proxy for both the fraction of absorbed photosynthetically active radiation (APAR) and light use efficiency at the seasonal time scale. Results show that VI-based phenology tracks seasonal GPP changes more precisely after the light use efficiency model is applied than raw NDVI or APAR does, especially over ENF.
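
    The light use efficiency logic can be sketched in a few lines (the linear NDVI proxies and the eps_max value below are illustrative assumptions of ours, not the paper's calibration):

    ```python
    def gpp_lue(ndvi, par, eps_max=1.8):
        """Light use efficiency model: GPP = eps * fPAR * PAR.
        Using NDVI as a crude proxy for BOTH the absorbed fraction (fPAR)
        and the seasonal down-regulation of efficiency (eps) makes GPP
        scale as NDVI**2, which shortens the apparent growing season
        relative to raw NDVI -- the effect exploited in the paper."""
        fpar = min(max(ndvi, 0.0), 1.0)   # assumed fPAR ~ NDVI, clipped to [0, 1]
        eps = eps_max * fpar              # assumed efficiency down-regulation
        return eps * fpar * par
    ```

    Because the estimate is quadratic in NDVI, shoulder-season greenness contributes much less than peak-season greenness, pulling the phenology closer to the GPP seasonality.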

  20. 'Show me the money': energy projects financing

    International Nuclear Information System (INIS)

    Ball, C.

    2006-01-01

    This paper describes the business and business model of Corpfinance International (CFI). CFI consists of three businesses: structured financing, private equity/corporate finance advisory, and securitization. Furthermore, CFI is the lender of record acting on behalf of, and based on strong relationships with, various life insurance companies, pension funds and international banks. CFI has in-house expertise in support of its lending, advisory and investing activities

  1. Natural inflation: consistency with cosmic microwave background observations of Planck and BICEP2

    International Nuclear Information System (INIS)

    Freese, Katherine; Kinney, William H.

    2015-01-01

    Natural inflation is a good fit to all cosmic microwave background (CMB) data and may be the correct description of an early inflationary expansion of the Universe. The large angular scale CMB polarization experiment BICEP2 has announced a major discovery, which can be explained as the gravitational wave signature of inflation, at a level that matches predictions by natural inflation models. The natural inflation (NI) potential is theoretically exceptionally well motivated in that it is naturally flat due to shift symmetries, and in the simplest version takes the form V(φ) = Λ^4 [1 ± cos(Nφ/f)]. A tensor-to-scalar ratio r > 0.1 as seen by BICEP2 requires the height of any inflationary potential to be comparable to the scale of grand unification and the width to be comparable to the Planck scale. The Cosine Natural Inflation model agrees with all cosmic microwave background measurements as long as f ≥ m_Pl (where m_Pl = 1.22 × 10^19 GeV) and Λ ∼ m_GUT ∼ 10^16 GeV. This paper also discusses other variants of the natural inflation scenario: we show that axion monodromy with potential V ∝ φ^(2/3) is inconsistent with the BICEP2 limits at the 95% confidence level, and low-scale inflation is strongly ruled out. Linear potentials V ∝ φ are inconsistent with the BICEP2 limit at the 95% confidence level, but are marginally consistent with a joint Planck/BICEP2 limit at 95%. We discuss the pseudo-Nambu-Goldstone model proposed by Kinney and Mahanthappa as a concrete realization of low-scale inflation. While the low-scale limit of the model is inconsistent with the data, the large-field limit of the model is marginally consistent with BICEP2. All of the models considered predict negligible running of the scalar spectral index, and would be ruled out by a detection of running
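
    The cosine potential quoted above is simple enough to evaluate directly; a sketch in arbitrary units (plus-sign branch by default; function and argument names are ours):

    ```python
    import math

    def natural_inflation_potential(phi, Lambda=1.0, f=1.0, N=1, sign=+1.0):
        """V(phi) = Lambda**4 * [1 +/- cos(N*phi/f)].
        The height of the potential is set by Lambda**4 and its width by f;
        the shift symmetry phi -> phi + 2*pi*f/N leaves V unchanged."""
        return Lambda ** 4 * (1.0 + sign * math.cos(N * phi / f))
    ```

    For the plus sign, the maximum 2Λ^4 sits at φ = 0 and the minimum V = 0 at φ = πf/N, which is why r > 0.1 forces Λ up to the GUT scale and f up to the Planck scale.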

  2. High consistency cellulase treatment of hardwood prehydrolysis kraft based dissolving pulp.

    Science.gov (United States)

    Wang, Qiang; Liu, Shanshan; Yang, Guihua; Chen, Jiachuan; Ni, Yonghao

    2015-01-01

    For enzymatic treatment of dissolving pulp, there is a need to improve the process to facilitate its commercialization. For this purpose, high consistency cellulase treatment was conducted based on the hypothesis that a high cellulose concentration would favor the interactions of cellulase and cellulose, thus improving the cellulase efficiency while decreasing water usage. The results showed that, compared with a low consistency of 3%, the high consistency of 20% led to a 24% increase in the cellulase adsorption ratio. As a result, at 24 h the viscosity decrease and Fock reactivity increase at a consistency of 20% were enhanced from 510 mL/g and 70.3% to 471 mL/g and 77.6%, respectively, compared with the low consistency of 3%. The results on other properties such as alpha-cellulose content, alkali solubility and molecular weight distribution also supported the conclusion that high consistency cellulase treatment was more effective than a low pulp consistency process. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Expressive body movement responses to music are coherent, consistent, and low dimensional.

    Science.gov (United States)

    Amelynck, Denis; Maes, Pieter-Jan; Martens, Jean Pierre; Leman, Marc

    2014-12-01

    Embodied music cognition stresses the role of the human body as mediator for the encoding and decoding of musical expression. In this paper, we set up a low-dimensional functional model that accounts for 70% of the variability in expressive body movement responses to music. Using functional principal component analysis, we modeled individual body movements as a linear combination of a group average and a number of eigenfunctions. The group average and the eigenfunctions are common to all subjects and make up what we call the commonalities. An individual performance is then characterized by a set of scores (the individualities), one score per eigenfunction. The model is based on experimental data showing high levels of coherence/consistency between participants when grouped according to musical education, indicating an ontogenetic effect. Participants without formal musical education focus on the torso for the expression of basic musical structure (tempo). Musically trained participants decode additional structural elements in the music and focus on body parts having more degrees of freedom (such as the hands). Our results confirm earlier studies that different body parts move differently along with the music.
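The decomposition described above (a group-average curve plus shared eigenfunctions, with one score per eigenfunction per subject) can be sketched with an ordinary SVD on synthetic curves. The data here are stand-ins for the study's motion-capture recordings, and the number of retained components is an arbitrary choice.

```python
import numpy as np

# Sketch of the functional-PCA idea: each subject's movement trajectory is
# modeled as the group mean curve plus a weighted sum of shared
# eigenfunctions (the "commonalities"); the per-subject weights are the
# "individualities". Synthetic noisy sinusoids stand in for real curves.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
X = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, (30, 200))  # 30 subjects

mean_curve = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_curve, full_matrices=False)
eigenfunctions = Vt            # shared across subjects
scores = U * s                 # one row of scores per subject

k = 5                          # keep a low-dimensional set of components
X_hat = mean_curve + scores[:, :k] @ eigenfunctions[:k]
var_explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"variance explained by {k} components: {var_explained:.2f}")
```

Retaining all components reconstructs the data exactly; the modeling step is choosing a small k that still explains most of the variance, analogous to the paper's 70% low-dimensional model.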

  4. Spatial occupancy models applied to atlas data show Southern Ground Hornbills strongly depend on protected areas.

    Science.gov (United States)

    Broms, Kristin M; Johnson, Devin S; Altwegg, Res; Conquest, Loveday L

    2014-03-01

    Determining the range of a species and exploring species-habitat associations are central questions in ecology and can be answered by analyzing presence-absence data. Often, both the sampling of sites and the desired area of inference involve neighboring sites; thus, positive spatial autocorrelation between these sites is expected. Using survey data for the Southern Ground Hornbill (Bucorvus leadbeateri) from the Southern African Bird Atlas Project, we compared the advantages and disadvantages of three increasingly complex models for species occupancy: an occupancy model that accounted for nondetection but assumed all sites were independent, and two spatial occupancy models that accounted for both nondetection and spatial autocorrelation. We modeled the spatial autocorrelation with an intrinsic conditional autoregressive (ICAR) model and with a restricted spatial regression (RSR) model. Both spatial models can readily be applied to any other gridded, presence-absence data set using a newly introduced R package. The RSR model provided the best inference and was able to capture small-scale variation that the other models did not. It showed that ground hornbills are strongly dependent on protected areas in the north of their South African range, but less so further south. The ICAR models did not capture any spatial autocorrelation in the data, and they took an order of magnitude longer than the RSR models to run. Thus, the RSR occupancy model appears to be an attractive choice for modeling occurrences over large spatial domains while accounting for imperfect detection and spatial autocorrelation.
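A minimal, non-spatial piece of the occupancy framework mentioned above shows how nondetection enters the model: a site is occupied with probability ψ, and an occupied site is detected on each of J independent visits with probability p. The values of ψ, p, and J below are illustrative, and the ICAR/RSR spatial random effects are omitted.

```python
# Minimal non-spatial occupancy sketch (no ICAR/RSR terms):
# psi = occupancy probability, p = per-visit detection probability,
# J = number of repeat visits to the site.

def p_detect_at_least_once(psi, p, J):
    """Unconditional probability of recording at least one detection."""
    return psi * (1.0 - (1.0 - p) ** J)

def p_occupied_given_no_detection(psi, p, J):
    """Bayes' rule: occupancy probability after J visits with no detections."""
    missed = psi * (1.0 - p) ** J
    return missed / (missed + (1.0 - psi))

print(round(p_detect_at_least_once(0.6, 0.3, 4), 4))        # 0.4559
print(round(p_occupied_given_no_detection(0.6, 0.3, 4), 4))  # 0.2648
```

The second function is the key payoff of modeling detection explicitly: an all-zero detection history does not imply absence, only a reduced posterior occupancy probability.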

  5. Strong time-consistency in the cartel-versus-fringe model

    NARCIS (Netherlands)

    Groot, F.; Withagen, C.A.A.M.; Zeeuw, de A.J.

    2003-01-01

    Due to developments on the oil market in the 1970s, the theory of exhaustible resources was extended with the cartel-versus-fringe model to characterize markets with one big coherent cartel and a large number of small suppliers called the fringe. Because cartel and fringe are leader and follower,

  6. Cloud Standardization: Consistent Business Processes and Information

    Directory of Open Access Journals (Sweden)

    Razvan Daniel ZOTA

    2013-01-01

    Full Text Available Cloud computing represents one of the latest emerging trends in distributed computing that enables the existence of hardware infrastructure and software applications as services. The present paper offers a general approach to the cloud computing standardization as a mean of improving the speed of adoption for the cloud technologies. Moreover, this study tries to show out how organizations may achieve more consistent business processes while operating with cloud computing technologies.

  7. Flood damage curves for consistent global risk assessments

    Science.gov (United States)

    de Moel, Hans; Huizinga, Jan; Szewczyk, Wojtek

    2016-04-01

    Assessing the potential damage of flood events is an important component of flood risk management. Direct flood damage is commonly determined using depth-damage curves, which denote the flood damage that would occur at specific water depths per asset or land-use class. Many countries around the world have developed flood damage models using such curves, based on analysis of past flood events and/or on expert judgement. However, such damage curves are not available for all regions, which hampers damage assessments in those regions. Moreover, because different countries' damage models employ different methodologies, damage assessments cannot be directly compared with each other, also obstructing supra-national flood damage assessments. To address these problems, a globally consistent dataset of depth-damage curves has been developed. This dataset contains damage curves depicting percent damage as a function of water depth, as well as maximum damage values for a variety of assets and land-use classes (e.g. residential, commercial, agriculture). Based on an extensive literature survey, concave damage curves have been developed for each continent, while differentiation in flood damage between countries is established by determining maximum damage values at the country scale. These maximum damage values are based on construction cost surveys from multinational construction companies, which provide a coherent set of detailed building cost data across dozens of countries. A consistent set of maximum flood damage values for all countries was computed using statistical regressions with socio-economic World Development Indicators from the World Bank. Further, based on insights from the literature survey, guidance is also given on how the damage curves and maximum damage values can be adjusted for specific local circumstances, such as urban vs. rural locations or the use of specific building materials. This dataset can be used for consistent supra-national flood damage assessments.
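The damage calculation described above (interpolate a damage fraction from a depth-damage curve, then scale by a country-level maximum damage value and the exposed area) can be sketched as follows. The curve points and the EUR/m² value are invented for illustration, not taken from the dataset.

```python
import numpy as np

# Hypothetical concave depth-damage curve for one asset class: damage
# fraction vs. inundation depth (m). Curve points and the maximum damage
# value are illustrative assumptions.
depths = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 6.0])       # metres
fractions = np.array([0.00, 0.25, 0.40, 0.60, 0.75, 1.00])
max_damage_per_m2 = 600.0                                # EUR/m2, assumed

def flood_damage(depth_m, area_m2):
    """Direct damage = interpolated fraction x maximum value x exposed area."""
    frac = np.interp(depth_m, depths, fractions)         # clamps outside range
    return float(frac * max_damage_per_m2 * area_m2)

print(flood_damage(1.5, 100.0))  # 1.5 m of water over 100 m2 -> 30000.0
```

Country-to-country differentiation in the dataset enters only through the maximum damage value; the curve shape is shared at the continental scale, which is what makes supra-national comparisons consistent.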

  8. A self-consistent model for low-high transitions in tokamaks

    International Nuclear Information System (INIS)

    Guzdar, P.N.; Hassam, A.B.

    1996-01-01

    A system of equations that couples the rapidly varying fluctuations of resistive ballooning modes to the slowly varying transport of the density, vorticity and parallel momentum has been derived and solved numerically. Only a single toroidal mode number is retained in the present work. The low-mode (L-mode) phase consists of strong poloidally asymmetric particle transport driven by resistive ballooning modes, with larger flux on the outboard side compared to the inboard side. With the onset of shear flow driven by a combination of toroidal drive mechanisms as well as the Reynolds stress, the fluctuations associated with the resistive ballooning modes are attenuated, leading to a strong reduction in the particle transport. The drop in the particle transport results in steepening of the density profile, leading to the high-mode (H-mode). copyright 1996 American Institute of Physics

  9. Is LambdaCDM consistent with the Tully-Fisher relation?

    Science.gov (United States)

    Reyes, Reinabelle; Gunn, J. E.; Mandelbaum, R.

    2013-07-01

    We consider the question of the origin of the Tully-Fisher relation in LambdaCDM cosmology. Reproducing the observed tight relation between stellar masses and rotation velocities of disk galaxies presents a challenge for semi-analytical models and hydrodynamic simulations of galaxy formation. Here, our goal is to construct a suite of galaxy mass models that is fully consistent with observations, and that also reproduces the observed Tully-Fisher relation. We take advantage of a well-defined sample of disk galaxies in SDSS with measured rotation velocities (from long-slit spectroscopy of H-alpha), stellar bulge and disk profiles (from fits to SDSS images), and average dark matter halo masses (from stacked weak lensing of a larger, similarly-selected sample). The primary remaining freedom in the mass models comes from the final dark matter halo profile (after contraction from baryon infall and, possibly, feedback) and the stellar IMF. We find that the observed velocities are reproduced by models with a Kroupa IMF and NFW (i.e., unmodified) dark matter haloes for galaxies with stellar masses 10^9-10^10 M_sun. For higher stellar masses, models with contracted NFW haloes are favored. A scenario in which the amount of halo contraction varies with stellar mass is able to reproduce the observed Tully-Fisher relation over the full stellar mass range of our sample from 10^9 to 10^11 M_sun. We present this as a proof-of-concept for consistency between LambdaCDM and the Tully-Fisher relation.
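As a sketch of one ingredient of such mass models, the circular velocity of an unmodified NFW halo follows directly from its enclosed-mass profile, V²(r) = G M(<r)/r. The halo mass, concentration, and virial radius below are assumed round numbers, not the paper's lensing-calibrated values.

```python
import numpy as np

# Circular velocity of an (unmodified) NFW dark matter halo, one building
# block of the kind of galaxy mass model described above. M200, c, and
# r200 are illustrative assumptions.
G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / M_sun

def v_circ_nfw(r_kpc, M200=1e12, c=10.0, r200_kpc=210.0):
    rs = r200_kpc / c                                # NFW scale radius
    x = r_kpc / rs
    mu = lambda y: np.log(1.0 + y) - y / (1.0 + y)   # enclosed-mass shape
    M_enc = M200 * mu(x) / mu(c)                     # mass inside radius r
    return np.sqrt(G * M_enc / r_kpc)

for r in (10.0, 50.0, 210.0):
    print(f"V_circ({r:.0f} kpc) = {v_circ_nfw(r):.0f} km/s")
```

A full Tully-Fisher model would add the stellar bulge and disk contributions in quadrature and, for massive galaxies, contract the halo; this sketch covers only the dark matter term.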

  10. Towards transparent and consistent exchange of knowledge for improved microbiological food safety

    DEFF Research Database (Denmark)

    Plaza-Rodrigues, Carolina; Ungaretti Haberbeck, Leticia; Desvignes, Virginie

    2017-01-01

    Predictive microbial modelling and quantitative microbiological risk assessment, two important and complementary areas within the food safety community, are generating a variety of scientific knowledge (experimental data and mathematical models) and resources (databases and software tools) for the exploitation of this knowledge. However, the application and reusability of this knowledge is still hampered, as access to this knowledge and the exchange of information between databases and software tools are currently difficult and time consuming. To facilitate transparent and consistent knowledge access … software tools and consistent rules for knowledge annotation. The knowledge repository would be a user-friendly tool to benefit different users within the microbiological food safety community, especially users like risk assessors and managers, model developers and research scientists working …

  11. MANIFESTATION OF MANIPULATION IN POLITICAL TALK-SHOWS: COGNITIVE AND MULTIMODAL ASPECTS

    Directory of Open Access Journals (Sweden)

    Petrova Anna Aleksandrovna

    2014-11-01

    The article deals with the manifestation of manipulation in political television talk shows. The suggestive processes of interaction in the analyzed genre of media political discourse are studied in two aspects: (a) monomodal, as speech manipulation by verbal means at the level of emotional suggestion; (b) multimodal, as counter-suggestion that restricts the effect of suggestion with visual and kinetic resources. The cognitive analysis rests on a modeling method using a linguistic model that contains components of the cognitive and emotional processing of meaning, conclusions and reasoning. According to this three-component model, speech manipulation consists in activating the dominant scripts of an addressee and is achieved through the verbal resources of suggestion associated with these scripts. The multimodal study of situations with counter-suggestion in mass-media discourse rests on an ethnomethodological method with a reconstruction device. Within this framework, the authors divide the resources of protection against activating manipulation into two groups: (1) passive interactive communication of a suggestee in a verbal pause; (2) active interactive communication of a suggestee aimed at changing status and role domination. The empirical study of the two modalities in isolation and of their correlations in specific situations of political talk shows allowed the authors to develop the hypothesis that a fourth, visual-kinetic component exists, representing spatial and corporal constellations together with other models (or modalities) of communication and their configurations. This study emphasizes the need to extend the research frame for complex interactive processes of communication by studying them in the multimodal aspect.

  12. Do Health Systems Have Consistent Performance Across Locations and Is Consistency Associated With Higher Performance?

    Science.gov (United States)

    Crespin, Daniel J; Christianson, Jon B; McCullough, Jeffrey S; Finch, Michael D

    This study addresses whether health systems have consistent diabetes care performance across their ambulatory clinics and whether increasing consistency is associated with improvements in clinic performance. Study data included 2007 to 2013 diabetes care intermediate outcome measures for 661 ambulatory clinics in Minnesota and bordering states. Health systems provided more consistent performance, as measured by the standard deviation of performance for clinics in a system, relative to propensity score-matched proxy systems created for comparison purposes. No evidence was found that improvements in consistency were associated with higher clinic performance. The combination of high performance and consistent care is likely to enhance a health system's brand reputation, allowing it to better mitigate the financial risks of consumers seeking care outside the organization. These results suggest that larger health systems are most likely to deliver the combination of consistent and high-performance care. Future research should explore the mechanisms that drive consistent care within health systems.
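Consistency in the study above is operationalized as the spread (standard deviation) of a quality measure across a system's clinics, compared against a matched proxy system. A minimal sketch with made-up clinic rates:

```python
import numpy as np

# Within-system consistency as the standard deviation of a diabetes care
# quality rate across a system's clinics, versus a propensity-matched
# "proxy system" of independent clinics. All rates are fabricated for
# illustration.
system_clinics = np.array([0.62, 0.65, 0.63, 0.66, 0.64])
proxy_clinics = np.array([0.48, 0.71, 0.55, 0.69, 0.60])

def spread(rates):
    """Lower SD = more consistent performance across clinics."""
    return float(rates.std(ddof=1))

print(f"system SD = {spread(system_clinics):.3f}, "
      f"proxy SD = {spread(proxy_clinics):.3f}")
```

Note the two groups here have similar mean performance; the study's point is that the system delivers it with less clinic-to-clinic variation, and that this consistency is not itself associated with higher performance.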

  13. Consistency argued students of fluid

    Science.gov (United States)

    Viyanti; Cari; Suparmi; Winarti; Slamet Budiarti, Indah; Handika, Jeffry; Widyastuti, Fatma

    2017-01-01

    Problem solving for physics concepts through consistent argumentation can improve students' thinking skills and is important in science. The study aims to assess the consistency of students' argumentation on the topic of fluids. The population of this study consists of college students from PGRI Madiun, UIN Sunan Kalijaga Yogyakarta and Lampung University. Using cluster random sampling, a sample of 145 students was obtained. The study used a descriptive survey method. Data were obtained through a reasoned multiple-choice test and interviews. The fluid problems were modified from [9] and [1]. The results gave average argumentation consistency for correct consistency, incorrect consistency, and inconsistency of 4.85%, 29.93%, and 65.23%, respectively. The data from the study point to a lack of understanding of the fluid material, which ideally, with fully consistent argumentation, would support an expanded understanding of the concept. The results of the study serve as a reference for making improvements in future studies aimed at obtaining a positive change in the consistency of argumentation.

  14. Consistent Individual Differences Drive Collective Behavior and Group Functioning of Schooling Fish.

    Science.gov (United States)

    Jolles, Jolle W; Boogert, Neeltje J; Sridhar, Vivek H; Couzin, Iain D; Manica, Andrea

    2017-09-25

    The ubiquity of consistent inter-individual differences in behavior ("animal personalities") [1, 2] suggests that they might play a fundamental role in driving the movements and functioning of animal groups [3, 4], including their collective decision-making, foraging performance, and predator avoidance. Despite increasing evidence that highlights their importance [5-16], we still lack a unified mechanistic framework to explain and to predict how consistent inter-individual differences may drive collective behavior. Here we investigate how the structure, leadership, movement dynamics, and foraging performance of groups can emerge from inter-individual differences by high-resolution tracking of known behavioral types in free-swimming stickleback (Gasterosteus aculeatus) shoals. We show that individuals' propensity to stay near others, measured by a classic "sociability" assay, was negatively linked to swim speed across a range of contexts, and predicted spatial positioning and leadership within groups as well as differences in structure and movement dynamics between groups. In turn, this trait, together with individuals' exploratory tendency, measured by a classic "boldness" assay, explained individual and group foraging performance. These effects of consistent individual differences on group-level states emerged naturally from a generic model of self-organizing groups composed of individuals differing in speed and goal-orientedness. Our study provides experimental and theoretical evidence for a simple mechanism to explain the emergence of collective behavior from consistent individual differences, including variation in the structure, leadership, movement dynamics, and functional capabilities of groups, across social and ecological scales. In addition, we demonstrate that individual performance is conditional on group composition, indicating how social selection may drive behavioral differentiation between individuals. Copyright © 2017 The Author(s). Published by

  15. Self-consistent modelling of lattice strains during the in-situ tensile loading of twinning induced plasticity steel

    International Nuclear Information System (INIS)

    Saleh, Ahmed A.; Pereloma, Elena V.; Clausen, Bjørn; Brown, Donald W.; Tomé, Carlos N.; Gazder, Azdiar A.

    2014-01-01

    The evolution of lattice strains in a fully recrystallised Fe–24Mn–3Al–2Si–1Ni–0.06C TWinning Induced Plasticity (TWIP) steel subjected to uniaxial tensile loading up to a true strain of ∼35% was investigated via in-situ neutron diffraction. Typical of fcc elastic and plastic anisotropy, the {111} and {200} grain families record the lowest and highest lattice strains, respectively. Using modelling cases with and without latent hardening, the recently extended Elasto-Plastic Self-Consistent model successfully predicted the macroscopic stress–strain response, the evolution of lattice strains and the development of crystallographic texture. Compared to the isotropic hardening case, latent hardening did not have a significant effect on lattice strains and returned a relatively faster development of a stronger 〈111〉 and a weaker 〈100〉 double fibre parallel to the tensile axis. Close correspondence between the experimental lattice strains and those predicted using particular orientations embedded within a random aggregate was obtained. The result suggests that the exact orientations of the surrounding aggregate have a weak influence on the lattice strain evolution.

  16. Fungal communities in wheat grain show significant co-existence patterns among species

    DEFF Research Database (Denmark)

    Nicolaisen, M.; Justesen, A. F.; Knorr, K.

    2014-01-01

    Twenty-one OTUs were identified as ‘core’ OTUs, as they were found in all or almost all samples and accounted for almost 99 % of all sequences. The remaining OTUs were found only sporadically and only in small amounts. Cluster and factor analyses showed patterns of co-existence among the core species. Cluster analysis grouped the 21 core OTUs into three clusters: cluster 1 consisting of saprotrophs, cluster 2 consisting mainly of yeasts and saprotrophs, and cluster 3 consisting of wheat pathogens. Principal component extraction showed that the Fusarium graminearum group was inversely related to OTUs of clusters 1 and 2.

  17. Consistency of orthodox gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bellucci, S. [INFN, Frascati (Italy). Laboratori Nazionali di Frascati; Shiekh, A. [International Centre for Theoretical Physics, Trieste (Italy)

    1997-01-01

    A recent proposal for quantizing gravity is investigated for self-consistency. The existence of a fixed-point all-order solution is found, corresponding to a consistent quantum gravity. A criterion to unify couplings is suggested, by invoking an application of their argument to more complex systems.

  18. Measuring performance at trade shows

    DEFF Research Database (Denmark)

    Hansen, Kåre

    2004-01-01

    Trade shows are an increasingly important marketing activity for many companies, but current measures of trade show performance do not adequately capture dimensions important to exhibitors. Based on the marketing literature's outcome- and behavior-based control system taxonomy, a model is built that captures an outcome-based sales dimension and four behavior-based dimensions (i.e. information-gathering, relationship-building, image-building, and motivation activities). A 16-item instrument is developed for assessing exhibitors' perceptions of their trade show performance. The paper presents evidence …

  19. Landscape evolution models using the stream power incision model show unrealistic behavior when m/n equals 0.5

    Directory of Open Access Journals (Sweden)

    J. S. Kwang

    2017-12-01

    Landscape evolution models often utilize the stream power incision model to simulate river incision: E = K A^m S^n, where E is the vertical incision rate, K is the erodibility constant, A is the upstream drainage area, S is the channel gradient, and m and n are exponents. This simple but useful law has been employed with an imposed rock uplift rate to gain insight into steady-state landscapes. The most common choice of exponents satisfies m/n = 0.5. Yet all models have limitations. Here, we show that when hillslope diffusion (which operates only on small scales) is neglected, the choice m/n = 0.5 yields a curiously unrealistic result: the predicted landscape is invariant to horizontal stretching. That is, the steady-state landscape for a 10 km^2 horizontal domain can be stretched so that it is identical to the corresponding landscape for a 1000 km^2 domain.
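The stretching-invariance claim can be checked directly in one dimension: at steady state the incision rate balances uplift (E = U), so the slope is S = (U/K)^(1/n) A^(-m/n), and relief is the integral of slope along the profile. The sketch below uses an assumed quadratic downstream growth of drainage area and illustrative parameter values; it is a 1-D check, not the paper's 2-D landscape model.

```python
import numpy as np

# 1-D check of the stretching-invariance result for m/n = 0.5. Stretching
# the domain by lam multiplies lengths by lam and drainage areas by lam^2;
# for m/n = 0.5 the total relief is unchanged. U, K, n and the area law
# are illustrative assumptions.
U, K, n = 1e-3, 1e-5, 1.0
m_over_n = 0.5

def relief(L, head_scale, lam=1.0):
    """Steady-state relief of a profile whose stretched length is lam * L."""
    x = np.linspace(0.0, lam * L, 200_001)
    A = (lam * L - x + lam * head_scale) ** 2        # area grows downstream
    S = (U / K) ** (1.0 / n) * A ** (-m_over_n)      # steady-state slope
    return float(np.sum(0.5 * (S[1:] + S[:-1]) * np.diff(x)))  # trapezoid

r1 = relief(L=10_000.0, head_scale=100.0, lam=1.0)
r10 = relief(L=10_000.0, head_scale=100.0, lam=10.0)
print(r1, r10)  # same relief despite a 10x wider domain
```

With A ∝ length², the factor lam² inside A^(-1/2) exactly cancels the lam from the longer integration path, which is the 1-D version of the invariance the abstract describes; any other m/n breaks the cancellation.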

  20. Changes in forest productivity across Alaska consistent with biome shift.

    Science.gov (United States)

    Beck, Pieter S A; Juday, Glenn P; Alix, Claire; Barber, Valerie A; Winslow, Stephen E; Sousa, Emily E; Heiser, Patricia; Herriges, James D; Goetz, Scott J

    2011-04-01

    Global vegetation models predict that boreal forests are particularly sensitive to a biome shift during the 21st century. This shift would manifest itself first at the biome's margins, with evergreen forest expanding into current tundra while being replaced by grasslands or temperate forest at the biome's southern edge. We evaluated changes in forest productivity since 1982 across boreal Alaska by linking satellite estimates of primary productivity and a large tree-ring data set. Trends in both records show consistent growth increases at the boreal-tundra ecotones that contrast with drought-induced productivity declines throughout interior Alaska. These patterns support the hypothesized effects of an initiating biome shift. Ultimately, tree dispersal rates, habitat availability and the rate of future climate change, and how it changes disturbance regimes, are expected to determine where the boreal biome will undergo a gradual geographic range shift, and where a more rapid decline. © 2011 Blackwell Publishing Ltd/CNRS.

  1. A dynamic Thurstonian item response theory of motive expression in the picture story exercise: solving the internal consistency paradox of the PSE.

    Science.gov (United States)

    Lang, Jonas W B

    2014-07-01

    The measurement of implicit or unconscious motives using the picture story exercise (PSE) has long been a target of debate in the psychological literature. Most debates have centered on the apparent paradox that PSE measures of implicit motives typically show low internal consistency reliability on common indices like Cronbach's alpha but nevertheless predict behavioral outcomes. I describe a dynamic Thurstonian item response theory (IRT) model that builds on dynamic systems theories of motivation, theorizing on the PSE response process, and recent advancements in Thurstonian IRT modeling of choice data. To assess the model's capability to explain the internal consistency paradox, I first fitted the model to archival data (Gurin, Veroff, & Feld, 1957) and then simulated data based on bias-corrected model estimates from the real data. Simulation results revealed that the average squared correlation reliability for the motives in the Thurstonian IRT model was .74 and that Cronbach's alpha values were similar to those in the real data, supporting the value of extant evidence from motivational research using PSE motive measures. (c) 2014 APA, all rights reserved.
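For reference, Cronbach's alpha, the internal consistency index at the heart of the paradox above, is straightforward to compute from an item score matrix. The PSE-style scores below (rows = respondents, columns = pictures) are fabricated for illustration.

```python
import numpy as np

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(totals)),
# for a k-item score matrix with one row per respondent.

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

scores = [[2, 1, 3, 2],    # made-up motive scores for 6 people x 4 pictures
          [0, 1, 1, 0],
          [3, 2, 2, 3],
          [1, 0, 1, 1],
          [2, 2, 3, 2],
          [0, 0, 1, 1]]
print(round(cronbach_alpha(scores), 3))  # 0.917
```

The paradox the paper addresses is that real PSE data yield far lower alpha values than this toy matrix, yet the Thurstonian IRT reliability of the underlying motives can still be high.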

  2. Self-Consistent Approach to Global Charge Neutrality in Electrokinetics: A Surface Potential Trap Model

    Directory of Open Access Journals (Sweden)

    Li Wan

    2014-03-01

    In this work, we treat the Poisson-Nernst-Planck (PNP) equations as the basis for a consistent framework of the electrokinetic effects. The static limit of the PNP equations is shown to be the charge-conserving Poisson-Boltzmann (CCPB) equation, with guaranteed charge neutrality within the computational domain. We propose a surface potential trap model that attributes an energy cost to the interfacial charge dissociation. In conjunction with the CCPB, the surface potential trap can cause a surface-specific adsorbed charge layer σ. By defining a chemical potential μ that arises from the charge neutrality constraint, a reformulated CCPB can be reduced to the form of the Poisson-Boltzmann equation, whose prediction of the Debye screening layer profile is in excellent agreement with that of the Poisson-Boltzmann equation when the channel width is much larger than the Debye length. However, important differences emerge when the channel width is small, so the Debye screening layers from the opposite sides of the channel overlap with each other. In particular, the theory automatically yields a variation of σ that is generally known as the "charge regulation" behavior, attendant with predictions of force variation as a function of nanoscale separation between two charged surfaces that are in good agreement with the experiments, with no adjustable or additional parameters. We give a generalized definition of the ζ potential that reflects the strength of the electrokinetic effect; its variations with the concentration of surface-specific and surface-nonspecific salt ions are shown to be in good agreement with the experiments. To delineate the behavior of the electro-osmotic (EO) effect, the coupled PNP and Navier-Stokes equations are solved numerically under an applied electric field tangential to the fluid-solid interface. The EO effect is shown to exhibit an intrinsic time dependence that is noninertial in its origin. Under a step-function applied
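A small piece of the screening physics referenced above: the Debye length for a symmetric 1:1 electrolyte and the linearized (Debye-Hückel) exponential decay of potential away from a charged wall. Physical constants are standard; the salt concentration and the 50 mV surface potential are illustrative assumptions.

```python
import numpy as np

# Debye length lambda_D = sqrt(eps * kB * T / (2 * e^2 * n0)) for a 1:1
# salt, and the linearized potential profile phi(x) = phi0 * exp(-x/lambda_D).
e = 1.602176634e-19        # elementary charge (C)
kB = 1.380649e-23          # Boltzmann constant (J/K)
NA = 6.02214076e23         # Avogadro constant (1/mol)
eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)
eps_r = 78.5               # relative permittivity of water (approx., 25 C)
T = 298.15                 # temperature (K)

def debye_length(conc_molar):
    """Debye screening length (m) for a 1:1 electrolyte at molarity conc."""
    n0 = conc_molar * 1000.0 * NA          # number density of each species, 1/m^3
    return np.sqrt(eps_r * eps0 * kB * T / (2.0 * e**2 * n0))

lam = debye_length(0.001)                  # 1 mM salt
x = np.linspace(0.0, 5.0 * lam, 6)
phi = 0.05 * np.exp(-x / lam)              # decay from an assumed 50 mV wall
print(f"Debye length at 1 mM: {lam * 1e9:.1f} nm")
```

This linearized profile is what the full CCPB treatment reduces to in wide channels; the interesting regime in the paper is channels narrow enough that the screening layers from the two walls overlap.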

  3. Robust Visual Tracking Via Consistent Low-Rank Sparse Learning

    KAUST Repository

    Zhang, Tianzhu

    2014-06-19

    Object tracking is the process of determining the states of a target in consecutive video frames based on properties of motion and appearance consistency. In this paper, we propose a consistent low-rank sparse tracker (CLRST) that builds upon the particle filter framework for tracking. By exploiting temporal consistency, the proposed CLRST algorithm adaptively prunes and selects candidate particles. By using linear sparse combinations of dictionary templates, the proposed method learns the sparse representations of image regions corresponding to candidate particles jointly by exploiting the underlying low-rank constraints. In addition, the proposed CLRST algorithm is computationally attractive since temporal consistency property helps prune particles and the low-rank minimization problem for learning joint sparse representations can be efficiently solved by a sequence of closed form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25 challenging image sequences. Experimental results show that the CLRST algorithm performs favorably against state-of-the-art tracking methods in terms of accuracy and execution time.

  4. Best in show but not best shape: a photographic assessment of show dog body condition.

    Science.gov (United States)

    Such, Z R; German, A J

    2015-08-01

    Previous studies suggest that owners often wrongly perceive overweight dogs to be in normal condition. The body shape of dogs attending shows might influence owners' perceptions, with online images of overweight show winners having a negative effect. This was an observational in silico study of canine body condition. 14 obese-prone breeds and 14 matched non-obese-prone breeds were first selected, and one operator then used an online search engine to identify 40 images, per breed, of dogs that had appeared at a major national UK show (Crufts). After images were anonymised and coded, a second observer subjectively assessed body condition, in a single sitting, using a previously validated method. Of 1120 photographs initially identified, 960 were suitable for assessing body condition, with all unsuitable images being from longhaired breeds. None of the dogs (0 per cent) were underweight, 708 (74 per cent) were in ideal condition and 252 (26 per cent) were overweight. Pugs, basset hounds and Labrador retrievers were most likely to be overweight, while standard poodles, Rhodesian ridgebacks, Hungarian vizslas and Dobermanns were least likely to be overweight. Given the proportion of show dogs from some breeds that are overweight, breed standards should be redefined to be consistent with a dog in optimal body condition. British Veterinary Association.

  5. The speed of memory errors shows the influence of misleading information: Testing the diffusion model and discrete-state models.

    Science.gov (United States)

    Starns, Jeffrey J; Dubé, Chad; Frelinger, Matthew E

    2018-05-01

    In this report, we evaluate single-item and forced-choice recognition memory for the same items and use the resulting accuracy and reaction time data to test the predictions of discrete-state and continuous models. For the single-item trials, participants saw a word and indicated whether or not it was studied on a previous list. The forced-choice trials had one studied and one non-studied word that both appeared in the earlier single-item trials and both received the same response. Thus, forced-choice trials always had one word with a previous correct response and one with a previous error. Participants were asked to select the studied word regardless of whether they previously called both words "studied" or "not studied." The diffusion model predicts that forced-choice accuracy should be lower when the word with a previous error had a fast versus a slow single-item RT, because fast errors are associated with more compelling misleading memory retrieval. The two-high-threshold (2HT) model does not share this prediction because all errors are guesses, so error RT is not related to memory strength. A low-threshold version of the discrete state approach predicts an effect similar to the diffusion model, because errors are a mixture of responses based on misleading retrieval and guesses, and the guesses should tend to be slower. Results showed that faster single-trial errors were associated with lower forced-choice accuracy, as predicted by the diffusion and low-threshold models. Copyright © 2018 Elsevier Inc. All rights reserved.
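A generic two-boundary diffusion simulation of the kind underlying the predictions above: evidence accumulates with drift v and noise σ until it reaches +a (correct) or −a (error). All parameter values are illustrative, and this is a textbook DDM sketch, not the fitted model from the study.

```python
import numpy as np

# Euler-Maruyama simulation of a drift-diffusion decision: x starts at 0
# and steps with mean v*dt and SD sigma*sqrt(dt) until |x| reaches the
# boundary a. Upper boundary = correct response, lower = error.
rng = np.random.default_rng(42)

def ddm_trial(v=1.0, a=1.0, sigma=1.0, dt=0.001):
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x >= a, t           # (hit upper boundary?, decision time in s)

trials = [ddm_trial() for _ in range(1000)]
correct = np.array([c for c, _ in trials])
rts = np.array([t for _, t in trials])
acc = correct.mean()
print(f"accuracy={acc:.3f}  mean correct RT={rts[correct].mean():.3f}s  "
      f"mean error RT={rts[~correct].mean():.3f}s")
```

In this bare-bones version error and correct RT distributions coincide; the fast-error pattern the paper tests emerges once trial-to-trial variability in starting point (or drift, for slow errors) is added, which is the usual extension of this sketch.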

  6. A self-consistent model of a thermally balanced quiescent prominence in magnetostatic equilibrium in a uniform gravitational field

    International Nuclear Information System (INIS)

    Lerche, I.; Low, B.C.

    1977-01-01

    A theoretical model of quiescent prominences in the form of an infinite vertical sheet is presented. Self-consistent solutions are obtained by integrating simultaneously the set of nonlinear equations of magnetostatic equilibrium and thermal balance. The basic features of the models are: (1) The prominence matter is confined to a sheet and supported against gravity by a bowed magnetic field. (2) The thermal flux is channelled along magnetic field lines. (3) The thermal flux is everywhere balanced by Low's (1975) hypothetical heat sink, which is proportional to the local density. (4) A constant component of the magnetic field along the length of the prominence shields the cool plasma from the hot surroundings. It is assumed that the prominence plasma emits more radiation than it absorbs from the radiation fields of the photosphere, chromosphere and corona, and the above hypothetical heat sink is interpreted to represent the amount of radiative loss that must be balanced by a nonradiative energy input. Using a central density and temperature of 10^11 particles cm^-3 and 5000 K respectively, a magnetic field strength between 2 and 10 gauss and a thermal conductivity that varies linearly with temperature, the physical properties implied by the model are discussed. The analytic treatment can also be carried out for a class of more complex thermal conductivities. These models provide a useful starting point for investigating the combined requirements of magnetostatic equilibrium and thermal balance in the quiescent prominence. (Auth.)
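    In conventional notation (not taken verbatim from the paper), the two coupled conditions such a model must satisfy — force balance with a uniform gravity field and field-aligned conduction balancing a density-proportional sink — can be written schematically as:

    ```latex
    % Magnetostatic equilibrium (Gaussian units), uniform gravity:
    \nabla p = \rho\,\mathbf{g} + \frac{1}{4\pi}\,(\nabla\times\mathbf{B})\times\mathbf{B},
    \qquad \mathbf{g} = -g\,\hat{\mathbf{z}},
    % Thermal balance: conduction channelled along field lines against
    % Low's hypothetical heat sink, proportional to the local density:
    \nabla\cdot\bigl[\kappa_{\parallel}\,\hat{\mathbf{b}}\,(\hat{\mathbf{b}}\cdot\nabla T)\bigr]
      = \Lambda\,\rho,
    \qquad \hat{\mathbf{b}} = \mathbf{B}/|\mathbf{B}|.
    ```

    Here the sink coefficient Λ and the parallel conductivity κ∥ are generic placeholders; the abstract specifies only that the sink scales with density and that κ varies linearly with temperature.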

  7. In situ neutron diffraction and Elastic–Plastic Self-Consistent polycrystal modeling of HT-9

    International Nuclear Information System (INIS)

    Clausen, B.; Brown, D.W.; Bourke, M.A.M.; Saleh, T.A.; Maloy, S.A.

    2012-01-01

    Qualifying materials for use in reactors with fluences greater than 200 dpa (displacements per atom) requires development of advanced alloys and irradiations in fast reactors to test these alloys. Research into the mechanical behavior of these materials under reactor conditions is ongoing. In order to probe changes in deformation mechanisms due to radiation in these materials, samples of HT-9 were tested in tension in situ on the SMARTS instrument at Los Alamos Neutron Science Center. Experimental results, confirmed with modeling, show significant load sharing between the carbides and parent phase of the steel beyond yield, displaying the critical role of carbides during deformation, along with basic texture development.

  8. In situ neutron diffraction and Elastic-Plastic Self-Consistent polycrystal modeling of HT-9

    Energy Technology Data Exchange (ETDEWEB)

    Clausen, B., E-mail: clausen@lanl.gov [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Brown, D.W.; Bourke, M.A.M.; Saleh, T.A.; Maloy, S.A. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

    2012-06-15

    Qualifying materials for use in reactors with fluences greater than 200 dpa (displacements per atom) requires development of advanced alloys and irradiations in fast reactors to test these alloys. Research into the mechanical behavior of these materials under reactor conditions is ongoing. In order to probe changes in deformation mechanisms due to radiation in these materials, samples of HT-9 were tested in tension in situ on the SMARTS instrument at Los Alamos Neutron Science Center. Experimental results, confirmed with modeling, show significant load sharing between the carbides and parent phase of the steel beyond yield, displaying the critical role of carbides during deformation, along with basic texture development.

  9. Effective Field Theories and the Role of Consistency in Theory Choice

    CERN Document Server

    Wells, James D

    2012-01-01

    Promoting a theory with a finite number of terms into an effective field theory with an infinite number of terms worsens simplicity, predictability, falsifiability, and other attributes often favored in theory choice. However, the importance of these attributes pales in comparison with consistency, both observational and mathematical consistency, which propels the effective theory to be superior to its simpler truncated version of finite terms, whether that theory be renormalizable (e.g., Standard Model of particle physics) or nonrenormalizable (e.g., gravity). Some implications for the Large Hadron Collider and beyond are discussed, including comments on how directly acknowledging the preeminence of consistency can affect future theory work.

  10. Small GSK-3 Inhibitor Shows Efficacy in a Motor Neuron Disease Murine Model Modulating Autophagy.

    Directory of Open Access Journals (Sweden)

    Estefanía de Munck

    Full Text Available Amyotrophic lateral sclerosis (ALS) is a progressive motor neuron degenerative disease that has no effective treatment to date. Drug discovery has been hampered by the lack of knowledge of its molecular etiology, together with the limited animal models available for research. Recently, a motor neuron disease animal model has been developed using β-N-methylamino-L-alanine (L-BMAA), a neurotoxic amino acid linked to the appearance of ALS. In the present work, the neuroprotective role of VP2.51, a small heterocyclic GSK-3 inhibitor, is analysed in this novel murine model, together with an analysis of autophagy. Daily administration of VP2.51 for two weeks, starting the first day after L-BMAA treatment, leads to total recovery of neurological symptoms and prevents the activation of autophagic processes in rats. These results show that the L-BMAA murine model can be used to test the efficacy of new drugs. In addition, the results confirm the therapeutic potential of GSK-3 inhibitors, and especially VP2.51, for future disease-modifying treatment of motor neuron disorders such as ALS.

  11. Internal consistency of the CHAMPS physical activity questionnaire for Spanish speaking older adults.

    Science.gov (United States)

    Rosario, Martín G; Vázquez, Jenniffer M; Cruz, Wanda I; Ortiz, Alexis

    2008-09-01

    The Community Healthy Activities Model Program for Seniors (CHAMPS) is a physical activity monitoring questionnaire for people between 65 and 90 years old. This questionnaire has previously been translated into Spanish for use in the Latin American population. The aims were to adapt the Spanish version of the CHAMPS questionnaire to Puerto Rico and assess its internal consistency. An external review committee adapted the existing Spanish version of the CHAMPS for use in the Puerto Rican population. Three older adults participated in a second phase with the purpose of training the research team. After the second phase, 35 older adults participated in a third content adaptation phase, during which the preliminary Spanish version for Puerto Rico of the CHAMPS was given to the participants to assess clarity, vocabulary and understandability. Each of these participants was interviewed to obtain feedback, and the external review committee then prepared a final Spanish version of the CHAMPS for Puerto Rico. The final version was administered to 15 older adults (76 +/- 6.5 years) to assess internal consistency using Cronbach's alpha. The questionnaire showed a strong internal consistency of 0.76, and the total time to answer it was 17.4 minutes. The Spanish version of the CHAMPS questionnaire for Puerto Rico appears to be an easy-to-administer and consistent measurement tool for assessing physical activity in older adults.
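    Cronbach's alpha, the reliability statistic reported here, is straightforward to compute from an items × respondents score matrix. A minimal sketch (the toy scores below are purely illustrative, not the CHAMPS data):

    ```python
    def cronbach_alpha(items):
        """Cronbach's alpha for a list of k items, each a list of n respondent scores:
        alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
        k = len(items)
        n = len(items[0])

        def var(xs):  # population variance, as in the usual alpha formula
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)

        totals = [sum(item[j] for item in items) for j in range(n)]
        return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

    # Toy data: 3 items scored 1-5 by 5 respondents (hypothetical values)
    scores = [
        [4, 3, 5, 2, 4],
        [4, 2, 5, 3, 4],
        [3, 3, 4, 2, 5],
    ]
    alpha = cronbach_alpha(scores)  # higher when items covary strongly
    ```

    A value near 0.76, as reported for the Puerto Rico CHAMPS, is conventionally read as acceptable-to-good internal consistency for a multi-item scale.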

  12. Plot showing ATLAS limits on Standard Model Higgs production in the mass range 100-600 GeV

    CERN Multimedia

    ATLAS Collaboration

    2011-01-01

    The combined upper limit on the Standard Model Higgs boson production cross section divided by the Standard Model expectation as a function of mH is indicated by the solid line. This is a 95% CL limit using the CLs method in the entire mass range. The dotted line shows the median expected limit in the absence of a signal and the green and yellow bands reflect the corresponding 68% and 95% expected intervals.
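    The CLs criterion quoted for these limits is, in standard LHC usage, built from the p-values of the signal-plus-background and background-only hypotheses (a textbook definition, not taken from this plot note):

    ```latex
    \mathrm{CL}_s \;=\; \frac{\mathrm{CL}_{s+b}}{\mathrm{CL}_b}
    \;=\; \frac{p_{s+b}}{1 - p_b},
    ```

    and a mass hypothesis is excluded at 95% CL when CLs < 0.05. Dividing by 1 − p_b protects against excluding signals to which the analysis has little sensitivity.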

  13. Plot showing ATLAS limits on Standard Model Higgs production in the mass range 110-150 GeV

    CERN Multimedia

    ATLAS Collaboration

    2011-01-01

    The combined upper limit on the Standard Model Higgs boson production cross section divided by the Standard Model expectation as a function of mH is indicated by the solid line. This is a 95% CL limit using the CLs method in the low mass range. The dotted line shows the median expected limit in the absence of a signal and the green and yellow bands reflect the corresponding 68% and 95% expected intervals.

  14. Vibrational multiconfiguration self-consistent field theory: implementation and test calculations.

    Science.gov (United States)

    Heislbetz, Sandra; Rauhut, Guntram

    2010-03-28

    A state-specific vibrational multiconfiguration self-consistent field (VMCSCF) approach based on a multimode expansion of the potential energy surface is presented for the accurate calculation of anharmonic vibrational spectra. As a special case of this general approach, vibrational complete active space self-consistent field calculations are discussed. The latter method shows better convergence than the general VMCSCF approach and must be considered the preferred choice within the multiconfigurational framework. Benchmark calculations are provided for a small set of test molecules.

  15. OGLE-2013-SN-079: A LONELY SUPERNOVA CONSISTENT WITH A HELIUM SHELL DETONATION

    International Nuclear Information System (INIS)

    Inserra, C.; Sim, S. A.; Smartt, S. J.; Nicholl, M.; Jerkstrand, A.; Chen, T.-W.; Wyrzykowski, L.; Fraser, M.; Blagorodnova, N.; Campbell, H.; Shen, K. J.; Gal-Yam, A.; Howell, D. A.; Valenti, S.; Maguire, K.; Mazzali, P.; Bersier, D.; Taubenberger, S.; Benitez-Herrera, S.; Elias-Rosa, N.

    2015-01-01

    We present observational data for a peculiar supernova discovered by the OGLE-IV survey and followed by the Public ESO Spectroscopic Survey for Transient Objects. The inferred redshift of z = 0.07 implies an absolute magnitude in the rest-frame I-band of M_I ∼ –17.6 mag. This places it in the luminosity range between normal Type Ia SNe and novae. Optical and near infrared spectroscopy reveal mostly Ti and Ca lines, and an unusually red color arising from strong depression of flux at rest wavelengths <5000 Å. To date, this is the only reported SN showing Ti-dominated spectra. The data are broadly consistent with existing models for the pure detonation of a helium shell around a low-mass CO white dwarf and ''double-detonation'' models that include a secondary detonation of a CO core following a primary detonation in an overlying helium shell.

  16. OGLE-2013-SN-079: A LONELY SUPERNOVA CONSISTENT WITH A HELIUM SHELL DETONATION

    Energy Technology Data Exchange (ETDEWEB)

    Inserra, C.; Sim, S. A.; Smartt, S. J.; Nicholl, M.; Jerkstrand, A.; Chen, T.-W. [Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast, Belfast BT7 1NN (United Kingdom); Wyrzykowski, L. [University of Warsaw, Astronomical Observatory, Al. Ujazdowskie 400-478 Warszawa (Poland); Fraser, M.; Blagorodnova, N.; Campbell, H. [Institute of Astronomy, University of Cambridge, Madingley Road, CB3 0HA Cambridge (United Kingdom); Shen, K. J. [Department of Astronomy and Theoretical Astrophysics Center, University of California, Berkeley, CA 94720 (United States); Gal-Yam, A. [Benoziyo Center for Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel); Howell, D. A.; Valenti, S. [Las Cumbres Observatory Global Telescope Network, 6740 Cortona Drive, Suite 102 Goleta, CA 93117 (United States); Maguire, K. [European Southern Observatory for Astronomical Research in the Southern Hemisphere (ESO), Karl-Schwarzschild-Str. 2, 85748 Garching b. Munchen (Germany); Mazzali, P.; Bersier, D. [Astrophysics Research Institute, Liverpool John Moores University, Liverpool (United Kingdom); Taubenberger, S.; Benitez-Herrera, S. [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany); Elias-Rosa, N., E-mail: c.inserra@qub.ac.uk [INAF - Osservatorio Astronomico di Padova, Vicolo dell' Osservatorio 5, I-35122 Padova (Italy); and others

    2015-01-20

    We present observational data for a peculiar supernova discovered by the OGLE-IV survey and followed by the Public ESO Spectroscopic Survey for Transient Objects. The inferred redshift of z = 0.07 implies an absolute magnitude in the rest-frame I-band of M_I ∼ –17.6 mag. This places it in the luminosity range between normal Type Ia SNe and novae. Optical and near infrared spectroscopy reveal mostly Ti and Ca lines, and an unusually red color arising from strong depression of flux at rest wavelengths <5000 Å. To date, this is the only reported SN showing Ti-dominated spectra. The data are broadly consistent with existing models for the pure detonation of a helium shell around a low-mass CO white dwarf and ''double-detonation'' models that include a secondary detonation of a CO core following a primary detonation in an overlying helium shell.

  17. Renormalization in self-consistent approximation schemes at finite temperature I: theory

    International Nuclear Information System (INIS)

    Hees, H. van; Knoll, J.

    2001-07-01

    Within finite temperature field theory, we show that truncated non-perturbative self-consistent Dyson resummation schemes can be renormalized with local counter-terms defined at the vacuum level. The requirements are that the underlying theory is renormalizable and that the self-consistent scheme follows Baym's Φ-derivable concept. The scheme generates both the renormalized self-consistent equations of motion and the closed equations for the infinite set of counter-terms. At the same time the corresponding 2PI generating functional and the thermodynamic potential can be renormalized consistently with the equations of motion. This guarantees that the standard Φ-derivable properties, such as thermodynamic consistency and exact conservation laws, also hold for the renormalized approximation scheme. The proof uses the techniques of BPHZ renormalization to cope with the explicit and the hidden overlapping vacuum divergences. (orig.)
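    The defining relations of a Φ-derivable (2PI) scheme referred to here are, in standard notation: the self-energy is generated functionally from the 2PI functional Φ[G], and the propagator is then determined self-consistently through the Dyson equation,

    ```latex
    \Sigma[G] \;=\; \frac{\delta \Phi[G]}{\delta G},
    \qquad
    G^{-1} \;=\; G_0^{-1} \;-\; \Sigma[G],
    ```

    where G_0 is the free propagator. It is this closed, self-consistent structure that both produces the conservation laws mentioned above and generates the overlapping divergences the renormalization proof must control.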

  18. On Consistency Test Method of Expert Opinion in Ecological Security Assessment.

    Science.gov (United States)

    Gong, Zaiwu; Wang, Lihong

    2017-09-04

    Ecological security assessment is of great value for proactive design and early warning in human security management. In a comprehensive evaluation of regional ecological security with the participation of experts, each expert's individual judgment level and ability, and the consistency of the experts' collective opinion, strongly influence the evaluation result. This paper studies consistency and consensus measures based on the multiplicative and additive consistency properties of fuzzy preference relations (FPRs). We first propose optimization methods to obtain the optimal multiplicatively consistent and additively consistent FPRs of individual and group judgments, respectively. We then put forward a consistency measure computed as the distance between the original individual judgment and the optimal individual estimation, along with a consensus measure computed as the distance between the original collective judgment and the optimal collective estimation. Finally, we present a case study on ecological security for five cities. The results show that the optimal FPRs are helpful in measuring the consistency degree of individual judgments and the consensus degree of collective judgments.
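    For the additive-consistency case, the flavor of such a distance-based measure can be sketched as follows. This uses a simple averaging estimate of the consistent relation rather than the paper's optimization models, so it is an illustrative stand-in for the authors' method, with made-up judgment data:

    ```python
    def additive_estimate(R):
        """Additively consistent estimate of an n x n fuzzy preference relation:
        r*_ij = mean over k of (r_ik + r_kj) - 0.5. If R is already additively
        consistent (r_ij = r_ik + r_kj - 0.5 for all k), the estimate equals R."""
        n = len(R)
        return [[sum(R[i][k] + R[k][j] for k in range(n)) / n - 0.5
                 for j in range(n)] for i in range(n)]

    def consistency_degree(R):
        """1 minus the mean absolute deviation between R and its consistent
        estimate; 1.0 means perfectly consistent judgments."""
        n = len(R)
        E = additive_estimate(R)
        dev = sum(abs(R[i][j] - E[i][j]) for i in range(n) for j in range(n))
        return 1 - dev / (n * n)

    # Hypothetical 3x3 fuzzy preference relation (reciprocal: r_ij + r_ji = 1),
    # constructed to be exactly additively consistent.
    R = [[0.5, 0.7, 0.9],
         [0.3, 0.5, 0.7],
         [0.1, 0.3, 0.5]]
    degree = consistency_degree(R)

    # Perturb one judgment pair; the measured consistency drops.
    R2 = [row[:] for row in R]
    R2[0][2], R2[2][0] = 0.6, 0.4
    degree2 = consistency_degree(R2)
    ```

    A consensus measure works the same way at the group level, replacing the individual relation with the aggregated collective judgment and its optimal consistent estimate.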

  19. Factorial Validity and Internal Consistency of the Motivational Climate in Physical Education Scale

    Directory of Open Access Journals (Sweden)

    Markus Soini

    2014-03-01

    Full Text Available The aim of the study was to examine the construct validity and internal consistency of the Motivational Climate in Physical Education Scale (MCPES). A key element of the development process of the scale was establishing a theoretical framework that integrated the dimensions of task- and ego-involving climates with autonomy- and social-relatedness-supporting climates. These constructs were adopted from the self-determination and achievement goal theories. A sample of Finnish Grade 9 students, comprising 2,594 girls and 1,803 boys, completed the 18-item MCPES during one physical education class. The results demonstrated that participants had the highest mean in the task-involving climate and the lowest in the autonomy and ego-involving climates. Additionally, the autonomy, social relatedness, and task-involving climates were significantly and strongly correlated with each other, whereas the ego-involving climate had low or negligible correlations with the other climate dimensions. The construct validity of the MCPES was analyzed using confirmatory factor analysis. The statistical fit of the four-factor model, consisting of motivational climate factors supporting perceived autonomy, social relatedness, task-involvement, and ego-involvement, was satisfactory. The reliability analysis showed acceptable internal consistencies for all four dimensions. The Motivational Climate in Physical Education Scale can be considered a psychometrically valid tool for measuring motivational climate among Finnish Grade 9 students.

  20. Toward a consistent random phase approximation based on the relativistic Hartree approximation

    International Nuclear Information System (INIS)

    Price, C.E.; Rost, E.; Shepard, J.R.; McNeil, J.A.

    1992-01-01

    We examine the random phase approximation (RPA) based on a relativistic Hartree approximation description for nuclear ground states. This model includes contributions from the negative energy sea at the one-loop level. We emphasize consistency between the treatment of the ground state and the RPA. This consistency is important in the description of low-lying collective levels but less important for the longitudinal (e,e') quasielastic response. We also study the effect of imposing a three-momentum cutoff on negative energy sea contributions. A cutoff of twice the nucleon mass improves agreement with observed spin-orbit splittings in nuclei compared to the standard infinite cutoff results, an effect traceable to the fact that imposing the cutoff reduces m*/m. Consistency is much more important than the cutoff in the description of low-lying collective levels. The cutoff model also provides excellent agreement with quasielastic (e,e') data.