WorldWideScience

Sample records for theoretical quantitative model

  1. A theoretical model on surface electronic behavior: Strain effect

    International Nuclear Information System (INIS)

    Qin, W.G.; Shaw, D.

    2009-01-01

Deformation from mechanical loading can affect surface electronic behavior. Surface deformation and electronic behavior can be quantified by strain and work function, respectively, and their experimental relationship can be readily determined with the Kelvin probe technique. The theoretical correlation between work function and strain, however, has remained unclear. This study reports, for the first time, a theoretical exploration of the effect of strain on work function. We propose a simple electrostatic action model for the effect of a single dislocation on the work function of a one-dimensional lattice, and extend it to the more complex case of a finite dislocation density. On this basis, we establish a theoretical correlation between work function and strain.

  2. Quantitative Myocardial Perfusion with Dynamic Contrast-Enhanced Imaging in MRI and CT: Theoretical Models and Current Implementation

    Directory of Open Access Journals (Sweden)

    G. J. Pelgrim

    2016-01-01

Technological advances in magnetic resonance imaging (MRI) and computed tomography (CT), including higher spatial and temporal resolution, have made absolute myocardial perfusion quantification a realistic prospect, something previously achievable only with positron emission tomography (PET). This could facilitate integration of myocardial perfusion biomarkers into the current workup for coronary artery disease (CAD), as MRI and CT systems are more widely available than PET scanners; cardiac PET scanning remains expensive and is restricted by the requirement of a nearby cyclotron. Clinical evidence is needed to demonstrate that MRI and CT have accuracy similar to PET for myocardial perfusion quantification. However, lack of standardization of acquisition protocols and tracer kinetic model selection complicates comparison between different studies and modalities. The aim of this overview is to provide insight into the different tracer kinetic models for quantitative myocardial perfusion analysis and to address typical implementation issues in MRI and CT. We compare different models on the basis of their theoretical derivations and present the respective consequences for MRI and CT acquisition parameters, highlighting the interplay between tracer kinetic modeling and acquisition settings.

  3. Theoretical aspects of spatial-temporal modeling

    CERN Document Server

    Matsui, Tomoko

    2015-01-01

This book provides a modern introductory tutorial on specialized theoretical aspects of spatial and temporal modeling. The areas covered span a range of topics that reflect the diversity of this research domain across a number of quantitative disciplines. For instance, the first chapter provides up-to-date coverage of particle association measures that underpin the theoretical properties of recently developed random-set methods in space and time, otherwise known as the probability hypothesis density (PHD) filter framework. The second chapter gives an overview of recent advances in Monte Carlo methods for Bayesian filtering in high-dimensional spaces; in particular, it explains how one may extend classical sequential Monte Carlo methods for filtering and static inference problems to high dimensions and big-data applications. The third chapter presents an overview of generalized families of processes that extend the class of Gaussian process models to heavy-tailed families known as alph...

  4. Quantitative Evaluation of Performance in Interventional Neuroradiology: An Integrated Curriculum Featuring Theoretical and Practical Challenges.

    Directory of Open Access Journals (Sweden)

    Marielle Ernst

We sought to develop a standardized curriculum capable of assessing key competencies in Interventional Neuroradiology through models and simulators in an objective, quantitative, and efficient way. In this evaluation we analyzed the associations between the practical experience, theoretical knowledge, and skills-lab performance of interventionalists. We evaluated the endovascular skills of 26 participants of the Advanced Course in Endovascular Interventional Neuroradiology of the European Society of Neuroradiology with a set of three tasks (aneurysm coiling and thrombectomy in a virtual simulator, and placement of an intra-aneurysmal flow disruptor in a flow model). Practical experience was assessed by a survey, and participants completed a written and oral examination to evaluate theoretical knowledge. Bivariate and multivariate analyses were performed. In multivariate analysis, knowledge of materials and techniques in Interventional Neuroradiology was moderately associated with skills in aneurysm coiling and thrombectomy. Experience in mechanical thrombectomy was moderately associated with thrombectomy skills, while age was negatively associated with thrombectomy skills. We found no significant association between age, sex, or work experience and skills in aneurysm coiling. Our study gives an example of what an integrated curriculum for reasonable and cost-effective assessment of the key competences of an interventional neuroradiologist could look like. In addition to traditional assessment of theoretical knowledge, practical skills are measured with endovascular simulators, yielding objective, quantitative, and constructive data for evaluating both the current performance status of participants and the evolution of their technical competency over time.

  5. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics, but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  6. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...

  7. A quantitative phase field model for hydride precipitation in zirconium alloys: Part I. Development of quantitative free energy functional

    International Nuclear Information System (INIS)

    Shi, San-Qiang; Xiao, Zhihua

    2015-01-01

A temperature dependent, quantitative free energy functional was developed for modeling hydride precipitation in zirconium alloys within a phase field scheme. The model takes into account the crystallographic variants of hydrides, the interfacial energy between hydride and matrix, the interfacial energy between hydrides, elastoplastic deformation during hydride precipitation, and the interaction with externally applied stress. The model is fully quantitative in real time and real length scale, and simulation results were compared with the limited experimental data available in the literature, with reasonable agreement. The work calls for experimental and/or theoretical investigation of some key material properties that are not yet available in the literature.

  8. A study of insider threat in nuclear security analysis using game theoretic modeling

    International Nuclear Information System (INIS)

    Kim, Kyo-Nam; Yim, Man-Sung; Schneider, Erich

    2017-01-01

Highlights: • Implications of an insider threat in nuclear security were quantitatively analyzed. • The analysis was based on a hypothetical nuclear facility and used a game theoretic approach. • Through a sensitivity analysis, vulnerable paths and important parameters were identified. • The methodology can be utilized to prioritize the implementation of PPS improvements in a facility. - Abstract: An insider poses a greater threat to the security system of a nuclear power plant (NPP) because of the ability to exploit access rights and knowledge of the facility to bypass dedicated security measures. If an insider colludes with an external terrorist group, this poses a key threat to the safety-security interface. However, despite the importance of the insider threat, few studies have quantitatively analyzed it. This research examines a quantitative framework for investigating the implications of the insider threat, taking a novel approach. Conventional tools for assessing security threats to nuclear facilities focus on a limited number of attack pathways; these are defined by the modeler and are based on simple probabilistic calculations, and they capture neither the adversary's intentions nor the adversary's response and adaptation to defensive investments. As an alternative way of performing physical protection analysis, this research explores game theoretic modeling of Physical Protection Systems (PPS) analysis that incorporates the implications of an insider threat, addressing the issues of intentionality and interaction. The game theoretic approach has the advantage of modeling an intelligent adversary and insider who intends to do harm and has complete knowledge of the facility. Through a quantitative assessment and sensitivity analysis, vulnerable but important parameters in this model were identified, making it possible to determine which insider threat is most important.
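The defender-attacker reasoning described in this abstract can be sketched as a small matrix game. This is purely illustrative: the payoff numbers, the three paths, and the best-response solver below are assumptions for the example, not the paper's actual facility model.

```python
# Hypothetical illustration (not the paper's model): the defender chooses which
# physical-protection path to reinforce; the attacker, aided by an insider with
# complete facility knowledge, best-responds by attacking the weakest path.
# Entries are assumed attacker success probabilities.

# Rows: defender reinforces path 0/1/2. Columns: attacker targets path 0/1/2.
success_prob = [
    [0.10, 0.60, 0.50],   # defender reinforces path 0
    [0.55, 0.15, 0.50],   # defender reinforces path 1
    [0.55, 0.60, 0.20],   # defender reinforces path 2
]

def best_responses(matrix):
    """Attacker best-responds to each defender choice; the defender picks the
    reinforcement that minimizes the attacker's best achievable success."""
    attacker_best = [max(row) for row in matrix]             # per defender choice
    defender_choice = min(range(len(matrix)), key=lambda i: attacker_best[i])
    return defender_choice, attacker_best[defender_choice]

choice, value = best_responses(success_prob)
```

A sensitivity analysis in this framing amounts to perturbing individual entries of the matrix and observing when the defender's optimal reinforcement (and the game value) changes.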

  9. Theoretical Compartment Modeling of DCE-MRI Data Based on the Transport across Physiological Barriers in the Brain

    Directory of Open Access Journals (Sweden)

    Laura Fanea

    2012-01-01

Neurological disorders are major causes of lost years of healthy life and of mortality worldwide, and their quantitative, interdisciplinary in vivo evaluation needs to be developed. Compartment modeling (CM) of brain data acquired in vivo using magnetic resonance imaging techniques with clinically available contrast agents can be performed to quantitatively assess brain perfusion. Transport of 1H spins in water molecules across physiological compartmental brain barriers in three different pools was mathematically modeled and theoretically evaluated in this paper, and the corresponding theoretical compartment modeling of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data was analyzed. The pools considered were blood, tissue, and cerebrospinal fluid (CSF). The blood and CSF data were modeled assuming continuous flow of the 1H spins in these pools; tissue data were modeled using three CMs. The results show that transport across physiological brain barriers, such as the blood-brain barrier, the extracellular-to-intracellular space barrier, or the blood-CSF barrier, can be evaluated quantitatively. Statistical evaluation of this quantitative information may be performed to assess tissue perfusion, barrier integrity, and CSF flow in vivo in the normal or disease-affected brain, or to assess response to therapy.
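The compartment-modeling idea behind DCE-MRI analysis can be sketched with a single tissue compartment driven by an arterial input function. The rate constants, the input-function shape, and the forward-Euler integration below are assumptions for illustration; the paper's three-pool model is more elaborate.

```python
import math

# Minimal one-compartment sketch (assumed, simplified parameters; not the
# paper's three-pool model): tissue tracer concentration C_t driven by an
# arterial input C_a(t), with transfer constant Ktrans and efflux rate kep:
#   dC_t/dt = Ktrans * C_a(t) - kep * C_t
def simulate_tissue(ktrans=0.25, kep=0.5, dt=0.01, t_end=10.0):
    ct, curve = 0.0, []
    for n in range(int(t_end / dt)):
        t = n * dt
        ca = t * math.exp(-t)                  # assumed gamma-variate-like input
        ct += dt * (ktrans * ca - kep * ct)    # forward Euler step
        curve.append(ct)
    return curve

curve = simulate_tissue()
peak = max(curve)   # enhancement rises, peaks, then washes out
```

Fitting Ktrans and kep to a measured enhancement curve (rather than simulating them forward, as here) is what yields the quantitative perfusion parameters discussed above.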

  10. Toward a Theoretical Model of Decision-Making and Resistance to Change among Higher Education Online Course Designers

    Science.gov (United States)

    Dodd, Bucky J.

    2013-01-01

    Online course design is an emerging practice in higher education, yet few theoretical models currently exist to explain or predict how the diffusion of innovations occurs in this space. This study used a descriptive, quantitative survey research design to examine theoretical relationships between decision-making style and resistance to change…

  11. Qualitative and quantitative combined nonlinear dynamics model and its application in analysis of price, supply–demand ratio and selling rate

    International Nuclear Information System (INIS)

    Zhu, Dingju

    2016-01-01

The qualitative and quantitative combined nonlinear dynamics model proposed in this paper fills a gap in nonlinear dynamics modeling: it allows a qualitative model and a quantitative model to combine so that the strengths of each compensate for the weaknesses of the other. The combined model overcomes the qualitative model's inability to be applied and verified in a quantitative manner, as well as the high cost and long time required to construct and verify a quantitative model, and is therefore more practical and efficient, which is of great significance for nonlinear dynamics. The combined modeling and model-analysis method raised in this paper applies not only to nonlinear dynamics but can also be adopted in the modeling and model analysis of other fields. Additionally, the analytical method for the combined nonlinear dynamics model satisfactorily resolves the problems with the price system's existing nonlinear dynamics model analytical methods. The three-dimensional dynamics model of price, supply-demand ratio, and selling rate established in this paper estimates optimal commodity prices from the model results, thereby providing a theoretical basis for the government's macro-control of prices. This model also offers theoretical guidance on how to enhance people's purchasing power and consumption levels through price regulation, and hence improve living standards.

  12. Theory and Practice in Quantitative Genetics

    DEFF Research Database (Denmark)

    Posthuma, Daniëlle; Beem, A Leo; de Geus, Eco J C

    2003-01-01

With the rapid advances in molecular biology, the near completion of the human genome, the development of appropriate statistical genetic methods, and the availability of the necessary computing power, the identification of quantitative trait loci has now become a realistic prospect for quantitative geneticists. We briefly describe the theoretical biometrical foundations underlying quantitative genetics. These theoretical underpinnings are translated into mathematical equations that allow the assessment of the contribution of observed (using DNA samples) and unobserved (using known genetic relationships) genetic variation to population variance in quantitative traits. Several statistical models for quantitative genetic analyses are described, such as models for the classical twin design, multivariate and longitudinal genetic analyses, extended twin analyses, and linkage and association analyses. For each...

  13. A quantitative dynamic systems model of health-related quality of life among older adults

    Science.gov (United States)

    Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela

    2015-01-01

Health-related quality of life (HRQOL) is a person-centered concept, and its analysis is highly relevant in the aged population, which generally experiences health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in two ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that, given a particular starting point (the first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (the second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the validation analyses show a good fit between empirical and simulated data: no differences were found when comparing empirical and simulated final data for the same subgroup of participants, whereas comparisons between different subgroups resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process, and may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722
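The "trajectory from a starting point" idea in the abstract can be sketched with a toy dynamic systems model. The growth term, decline rate, and all parameter values below are invented for illustration; the authors' actual model specification is not given here.

```python
# Illustrative dynamic-systems sketch (assumed form and parameters, not the
# authors' model): HRQOL h grows toward a capacity K with rate r but is
# pulled down by an assumed age-related decline rate d:
#   h_{t+1} = h_t + r * h_t * (1 - h_t / K) - d * h_t
def trajectory(h0, r=0.08, K=100.0, d=0.02, steps=50):
    h, path = h0, [h0]
    for _ in range(steps):
        h = h + r * h * (1.0 - h / K) - d * h
        path.append(h)
    return path

# From a starting point of 60, the trajectory converges toward the
# equilibrium K * (1 - d / r) = 75 implied by these assumed parameters.
path = trajectory(60.0)
```

Calibration in this spirit means checking that simulated trajectories stay in a plausible range; validation means checking that the endpoint generated from an observed starting point matches the observed second measurement.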

  14. Franchise Business Model: Theoretical Insights

    OpenAIRE

    Levickaitė, Rasa; Reimeris, Ramojus

    2010-01-01

The article is based on a literature review and theoretical insights and deals with the franchise business model. The objective of the paper is to analyse the peculiarities of the franchise business model and the conditions for its development in Lithuania. The aim of the paper is to give an overview of the franchise business model and its environment in the Lithuanian business context, based on international and local theoretical insights. In terms of practical meaning, this article should be re...

  15. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for a lack of theoretical grounding, methodological rigor, and empirical validation, and for ignoring multiple and non-linear paths to maturity. This PhD thesis focuses on addressing these criticisms by incorporating recent developments in configuration theory, in particular the application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and to empirically demonstrate equifinal paths to maturity. Specifically, the thesis proposes methodological guidelines consisting of detailed procedures to systematically apply set-theoretic approaches to maturity model research, and provides demonstrations of their application on three datasets. The thesis is a collection of six research papers written in a sequential manner. The first paper...

  16. Towards a Theoretical Construct for Modelling Smallholders’ Forestland-Use Decisions: What Can We Learn from Agriculture and Forest Economics?

    Directory of Open Access Journals (Sweden)

    Kahlil Baker

    2017-09-01

Academic research on smallholders' forestland-use decisions is regularly addressed in different streams of literature using different theoretical constructs that are individually incomplete. In this article, we propose a theoretical construct for modelling smallholders' forestland-use decisions, intended to guide the operationalization of future models for quantitative analysis. Our construct is inspired by the sub-disciplines of forest and agricultural economics, with the crosscutting theme of how transaction costs drive separability between consumption and production decisions. Our results help explain why the exogenous variables proposed in the existing literature are insufficient to explain smallholders' forestland-use decisions, and provide theoretical context for endogenizing characteristics of the household, farm, and landscape. Smallholders' forestland-use decisions are best understood in an agricultural context of competing uses for household assets and interdependent consumption and production decisions. Forest production strategies range from natural regeneration to intensive management of the forest resource to jointly produce market and non-market values. Owing to transaction costs, decision prices are best represented by shadow prices rather than market prices; shadow prices are shaped by endogenous smallholder-specific preferences for leisure, non-market values, time, risk, and uncertainty. Our proposed construct is intended to provide a theoretical basis to assist modellers in the selection of variables for quantitative analysis.

  17. Graph theoretical model of a sensorimotor connectome in zebrafish.

    Science.gov (United States)

    Stobb, Michael; Peterson, Joshua M; Mazzag, Borbala; Gahtan, Ethan

    2012-01-01

    Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.
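The small-world diagnostics named in this abstract (local clustering and average shortest path length) can be computed directly. The toy four-node graph below is invented for illustration; the actual zebrafish model has 2,616 neurons, and its connections are directed and stochastic.

```python
from collections import deque

# Sketch of the graph metrics used above, on a toy undirected graph
# (a triangle 0-1-2 with a pendant node 3); not the zebrafish connectome.
def clustering(adj, v):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * links / (k * (k - 1))

def avg_path_length(adj):
    """Mean shortest-path length over all connected node pairs, via BFS."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for v, d in dist.items():
            if v != src:
                total += d
                pairs += 1
    return total / pairs

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

A small-world graph shows clustering well above, and path length comparable to, a same-size random graph, which is the comparison the study performs against 100 random instantiations.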

  18. Theoretical model of intravascular paramagnetic tracers effect on tissue relaxation

    DEFF Research Database (Denmark)

    Kjølby, Birgitte Fuglsang; Østergaard, Leif; Kiselev, Valerij G

    2006-01-01

The concentration of MRI tracers cannot be measured directly by MRI and is commonly evaluated indirectly through their relaxation effect. This study develops a comprehensive theoretical model of the transverse relaxation in perfused tissue caused by intravascular tracers. The model takes into account a number of individual compartments. The signal dephasing is simulated in a semianalytical way by embedding Monte Carlo simulations in the framework of analytical theory, yielding a tool for fast, realistic simulation of the change in transverse relaxation. The results indicate ... with bulk blood. The enhancement of relaxation in tissue is due to the contrast in magnetic susceptibility between blood vessels and parenchyma induced by the presence of paramagnetic tracer. Beyond perfusion measurements, the results can be applied to the quantitation of functional MRI and to vessel size...

  19. 10th Colloquium on Theoretical and Quantitative Geography 6-11th September 1997

    Directory of Open Access Journals (Sweden)

    1997-09-01

After Strasbourg (1978), Cambridge (1980), Augsburg (1982), Veldhoven (1985), Bardonecchia (1987), Chantilly (1989), Stockholm (1991), Budapest (1993), and Spa (1995), the 10th Colloquium on Theoretical and Quantitative Geography was held in Rostock, Germany, from 6 to 11 September 1997. The local organizer was Otti Margraf, from Leipzig University. We can hardly convey an idea of the atmosphere which illuminated our pilgrimage to Von Thünen's farm in Tellow, a central place for geographers! But you will...

  20. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set theoretic methods to conceptualize, develop, and empirically derive maturity models, and provide a demonstration...

  1. CO2 laser with modulated losses: Theoretical models and experiments in the chaotic regime

    International Nuclear Information System (INIS)

    Pando L, C.L.; Meucci, R.; Ciofini, M.; Arecchi, F.T.

    1993-04-01

We compare two different theoretical models for a CO2 laser, namely the two-level and four-level models, and show that the second traces the experimental behavior with much better accuracy in the case of chaotic dynamics due to time modulation of the cavity losses. Even though the two-level model provides a qualitative explanation of the chaotic dynamics, only the four-level model assures a quantitative fit. We also show that, at the onset of chaos, the chaotic dynamics is low dimensional and can be described in terms of a noninvertible one-dimensional map.
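The "noninvertible one-dimensional map" description can be illustrated with the logistic map, the textbook example of such a map (the laser's actual return map is of course different). A positive Lyapunov exponent along the orbit signals chaos; a negative one signals a stable periodic orbit.

```python
import math

# Logistic map x_{n+1} = r * x_n * (1 - x_n): a noninvertible 1-D map whose
# Lyapunov exponent (average of log|f'(x)| along the orbit) diagnoses chaos.
def lyapunov(r, x0=0.2, n=20000, discard=1000):
    x, acc, count = x0, 0.0, 0
    for i in range(n):
        x = r * x * (1.0 - x)
        if i >= discard:                        # skip the transient
            acc += math.log(abs(r * (1.0 - 2.0 * x)))
            count += 1
    return acc / count

# r = 3.2 gives a stable period-2 orbit (negative exponent);
# r = 4.0 gives fully developed chaos (positive exponent).
```

Estimating such an exponent from the experimental laser return map is one standard way to confirm that observed irregular output is low-dimensional chaos rather than noise.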

  2. Quantitative fluorescence lifetime spectroscopy in turbid media: comparison of theoretical, experimental and computational methods

    International Nuclear Information System (INIS)

    Vishwanath, Karthik; Mycek, Mary-Ann; Pogue, Brian

    2002-01-01

A Monte Carlo model developed to simulate time-resolved fluorescence propagation in a semi-infinite turbid medium was validated against previously reported theoretical and computational results. Model simulations were compared to experimental measurements of fluorescence spectra and lifetimes on tissue-simulating phantoms for single and dual fibre-optic probe geometries. Experiments and simulations using a single probe revealed that scattering-induced artefacts appeared in fluorescence emission spectra, while fluorescence lifetimes were unchanged. Although fluorescence lifetime measurements are generally more robust to scattering artefacts than measurements of fluorescence spectra, in the dual-probe geometry scattering-induced changes in apparent lifetime were predicted both from diffusion theory and via Monte Carlo simulation, as well as measured experimentally. In all cases, the recovered apparent lifetime increased with increasing scattering and increasing source-detector separation. Diffusion theory consistently underestimated the magnitude of these increases in apparent lifetime (predicting a maximum increase of ∼15%), while Monte Carlo simulations and experiment were closely matched (showing increases as large as 30%). These results indicate that quantitative simulations of time-resolved fluorescence propagation in turbid media will be important for accurate recovery of fluorophore lifetimes in biological spectroscopy and imaging applications.
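The scattering-induced inflation of the apparent lifetime can be sketched with a toy Monte Carlo model. The exponential travel-time distribution and all parameter values below are assumptions for illustration, not the paper's full photon-transport simulation.

```python
import random

# Toy Monte Carlo sketch (assumed parameters): each detected photon's arrival
# time is the fluorescence emission delay (exponential with lifetime tau) plus
# a scattering-dependent travel time. The apparent lifetime recovered from the
# mean arrival time therefore grows with scattering.
def apparent_lifetime(tau_ns=1.0, mean_travel_ns=0.0, n=200000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        emission = rng.expovariate(1.0 / tau_ns)      # fluorescence decay
        travel = (rng.expovariate(1.0 / mean_travel_ns)
                  if mean_travel_ns > 0.0 else 0.0)   # assumed path-time model
        total += emission + travel
    return total / n   # mean arrival time ~ tau + mean travel time

no_scatter = apparent_lifetime(1.0, 0.0)
scattered = apparent_lifetime(1.0, 0.3)   # longer travel inflates the estimate
```

This captures only the direction of the effect; the size of the inflation in a real measurement depends on the geometry and optical properties, which is exactly why full simulations are needed.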

  3. Parameters and error of a theoretical model

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

We propose a definition for the error of a theoretical model of the type whose parameters are determined by adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We test the derived equations by solving them for simulated experimental and theoretical quantities generated with random number generators.

  5. Quantitative structure–activity relationship model for amino acids as corrosion inhibitors based on the support vector machine and molecular design

    International Nuclear Information System (INIS)

    Zhao, Hongxia; Zhang, Xiuhui; Ji, Lin; Hu, Haixiang; Li, Qianshu

    2014-01-01

    Highlights: • Nonlinear quantitative structure–activity relationship (QSAR) model was built by the support vector machine. • Descriptors for the QSAR model were selected by principal component analysis. • Binding energy was taken as one of the descriptors for the QSAR model. • Acidic solution and protonation of the inhibitor were considered. - Abstract: The inhibition performance of nineteen amino acids was studied by theoretical methods. The effects of the acidic solution and of inhibitor protonation were considered in molecular dynamics simulations, and the results indicated that the protonated amino group was not adsorbed on the Fe (1 1 0) surface. Additionally, a nonlinear quantitative structure–activity relationship (QSAR) model was built by the support vector machine. The correlation coefficient was 0.97, and the root mean square error of the differences between predicted and experimental inhibition efficiencies (%) was 1.48. Furthermore, five new amino acids were theoretically designed and their inhibition efficiencies were predicted by the built QSAR model.
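A nonlinear QSAR regressor of the kind described can be sketched with kernel ridge regression, a least-squares relative of the support vector machine used in the paper. The descriptor matrix, coefficients, and noise level below are entirely hypothetical placeholders, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical descriptor matrix: rows = amino acids, columns = descriptors
# (e.g. binding energy, HOMO-LUMO gap, dipole moment) -- illustrative values only.
X = rng.normal(size=(19, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = 80 + X @ true_w + rng.normal(scale=0.5, size=19)  # inhibition efficiency (%)

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: alpha = (K + lam*I)^-1 y
K = rbf_kernel(X, X)
lam = 1e-3
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

pred = K @ alpha                      # fitted (training) predictions
r = np.corrcoef(pred, y)[0, 1]
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"train R = {r:.3f}, RMSE = {rmse:.2f}")
```

This is a train-set fit only; a fair analogue of the paper's reported statistics would require held-out or cross-validated predictions.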

  6. Assessing a Theoretical Model on EFL College Students

    Science.gov (United States)

    Chang, Yu-Ping

    2011-01-01

    This study aimed to (1) integrate relevant language learning models and theories, (2) construct a theoretical model of college students' English learning performance, and (3) assess the model fit between empirically observed data and the theoretical model proposed by the researchers of this study. Subjects of this study were 1,129 Taiwanese EFL…

  7. NMR relaxation induced by iron oxide particles: testing theoretical models.

    Science.gov (United States)

    Gossuin, Y; Orlando, T; Basini, M; Henrard, D; Lascialfari, A; Mattea, C; Stapf, S; Vuong, Q L

    2016-04-15

    Superparamagnetic iron oxide particles find their main application as contrast agents for cellular and molecular magnetic resonance imaging. The contrast they bring is due to the shortening of the transverse relaxation time T2 of water protons. In order to understand their influence on proton relaxation, different theoretical relaxation models have been developed, each of them presenting a certain validity domain, which depends on the particle characteristics and proton dynamics. The validation of these models is crucial since they allow for predicting the ideal particle characteristics for obtaining the best contrast, but also because the fitting of T1 experimental data by the theory constitutes an interesting tool for the characterization of the nanoparticles. In this work, T2 of suspensions of iron oxide particles in different solvents and at different temperatures, corresponding to different proton diffusion properties, was measured and compared to the three main theoretical models (the motional averaging regime, the static dephasing regime, and the partial refocusing model) with good qualitative agreement. However, a real quantitative agreement was not observed, probably because of the complexity of these nanoparticulate systems. The Roch theory, developed in the motional averaging regime (MAR), was also successfully used to fit T1 nuclear magnetic relaxation dispersion (NMRD) profiles, even outside the MAR validity range, and provided a good estimate of the particle size. On the other hand, the simultaneous fitting of T1 and T2 NMRD profiles by the theory was impossible, and this constitutes a clear limitation of the Roch model. Finally, the theory was shown to satisfactorily fit the deuterium T1 NMRD profile of superparamagnetic particle suspensions in heavy water.

  8. Quantitative reactive modeling and verification.

    Science.gov (United States)

    Henzinger, Thomas A

    Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.
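A simple quantitative fitness measure for a reactive model is the mean payoff of its long-run behaviour. A sketch for a deterministic weighted transition system; the states, weights, and "responsiveness" interpretation are invented for illustration, not taken from the QUAREM project:

```python
def mean_payoff(succ, weight, start):
    """Long-run average weight of the unique run of a deterministic
    weighted transition system from `start` (the run is eventually periodic)."""
    seen = {}            # state -> position in the trace
    trace, weights = [], []
    s = start
    while s not in seen:
        seen[s] = len(trace)
        trace.append(s)
        weights.append(weight[s])
        s = succ[s]
    i = seen[s]          # the run re-enters state s here: cycle = trace[i:]
    cycle = weights[i:]
    return sum(cycle) / len(cycle)

# Toy reactive system: a -> b -> c -> b -> c -> ...
# weight[s] could be, e.g., a responsiveness score earned in state s.
succ = {"a": "b", "b": "c", "c": "b"}
weight = {"a": 0, "b": 3, "c": 1}
print(mean_payoff(succ, weight, "a"))  # cycle (b, c) with weights (3, 1) -> 2.0
```

Unlike a boolean verdict, this value changes smoothly as weights change, which is the nuance the quantitative-verification programme argues for.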

  9. Applied quantitative finance

    CERN Document Server

    Chen, Cathy; Overbeck, Ludger

    2017-01-01

    This volume provides practical solutions and introduces recent theoretical developments in risk management, pricing of credit derivatives, quantification of volatility and copula modeling. This third edition is devoted to modern risk analysis based on quantitative methods and textual analytics to meet the current challenges in banking and finance. It includes 14 new contributions and presents a comprehensive, state-of-the-art treatment of cutting-edge methods and topics, such as collateralized debt obligations, the high-frequency analysis of market liquidity, and realized volatility. The book is divided into three parts: Part 1 revisits important market risk issues, while Part 2 introduces novel concepts in credit risk and its management along with updated quantitative methods. The third part discusses the dynamics of risk management and includes risk analysis of energy markets and for cryptocurrencies. Digital assets, such as blockchain-based currencies, have become popular but are theoretically challenging...

  10. Algebraic Specifications, Higher-order Types and Set-theoretic Models

    DEFF Research Database (Denmark)

    Kirchner, Hélène; Mosses, Peter David

    2001-01-01

    In most algebraic specification frameworks, the type system is restricted to sorts, subsorts, and first-order function types. This is in marked contrast to the so-called model-oriented frameworks, which provide higher-order types, interpreted set-theoretically as Cartesian products, function spaces, and power-sets. This paper presents a simple framework for algebraic specifications with higher-order types and set-theoretic models. It may be regarded as the basis for a Horn-clause approximation to the Z framework, and has the advantage of being amenable to prototyping and automated reasoning. Standard set-theoretic models are considered, and conditions are given for the existence of initial reducts of such models. Algebraic specifications for various set-theoretic concepts are considered.

  11. Quantitative Modeling of Landscape Evolution, Treatise on Geomorphology

    NARCIS (Netherlands)

    Temme, A.J.A.M.; Schoorl, J.M.; Claessens, L.F.G.; Veldkamp, A.; Shroder, F.S.

    2013-01-01

    This chapter reviews quantitative modeling of landscape evolution – which means that not just model studies but also modeling concepts are discussed. Quantitative modeling is contrasted with conceptual or physical modeling, and four categories of model studies are presented. Procedural studies focus
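One of the simplest quantitative landscape-evolution models is linear hillslope diffusion, ∂z/∂t = D ∂²z/∂x². A minimal explicit finite-difference sketch; the diffusivity, grid, and initial ridge are illustrative assumptions, not values from the chapter:

```python
import numpy as np

def diffuse(z, D=0.01, dx=1.0, dt=1.0, steps=100):
    """Explicit finite-difference hillslope diffusion: dz/dt = D * d2z/dx2.
    Stable for D*dt/dx^2 <= 0.5; boundary elevations are held fixed."""
    z = z.astype(float).copy()
    for _ in range(steps):
        lap = np.zeros_like(z)
        lap[1:-1] = (z[2:] - 2 * z[1:-1] + z[:-2]) / dx**2
        z[1:-1] += D * dt * lap[1:-1]
    return z

z0 = np.zeros(51)
z0[25] = 10.0                    # an initial sharp ridge
z1 = diffuse(z0, steps=500)      # ridge lowers and widens as material diffuses
print(round(z1.max(), 3))
```

Process-based models in the chapter's sense couple terms like this with fluvial incision and tectonic uplift; the diffusion term alone already shows the characteristic smoothing of relief over time.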

  12. Operations management research methodologies using quantitative modeling

    NARCIS (Netherlands)

    Bertrand, J.W.M.; Fransoo, J.C.

    2002-01-01

    Gives an overview of quantitative model-based research in operations management, focusing on research methodology. Distinguishes between empirical and axiomatic research, and furthermore between descriptive and normative research. Presents guidelines for doing quantitative model-based research in

  13. A Simple theoretical model for 63Ni betavoltaic battery

    International Nuclear Information System (INIS)

    ZUO, Guoping; ZHOU, Jianliang; KE, Guotu

    2013-01-01

    A numerical simulation of the energy deposition distribution in semiconductors is performed for 63Ni beta particles. Results show that the energy deposition distribution exhibits an approximate exponential decay law. A simple theoretical model is developed for the 63Ni betavoltaic battery based on the distribution characteristics. The correctness of the model is validated against two experiments from the literature. Results show that the theoretical short-circuit current agrees well with the experimental results, and the open-circuit voltage deviates from the experimental results owing to the influence of PN junction defects and the simplified treatment of the source. The theoretical model can be applied to 63Ni and 147Pm betavoltaic batteries. - Highlights: • The energy deposition distribution is found to follow an approximate exponential decay law when beta particles emitted from 63Ni pass through a semiconductor. • A simple theoretical model for the 63Ni betavoltaic battery is constructed based on the exponential decay law. • The theoretical model can be applied to betavoltaic batteries whose radioactive source has an energy spectrum similar to that of 63Ni, such as 147Pm.
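An exponential deposition law of the kind reported suggests a back-of-the-envelope short-circuit current estimate: integrate the deposited beta power over the active region and convert it to electron-hole pairs. All parameter values below are hypothetical placeholders for illustration, not the paper's fitted numbers:

```python
import numpy as np

q = 1.602e-19      # elementary charge (C)
E_ehp = 3.6        # energy per electron-hole pair, Si-like material (eV)
lam = 1.5e-6       # assumed effective penetration depth of the betas (m)
P_in = 1.0e-6      # assumed beta power entering the semiconductor (W)

x = np.linspace(0.0, 10e-6, 1001)        # depth grid (m)
dx = x[1] - x[0]
dep = (P_in / lam) * np.exp(-x / lam)    # deposited power per unit depth (W/m)

x_a, x_b = 0.2e-6, 3.0e-6                # assumed active (depletion) region
mask = (x >= x_a) & (x <= x_b)
P_active = dep[mask].sum() * dx          # beta power absorbed there (W)

pairs_per_s = P_active / (E_ehp * q)     # e-h pairs created per second
I_sc = q * pairs_per_s                   # ideal short-circuit current (A), CE = 1
print(f"P_active = {P_active:.2e} W, I_sc = {I_sc:.2e} A")
```

A realistic model would weight the integral by a depth-dependent collection efficiency; taking CE = 1 gives an upper bound on the current.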

  14. Theoretical approach on microscopic bases of stochastic functional self-organization: quantitative measures of the organizational degree of the environment

    Energy Technology Data Exchange (ETDEWEB)

    Oprisan, Sorinel Adrian [Department of Psychology, University of New Orleans, New Orleans, LA (United States)]. E-mail: soprisan@uno.edu

    2001-11-30

    There has been increased theoretical and experimental research interest in autonomous mobile robots exhibiting cooperative behaviour. This paper provides consistent quantitative measures of the organizational degree of a two-dimensional environment. We proved, by way of numerical simulations, that the theoretically derived values of the feature are reliable measures of the aggregation degree. The slope of the feature's dependence on memory radius leads to an optimization criterion for stochastic functional self-organization. We also described the intellectual heritages that have guided our research, as well as possible future developments. (author)

  15. Quantitative structure-activation barrier relationship modeling for Diels-Alder ligations utilizing quantum chemical structural descriptors.

    Science.gov (United States)

    Nandi, Sisir; Monesi, Alessandro; Drgan, Viktor; Merzel, Franci; Novič, Marjana

    2013-10-30

    In the present study, we show the correlation of quantum chemical structural descriptors with the activation barriers of Diels-Alder ligations. A set of 72 non-catalysed Diels-Alder reactions was subjected to quantitative structure-activation barrier relationship (QSABR) modeling under the framework of theoretical quantum chemical descriptors calculated solely from the structures of the diene and dienophile reactants. Experimental activation barrier data were obtained from the literature. Descriptors were computed at the Hartree-Fock level with the 6-31G(d) basis set as implemented in the Gaussian 09 software. Variable selection and model development were carried out by stepwise multiple linear regression. The predictive performance of the QSABR model was assessed using the training and test set concept and by calculating the leave-one-out cross-validated Q2 and predictive R2 values. The QSABR model can explain and predict 86.5% and 80%, respectively, of the variance in the activation energy barrier data. Alternatively, a neural network model based on back propagation of errors was developed to assess the nonlinearity of the sought correlations between theoretical descriptors and experimental reaction barriers. A reasonable predictability for the activation barriers of the test set reactions was obtained, which enabled an exploration and interpretation of the significant variables responsible for the Diels-Alder interaction between dienes and dienophiles. Thus, studies in the direction of QSABR modelling that provide efficient and fast prediction of activation barriers of Diels-Alder reactions turn out to be a meaningful alternative to transition state theory based computation.
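The leave-one-out cross-validated Q² used to assess such models can be computed directly for a linear model. A sketch on synthetic data; the descriptor values and coefficients are invented for illustration, not the paper's:

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated Q^2 for a linear model (with intercept):
    Q^2 = 1 - PRESS / SS_tot, where PRESS sums squared held-out residuals."""
    n = len(y)
    Xb = np.column_stack([np.ones(n), X])
    press = 0.0
    for i in range(n):
        keep = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(Xb[keep], y[keep], rcond=None)
        press += (y[i] - Xb[i] @ beta) ** 2
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - press / ss_tot

rng = np.random.default_rng(1)
# Hypothetical quantum chemical descriptors vs. activation barrier (kcal/mol)
X = rng.normal(size=(72, 3))
y = 20 + X @ np.array([3.0, -2.0, 1.0]) + rng.normal(scale=1.0, size=72)
q2 = loo_q2(X, y)
print(round(q2, 3))
```

Unlike the training R², Q² penalizes models that merely memorize the data, which is why QSAR/QSABR studies report it alongside external test-set statistics.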

  16. Theoretical models of neutron emission in fission

    International Nuclear Information System (INIS)

    Madland, D.G.

    1992-01-01

    A brief survey of theoretical representations of two of the observables in neutron emission in fission is given, namely, the prompt fission neutron spectrum N(E) and the average prompt neutron multiplicity ν̄_p. Early representations of the two observables are presented and their deficiencies are discussed. This is followed by summaries and examples of recent theoretical models for the calculation of these quantities. Emphasis is placed upon the predictability and accuracy of the new models. In particular, the dependencies of N(E) and ν̄_p upon the fissioning nucleus and its excitation energy are treated. Recent work in the calculation of the prompt fission neutron spectrum matrix N(E, E_n), where E_n is the energy of the neutron inducing fission, is then discussed. Concluding remarks address the current status of our ability to calculate these observables with confidence, the direction of future theoretical efforts, and limitations to current and future calculations. Finally, recommendations are presented as to which model should be used currently and which model should be pursued in future efforts.

  17. Hybrid rocket engine, theoretical model and experiment

    Science.gov (United States)

    Chelaru, Teodor-Viorel; Mingireanu, Florin

    2011-06-01

    The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work approaches the main problems of the hybrid motor: scalability, stability/controllability of the operating parameters, and increasing the solid fuel regression rate. At first, we focus on theoretical models for the hybrid rocket motor and compare the results with already available experimental data from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors of different scales and compare them with experimental measurements of those motors. Next, the paper focuses on the tribrid rocket motor concept, which can improve thrust controllability through supplementary liquid fuel injection. A complementary computation model is also presented to estimate the regression rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Lyapunov theory. The stability coefficients obtained depend on the burning parameters, while the stability and command matrices are identified. The paper presents thoroughly the input data of the model, which ensures the reproducibility of the numerical results by independent researchers.

  18. QTest: Quantitative Testing of Theories of Binary Choice.

    Science.gov (United States)

    Regenwetter, Michel; Davis-Stober, Clintin P; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William

    2014-01-01

    The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of "Random Cumulative Prospect Theory." A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences.

  20. Theoretical models for recombination in expanding gas

    International Nuclear Information System (INIS)

    Avron, Y.; Kahane, S.

    1978-09-01

    In laser isotope separation of atomic uranium, one is confronted with the theoretical problem of estimating the concentration of thermally ionized uranium atoms. To investigate this problem, theoretical models for recombination in an expanding gas in the absence of local thermal equilibrium have been constructed. The expansion of the gas is described by soluble models of the hydrodynamic equation, and the recombination by rate equations. General results for the freezing effect over the relevant ranges of the gas parameters are obtained. The impossibility of thermal equilibrium in expanding two-component systems is proven.

  1. The structure and dynamics of cities urban data analysis and theoretical modeling

    CERN Document Server

    Barthelemy, Marc

    2016-01-01

    With over half of the world's population now living in urban areas, the ability to model and understand the structure and dynamics of cities is becoming increasingly valuable. Combining new data with tools and concepts from statistical physics and urban economics, this book presents a modern and interdisciplinary perspective on cities and urban systems. Both empirical observations and theoretical approaches are critically reviewed, with particular emphasis placed on derivations of classical models and results, along with analysis of their limits and validity. Key aspects of cities are thoroughly analyzed, including mobility patterns, the impact of multimodality, the coupling between different transportation modes, the evolution of infrastructure networks, spatial and social organisation, and interactions between cities. Drawing upon knowledge and methods from areas of mathematics, physics, economics and geography, the resulting quantitative description of cities will be of interest to all those studying and r...

  2. An overview of quantitative approaches in Gestalt perception.

    Science.gov (United States)

    Jäkel, Frank; Singh, Manish; Wichmann, Felix A; Herzog, Michael H

    2016-09-01

    Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state-of-the-art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research and there is a clear trend to try and apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Bridging the gap between theoretical ecology and real ecosystems: modeling invertebrate community composition in streams.

    Science.gov (United States)

    Schuwirth, Nele; Reichert, Peter

    2013-02-01

    For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes growth, death, and respiration of different taxa dependent on various environmental influence factors to estimate survival or extinction. Parameter and input uncertainty is propagated to model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and resulting model adaptations will further increase the predictive accuracy of the model.

  4. Clusters of DNA damage induced by ionizing radiation: Formation of short DNA fragments. I. Theoretical modeling

    International Nuclear Information System (INIS)

    Holley, W.R.; Chatterjee, A.

    1996-01-01

    We have developed a general theoretical model for the interaction of ionizing radiation with chromatin. Chromatin is modeled as a 30-nm-diameter solenoidal fiber composed of 20 turns of nucleosomes, 6 nucleosomes per turn. Charged-particle tracks are modeled by partitioning the energy deposition between the primary track core, resulting from glancing collisions with 100 eV or less per event, and δ rays due to knock-on collisions involving energy transfers > 100 eV. A Monte Carlo simulation incorporates damages due to the following molecular mechanisms: (1) ionization of water molecules leading to the formation of •OH, •H, e_aq, etc.; (2) •OH attack on sugar molecules leading to strand breaks; (3) •OH attack on bases; (4) direct ionization of the sugar molecules leading to strand breaks; (5) direct ionization of the bases. Our calculations predict significant clustering of damage both locally, over regions up to 40 bp, and over regions extending to several kilobase pairs. A characteristic feature of the regional damage predicted by our model is the production of short fragments of DNA associated with multiple nearby strand breaks. Such fragments have subsequently been detected experimentally and are reported in an accompanying paper after exposure to both high- and low-LET radiation. The overall measured yields agree well quantitatively with the theoretical predictions. Our theoretical results predict the existence of a strong peak at about 85 bp, which represents the revolution period about the nucleosome. Other peaks at multiples of about 1,000 bp correspond to the periodicity of the particular solenoid model of chromatin used in these calculations. Theoretical results in combination with experimental data on fragmentation spectra may help determine the consensus or average structure of the chromatin fibers in mammalian DNA. 27 refs., 7 figs
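The predicted ~85 bp fragment peak follows from break sites that recur at the nucleosome period. A deliberately caricatured Monte Carlo toy (not the authors' track-structure model: break sites are simply restricted to period multiples with an arbitrary hit probability) reproduces the qualitative effect:

```python
import random
from collections import Counter

rng = random.Random(0)
PERIOD = 85          # bp per nucleosome revolution (approximate)
GENOME = 200_000     # toy stretch of chromatin, in bp

def one_exposure(p_hit=0.3):
    """Toy geometry: damage clusters occur only where the track crosses the
    fiber, i.e. at nucleosome-period spacings, each hit with probability p_hit."""
    breaks = [s for s in range(0, GENOME, PERIOD) if rng.random() < p_hit]
    return [b - a for a, b in zip(breaks, breaks[1:])]   # fragment lengths

fragments = []
for _ in range(50):
    fragments.extend(one_exposure())

counts = Counter(fragments)
most_common_len, _ = counts.most_common(1)[0]
print(most_common_len)   # the shortest allowed spacing dominates the spectrum
```

In this toy, fragment lengths are exact period multiples and the one-period gap is the most frequent; the full model smears these peaks because breaks also occur at intermediate positions.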

  5. Theoretical Models, Assessment Frameworks and Test Construction.

    Science.gov (United States)

    Chalhoub-Deville, Micheline

    1997-01-01

    Reviews the usefulness of proficiency models influencing second language testing. Findings indicate that several factors contribute to the lack of congruence between models and test construction and make a case for distinguishing between theoretical models. Underscores the significance of an empirical, contextualized and structured approach to the…

  6. Modeling business processes: theoretical and practical aspects

    Directory of Open Access Journals (Sweden)

    V.V. Dubinina

    2015-06-01

    Full Text Available The essence of process-oriented enterprise management is examined in the article. Owing to the complexity and differentiation of existing methods, as well as the specific language and terminology of enterprise business process modeling, the content and types of the relevant information technology are analyzed. The theoretical aspects of business process modeling are reviewed, and modern modeling techniques that have found practical application are studied through a visualization model of a retailer's activity. The theoretical analysis of the modeling methods found that the UFO-toolkit method, developed by Ukrainian scientists, is the most suitable for structural and object analysis of retailers' business processes owing to its integrated systemological capabilities. A visualized simulation model of the business process "sales as is" was designed for retailers by combining UFO elements, with the aim of further practical formalization and optimization of the given business process.

  7. Compositional and Quantitative Model Checking

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2010-01-01

    This paper gives a survey of a compositional model checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based...

  8. Guidelines for a graph-theoretic implementation of structural equation modeling

    Science.gov (United States)

    Grace, James B.; Schoolmaster, Donald R.; Guntenspergen, Glenn R.; Little, Amanda M.; Mitchell, Brian R.; Miller, Kathryn M.; Schweiger, E. William

    2012-01-01

    Structural equation modeling (SEM) is increasingly being chosen by researchers as a framework for gaining scientific insights from the quantitative analyses of data. New ideas and methods emerging from the study of causality, influences from the field of graphical modeling, and advances in statistics are expanding the rigor, capability, and even purpose of SEM. Guidelines for implementing the expanded capabilities of SEM are currently lacking. In this paper we describe new developments in SEM that we believe constitute a third generation of the methodology. Most characteristic of this new approach is the generalization of the structural equation model as a causal graph. In this generalization, analyses are based on graph theoretic principles rather than analyses of matrices. Also, new devices such as metamodels and causal diagrams, as well as an increased emphasis on queries and probabilistic reasoning, are now included. Estimation under a graph theory framework permits the use of Bayesian or likelihood methods. The guidelines presented start from a declaration of the goals of the analysis. We then discuss how theory frames the modeling process, requirements for causal interpretation, model specification choices, selection of estimation method, model evaluation options, and use of queries, both to summarize retrospective results and for prospective analyses. The illustrative example presented involves monitoring data from wetlands on Mount Desert Island, home of Acadia National Park. Our presentation walks through the decision process involved in developing and evaluating models, as well as drawing inferences from the resulting prediction equations. In addition to evaluating hypotheses about the connections between human activities and biotic responses, we illustrate how the structural equation (SE) model can be queried to understand how interventions might take advantage of an environmental threshold to limit Typha invasions. The guidelines presented provide for
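In a linear Gaussian SEM, the path coefficients of the causal graph can be estimated by regressing each node on its graph parents. A toy sketch with invented variable names and coefficients that only loosely echo the wetland example (they are not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical causal chain: activity -> water -> typha, with assumed true
# path coefficients 0.8 and -0.5 on standardized variables.
activity = rng.normal(size=n)
water = 0.8 * activity + rng.normal(scale=0.6, size=n)
typha = -0.5 * water + rng.normal(scale=0.7, size=n)

def path_coef(child, parents):
    """OLS of a child node on its parents = path coefficients in a linear SEM."""
    Xp = np.column_stack(parents)
    beta, *_ = np.linalg.lstsq(Xp, child, rcond=None)
    return beta

b_water = path_coef(water, [activity])   # estimate of 0.8
b_typha = path_coef(typha, [water])      # estimate of -0.5
print(b_water.round(2), b_typha.round(2))
```

Queries of the fitted model (e.g. the effect of an intervention on activity) then follow by propagating values along the estimated paths, which is the graph-theoretic reading of SEM the paper advocates.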

  9. An Information Theoretic Analysis of Classification Sorting and Cognition by Ninth Grade Children within a Piagetian Setting.

    Science.gov (United States)

    Dunlop, David Livingston

    The purpose of this study was to use an information theoretic memory model to quantitatively investigate classification sorting and recall behaviors of various groups of students. The model provided theorems for the determination of information theoretic measures from which inferences concerning mental processing were made. The basic procedure…

  10. Accelerator simulation and theoretical modelling of radiation effects (SMoRE)

    CERN Document Server

    2018-01-01

    This publication summarizes the findings and conclusions of the IAEA coordinated research project (CRP) on accelerator simulation and theoretical modelling of radiation effects, aimed at supporting Member States in the development of advanced radiation-resistant structural materials for implementation in innovative nuclear systems. This aim can be achieved through enhancement of both experimental neutron-emulation capabilities of ion accelerators and improvement of the predictive efficiency of theoretical models and computer codes. This dual approach is challenging but necessary, because outputs of accelerator simulation experiments need adequate theoretical interpretation, and theoretical models and codes need high-dose experimental data for their verification. Both ion irradiation investigations and computer modelling have been the specific subjects of the CRP, and the results of these studies are presented in this publication, which also includes state-of-the-art reviews of four major aspects of the project...

  11. Clusters of DNA induced by ionizing radiation: formation of short DNA fragments. I. Theoretical modeling

    Science.gov (United States)

    Holley, W. R.; Chatterjee, A.

    1996-01-01

    We have developed a general theoretical model for the interaction of ionizing radiation with chromatin. Chromatin is modeled as a 30-nm-diameter solenoidal fiber comprised of 20 turns of nucleosomes, 6 nucleosomes per turn. Charged-particle tracks are modeled by partitioning the energy deposition between primary track core, resulting from glancing collisions with 100 eV or less per event, and delta rays due to knock-on collisions involving energy transfers >100 eV. A Monte Carlo simulation incorporates damages due to the following molecular mechanisms: (1) ionization of water molecules leading to the formation of •OH, •H, e_aq, etc.; (2) •OH attack on sugar molecules leading to strand breaks; (3) •OH attack on bases; (4) direct ionization of the sugar molecules leading to strand breaks; (5) direct ionization of the bases. Our calculations predict significant clustering of damage both locally, over regions up to 40 bp, and over regions extending to several kilobase pairs. A characteristic feature of the regional damage predicted by our model is the production of short fragments of DNA associated with multiple nearby strand breaks. The shapes of the spectra of DNA fragment lengths depend on the symmetries or approximate symmetries of the chromatin structure. Such fragments have subsequently been detected experimentally and are reported in an accompanying paper (B. Rydberg, Radiat. Res. 145, 200-209, 1996) after exposure to both high- and low-LET radiation. The overall measured yields agree well quantitatively with the theoretical predictions. Our theoretical results predict the existence of a strong peak at about 85 bp, which represents the revolution period about the nucleosome. Other peaks at multiples of about 1,000 bp correspond to the periodicity of the particular solenoid model of chromatin used in these calculations. Theoretical results in combination with experimental data on fragmentation spectra may help determine the consensus or average structure of the

  12. Empathy and child neglect: a theoretical model.

    Science.gov (United States)

    De Paul, Joaquín; Guibert, María

    2008-11-01

    To present an explanatory theory-based model of child neglect. This model does not address neglectful behaviors of parents with mental retardation, alcohol or drug abuse, or severe mental health problems. In this model, parental behavior aimed at satisfying a child's need is considered a helping behavior and, as a consequence, child neglect is considered a specific type of non-helping behavior. The central hypothesis of the theoretical model presented here suggests that neglectful parents cannot develop the helping response set to care for their children either because the observation of a child's signal of need does not lead to the experience of emotions that motivate helping, or because the parents experience these emotions but specific cognitions modify the motivation to help. The present theoretical model suggests that different typologies of neglectful parents could be developed based on the different reasons why parents might not experience the emotions that motivate helping behaviors. The model can be helpful in promoting new empirical studies on the etiology of different groups of neglectful families.

  13. Generalized PSF modeling for optimized quantitation in PET imaging.

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution-degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying modelled PSF kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF
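
    The matched versus mismatched PSF behaviour described above can be illustrated with a toy one-dimensional Richardson-Lucy deconvolution, which belongs to the same ML-EM iteration family as the OS-EM reconstruction named in the abstract. This is only a hedged sketch under invented assumptions (the phantom, kernel widths and iteration count are arbitrary illustrative choices), not the authors' framework.

```python
import numpy as np

def gaussian_kernel(sigma, radius=12):
    """Discrete, normalised 1D Gaussian used as a stand-in PSF."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def richardson_lucy(blurred, kernel, n_iter=200):
    """1D Richardson-Lucy deconvolution (ML-EM for Poisson-like data)."""
    estimate = np.full_like(blurred, blurred.mean())
    flipped = kernel[::-1]
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, kernel, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate = estimate * np.convolve(ratio, flipped, mode="same")
    return estimate

# Noise-free phantom: uniform background with one hot 'lesion'.
truth = np.ones(64)
truth[28:36] = 4.0
true_psf = gaussian_kernel(2.5)
blurred = np.convolve(truth, true_psf, mode="same")

matched = richardson_lucy(blurred, true_psf)             # correct PSF model
under = richardson_lucy(blurred, gaussian_kernel(0.5))   # under-estimated PSF
```

    With the matched kernel the lesion peak recovers toward its true value, while the severely under-estimated kernel barely deblurs the data, mirroring the contrast-recovery differences between PSF models that the study quantifies.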

  14. Aspects of quantitative secondary ion mass spectrometry

    International Nuclear Information System (INIS)

    Grauer, R.

    1982-05-01

    Parameters which influence the formation of secondary ions by ion bombardment of a solid matrix are discussed. Quantitative SIMS analysis with the help of calibration standards necessitates stringent control of these parameters. This applies particularly to the oxygen partial pressure, which for metal analysis must be kept constant even under ultra-high vacuum. The performance of the theoretical LTE (Local Thermal Equilibrium) model using internal standards is compared with analysis with the help of external standards. The LTE model does not satisfy the requirements for quantitative analysis. (Auth.)

  15. Quantitative Verification and Synthesis of Attack-Defence Scenarios

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Nielson, Flemming; Parker, David

    2016-01-01

    analysis of quantitative properties of complex attack-defence scenarios, using an extension of attack-defence trees which models temporal ordering of actions and allows explicit dependencies in the strategies adopted by attackers and defenders. We adopt a game-theoretic approach, translating attack...... which guarantee or optimise some quantitative property, such as the probability of a successful attack, the expected cost incurred, or some multi-objective trade-off between the two. We implement our approach, building upon the PRISM-games model checker, and apply it to a case study of an RFID goods...

  16. A Quantitative Theoretical Framework For Protein-Induced Fluorescence Enhancement-Förster-Type Resonance Energy Transfer (PIFE-FRET).

    Science.gov (United States)

    Lerner, Eitan; Ploetz, Evelyn; Hohlbein, Johannes; Cordes, Thorben; Weiss, Shimon

    2016-07-07

    Single-molecule, protein-induced fluorescence enhancement (PIFE) serves as a molecular ruler at distances inaccessible to other spectroscopic rulers such as Förster-type resonance energy transfer (FRET) or photoinduced electron transfer. In order to measure two distances on different molecular length scales simultaneously in the analysis of macromolecular complexes, we and others recently combined measurements of PIFE and FRET (PIFE-FRET) at the single-molecule level. PIFE relies on steric hindrance of the fluorophore Cy3, which is covalently attached to a biomolecule of interest, to rotate out of an excited-state trans isomer to the cis isomer through a 90° intermediate. In this work, we provide a theoretical framework that accounts for the relevant photophysical and kinetic parameters of PIFE-FRET, show how this framework allows the extraction of the fold-decrease in isomerization mobility from experimental data, and show how these results provide information on changes in the accessible volume of Cy3. The utility of this model is then demonstrated for experimental results on PIFE-FRET measurements of different protein-DNA interactions. The proposed model and extracted parameters could serve as a benchmark to allow quantitative comparison of PIFE effects in different biological systems.

  17. Theoretical Biology and Medical Modelling: ensuring continued growth and future leadership.

    Science.gov (United States)

    Nishiura, Hiroshi; Rietman, Edward A; Wu, Rongling

    2013-07-11

    Theoretical biology encompasses a broad range of biological disciplines ranging from mathematical biology and biomathematics to philosophy of biology. Adopting a broad definition of "biology", Theoretical Biology and Medical Modelling, an open access journal, considers original research studies that focus on theoretical ideas and models associated with developments in biology and medicine.

  18. Expanding Panjabi's stability model to express movement: a theoretical model.

    Science.gov (United States)

    Hoffman, J; Gabel, P

    2013-06-01

    Novel theoretical models of movement have historically inspired the creation of new methods for the application of human movement. The landmark theoretical model of spinal stability by Panjabi in 1992 led to the creation of an exercise approach to spinal stability. This approach, however, was later challenged, most significantly due to a lack of favourable clinical effect. The concepts explored in this paper address the deficiencies of Panjabi's model and then propose an evolution and expansion from a specific model of stability to a general one of movement. It is proposed that two body-wide symbiotic elements are present within all movement systems: stability and mobility. The justification for this is derived from the observable clinical environment. It is clinically recognised that these two elements are present and identifiable throughout the body, in different joints and muscles, and in the neural conduction system. In order to generalise the Panjabi model of stability to include and illustrate movement, a matching parallel mobility system with the same subsystems was conceptually created. In this expanded theoretical model, the new mobility system is placed beside the existing stability system and subsystems. The ability of both stability and mobility systems to work in harmony subsequently determines the quality of movement. Conversely, malfunction of either system, or of their subsystems, will deleteriously affect all other subsystems and consequently overall movement quality. For this reason, in the rehabilitation exercise environment, focus should be placed on the simultaneous involvement of both the stability and mobility systems. It is suggested that the individual's relevant functional harmonious movements should be challenged at the highest possible level without pain or discomfort.
    It is anticipated that this conceptual expansion of the theoretical model of stability to one with the symbiotic inclusion of mobility will provide new understandings

  19. Theoretical models for the muon spectrum at sea level

    International Nuclear Information System (INIS)

    Abdel-Monem, M.S.; Benbrook, J.R.; Osborne, A.R.; Sheldon, W.R.

    1975-01-01

    The absolute vertical cosmic ray muon spectrum is investigated theoretically. Models of high energy interactions (namely, Maeda-Cantrell (MC), Constant Energy (CE), Cocconi-Koester-Perkins (CKP) and Scaling Models) are used to calculate the spectrum of cosmic ray muons at sea level. A comparison is made between the measured spectrum and that predicted from each of the four theoretical models. It is concluded that the recently available measured muon differential intensities agree with the scaling model for energies less than 100 GeV and with the CKP model for energies greater than 200 GeV. The measured differential intensities (Abdel-Monem et al.) agree with scaling. (orig.) [de

  20. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)
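
    The nuisance treatment described above, where a theoretical uncertainty is modeled as a bias confined to a chosen range of variation, can be illustrated for a single Gaussian measurement: the p value is maximised over the bias range, so a prediction within the bias band is not penalised at all. This one-parameter toy (all names and numbers are illustrative) is a sketch of the idea, not the paper's full framework.

```python
import math

def p_value(x, mu0, sigma):
    """Two-sided Gaussian p value for observing x under predicted mean mu0."""
    z = abs(x - mu0) / sigma
    return math.erfc(z / math.sqrt(2))

def nuisance_p_value(x, mu0, sigma, delta):
    """Bias treatment of a theoretical uncertainty: the bias may take any
    value in [-delta, +delta]; keeping the least extreme case amounts to
    maximising the p value over that range."""
    residual = max(abs(x - mu0) - delta, 0.0)
    return math.erfc(residual / (sigma * math.sqrt(2)))
```

    Enlarging `delta` can only raise the p value, which illustrates the arbitrariness of the chosen range of variation that the abstract highlights and that the adaptive p value is designed to address.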

  1. Quantitative Modeling of Earth Surface Processes

    Science.gov (United States)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.

  2. Modelling in Accounting. Theoretical and Practical Dimensions

    Directory of Open Access Journals (Sweden)

    Teresa Szot-Gabryś

    2010-10-01

    Full Text Available Accounting in the theoretical approach is a scientific discipline based on specific paradigms. In the practical aspect, accounting manifests itself through a system for the measurement of economic quantities operating in a particular business entity. A characteristic of accounting is its flexibility and its ability to adapt to the information needs of its users. One of the main currents in the development of accounting theory and practice is to extend economic measurement to areas not hitherto covered by any accounting system (this applies, for example, to small businesses, agricultural farms and human capital), which requires the development of an appropriate theoretical and practical model. The article illustrates the issue of modelling in accounting based on the example of an accounting model developed for small businesses, i.e. economic entities which are not obliged by law to keep accounting records.

  3. Vibrational algorithms for quantitative crystallographic analyses of hydroxyapatite-based biomaterials: I, theoretical foundations.

    Science.gov (United States)

    Pezzotti, Giuseppe; Zhu, Wenliang; Boffelli, Marco; Adachi, Tetsuya; Ichioka, Hiroaki; Yamamoto, Toshiro; Marunaka, Yoshinori; Kanamura, Narisato

    2015-05-01

    The Raman spectroscopic method has been applied quantitatively to the analysis of local crystallographic orientation in both single-crystal hydroxyapatite and human teeth. Raman selection rules for all the vibrational modes of the hexagonal structure were expanded into explicit functions of Euler angles in space and six Raman tensor elements (RTE). A theoretical treatment has also been put forward according to the orientation distribution function (ODF) formalism, which allows one to resolve the statistical orientation patterns of the nm-sized hydroxyapatite crystallites comprised in the Raman microprobe volume. Closed-form solutions could be obtained for the Euler angles and their statistical distributions resolved with respect to the direction of the average texture axis. Polarized Raman spectra from single-crystalline hydroxyapatite and textured polycrystalline (tooth enamel) samples were compared, and a validation of the proposed Raman method could be obtained by confirming the agreement between RTE values obtained from different samples.

  4. The design and testing of a caring teaching model based on the theoretical framework of caring in the Chinese Context: a mixed-method study.

    Science.gov (United States)

    Guo, Yujie; Shen, Jie; Ye, Xuchun; Chen, Huali; Jiang, Anli

    2013-08-01

    This paper aims to report the design and test the effectiveness of an innovative caring teaching model based on the theoretical framework of caring in the Chinese context. Since the 1970s, caring has been a core value in nursing education. In a previous study, a theoretical framework of caring in the Chinese context was explored employing a grounded theory study, considered beneficial for caring education. A caring teaching model was designed theoretically, and a one-group pre- and post-test quasi-experimental study was administered to test its effectiveness. From October 2009 to July 2010, a cohort of grade-2 undergraduate nursing students (n=64) in a Chinese medical school was recruited to participate in the study. Data were gathered through quantitative and qualitative methods to evaluate the effectiveness of the caring teaching model. The caring teaching model created an esthetic situation and experiential learning style for teaching caring that was integrated within the curricula. Quantitative data from the quasi-experimental study showed that the post-test scores of each item were higher than those on the pre-test (p<0.01). Thematic analysis of 1220 narratives from students' caring journals and reports of participant class observation revealed two main thematic categories, which reflected, from the students' points of view, the development of student caring character and the impact that the caring teaching model had in this regard. The model could be used as an integrated approach to teach caring in nursing curricula. It would also be beneficial for nursing administrators in cultivating caring nurse practitioners. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Critical Quantitative Inquiry in Context

    Science.gov (United States)

    Stage, Frances K.; Wells, Ryan S.

    2014-01-01

    This chapter briefly traces the development of the concept of critical quantitative inquiry, provides an expanded conceptualization of the tasks of critical quantitative research, offers theoretical explanation and justification for critical research using quantitative methods, and previews the work of quantitative criticalists presented in this…

  6. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    Science.gov (United States)

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-06

    In this work, the performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for the development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. The matrix with 423 descriptors was used as input, and QSRR models based on selected descriptors were built using partial least squares (PLS), with root mean square error of prediction (RMSEP) used as the fitness function for their selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, with GA superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external testing set of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All the sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.
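
    The descriptor-subset search benchmarked above can be sketched as a minimal genetic algorithm over fixed-size binary descriptor masks. As hedged assumptions: ordinary least squares stands in for the PLS regression, in-sample RMSE for the RMSEP fitness function, and all population sizes and rates are illustrative rather than the paper's settings.

```python
import numpy as np

def ga_select_descriptors(X, y, n_keep=5, pop_size=30, n_gen=40, seed=0):
    """Toy GA searching binary descriptor masks of fixed size n_keep,
    scored by the RMSE of an ordinary least-squares fit."""
    rng = np.random.default_rng(seed)
    n_desc = X.shape[1]

    def rmse(mask):
        cols = np.flatnonzero(mask)
        if cols.size == 0:
            return np.inf
        A = np.column_stack([X[:, cols], np.ones(len(y))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return float(np.sqrt(np.mean((A @ coef - y) ** 2)))

    def random_mask():
        m = np.zeros(n_desc, bool)
        m[rng.choice(n_desc, n_keep, replace=False)] = True
        return m

    pop = [random_mask() for _ in range(pop_size)]
    for _ in range(n_gen):
        scores = np.array([rmse(m) for m in pop])
        order = np.argsort(scores)
        pop = [pop[i] for i in order[: pop_size // 2]]  # elitist selection
        children = []
        while len(pop) + len(children) < pop_size:
            a, b = rng.choice(len(pop), 2, replace=False)
            child = np.where(rng.random(n_desc) < 0.5, pop[a], pop[b])  # crossover
            flip = rng.integers(n_desc)                                 # point mutation
            child[flip] = ~child[flip]
            # Repair: keep exactly n_keep descriptors switched on.
            on = np.flatnonzero(child)
            if on.size > n_keep:
                child[rng.choice(on, on.size - n_keep, replace=False)] = False
            elif on.size < n_keep:
                off = np.flatnonzero(~child)
                child[rng.choice(off, n_keep - on.size, replace=False)] = True
            children.append(child)
        pop += children
    best = min(pop, key=rmse)
    return np.flatnonzero(best), rmse(best)
```

    On a synthetic matrix whose response depends on a few known columns, the elitist loop typically converges to a low-RMSE subset within a few dozen generations; swapping the OLS scorer for cross-validated PLS would bring the sketch closer to the published setup.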

  7. Theoretical aspects of the optical model

    International Nuclear Information System (INIS)

    Mahaux, C.

    1980-01-01

    We first recall the definition of the optical-model potential for nucleons and the physical interpretation of the main related quantities. We then survey the recent theoretical progress towards a reliable calculation of this potential. The present limitations of the theory and some prospects for future developments are outlined. (author)

  8. Dynamics in Higher Education Politics: A Theoretical Model

    Science.gov (United States)

    Kauko, Jaakko

    2013-01-01

    This article presents a model for analysing dynamics in higher education politics (DHEP). Theoretically the model draws on the conceptual history of political contingency, agenda-setting theories and previous research on higher education dynamics. According to the model, socio-historical complexity can best be analysed along two dimensions: the…

  9. Expectancy-Violation and Information-Theoretic Models of Melodic Complexity

    Directory of Open Access Journals (Sweden)

    Tuomas Eerola

    2016-07-01

    Full Text Available The present study assesses two types of models for melodic complexity: one based on expectancy violations and the other related to an information-theoretic account of redundancy in music. Seven different datasets spanning artificial sequences, folk and pop songs were used to refine and assess the models. The refinement eliminated unnecessary components from both types of models. The final analysis pitted three variants of the two model types against each other and could explain 46-74% of the variance in the ratings across the datasets. The most parsimonious models were identified with an information-theoretic criterion. This suggested that the simplified expectancy-violation models were the most efficient for these sets of data. However, the differences between all optimized models were subtle in terms of both performance and simplicity.
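
    The information-theoretic account of redundancy mentioned above can be sketched with a zeroth-order (unigram) entropy measure over a melodic event sequence; this generic sketch is an assumption-laden stand-in, not the specific model the study optimised.

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Zeroth-order Shannon entropy (bits per symbol) of an event sequence."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def redundancy(seq):
    """Redundancy 1 - H/H_max, where H_max assumes a uniform distribution
    over the observed alphabet; higher values mean a more predictable,
    hence less complex, melody."""
    alphabet = len(set(seq))
    h_max = math.log2(alphabet) if alphabet > 1 else 1.0
    return 1.0 - shannon_entropy(seq) / h_max
```

    A melody that cycles uniformly through its pitches has zero redundancy, while one dominated by a single repeated pitch scores high, matching the intuition that redundant melodies are rated as simpler.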

  10. K. Sridhar Moorthy's Theoretical Modelling in Marketing - A Review ...

    African Journals Online (AJOL)

    K. Sridhar Moorthy's Theoretical Modelling in Marketing - A Review. ... Modelling has become a visible tool in many disciplines including marketing and several marketing models have ...

  11. Hybrid quantum teleportation: A theoretical model

    Energy Technology Data Exchange (ETDEWEB)

    Takeda, Shuntaro; Mizuta, Takahiro; Fuwa, Maria; Yoshikawa, Jun-ichi; Yonezawa, Hidehiro; Furusawa, Akira [Department of Applied Physics, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2014-12-04

    Hybrid quantum teleportation – continuous-variable teleportation of qubits – is a promising approach for deterministically teleporting photonic qubits. We propose how to implement it with current technology. Our theoretical model shows that faithful qubit transfer can be achieved for this teleportation by choosing an optimal gain for the teleporter’s classical channel.

  12. Deferred Action: Theoretical model of process architecture design for emergent business processes

    Directory of Open Access Journals (Sweden)

    Patel, N.V.

    2007-01-01

    Full Text Available E-business modelling and e-business systems development assume fixed company resources, structures, and business processes. Empirical and theoretical evidence suggests that company resources and structures are emergent rather than fixed. Planning business activity in emergent contexts requires flexible e-business models based on better management theories and models. This paper builds and proposes a theoretical model of e-business systems capable of catering for emergent factors that affect business processes. Drawing on the development of theories of the 'action and design' class, the Theory of Deferred Action is invoked as the base theory for the theoretical model. A theoretical model of flexible process architecture is presented by identifying its core components and their relationships, and then illustrated with exemplar flexible process architectures capable of responding to emergent factors. Managerial implications of the model are considered and the model's generic applicability is discussed.

  13. A new theoretical model for scattering of electrons by molecules. 1

    International Nuclear Information System (INIS)

    Peixoto, E.M.A.; Mu-tao, L.; Nogueira, J.C.

    1975-01-01

    A new theoretical model for electron-molecule scattering is suggested. The e-H2 scattering is studied and the superiority of the new model over the commonly used Independent Atom Model (IAM) is demonstrated. Comparing theoretical and experimental data for 40 keV electrons scattered by H2 utilizing the new model, its validity is proved, while Partial Wave and First Born calculations, employing the Independent Atom Model, strongly deviated from the experiment [pt

  14. Lesion detection and quantitation of positron emission mammography

    International Nuclear Information System (INIS)

    Qi, Jinyi; Huesman, Ronald H.

    2001-01-01

    A Positron Emission Mammography (PEM) scanner dedicated to breast imaging is being developed at our laboratory. We have developed a list-mode likelihood reconstruction algorithm for this scanner. Here we theoretically study lesion detection and quantitation. Lesion detectability is studied theoretically using computer observers. We found that for the zero-order quadratic prior, the region-of-interest observer can achieve the performance of the prewhitening observer with a properly selected smoothing parameter. We also study lesion quantitation using the test statistic of the region-of-interest observer. The theoretical expressions for the bias, variance, and ensemble mean squared error of the quantitation are derived. Computer simulations show that the theoretical predictions are in good agreement with the Monte Carlo results for both lesion detection and quantitation
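
    The three figures of merit named above obey the standard decomposition MSE = bias^2 + variance. A short empirical helper (a generic sketch, not the paper's closed-form theoretical expressions) makes the relation explicit for replicate estimates of a region-of-interest value:

```python
import numpy as np

def ensemble_stats(estimates, true_value):
    """Ensemble bias, variance and mean squared error of a scalar estimator,
    computed empirically from replicate estimates (e.g. repeated noisy
    reconstructions of the same region of interest)."""
    estimates = np.asarray(estimates, dtype=float)
    bias = estimates.mean() - true_value
    variance = estimates.var()
    mse = float(np.mean((estimates - true_value) ** 2))
    return bias, variance, mse
```

    Agreement between such empirical values and closed-form predictions is exactly the kind of check the Monte Carlo comparison in the abstract performs.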

  15. Quantitative modelling of the biomechanics of the avian syrinx

    DEFF Research Database (Denmark)

    Elemans, Coen P. H.; Larsen, Ole Næsbye; Hoffmann, Marc R.

    2003-01-01

    We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e. the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts...

  16. N̄N interaction theoretical models

    International Nuclear Information System (INIS)

    Loiseau, B.

    1991-12-01

    In the framework of theoretical models of the N̄N (antinucleon-nucleon) interaction, our present understanding of the N̄N interaction is discussed, from quark and/or meson and baryon degrees of freedom, by considering N̄N annihilation into mesons and N̄N elastic and charge-exchange scattering. (author) 52 refs., 11 figs., 2 tabs

  17. Theoretical models for development competence of health protection and promotion

    Directory of Open Access Journals (Sweden)

    Cesnaviciene J.

    2014-01-01

    Full Text Available The competence of health protection and promotion is mentioned in various legislative documents that regulate education and health policy. Research on the health of Lithuania's population has disclosed the deteriorating health status of the society, even among children. It has also been found that the focus on health education is not adequate. A number of national and international health programmes have been implemented and educational methodological tools prepared in Lithuania; however, insufficient attention to health promotion models has been noticed. The objective of this article is to discuss the theoretical models used in the health education field. The questions to be answered: what theoretical models are used to develop the competence of health protection and promotion? Who employs particular models? What are the advantages of the various models? What conceptions unite and characterize the theoretical models? The analysis of scientific literature revealed a number of diverse health promotion models; however, none of them is dominant. Some of the models focus on the intrapersonal level, others on the interpersonal or community level, but in general they can be distinguished as cognitive-behavioural models characterized by three main conceptions: (1) healthy living is determined by perceived health-related knowledge: what is known and understood will influence behaviour; (2) knowledge in the field of healthy living is a necessary but insufficient condition for behaviour change; (3) a great influence on a healthy lifestyle is exerted by perception, motivation, skills and habits, as well as the social environment. These are the components that are typical of all the theoretical models and that reflect the whole of the conditions influencing healthy living.

  18. Quantitative model for the generic 3D shape of ICMEs at 1 AU

    Science.gov (United States)

    Démoulin, P.; Janvier, M.; Masías-Meza, J. J.; Dasso, S.

    2016-10-01

    Context. Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs), while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, that is, along a 1D cut. The data therefore only give a partial view of the 3D structures of ICMEs. Aims: By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. Methods: In a first approach we theoretically obtained the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compared their compatibility with observed distributions. In a second approach we used the shock normal and the flux rope axis orientations together with the impact parameter to provide statistical information across the spacecraft trajectory. Results: The study of different 3D shock models shows that the observations are compatible with a shock that is symmetric around the Sun-apex line as well as with an asymmetry up to an aspect ratio of around 3. Moreover, flat or dipped shock surfaces near their apex can only be rare cases. Next, the sheath thickness and the ICME velocity have no global trend along the ICME front. Finally, regrouping all these new results and those of our previous articles, we provide a quantitative ICME generic 3D shape, including the global shape of the shock, the sheath, and the flux rope. Conclusions: The obtained quantitative generic ICME shape will have implications for several aims. For example, it constrains the output of typical ICME numerical simulations. It is also a base for studying the transport of high-energy solar and cosmic particles during an ICME propagation as well as for modeling and forecasting space weather conditions near Earth.

  19. A theoretical model of multielectrode DBR lasers

    DEFF Research Database (Denmark)

    Pan, Xing; Olesen, Henning; Tromborg, Bjarne

    1988-01-01

    A theoretical model for two- and three-section tunable distributed Bragg reflector (DBR) lasers is presented. The static tuning properties are studied in terms of threshold current, linewidth, oscillation frequency, and output power. Regions of continuous tuning for three-section DBR lasers...

  20. A theoretical model of semi-elliptic surface crack growth

    Directory of Open Access Journals (Sweden)

    Shi Kaikai

    2014-06-01

Full Text Available A theoretical model of semi-elliptic surface crack growth is developed based on low-cycle strain damage accumulation near the crack tip along the cracking direction and the Newman–Raju formula. The crack is regarded as a sharp notch with a small curvature radius, and the process zone is assumed to be the size of the cyclic plastic zone. The modified Hutchinson, Rice and Rosengren (HRR) formulations are used in the present study. The shape of the surface crack front is assumed to be controlled by two critical points: the deepest point and the surface point. The theoretical model is applied to a semi-elliptic surface-cracked Al 7075-T6 alloy plate under cyclic loading, and five different initial crack shapes are discussed. Good agreement between experimental and theoretical results is obtained.

  1. A review of game-theoretic models of road user behaviour.

    Science.gov (United States)

    Elvik, Rune

    2014-01-01

This paper reviews game-theoretic models that have been developed to explain road user behaviour in situations where road users interact with each other. The paper includes the following game-theoretic models:
1. A general model of the interaction between road users and their possible reaction to measures improving safety (behavioural adaptation).
2. Choice of vehicle size as a Prisoners’ dilemma game.
3. Speed choice as a co-ordination game.
4. Speed compliance as a game between drivers and the police.
5. Merging into traffic from an acceleration lane as a mixed-strategy game.
6. Choice of level of attention in car-following situations as an evolutionary game.
7. Choice of departure time to avoid congestion as a variant of a Prisoners’ dilemma game.
8. Interaction between cyclists crossing the road and car drivers.
9. Dipping headlights at night well ahead of the point when glare becomes noticeable.
10. Choice of evasive action in a situation when cars are on collision course.
The models reviewed are different in many respects, but a common feature of the models is that they can explain how informal norms of behaviour can develop among road users and be sustained even if these informal norms violate the formal regulations of the traffic code. Game-theoretic models are not applicable to every conceivable interaction between road users or to situations in which road users choose behaviour without interacting with other road users. Nevertheless, it is likely that game-theoretic models can be applied more widely than they have been until now. Copyright © 2013 Elsevier Ltd. All rights reserved.
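Several of the interactions listed above are ordinary 2x2 games. As an illustration of how such a model is analyzed (the payoff numbers below are invented for exposition and are not taken from the review), speed choice as a co-ordination game can be checked for pure-strategy Nash equilibria in a few lines:

```python
# Hypothetical 2x2 "speed choice" co-ordination game: each driver picks a low
# or high speed, and matching the other driver's speed yields the best payoff.
ACTIONS = ["low", "high"]

# PAYOFF[(row_action, col_action)] = (row player's payoff, column player's payoff)
PAYOFF = {
    ("low", "low"): (3, 3),
    ("low", "high"): (1, 2),
    ("high", "low"): (2, 1),
    ("high", "high"): (4, 4),
}

def pure_nash_equilibria(payoff, actions):
    """Return action pairs where neither player gains by deviating unilaterally."""
    equilibria = []
    for a in actions:
        for b in actions:
            row, col = payoff[(a, b)]
            row_ok = all(payoff[(alt, b)][0] <= row for alt in actions)
            col_ok = all(payoff[(a, alt)][1] <= col for alt in actions)
            if row_ok and col_ok:
                equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(PAYOFF, ACTIONS))
```

Both matched-speed profiles are equilibria, which is the formal sense in which an informal speed norm, once established, can sustain itself even if it conflicts with the posted limit.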

  2. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

Full Text Available Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, the Watts–Strogatz small-world model, the Barabási–Albert preferential attachment model, the Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of a theoretical graph model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
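The entropy-based comparison can be sketched with the standard library alone. The sketch below is illustrative, not the authors' implementation: it computes the Shannon entropy of the degree-centrality distribution and compares a stand-in "empirical" graph against one G(n, p) realization; all sizes and probabilities are invented, and a real study would also use betweenness and closeness centrality and many model realizations.

```python
import math
import random

def degree_entropy(degrees):
    """Shannon entropy (bits) of the empirical degree distribution."""
    n = len(degrees)
    counts = {}
    for d in degrees:
        counts[d] = counts.get(d, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def erdos_renyi_degrees(n, p, rng):
    """Degree sequence of one Erdős–Rényi G(n, p) realization."""
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                deg[i] += 1
                deg[j] += 1
    return deg

rng = random.Random(0)
empirical = erdos_renyi_degrees(200, 0.05, rng)  # stand-in for a real data set
model = erdos_renyi_degrees(200, 0.05, rng)      # one realization of a candidate model
# Entropies of same-model realizations should be close; a large entropy gap
# between an empirical graph and a candidate model counts against that model.
print(abs(degree_entropy(empirical) - degree_entropy(model)) < 1.0)
```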

  3. Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization

    Directory of Open Access Journals (Sweden)

    Xuefeng Yan

    2013-01-01

Full Text Available The simulation and optimization of an actual physical system are usually constructed based on stochastic models, which inherently have both qualitative and quantitative characteristics. Most modeling specifications and frameworks have difficulty describing qualitative models directly. In order to deal with expert knowledge, uncertain reasoning, and other qualitative information, a combined qualitative and quantitative modeling specification was proposed based on a hierarchical model structure framework comprising a meta-meta model, a meta-model, and a high-level model. A description logic system is defined for formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The results show that the proposed method describes a complex system more comprehensively, and that introducing qualitative models into the quantitative simulation yields a higher survival probability for the target.

  4. Quantitative Theoretical and Conceptual Framework Use in Agricultural Education Research

    Science.gov (United States)

    Kitchel, Tracy; Ball, Anna L.

    2014-01-01

    The purpose of this philosophical paper was to articulate the disciplinary tenets for consideration when using theory in agricultural education quantitative research. The paper clarified terminology around the concept of theory in social sciences and introduced inaccuracies of theory use in agricultural education quantitative research. Finally,…

  5. Modeling with Young Students--Quantitative and Qualitative.

    Science.gov (United States)

    Bliss, Joan; Ogborn, Jon; Boohan, Richard; Brosnan, Tim; Mellar, Harvey; Sakonidis, Babis

    1999-01-01

    A project created tasks and tools to investigate quality and nature of 11- to 14-year-old pupils' reasoning with quantitative and qualitative computer-based modeling tools. Tasks and tools were used in two innovative modes of learning: expressive, where pupils created their own models, and exploratory, where pupils investigated an expert's model.…

  6. A website evaluation model by integration of previous evaluation models using a quantitative approach

    Directory of Open Access Journals (Sweden)

    Ali Moeini

    2015-01-01

Full Text Available With the growth of e-commerce, websites play an essential role in business success. Accordingly, many authors have offered website evaluation models since 1995. However, the multiplicity and diversity of evaluation models make it difficult to integrate them into a single comprehensive model. In this paper, a quantitative method is used to integrate previous models into a comprehensive model that is compatible with them. In this approach the researcher's judgment plays no role in the integration of models, and the new model derives its validity from 93 previous models and a systematic quantitative approach.

  7. δ-Cut Decision-Theoretic Rough Set Approach: Model and Attribute Reductions

    Directory of Open Access Journals (Sweden)

    Hengrong Ju

    2014-01-01

Full Text Available The decision-theoretic rough set is a quite useful rough set model that introduces decision costs into probabilistic approximations of the target. However, Yao’s decision-theoretic rough set is based on the classical indiscernibility relation; such a relation may be too strict in many applications. To solve this problem, a δ-cut decision-theoretic rough set is proposed, which is based on the δ-cut quantitative indiscernibility relation. Furthermore, with respect to the criteria of decision-monotonicity and cost decreasing, two different algorithms are designed to compute reducts, respectively. The comparison between these two algorithms shows the following: (1) with respect to the original data set, the reducts based on the decision-monotonicity criterion can generate more rules supported by the lower approximation region and fewer rules supported by the boundary region, so the uncertainty arising from the boundary region can be decreased; (2) with respect to the reducts based on the decision-monotonicity criterion, the reducts based on the cost-minimum criterion can obtain the lowest decision costs and the largest approximation qualities. This study suggests potential application areas and new research trends concerning rough set theory.
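At the core of any decision-theoretic rough set is a pair of probability thresholds that splits equivalence classes into a positive region (lower approximation), a boundary region, and a negative region. A minimal sketch of that mechanism, with an invented toy universe and thresholds chosen for illustration rather than derived from decision costs as the theory prescribes:

```python
# Probabilistic rough-set regions: classify each equivalence class of the
# indiscernibility relation by the conditional probability P(target | class).
# alpha/beta are illustrative; in decision-theoretic rough sets they are
# derived from the costs of acceptance, rejection, and deferment.

def approximations(classes, target, alpha=0.7, beta=0.3):
    """Split equivalence classes into lower-approximation and boundary regions."""
    lower, boundary = [], []
    for cls in classes:
        p = len(cls & target) / len(cls)
        if p >= alpha:
            lower.append(cls)       # accept: rule supported by the lower region
        elif p > beta:
            boundary.append(cls)    # defer: uncertain boundary region
        # else reject: negative region
    return lower, boundary

classes = [{1, 2, 3}, {4, 5}, {6, 7, 8, 9}]   # toy equivalence classes
target = {1, 2, 3, 4, 9}                      # toy target concept
lower, boundary = approximations(classes, target)
print(len(lower), len(boundary))
```

Loosening the thresholds (larger beta, smaller alpha) moves classes out of the boundary region, which is exactly the trade-off the reduct criteria above are evaluating.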

  8. K. Sridhar Moorthy's Theoretical Modelling in Marketing - A Review

    African Journals Online (AJOL)

    Toshiba

experimental design for theoretical modelling of sales force compensation is vivid and ... different from the concept of a model in decision support systems and behavioural .... "refers to the fact that people may not optimize." This, of course, is.

  9. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    NARCIS (Netherlands)

    L. Waltman (Ludo)

    2011-01-01

    textabstractThis thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic

  10. Some Model Theoretic Remarks on Bass Modules

    Directory of Open Access Journals (Sweden)

    E. Momtahan

    2011-09-01

Full Text Available We study Bass modules, Bass rings, and related concepts from a model theoretic point of view. We observe that the class of Bass modules (over a fixed ring) is not stable under elementary equivalence. We also observe under which conditions the class of Bass rings is stable under elementary equivalence.

  11. Testing a theoretical model of clinical nurses' intent to stay.

    Science.gov (United States)

    Cowden, Tracy L; Cummings, Greta G

    2015-01-01

    Published theoretical models of nurses' intent to stay (ITS) report inconsistent outcomes, and not all hypothesized models have been adequately tested. Research has focused on cognitive rather than emotional determinants of nurses' ITS. The aim of this study was to empirically verify a complex theoretical model of nurses' ITS that includes both affective and cognitive determinants and to explore the influence of relational leadership on staff nurses' ITS. The study was a correlational, mixed-method, nonexperimental design. A subsample of the Quality Work Environment Study survey data 2009 (n = 415 nurses) was used to test our theoretical model of clinical nurses' ITS as a structural equation model. The model explained 63% of variance in ITS. Organizational commitment, empowerment, and desire to stay were the model concepts with the strongest effects on nurses' ITS. Leadership practices indirectly influenced ITS. How nurses evaluate and respond to their work environment is both an emotional and rational process. Health care organizations need to be cognizant of the influence that nurses' feelings and views of their work setting have on their intention decisions and integrate that knowledge into the development of retention strategies. Leadership practices play an important role in staff nurses' perceptions of the workplace. Identifying the mechanisms by which leadership influences staff nurses' intentions to stay presents additional focus areas for developing retention strategies.

  12. A field theoretic model for static friction

    OpenAIRE

    Mahyaeh, I.; Rouhani, S.

    2013-01-01

We present a field theoretic model for friction, where the friction coefficient between two surfaces may be calculated based on the elastic properties of the surfaces. We assume that the geometry of the contact surface is not unusual. We verify that Amontons' law holds: the friction force is proportional to the normal load. This model gives the opportunity to calculate the static coefficient of friction for a few cases, and shows that it is in agreement with observed values. Furthermore we show that the ...

  13. Theoretical Relevance of Neuropsychological Data for Connectionist Modelling

    Directory of Open Access Journals (Sweden)

    Mauricio Iza

    2011-05-01

Full Text Available The symbolic information-processing paradigm in cognitive psychology has met a growing challenge from neural network models over the past two decades. While neuropsychological evidence has been of great utility to theories concerned with information processing, the real question is whether the less rigid connectionist models provide valid, or sufficient, information concerning complex cognitive structures. In this work, we discuss the theoretical implications that neuropsychological data pose for modelling cognitive systems.

  14. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
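The criterion at the heart of this book is simple to compute: AIC = 2k − 2 ln L, where k is the number of estimated parameters and ln L the maximized log-likelihood; the model with the smallest AIC is preferred. A minimal sketch (the likelihood values and parameter counts below are made up for illustration):

```python
def aic(log_likelihood, k):
    """Akaike's Information Criterion: AIC = 2k - 2 ln L."""
    return 2 * k - 2 * log_likelihood

# Toy comparison: a 2-parameter model with a slightly worse fit vs a
# 5-parameter model with a better fit. The extra-parameter penalty can
# outweigh the fit improvement.
models = {"simple": aic(log_likelihood=-104.2, k=2),
          "complex": aic(log_likelihood=-102.9, k=5)}
best = min(models, key=models.get)
print(best, round(models["simple"], 1), round(models["complex"], 1))
```

Here the simpler model wins despite its lower likelihood, which is exactly the bias-correction idea the book develops.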

  15. Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations

    Science.gov (United States)

    Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa

    2017-05-01

We present theoretical models for protostellar binary and multiple systems based on high-resolution numerical simulations with an adaptive mesh refinement (AMR) code, SFUMATO. Recent ALMA observations have revealed early phases of binary and multiple star formation at high spatial resolution. These observations should be compared with theoretical models of comparably high spatial resolution. We present two theoretical models for (1) a high-density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the MC27 model, we performed numerical simulations of the gravitational collapse of a turbulent cloud core. The cloud core exhibits fragmentation during the collapse, and dynamical interaction between the fragments produces an arc-like structure, which is one of the prominent structures observed by ALMA. For the L1551 NE model, we performed numerical simulations of gas accretion onto the protobinary. The simulations exhibit asymmetry of the circumbinary disk. Such asymmetry has also been observed by ALMA in the circumbinary disk of L1551 NE.

  16. Theoretical model simulations for the global Thermospheric Mapping Study (TMS) periods

    Science.gov (United States)

    Rees, D.; Fuller-Rowell, T. J.

Theoretical and semiempirical models of the solar UV/EUV and of the geomagnetic driving forces affecting the terrestrial mesosphere and thermosphere have been used to generate a series of representative numerical time-dependent and global models of the thermosphere, for the range of solar and geomagnetic activity levels which occurred during the three Thermospheric Mapping Study periods. The simulations obtained from these numerical models are compared with observations, and with the results of semiempirical models of the thermosphere. The theoretical models provide a record of the magnitude of the major driving forces which affected the thermosphere during the study periods, and a baseline against which the actual observed structure and dynamics can be compared.

  17. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia; Harmandaris, Vagelis; Katsoulakis, Markos A.; Plechac, Petr

    2015-01-01

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics

  18. 1st International Congress on Actuarial Science and Quantitative Finance

    CERN Document Server

    Garrido, José; Hernández-Hernández, Daniel; ICASQF

    2015-01-01

Featuring contributions from industry and academia, this volume includes chapters covering a diverse range of theoretical and empirical aspects of actuarial science and quantitative finance, including portfolio management, derivative valuation, risk theory and the economics of insurance. Developed from the First International Congress on Actuarial Science and Quantitative Finance, held at the Universidad Nacional de Colombia in Bogotá in June 2014, this volume highlights different approaches to issues arising from industries in the Andean and Caribbean regions. Contributions address topics such as reverse mortgage schemes and urban dynamics, modeling spot price dynamics in the electricity market, and optimizing calibration and pricing with SABR models.

  19. Theoretical Approaches in Evolutionary Ecology: Environmental Feedback as a Unifying Perspective.

    Science.gov (United States)

    Lion, Sébastien

    2018-01-01

    Evolutionary biology and ecology have a strong theoretical underpinning, and this has fostered a variety of modeling approaches. A major challenge of this theoretical work has been to unravel the tangled feedback loop between ecology and evolution. This has prompted the development of two main classes of models. While quantitative genetics models jointly consider the ecological and evolutionary dynamics of a focal population, a separation of timescales between ecology and evolution is assumed by evolutionary game theory, adaptive dynamics, and inclusive fitness theory. As a result, theoretical evolutionary ecology tends to be divided among different schools of thought, with different toolboxes and motivations. My aim in this synthesis is to highlight the connections between these different approaches and clarify the current state of theory in evolutionary ecology. Central to this approach is to make explicit the dependence on environmental dynamics of the population and evolutionary dynamics, thereby materializing the eco-evolutionary feedback loop. This perspective sheds light on the interplay between environmental feedback and the timescales of ecological and evolutionary processes. I conclude by discussing some potential extensions and challenges to our current theoretical understanding of eco-evolutionary dynamics.

  20. Testing process predictions of models of risky choice: a quantitative model comparison approach

    Science.gov (United States)

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472
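The contrast between the two views can be made concrete. The sketch below uses plain expected value as a stand-in for the generic expectation model, and follows the published reason order of the priority heuristic for two-outcome all-gain gambles (minimum gain first, then probability of the minimum gain, then maximum gain, with a one-tenth-of-maximum aspiration level); the gambles themselves are invented for illustration:

```python
# Expectation view: evaluate each gamble separately, trading off probability
# against outcome; choose the higher expectation.
def expected_value(gamble):
    """gamble is a list of (probability, outcome) pairs."""
    return sum(p * x for p, x in gamble)

# Heuristic view (priority heuristic, gains only): compare reasons in a fixed
# order, stop at the first reason that discriminates; no trade-offs are made.
def priority_heuristic(a, b):
    min_a, min_b = min(x for _, x in a), min(x for _, x in b)
    max_all = max(x for g in (a, b) for _, x in g)
    if abs(min_a - min_b) >= 0.1 * max_all:      # reason 1: minimum gain
        return a if min_a > min_b else b
    p_min_a = sum(p for p, x in a if x == min_a)
    p_min_b = sum(p for p, x in b if x == min_b)
    if abs(p_min_a - p_min_b) >= 0.1:            # reason 2: P(minimum gain)
        return a if p_min_a < p_min_b else b
    max_a, max_b = max(x for _, x in a), max(x for _, x in b)
    return a if max_a > max_b else b             # reason 3: maximum gain

safe = [(1.0, 100)]                 # 100 for sure
risky = [(0.5, 250), (0.5, 0)]      # 50/50 chance of 250 or nothing
print(expected_value(risky) > expected_value(safe))  # expectation favors risky
print(priority_heuristic(safe, risky) is safe)       # heuristic stops at reason 1
```

The two views also imply different search processes (reason-wise, limited search for the heuristic; gamble-wise, exhaustive search for expectation models), which is what the experiments above test.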

  1. Testing Process Predictions of Models of Risky Choice: A Quantitative Model Comparison Approach

    Directory of Open Access Journals (Sweden)

    Thorsten ePachur

    2013-09-01

Full Text Available This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or nonlinear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter, Gigerenzer, & Hertwig, 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called "similarity." In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies.

  2. Understanding employee motivation and organizational performance: Arguments for a set-theoretic approach

    Directory of Open Access Journals (Sweden)

    Michael T. Lee

    2016-09-01

Full Text Available Empirical evidence demonstrates that motivated employees mean better organizational performance. The objective of this conceptual paper is to articulate the progress that has been made in understanding employee motivation and organizational performance, and to suggest how the theory concerning employee motivation and organizational performance may be advanced. We acknowledge the existing limitations of theory development and suggest an alternative research approach. Current motivation theory development is based on conventional quantitative analysis (e.g., multiple regression analysis, structural equation modeling). Since researchers are interested in context and in understanding this social phenomenon holistically, they think in terms of combinations and configurations of a set of pertinent variables. We suggest that researchers take a set-theoretic approach to complement existing conventional quantitative analysis. To advance current thinking, we propose a set-theoretic approach to leverage employee motivation for organizational performance.
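The set-theoretic alternative advocated here replaces coefficient estimation with subset relations between configurations of conditions and the outcome. A minimal crisp-set sketch in the spirit of qualitative comparative analysis (QCA): the data and the condition names are invented, and fuzzy-set variants would use degrees of membership instead of 0/1.

```python
# Crisp-set consistency: the share of cases exhibiting a condition
# configuration that also exhibit the outcome (1.0 = perfect subset relation).

def consistency(condition, outcome):
    """condition and outcome are parallel 0/1 case lists."""
    cases = [o for c, o in zip(condition, outcome) if c]
    return sum(cases) / len(cases)

# Eight hypothetical employees coded on two motivation conditions and an
# outcome; the question is whether the *combination* of conditions is
# sufficient for high performance.
autonomy =    [1, 1, 0, 1, 0, 1, 1, 0]
recognition = [1, 0, 1, 1, 0, 1, 1, 1]
performance = [1, 0, 0, 1, 0, 1, 1, 0]

combo = [a and r for a, r in zip(autonomy, recognition)]
print(consistency(combo, performance))
```

In this toy data neither condition alone is consistently sufficient, but their conjunction is, which is the kind of configurational claim regression coefficients cannot express directly.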

  3. Desublimation process: verification and applications of a theoretical model

    International Nuclear Information System (INIS)

    Eby, R.S.

    1979-01-01

    A theoretical model simulating the simultaneous heat and mass transfer which takes place during the desublimation of a gas to a solid is presented. Desublimer column loading profiles to experimentally verify the model were obtained using a gamma scintillation technique. The data indicate that, if the physical parameters of the desublimed frost material are known, the model can accurately predict the desublimation phenomenon. The usefulness of the model in different engineering applications is also addressed

  4. A Review of Quantitative Situation Assessment Models for Nuclear Power Plant Operators

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Seong, Poong Hyun

    2009-01-01

Situation assessment is the process of developing situation awareness, and situation awareness is defined as 'the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future.' Situation awareness is an important element influencing human actions because human decision making is based on the result of situation assessment or situation awareness. There are many models of situation awareness, and those models can be categorized as qualitative or quantitative. As the effects of some input factors on situation awareness can be investigated through the quantitative models, the quantitative models are more useful than the qualitative models for the design of operator interfaces, automation strategies, training programs, and so on. This study presents a review of two quantitative models of situation assessment (SA) for nuclear power plant operators.

  5. A theoretical model of job retention for home health care nurses.

    Science.gov (United States)

    Ellenbecker, Carol Hall

    2004-08-01

Predicted severe nursing shortages and an increasing demand for home health care services have made the retention of experienced, qualified nursing staff a priority for health care organizations. The purpose of this paper is to describe a theoretical model of job retention for home health care nurses. The theoretical model is an integration of the findings of empirical research related to intent to stay and retention, components of Neal's theory of home health care nursing practice and findings from earlier work to develop an instrument to measure home health care nurses' job satisfaction. The theoretical model identifies antecedents to job satisfaction of home health care nurses. The antecedents are intrinsic and extrinsic job characteristics. The model also proposes that job satisfaction is directly related to retention and indirectly related to retention through intent to stay. Individual nurse characteristics are indirectly related to retention through intent to stay. The individual characteristic of tenure is indirectly related to retention through autonomy, as an intrinsic characteristic of job satisfaction, and intent to stay. The proposed model can be used to guide research that explores gaps in knowledge about intent to stay and retention among home health care nurses.

  6. The quantitative modelling of human spatial habitability

    Science.gov (United States)

    Wise, James A.

    1988-01-01

A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to any size or shape of interior, at any scale of consideration, from the Space Station as a whole to an individual enclosure or work station. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the U. of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life support concerns.

  7. Simple theoretical models for composite rotor blades

    Science.gov (United States)

    Valisetty, R. R.; Rehfield, L. W.

    1984-01-01

The development of theoretical rotor blade structural models for designs based upon composite construction is discussed. Care was exercised to include a number of nonclassical effects that previous experience indicated would be potentially important to account for. A model, representative of the size of a main rotor blade, is analyzed in order to assess the importance of various influences. The findings of this model study suggest that for the slenderness and closed-cell construction considered, the refinements are of little importance and a classical type theory is adequate. The potential of elastic tailoring is dramatically demonstrated, so the generality of arbitrary ply layup in the cell wall is needed to exploit this opportunity.

  8. Surface physics theoretical models and experimental methods

    CERN Document Server

    Mamonova, Marina V; Prudnikova, I A

    2016-01-01

The demands of production, such as thin films in microelectronics, rely on consideration of factors influencing the interaction of dissimilar materials that make contact with their surfaces. Bond formation between surface layers of dissimilar condensed solids, termed adhesion, depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...

  9. Theoretical modeling of critical temperature increase in metamaterial superconductors

    Science.gov (United States)

    Smolyaninov, Igor; Smolyaninova, Vera

Recent experiments have demonstrated that the metamaterial approach is capable of a drastic increase of the critical temperature Tc of epsilon-near-zero (ENZ) metamaterial superconductors. For example, tripling of the critical temperature has been observed in Al-Al2O3 ENZ core-shell metamaterials. Here, we perform theoretical modelling of the Tc increase in metamaterial superconductors based on the Maxwell-Garnett approximation of their dielectric response function. Good agreement is demonstrated between theoretical modelling and experimental results in both aluminum and tin-based metamaterials. Taking advantage of the demonstrated success of this model, the critical temperature of hypothetical niobium-, MgB2- and H2S-based metamaterial superconductors is evaluated. The MgB2-based metamaterial superconductors are projected to reach the liquid nitrogen temperature range. In the case of an H2S-based metamaterial, Tc appears to reach 250 K. This work was supported in part by NSF Grant DMR-1104676 and the School of Emerging Technologies at Towson University.
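The Maxwell-Garnett approximation mentioned here has a closed form: (ε_eff − ε_m)/(ε_eff + 2ε_m) = f (ε_i − ε_m)/(ε_i + 2ε_m), where f is the inclusion volume fraction. The sketch below solves it for the effective permittivity; the permittivity values and volume fractions are illustrative stand-ins, not the paper's fitted parameters.

```python
# Maxwell-Garnett effective-medium approximation for spherical inclusions
# (permittivity eps_incl, volume fraction f) embedded in a host matrix
# (permittivity eps_matrix), solved for the effective permittivity.

def maxwell_garnett(eps_matrix, eps_incl, f):
    beta = (eps_incl - eps_matrix) / (eps_incl + 2 * eps_matrix)
    return eps_matrix * (1 + 2 * f * beta) / (1 - f * beta)

# A metallic host (negative real permittivity) diluted with dielectric
# inclusions moves toward epsilon-near-zero (ENZ) as the dielectric
# fraction grows, which is the regime these superconductors exploit.
eps_metal, eps_diel = -40.0, 3.0   # illustrative real parts only
for f in (0.3, 0.6, 0.9):
    print(round(maxwell_garnett(eps_metal, eps_diel, f), 2))
```

With these stand-in values the effective permittivity climbs from strongly negative toward zero as f increases, crossing near f ≈ 0.9; a full treatment would use complex, frequency-dependent permittivities.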

  10. How to Be Both Rich and Happy: Combining Quantitative and Qualitative Strategic Reasoning about Multi-Player Games

    DEFF Research Database (Denmark)

    Bulling, Nils; Goranko, Valentin

    2013-01-01

    We propose a logical framework combining a game-theoretic study of the abilities of agents to achieve quantitative objectives in multi-player games, by optimizing payoffs or preferences on outcomes, with a logical analysis of the abilities of players to achieve qualitative objectives, i.e., reaching or maintaining game states with desired properties. We enrich concurrent game models with payoffs for the normal form games associated with the states of the model and propose a quantitative extension of the logic ATL* enabling the combination of quantitative and qualitative reasoning.

  11. Quantitative system validation in model driven design

    DEFF Research Database (Denmark)

    Hermanns, Hilger; Larsen, Kim Guldstrand; Raskin, Jean-Francois

    2010-01-01

    The European STREP project Quasimodo1 develops theory, techniques and tool components for handling quantitative constraints in model-driven development of real-time embedded systems, covering in particular real-time, hybrid and stochastic aspects. This tutorial highlights the advances made, focus...

  12. Theoretical Model of Development of Information Competence among Students Enrolled in Elective Courses

    Science.gov (United States)

    Zhumasheva, Anara; Zhumabaeva, Zaida; Sakenov, Janat; Vedilina, Yelena; Zhaxylykova, Nuriya; Sekenova, Balkumis

    2016-01-01

    The current study focuses on the research topic of creating a theoretical model of development of information competence among students enrolled in elective courses. In order to examine specific features of the theoretical model of development of information competence among students enrolled in elective courses, we performed an analysis of…

  13. A beginner's guide to writing the nursing conceptual model-based theoretical rationale.

    Science.gov (United States)

    Gigliotti, Eileen; Manister, Nancy N

    2012-10-01

    Writing the theoretical rationale for a study can be a daunting prospect for novice researchers. Nursing's conceptual models provide excellent frameworks for placement of study variables, but moving from the very abstract concepts of the nursing model to the less abstract concepts of the study variables is difficult. Similar to the five-paragraph essay used by writing teachers to assist beginning writers to construct a logical thesis, the authors of this column present guidelines that beginners can follow to construct their theoretical rationale. This guide can be used with any nursing conceptual model but Neuman's model was chosen here as the exemplar.

  14. Theoretical Analysis and Design of Ultrathin Broadband Optically Transparent Microwave Metamaterial Absorbers

    Science.gov (United States)

    Deng, Ruixiang; Li, Meiling; Muneer, Badar; Zhu, Qi; Shi, Zaiying; Song, Lixin; Zhang, Tao

    2018-01-01

    The Optically Transparent Microwave Metamaterial Absorber (OTMMA) is of significant use in both civil and military fields. In this paper, an equivalent circuit model is adopted as a springboard to guide the design of OTMMA. The physical model and absorption mechanisms of an ideal lightweight ultrathin OTMMA are comprehensively investigated. Both the theoretical value of the equivalent resistance and the quantitative relation between the equivalent inductance and equivalent capacitance are derived for the design. Frequency-dependent characteristics of the theoretical equivalent resistance are also investigated. Based on this theoretical work, an effective and controllable design approach is proposed. To validate the approach, a wideband OTMMA is designed, fabricated, analyzed and tested. The results reveal that high absorption of more than 90% can be achieved over the whole 6~18 GHz band. The fabricated OTMMA also has an optical transparency of up to 78% at 600 nm and is much thinner and lighter than its counterparts. PMID:29324686
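    As an illustrative sketch of the equivalent-circuit viewpoint described in the abstract (not the authors' actual design), a ground-plane-backed absorber can be modelled as a series-RLC sheet in parallel with a shorted transmission line representing the dielectric spacer; all component values below are hypothetical.

```python
import math

Z0 = 376.73  # impedance of free space (ohms)
C_LIGHT = 3e8

def absorptance(freq_hz, R, L, C, d, eps_r):
    """Absorption A = 1 - |Gamma|^2 of a ground-plane-backed absorber
    modelled as a series-RLC sheet in parallel with a shorted
    transmission line (the grounded dielectric spacer of thickness d)."""
    w = 2 * math.pi * freq_hz
    z_sheet = R + 1j * w * L + 1 / (1j * w * C)       # series RLC sheet
    beta = w * math.sqrt(eps_r) / C_LIGHT             # phase constant in spacer
    z_spacer = 1j * (Z0 / math.sqrt(eps_r)) * math.tan(beta * d)
    z_in = z_sheet * z_spacer / (z_sheet + z_spacer)  # parallel combination
    gamma = (z_in - Z0) / (z_in + Z0)                 # reflection coefficient
    return 1 - abs(gamma) ** 2

# hypothetical component values, roughly resonant near 10 GHz
for f_ghz in (6, 10, 14, 18):
    print(f_ghz, round(absorptance(f_ghz * 1e9, 370.0, 1.0e-9, 2.53e-13, 7e-3, 1.0), 3))
```

    At the sheet's resonance the reactances cancel, and absorption approaches unity when the sheet resistance is matched to the free-space impedance.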

  15. Theoretical Analysis and Design of Ultrathin Broadband Optically Transparent Microwave Metamaterial Absorbers

    Directory of Open Access Journals (Sweden)

    Ruixiang Deng

    2018-01-01

    Full Text Available The Optically Transparent Microwave Metamaterial Absorber (OTMMA) is of significant use in both civil and military fields. In this paper, an equivalent circuit model is adopted as a springboard to guide the design of OTMMA. The physical model and absorption mechanisms of an ideal lightweight ultrathin OTMMA are comprehensively investigated. Both the theoretical value of the equivalent resistance and the quantitative relation between the equivalent inductance and equivalent capacitance are derived for the design. Frequency-dependent characteristics of the theoretical equivalent resistance are also investigated. Based on this theoretical work, an effective and controllable design approach is proposed. To validate the approach, a wideband OTMMA is designed, fabricated, analyzed and tested. The results reveal that high absorption of more than 90% can be achieved over the whole 6~18 GHz band. The fabricated OTMMA also has an optical transparency of up to 78% at 600 nm and is much thinner and lighter than its counterparts.

  16. Quantitative interface models for simulating microstructure evolution

    International Nuclear Information System (INIS)

    Zhu, J.Z.; Wang, T.; Zhou, S.H.; Liu, Z.K.; Chen, L.Q.

    2004-01-01

    To quantitatively simulate microstructural evolution in real systems, we investigated three different interface models: a sharp-interface model implemented in the software DICTRA and two diffuse-interface models which use either physical order parameters or artificial order parameters. A particular example is considered: the diffusion-controlled growth of a γ′ precipitate in a supersaturated γ matrix in Ni-Al binary alloys. All three models use thermodynamic and kinetic parameters from the same databases. The temporal evolution profiles of composition from the different models are shown to agree with each other. The focus is on examining the advantages and disadvantages of each model as applied to microstructure evolution in alloys.

  17. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992–2012. There are six ...
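    The record describes the model only in outline. The core computation behind any HMM-based forecast is the forward (filtering) recursion, sketched here on a toy two-state chain; all probabilities below are invented for illustration and are unrelated to the nine-variable model in the record.

```python
# Toy forward algorithm for a two-state HMM ("snow" / "dry").
# All probabilities are hypothetical illustration values.

states = ("snow", "dry")
start = {"snow": 0.3, "dry": 0.7}
trans = {"snow": {"snow": 0.6, "dry": 0.4},
         "dry":  {"snow": 0.2, "dry": 0.8}}
# emission probabilities for a discretised observation ("cold" / "warm")
emit = {"snow": {"cold": 0.8, "warm": 0.2},
        "dry":  {"cold": 0.3, "warm": 0.7}}

def forward(observations):
    """Return P(state | observations so far), the filtered belief."""
    belief = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        # propagate through the transition matrix, then weight by emission
        belief = {s: emit[s][obs] * sum(belief[p] * trans[p][s] for p in states)
                  for s in states}
    total = sum(belief.values())
    return {s: v / total for s, v in belief.items()}

print(forward(["cold", "cold", "warm"]))
```

    A prediction two days ahead, as in the record, would further propagate this belief through the transition matrix without new observations.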

  18. A new theoretical approach to analyze complex processes in cytoskeleton proteins.

    Science.gov (United States)

    Li, Xin; Kolomeisky, Anatoly B

    2014-03-20

    Cytoskeleton proteins are filament structures that support a large number of important biological processes. These dynamic biopolymers exist in nonequilibrium conditions stimulated by hydrolysis chemical reactions in their monomers. Current theoretical methods provide a comprehensive picture of biochemical and biophysical processes in cytoskeleton proteins, but the description is only qualitative under biologically relevant conditions because the mean-field models employed neglect correlations. We develop a new theoretical method for describing dynamic processes in cytoskeleton proteins that takes into account spatial correlations in the chemical composition of these biopolymers. Our approach is based on an analysis of the probabilities of different clusters of subunits. It allows us to obtain exact analytical expressions for a variety of dynamic properties of cytoskeleton filaments. By comparing theoretical predictions with Monte Carlo computer simulations, it is shown that our method provides a fully quantitative description of complex dynamic phenomena in cytoskeleton proteins under all conditions.

  19. Healing from Childhood Sexual Abuse: A Theoretical Model

    Science.gov (United States)

    Draucker, Claire Burke; Martsolf, Donna S.; Roller, Cynthia; Knapik, Gregory; Ross, Ratchneewan; Stidham, Andrea Warner

    2011-01-01

    Childhood sexual abuse is a prevalent social and health care problem. The processes by which individuals heal from childhood sexual abuse are not clearly understood. The purpose of this study was to develop a theoretical model to describe how adults heal from childhood sexual abuse. Community recruitment for an ongoing broader project on sexual…

  20. Theoretical Hill-type muscle and stability: numerical model and application.

    Science.gov (United States)

    Schmitt, S; Günther, M; Rupp, T; Bayer, A; Häufle, D

    2013-01-01

    The construction of artificial muscles is one of the most challenging developments in today's biomedical science. The application of artificial muscles is focused both on the construction of orthotics and prosthetics for rehabilitation and prevention purposes and on building humanoid walking machines for robotics research. Research in biomechanics tries to explain the functioning and design of real biological muscles and therefore lays the foundation for the development of functional artificial muscles. Recently, the hyperbolic Hill-type force-velocity relation was derived from simple mechanical components. In this contribution, this theoretical biomechanical model is transferred to a numerical model and applied to present a proof of concept of a functional artificial muscle. Additionally, the validated theoretical model is used to determine force-velocity relations of different animal species, based on literature data from biological experiments. Moreover, it is shown that an antagonistic muscle actuator can help stabilise a single inverted pendulum model, compared with a control approach using a linear torque generator.
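    The hyperbolic Hill force-velocity relation mentioned in the abstract has the closed form (F + a)(v + b) = (F0 + a)b. A minimal sketch, with hypothetical parameter values:

```python
def hill_force(v, F0, a, b):
    """Concentric muscle force from Hill's hyperbolic relation
    (F + a) * (v + b) = (F0 + a) * b, solved for F at shortening
    velocity v >= 0."""
    return (F0 + a) * b / (v + b) - a

F0 = 1000.0    # maximum isometric force (N), hypothetical
a = 0.25 * F0  # force-like Hill constant; a/F0 ~ 0.25 is a typical value
b = 0.25       # velocity-like Hill constant (m/s), so vmax = b * F0 / a = 1 m/s

for v in (0.0, 0.25, 0.5, 1.0):
    print(v, hill_force(v, F0, a, b))  # force falls from F0 at v = 0 to 0 at vmax
```

    The relation gives the isometric force F0 at zero velocity and zero force at the maximum shortening velocity vmax = b·F0/a.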

  1. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    Energy Technology Data Exchange (ETDEWEB)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight into how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both retrospectively and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempts at modeling conflict as a result of system-level interactions. This study presents modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  2. Quantitative Models and Analysis for Reactive Systems

    DEFF Research Database (Denmark)

    Thrane, Claus

    phones and websites. Acknowledging that now more than ever, systems come in contact with the physical world, we need to revise the way we construct models and verification algorithms, to take into account the behavior of systems in the presence of approximate, or quantitative information, provided...

  3. Organizational Resilience: The Theoretical Model and Research Implication

    Directory of Open Access Journals (Sweden)

    Xiao Lei

    2017-01-01

    Full Text Available Organizations are all subject to diverse, ever-changing and uncertain environments. In this situation, organizations should develop a capability to withstand emergencies and recover from disruptions. Based on an extensive body of literature, the paper presents the main concept of organizational resilience, constructs a primary theoretical model, and draws some implications for management.

  4. Towards a theoretical model on medicines as a health need.

    Science.gov (United States)

    Vargas-Peláez, Claudia Marcela; Soares, Luciano; Rover, Marina Raijche Mattozo; Blatt, Carine Raquel; Mantel-Teeuwisse, Aukje; Rossi Buenaventura, Francisco Augusto; Restrepo, Luis Guillermo; Latorre, María Cristina; López, José Julián; Bürgin, María Teresa; Silva, Consuelo; Leite, Silvana Nair; Mareni Rocha, Farias

    2017-04-01

    Medicines are considered one of the main tools of western medicine for resolving health problems, and they currently represent an important share of countries' healthcare budgets. In the Latin American region, access to essential medicines is still a challenge, although countries have established measures in recent years to guarantee more equitable access. A theoretical model is proposed for analysing the social, political, and economic factors that modulate the role of medicines as a health need and their influence on the accessibility of, and access to, medicines. The model was built on a narrative review of health needs and followed the conceptual modelling methodology for theory-building. It considers the elements (stakeholders, policies) that modulate the perception of medicines as a health need from two perspectives, health and market, at three levels: international, national and local. The perception of medicines as a health need is described according to Bradshaw's categories: felt need, normative need, comparative need and expressed need. When these categories, applied to medicines, coincide, patients get access to the medicines they perceive as a need; when the categories do not coincide, barriers to access are created. Our theoretical model, which takes a broader view of access to medicines, emphasises how the power structures, interests, interdependencies, values and principles of the stakeholders can influence the perception of medicines as a health need and access to medicines in Latin American countries. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Quantitative model analysis with diverse biological data: applications in developmental pattern formation.

    Science.gov (United States)

    Pargett, Michael; Umulis, David M

    2013-07-15

    Mathematical modeling of transcription factor and signaling networks is widely used to understand whether and how a mechanism works, and to infer regulatory interactions that produce a model consistent with observed data. Both approaches to modeling are informed by experimental data; however, much of the data available, or even acquirable, are not quantitative. Data that are not strictly quantitative cannot be used by classical quantitative model-based analyses that measure the difference between a measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques for transforming data of varied quality to enable quantitative comparison with mathematical models. This review is intended both to inform the use of these model analysis methods, with a focus on parameter estimation, and to help guide the choice of method for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based studies and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.
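    Of the techniques the abstract names, optimal scaling has a particularly simple closed form: the least-squares scale factor mapping model output onto arbitrary-unit data is s* = Σ dᵢmᵢ / Σ mᵢ². A sketch with made-up data:

```python
def optimal_scale(data, model):
    """Closed-form least-squares scale factor s* = sum(d*m) / sum(m*m)
    that best maps model predictions onto arbitrary-unit data."""
    num = sum(d * m for d, m in zip(data, model))
    den = sum(m * m for m in model)
    return num / den

def scaled_sse(data, model):
    """Sum of squared errors after applying the optimal scale."""
    s = optimal_scale(data, model)
    return sum((d - s * m) ** 2 for d, m in zip(data, model))

# hypothetical readings in arbitrary fluorescence units vs. model output
data = [2.1, 4.0, 5.9, 8.2]
model = [1.0, 2.0, 3.0, 4.0]
print(optimal_scale(data, model), scaled_sse(data, model))
```

    This lets relative (unitless) measurements constrain an absolute model without forcing an arbitrary calibration onto the data.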

  6. A Game Theoretic Model of Thermonuclear Cyberwar

    Energy Technology Data Exchange (ETDEWEB)

    Soper, Braden C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-08-23

    In this paper we propose a formal game theoretic model of thermonuclear cyberwar based on ideas found in [1] and [2]. Our intention is that such a game will act as a first step toward building more complete formal models of Cross-Domain Deterrence (CDD). We believe the proposed thermonuclear cyberwar game is an ideal place to start on such an endeavor because the game can be fashioned in a way that is closely related to the classical models of nuclear deterrence [4–6], but with obvious modifications that will help to elucidate the complexities introduced by a second domain. We start with the classical bimatrix nuclear deterrence game based on the game of chicken, but introduce uncertainty via a left-of-launch cyber capability that one or both players may possess.
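    The classical game of chicken underlying the proposed model can be written as a 2x2 bimatrix game. A minimal sketch of finding its pure-strategy Nash equilibria by best-response enumeration (payoffs are the textbook chicken values, not numbers from the report):

```python
# Pure-strategy Nash equilibria of a 2x2 bimatrix game by
# best-response enumeration.  Payoffs are textbook "chicken" values.

# strategies: 0 = swerve (back down), 1 = straight (escalate)
payoff_row = [[0, -1],
              [1, -10]]
payoff_col = [[0, 1],
              [-1, -10]]

def pure_nash(A, B):
    """All (i, j) where i is a best response to j and vice versa."""
    eqs = []
    for i in range(2):
        for j in range(2):
            row_best = all(A[i][j] >= A[k][j] for k in range(2))
            col_best = all(B[i][j] >= B[i][k] for k in range(2))
            if row_best and col_best:
                eqs.append((i, j))
    return eqs

print(pure_nash(payoff_row, payoff_col))  # the two asymmetric outcomes
```

    The two pure equilibria, one player swerving while the other escalates, are what make chicken the standard template for deterrence models; the cyber modification in the report adds uncertainty on top of this structure.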

  7. A theoretical model for predicting neutron fluxes for cyclic Neutron ...

    African Journals Online (AJOL)

    A theoretical model has been developed for prediction of thermal neutron fluxes required for cyclic irradiations of a sample to obtain the same activity previously used for the detection of any radionuclide of interest. The model is suitable for radiotracer production or for long-lived neutron activation products where the ...

  8. Quantitative sputter profiling at surfaces and interfaces

    International Nuclear Information System (INIS)

    Kirschner, J.; Etzkorn, H.W.

    1981-01-01

    The key problem in quantitative sputter profiling, that of a sliding depth scale, has been solved by combined Auger/X-ray microanalysis. By means of this technique and for the model system Ge/Si (amorphous) the following questions are treated quantitatively: shape of the sputter profiles when sputtering through an interface and origin of their asymmetry; precise location of the interface plane on the depth profile; broadening effects due to limited depth of information and their correction; origin and amount of bombardment-induced broadening for different primary ions and energies; depth dependence of the broadening; and basic limits to depth resolution. Comparisons are made to recent theoretical calculations based on recoil mixing in the collision cascade and very good agreement is found

  9. Interpretation of protein quantitation using the Bradford assay: comparison with two calculation models.

    Science.gov (United States)

    Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung

    2013-03-01

    The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards. Copyright © 2012 Elsevier Inc. All rights reserved.
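    The two calculation models differ in which residues are assumed to bind the dye. A minimal sketch of the residue-counting step, assuming a simple count of dye-binding residues suffices (the published models may weight residues differently):

```python
def dye_binding_count(seq, method="M2"):
    """Count residues assumed to bind Coomassie Brilliant Blue G-250
    in a one-letter-code protein sequence.  M1 counts Arg (R) and
    Lys (K); M2 additionally counts His (H), following the two models
    compared in the record."""
    residues = "RK" if method == "M1" else "RKH"
    return sum(seq.count(r) for r in residues)

seq = "MKHRRAKGHK"  # hypothetical peptide
print(dye_binding_count(seq, "M1"), dye_binding_count(seq, "M2"))
```

    Normalizing measured absorbance by such a count is one way to compare Bradford results across proteins with different basic-residue content.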

  10. A theoretical model for the control of an enforcement system on emissions of pollutants

    International Nuclear Information System (INIS)

    Villegas, Clara Ines

    2005-01-01

    A theoretical proposal for the development of an enforcement strategy is presented in this paper. The proposal guarantees full compliance with an emission charge system in the presence of self-reporting. The proposed models are static, and largely based on those proposed by Strandlund and Chavez (2000) for a transferable permits system with self-reporting. Theoretical models were developed for three possible violations: self-report violations, violations of maximum emission limits, and payment violations. Based on the theoretical results, a simulation was implemented with hypothetical data: 20 regulated firms with different marginal abatement cost functions. The variation in the charge amount, monitoring costs, abatement costs, self-report value and total costs is analyzed for each of the theoretical models under different scenarios. Our results show that the behavior of the different variables remains unchanged across the three static models, and that variations occur only within the scenarios. Our results can serve as a tool for the formulation and design of taxing systems

  11. Theoretical models for supercritical fluid extraction.

    Science.gov (United States)

    Huang, Zhen; Shi, Xiao-Han; Jiang, Wei-Juan

    2012-08-10

    For the proper design of supercritical fluid extraction processes, it is essential to have a sound knowledge of the mass transfer mechanism of the extraction process and an appropriate mathematical representation of it. In this paper, advances in and applications of kinetic models for describing supercritical fluid extraction from various solid matrices are presented. The theoretical models reviewed here include the hot ball diffusion, broken and intact cell, and shrinking core models, along with some relatively simple models. The mathematical representations of these models are interpreted in detail, together with their assumptions, parameter identification and application examples. Extraction of an analyte solute from a solid matrix by a supercritical fluid involves dissolution of the analyte from the solid, diffusion of the analyte within the matrix, and its transport into the bulk supercritical fluid. The mechanisms involved in a mass transfer model are discussed in terms of external mass transfer resistance, internal mass transfer resistance, solute-solid interactions and axial dispersion. Correlations of the external mass transfer coefficient and the axial dispersion coefficient with certain dimensionless numbers are also discussed. Among these models, the broken and intact cell model seems to be the most relevant, as it provides a realistic description of the plant material structure for better understanding of the mass-transfer kinetics; it has therefore been widely employed for modeling supercritical fluid extraction of natural materials. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Validation of theoretical models through measured pavement response

    DEFF Research Database (Denmark)

    Ullidtz, Per

    1999-01-01

    mechanics was quite different from the measured stress, the peak theoretical value being only half of the measured value. On an instrumented pavement structure in the Danish Road Testing Machine, deflections were measured at the surface of the pavement under FWD loading. Different analytical models were...... then used to derive the elastic parameters of the pavement layers that would produce deflections matching the measured deflections. Stresses and strains were then calculated at the position of the gauges and compared to the measured values. It was found that all analytical models would predict the tensile...

  13. Evaluation of recent quantitative magnetospheric magnetic field models

    International Nuclear Information System (INIS)

    Walker, R.J.

    1976-01-01

    Recent quantitative magnetospheric field models contain many features not found in earlier models. Magnetopause models which include the effects of dipole tilt have been presented. More realistic models of the tail field include tail currents which close on the magnetopause, cross-tail currents of finite thickness, and cross-tail current models which describe the position of the neutral sheet as a function of tilt. Finally, models have attempted to calculate the field of currents distributed in the inner magnetosphere. As the purpose of a magnetospheric model is to provide a mathematical description of the field that reasonably reproduces the observed magnetospheric field, several recent models were compared with the observed ΔB (= B_observed − B_main field) contours. Models containing only contributions from magnetopause and tail current systems are able to reproduce the observed quiet-time field only in an extremely qualitative way. The best quantitative agreement between models and observations occurs when currents distributed in the inner magnetosphere are added to the magnetopause and tail current systems. However, the distributed current models are valid only for zero tilt. Even the models which reproduce the average observed field reasonably well may not give physically reasonable field gradients. Three of the models evaluated contain regions in the near tail in which the field gradient reverses direction. One region in which all the models fall short is that around the polar cusp, though most can be used to calculate the position of the last closed field line reasonably well

  14. How to Be Both Rich and Happy: Combining Quantitative and Qualitative Strategic Reasoning about Multi-Player Games (Extended Abstract)

    Directory of Open Access Journals (Sweden)

    Nils Bulling

    2013-03-01

    Full Text Available We propose a logical framework combining a game-theoretic study of abilities of agents to achieve quantitative objectives in multi-player games by optimizing payoffs or preferences on outcomes with a logical analysis of the abilities of players for achieving qualitative objectives of players, i.e., reaching or maintaining game states with desired properties. We enrich concurrent game models with payoffs for the normal form games associated with the states of the model and propose a quantitative extension of the logic ATL* enabling the combination of quantitative and qualitative reasoning.

  15. Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    DEFF Research Database (Denmark)

    ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting pa...

  16. A theoretical model of water and trade

    Science.gov (United States)

    Dang, Qian; Konar, Megan; Reimer, Jeffrey J.; Di Baldassarre, Giuliano; Lin, Xiaowen; Zeng, Ruijie

    2016-03-01

    Water is an essential input for agricultural production. Agriculture, in turn, is globalized through the trade of agricultural commodities. In this paper, we develop a theoretical model that emphasizes four tradeoffs involving water-use decision-making that are important yet not always considered in a consistent framework. One tradeoff focuses on competition for water among different economic sectors. A second examines the possibility that certain types of agricultural investments can offset water use. A third explores the possibility that the rest of the world can be a source of supply or demand for a country's water-using commodities. The fourth concerns how variability in water supplies influences farmer decision-making. We show the conditions under which trade liberalization affects water use. Two policy scenarios for reducing water use are evaluated. First, we derive a target tax that reduces water use without offsetting the gains from trade liberalization, although important tradeoffs exist between economic performance and resource use. Second, we show how subsidization of water-saving technologies can allow producers to use less water without reducing agricultural production, making such subsidization an indirect means of influencing water-use decision-making. Finally, we outline the conditions under which the riskiness of water availability affects water use. These theoretical results generate hypotheses that can be tested empirically in future work.

  17. Category-theoretic models of algebraic computer systems

    Science.gov (United States)

    Kovalyov, S. P.

    2016-01-01

    A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring whose operations are implemented in the conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.

  18. A Theoretically Consistent Framework for Modelling Lagrangian Particle Deposition in Plant Canopies

    Science.gov (United States)

    Bailey, Brian N.; Stoll, Rob; Pardyjak, Eric R.

    2018-06-01

    We present a theoretically consistent framework for modelling Lagrangian particle deposition in plant canopies. The primary focus is on describing the probability of particles encountering canopy elements (i.e., potential deposition), and the framework provides a consistent means of including the effects of imperfect deposition through any appropriate sub-model for deposition efficiency. Some aspects of the framework draw upon an analogy to radiation propagation through a turbid medium to develop the model theory. The present method is compared against one of the most commonly used heuristic Lagrangian frameworks, namely that originally developed by Legg and Powell (Agricultural Meteorology, 1979, Vol. 20, 47-67), which is shown to be theoretically inconsistent. A recommendation is made to discontinue the use of this heuristic approach in favour of the theoretically consistent framework developed herein, which is no more difficult to apply under equivalent assumptions. The proposed framework has the additional advantage that it can be applied to arbitrary canopy geometries given readily measurable parameters describing vegetation structure.

  19. Theoretical Hill-Type Muscle and Stability: Numerical Model and Application

    Directory of Open Access Journals (Sweden)

    S. Schmitt

    2013-01-01

    Full Text Available The construction of artificial muscles is one of the most challenging developments in today’s biomedical science. The application of artificial muscles is focused both on the construction of orthotics and prosthetics for rehabilitation and prevention purposes and on building humanoid walking machines for robotics research. Research in biomechanics tries to explain the functioning and design of real biological muscles and therefore lays the foundation for the development of functional artificial muscles. Recently, the hyperbolic Hill-type force-velocity relation was derived from simple mechanical components. In this contribution, this theoretical biomechanical model is transferred to a numerical model and applied to present a proof of concept of a functional artificial muscle. Additionally, the validated theoretical model is used to determine force-velocity relations of different animal species, based on literature data from biological experiments. Moreover, it is shown that an antagonistic muscle actuator can help stabilise a single inverted pendulum model, compared with a control approach using a linear torque generator.

  20. A theoretical model to describe progressions and regressions for exercise rehabilitation.

    Science.gov (United States)

    Blanchard, Sam; Glasgow, Phil

    2014-08-01

    This article describes a new theoretical model intended to simplify, and aid visualisation of, the clinical reasoning process involved in progressing a single exercise. Exercise prescription is a core skill for physiotherapists, but it is an area lacking theoretical models to assist clinicians in designing exercise programs for rehabilitation from injury. Historical models of periodization and motor learning theories lack visual aids to assist clinicians. The concept of the proposed model is that new stimuli, either intrinsic or extrinsic to the participant, can be added or exchanged with other stimuli in order to gradually progress an exercise while remaining safe and effective. The proposed model supports the core skills of physiotherapists by assisting clinical reasoning, exercise prescription and goal setting. It is not limited to any one pathology or rehabilitation setting and can be adapted by clinicians at any skill level. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Theoretical methods and models for mechanical properties of soft biomaterials

    Directory of Open Access Journals (Sweden)

    Zhonggang Feng

    2017-06-01

    Full Text Available We review the most commonly used theoretical methods and models for the mechanical properties of soft biomaterials, which include phenomenological hyperelastic and viscoelastic models, structural biphasic and network models, and the structural alteration theory. We emphasize basic concepts and recent developments. In consideration of the current progress and needs of mechanobiology, we introduce methods and models for tackling micromechanical problems and their applications to cell biology. Finally, the challenges and perspectives in this field are discussed.

  2. Transport simulations TFTR: Theoretically-based transport models and current scaling

    International Nuclear Information System (INIS)

    Redi, M.H.; Cummings, J.C.; Bush, C.E.; Fredrickson, E.; Grek, B.; Hahm, T.S.; Hill, K.W.; Johnson, D.W.; Mansfield, D.K.; Park, H.; Scott, S.D.; Stratton, B.C.; Synakowski, E.J.; Tang, W.M.; Taylor, G.

    1991-12-01

    In order to study the microscopic physics underlying observed L-mode current scaling, the 1-1/2-d BALDUR code has been used to simulate density and temperature profiles for high and low current, neutral beam heated discharges on TFTR with several semi-empirical, theoretically-based models previously compared for TFTR, including several versions of trapped electron drift wave driven transport. Experiments at TFTR, JET and DIII-D show that I_p scaling of τ_E does not arise from edge modes as previously thought, and is most likely to arise from nonlocal processes or from the I_p-dependence of local plasma core transport. Consistent with this, it is found that strong current scaling does not arise from any of several edge models of resistive ballooning. Simulations with the profile-consistent drift wave model and with a new model for toroidal collisionless trapped electron mode core transport in a multimode formalism lead to strong current scaling of τ_E for the L-mode cases on TFTR. None of the theoretically-based models succeeded in simulating the measured temperature and density profiles for both high and low current experiments.

  3. Theoretical Modeling of Rock Breakage by Hydraulic and Mechanical Tool

    Directory of Open Access Journals (Sweden)

    Hongxiang Jiang

    2014-01-01

    Full Text Available Rock breakage by coupled mechanical and hydraulic action has been developed over the past several decades, but theoretical study of rock fragmentation by a mechanical tool with water pressure assistance was still lacking. A theoretical model of rock breakage by a mechanical tool was developed based on rock fracture mechanics and the solution of Boussinesq’s problem; it can explain the process of rock fragmentation as well as predict the peak reaction force. The theoretical model of rock breakage by coupled mechanical and hydraulic action was developed according to the superposition principle of intensity factors at the crack tip, and the reaction force of a mechanical tool assisted by hydraulic action can be reduced markedly if a crack with a critical length can be produced by mechanical or hydraulic impact. The experimental results indicated that the peak reaction force could be reduced by about 15% with the assistance of medium water pressure, and the quick reduction of the reaction force after the peak value decreased the specific energy consumption of rock fragmentation by the mechanical tool. Crack formation by mechanical or hydraulic impact was the prerequisite to improving the ability of combined breakage.

  4. A Theoretical Bayesian Game Model for the Vendor-Retailer Relation

    Directory of Open Access Journals (Sweden)

    Emil CRIŞAN

    2012-06-01

    Full Text Available We consider an equilibrated supply chain with two equal partners, a vendor and a retailer (a so-called newsboy-type products supply chain). The actions of each partner are driven by profit. Given that specific external influences at the supply chain level affect costs and, correspondingly, profit, we use a game-theoretic model for the situation, considering costs and demand. At the theoretical level, symmetric and asymmetric information patterns are considered for this situation. At every level of the supply chain, there are situations in which external factors (such as inflation or raw-material rates) influence the position of each partner even if information is well shared within the chain. The model we propose considers both the external factors and asymmetric information within a supply chain.
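A standard building block of such newsboy-type vendor-retailer models is the newsvendor critical-fractile order quantity, q* = F⁻¹(cu / (cu + co)). The sketch below illustrates it for normally distributed demand with made-up numbers; it does not reproduce the paper's Bayesian game.

```python
# Sketch of the classic newsvendor (newsboy) order quantity, a standard
# building block in vendor-retailer models like the one discussed here.
# The demand parameters and costs are illustrative assumptions.
from statistics import NormalDist

def newsvendor_qty(mean, sd, underage_cost, overage_cost):
    """Optimal order q* = F^-1(cu / (cu + co)) for normal demand."""
    critical_fractile = underage_cost / (underage_cost + overage_cost)
    return NormalDist(mean, sd).inv_cdf(critical_fractile)

# Symmetric costs -> order the mean demand; costly overstock -> order less.
print(newsvendor_qty(100, 20, 5, 5))   # 100.0
print(newsvendor_qty(100, 20, 5, 15))  # below the mean
```

Shifting the underage/overage cost ratio moves the order quantity above or below mean demand, which is exactly the lever external factors such as inflation would act on in a model like this.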

  5. Quantitative graph theory mathematical foundations and applications

    CERN Document Server

    Dehmer, Matthias

    2014-01-01

    The first book devoted exclusively to quantitative graph theory, Quantitative Graph Theory: Mathematical Foundations and Applications presents and demonstrates existing and novel methods for analyzing graphs quantitatively. Incorporating interdisciplinary knowledge from graph theory, information theory, measurement theory, and statistical techniques, this book covers a wide range of quantitative-graph theoretical concepts and methods, including those pertaining to real and random graphs, such as: comparative approaches (graph similarity or distance); graph measures to characterize graphs quantitat…
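One family of measures covered in this area, information-theoretic graph indices, can be illustrated with a simple degree-based graph entropy; the graphs below are made-up examples, not taken from the book.

```python
# Illustrative quantitative graph measure: degree-based graph entropy
# H(G) = -sum_i p_i * log2(p_i), with p_i = deg(i) / sum_j deg(j).
# One of many information-theoretic indices of the kind surveyed in
# quantitative graph theory; the example graphs are made up.
from collections import Counter
from math import log2

def degree_entropy(edges):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    total = sum(deg.values())
    return -sum((d / total) * log2(d / total) for d in deg.values())

# A 4-cycle: all degrees equal, so entropy is maximal (log2(|V|) = 2.0).
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
# A star: degrees are skewed, so entropy is lower.
star = [(0, 1), (0, 2), (0, 3)]
print(degree_entropy(cycle))  # 2.0
print(degree_entropy(star))   # < 2.0
```

Regular graphs maximise this index while hub-dominated graphs lower it, so it acts as a one-number summary of degree heterogeneity.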

  6. Quantitative genetic methods depending on the nature of the phenotypic trait.

    Science.gov (United States)

    de Villemereuil, Pierre

    2018-01-24

    A consequence of the assumptions of the infinitesimal model, one of the most important theoretical foundations of quantitative genetics, is that phenotypic traits are predicted to be most often normally distributed (so-called Gaussian traits). But phenotypic traits, especially those interesting for evolutionary biology, might be shaped according to very diverse distributions. Here, I show how quantitative genetics tools have been extended to account for a wider diversity of phenotypic traits using first the threshold model and then more recently using generalized linear mixed models. I explore the assumptions behind these models and how they can be used to study the genetics of non-Gaussian complex traits. I also comment on three recent methodological advances in quantitative genetics that widen our ability to study new kinds of traits: the use of "modular" hierarchical modeling (e.g., to study survival in the context of capture-recapture approaches for wild populations); the use of aster models to study a set of traits with conditional relationships (e.g., life-history traits); and, finally, the study of high-dimensional traits, such as gene expression. © 2018 New York Academy of Sciences.
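The threshold model discussed in this record maps a latent Gaussian liability onto a binary trait. A minimal simulation, with illustrative variance components rather than estimates from any real population, looks like this:

```python
# Minimal simulation of the threshold model: a latent Gaussian
# "liability" (additive genetic value + environment) yields a binary
# trait when it exceeds a fixed threshold. The variances and threshold
# below are illustrative assumptions, not values from the paper.
import random
from statistics import NormalDist

random.seed(42)
N, VAR_A, VAR_E, THRESHOLD = 100_000, 0.4, 0.6, 1.0

def binary_trait():
    liability = random.gauss(0, VAR_A ** 0.5) + random.gauss(0, VAR_E ** 0.5)
    return liability > THRESHOLD

prevalence = sum(binary_trait() for _ in range(N)) / N
# Liability ~ N(0, VAR_A + VAR_E); expected prevalence = P(liability > t).
expected = 1 - NormalDist(0, (VAR_A + VAR_E) ** 0.5).cdf(THRESHOLD)
print(round(prevalence, 3), round(expected, 3))  # both close to 0.159
```

This is the Gaussian-trait machinery applied to a non-Gaussian (binary) phenotype: inference runs on the latent normal scale, observation on the 0/1 scale, which is the same idea generalized linear mixed models formalise.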

  7. Theoretical and Empirical Review of Asset Pricing Models: A Structural Synthesis

    Directory of Open Access Journals (Sweden)

    Saban Celik

    2012-01-01

    Full Text Available The purpose of this paper is to give a comprehensive theoretical review devoted to asset pricing models by emphasizing static and dynamic versions in line with their empirical investigations. A considerable amount of the financial economics literature is devoted to the concept of asset pricing and its implications. The main task of an asset pricing model can be seen as the way to evaluate the present value of payoffs or cash flows discounted for risk and time lags. The difficulty arising from the discounting process is that the relevant factors that affect the payoffs vary through time, whereas the theoretical framework is still useful for incorporating the changing factors into an asset pricing model. This paper fills the gap in the literature by giving a comprehensive review of the models and evaluating the historical stream of empirical investigations in the form of a structural empirical review.

  8. On the quantitativeness of EDS STEM

    Energy Technology Data Exchange (ETDEWEB)

    Lugg, N.R. [Institute of Engineering Innovation, The University of Tokyo, 2-11-16, Yayoi, Bunkyo-ku, Tokyo 113-8656 (Japan); Kothleitner, G. [Institute for Electron Microscopy and Nanoanalysis, Graz University of Technology, Steyrergasse 17, 8010 Graz (Austria); Centre for Electron Microscopy, Steyrergasse 17, 8010 Graz (Austria); Shibata, N.; Ikuhara, Y. [Institute of Engineering Innovation, The University of Tokyo, 2-11-16, Yayoi, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2015-04-15

    Chemical mapping using energy dispersive X-ray spectroscopy (EDS) in scanning transmission electron microscopy (STEM) has recently been shown to be a powerful technique for analyzing the elemental identity and location of atomic columns in materials at atomic resolution. However, most applications of EDS STEM have only qualitatively mapped whether elements are present at specific sites. Obtaining calibrated EDS STEM maps so that they are on an absolute scale is a difficult task, and even if one achieves this, extracting quantitative information about the specimen – such as the number or density of atoms under the probe – adds yet another layer of complexity to the analysis due to the multiple elastic and inelastic scattering of the electron probe. Quantitative information may be obtained by comparing calibrated EDS STEM with theoretical simulations, but in this case a model of the structure must be assumed a priori. Here we first theoretically explore how exactly elastic and thermal scattering of the probe confounds the quantitative information one is able to extract about the specimen from an EDS STEM map. We then show using simulation how tilting the specimen (or incident probe) can reduce the effects of scattering and how it can provide quantitative information about the specimen. We then discuss drawbacks of this method – such as the loss of atomic resolution along the tilt direction – but follow this with a possible remedy: precession averaged EDS STEM mapping. - Highlights: • Signal obtained in EDS STEM maps (of STO) compared to non-channelling signal. • Deviation from non-channelling signal occurs in on-axis experiments. • Tilting specimen: signal close to non-channelling case but atomic resolution is lost. • Tilt-precession series: non-channelling signal and atomic-resolution features obtained. • Associated issues are discussed.

  9. Tesla coil theoretical model and experimental verification

    OpenAIRE

    Voitkans, Janis; Voitkans, Arnis

    2014-01-01

    Abstract – In this paper a theoretical model of Tesla coil operation is proposed. The Tesla coil is described as a long line with distributed parameters in a single-wired format, where the line voltage is measured against electrically neutral space. It is shown that an equivalent two-wired scheme can be found for a single-wired scheme and that the already known long line theory can be applied to a Tesla coil. Formulas for calculation of voltage in a Tesla coil by coordinate and calculation of resonance fre...

  10. Model for Quantitative Evaluation of Enzyme Replacement Treatment

    Directory of Open Access Journals (Sweden)

    Radeva B.

    2009-12-01

    Full Text Available Gaucher disease is the most frequent lysosomal disorder. Its enzyme replacement treatment, successfully used in recent years, is an advance of modern biotechnology. Evaluating the optimal dose for each patient is important for both health and economic reasons: enzyme replacement is the most expensive treatment, and it must be administered continuously and without interruption. Since 2001, enzyme replacement therapy with Cerezyme*Genzyme has been formally introduced in Bulgaria, but after some time it was interrupted for 1-2 months, and the dose the patients received was not optimal. The aim of our work is to find a mathematical model for quantitative evaluation of ERT of Gaucher disease. The model applies the software "Statistika 6" to the individual data of 5-year-old children with Gaucher disease treated with Cerezyme. The output of the model gave possibilities for quantitative evaluation of the individual trends in the development of the disease of each child and their correlation. On the basis of these results, we might recommend suitable changes in ERT.

  11. A Review on Quantitative Models for Sustainable Food Logistics Management

    Directory of Open Access Journals (Sweden)

    M. Soysal

    2012-12-01

    Full Text Available Over the last two decades, food logistics systems have seen the transition from a focus on traditional supply chain management to food supply chain management and, successively, to sustainable food supply chain management. The main aim of this study is to identify key logistical aims in these three phases and to analyse currently available quantitative models in order to point out modelling challenges in sustainable food logistics management (SFLM). A literature review on quantitative studies is conducted, and qualitative studies are also consulted to understand the key logistical aims more clearly and to identify relevant system scope issues. Results show that research on SFLM has been developing progressively according to the needs of the food industry. However, the intrinsic characteristics of food products and processes have not yet been handled properly in the identified studies, and the majority of the works reviewed have not addressed sustainability problems, apart from a few recent studies. Therefore, the study concludes that new and advanced quantitative models are needed that take specific SFLM requirements from practice into consideration to support business decisions and capture food supply chain dynamics.

  12. Theoretical Model for the Performance of Liquid Ring Pump Based on the Actual Operating Cycle

    Directory of Open Access Journals (Sweden)

    Si Huang

    2017-01-01

    Full Text Available The liquid ring pump is widely applied in many industrial fields due to the advantages of isothermal compression, simple structure, and liquid sealing. Based on the actual operating cycle of “suction-compression-discharge-expansion,” a universal theoretical model for the performance of liquid ring pumps was established in this study, to address the deviation of previous theoretical models from actual operating performance. With the major geometric parameters and operating conditions of a liquid ring pump, performance parameters such as the actual capacity for suction and discharge, shaft power, and global efficiency can be conveniently predicted by the proposed theoretical model, without the limitation of an empirical range, performance data, or the detailed 3D geometry of the pump. The proposed theoretical model was verified against experimental performance of liquid ring pumps and provides a feasible tool for their application.

  13. [Quantitative models between canopy hyperspectrum and its component features at apple tree prosperous fruit stage].

    Science.gov (United States)

    Wang, Ling; Zhao, Geng-xing; Zhu, Xi-cun; Lei, Tong; Dong, Fang

    2010-10-01

    Hyperspectral technique has become the basis of quantitative remote sensing. The hyperspectrum of the apple tree canopy at the prosperous fruit stage contains the complex information of fruits, leaves, stocks, soil and reflecting films, and is mostly affected by the component features of the canopy at this stage. First, the hyperspectrum of 18 sample apple trees with reflecting films was compared with that of 44 trees without reflecting films. The impact of reflecting films on reflectance was obvious, so the sample trees with ground reflecting films were analyzed separately from those without. Secondly, nine indexes of canopy components were built based on classified digital photos of the 44 apple trees without ground films. Thirdly, the correlation between the nine indexes and canopy reflectance, including several kinds of converted data, was analyzed. The results showed that the correlation between reflectance and the ratio of fruit to leaf was the best, with a maximum coefficient of 0.815, and the correlation between reflectance and the ratio of leaf was slightly better than that between reflectance and the density of fruit. Then models of correlation analysis, linear regression, BP neural network and support vector regression were used to describe the quantitative relationship between hyperspectral reflectance and the ratio of fruit to leaf, with the software packages DPS and LIBSVM. All four models were feasible for prediction in the 611-680 nm characteristic band, while the accuracy of the BP neural network and support vector regression models was better than that of one-variable linear regression and multi-variable regression, and the support vector regression model was the most accurate. This study will serve as a reliable theoretical reference for the yield estimation of apples based on remote sensing data.
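The simplest of the four models compared in this record, a one-variable linear regression between band reflectance and the fruit-to-leaf ratio, can be sketched as follows; the data points are synthetic, not the study's measurements.

```python
# Minimal one-variable least-squares fit of the kind compared in the
# study (reflectance in a characteristic band vs. fruit-to-leaf ratio).
# The data points below are synthetic, not from the paper.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # (slope, intercept)

# Synthetic "reflectance" values and "fruit-to-leaf ratio" responses.
reflectance = [0.10, 0.15, 0.20, 0.25, 0.30]
ratio = [0.42, 0.55, 0.68, 0.81, 0.94]  # exactly 2.6 * x + 0.16
slope, intercept = fit_line(reflectance, ratio)
print(round(slope, 2), round(intercept, 2))  # 2.6 0.16
```

The nonlinear models in the comparison (BP neural network, support vector regression) replace this straight line with a learned function, which is why they fit the band-reflectance relationship more accurately.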

  14. Decision support models for solid waste management: Review and game-theoretic approaches

    International Nuclear Information System (INIS)

    Karmperis, Athanasios C.; Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios

    2013-01-01

    Highlights: ► The most commonly used decision support frameworks for solid waste management are reviewed. ► The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. ► The game-theoretic approach in a solid waste management context is presented. ► The waste management bargaining game is introduced as a specific decision support framework. ► Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in the solid waste management area. Most models are developed mainly within three decision support frameworks: life-cycle assessment, cost–benefit analysis and multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed.

  15. Decision support models for solid waste management: Review and game-theoretic approaches

    Energy Technology Data Exchange (ETDEWEB)

    Karmperis, Athanasios C., E-mail: athkarmp@mail.ntua.gr [Sector of Industrial Management and Operational Research, School of Mechanical Engineering, National Technical University of Athens, Iroon Polytechniou 9, 15780 Athens (Greece); Army Corps of Engineers, Hellenic Army General Staff, Ministry of Defence (Greece); Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios [Sector of Industrial Management and Operational Research, School of Mechanical Engineering, National Technical University of Athens, Iroon Polytechniou 9, 15780 Athens (Greece)

    2013-05-15

    Highlights: ► The most commonly used decision support frameworks for solid waste management are reviewed. ► The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. ► The game-theoretic approach in a solid waste management context is presented. ► The waste management bargaining game is introduced as a specific decision support framework. ► Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in the solid waste management area. Most models are developed mainly within three decision support frameworks: life-cycle assessment, cost–benefit analysis and multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed.
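The waste management bargaining game highlighted in these records builds on standard bargaining theory. The sketch below computes the classic two-player Nash bargaining split by grid search, with illustrative payoffs and disagreement points rather than the paper's actual model.

```python
# Generic two-player Nash bargaining sketch, the textbook building block
# behind bargaining-game frameworks like the one introduced above.
# The surplus and disagreement points are illustrative assumptions.
def nash_bargaining_split(surplus, d1=0.0, d2=0.0, steps=10_000):
    """Grid-search the split x maximising the Nash product
    (x - d1) * (surplus - x - d2)."""
    best_x, best_product = None, float("-inf")
    for i in range(steps + 1):
        x = d1 + (surplus - d1 - d2) * i / steps
        product = (x - d1) * (surplus - x - d2)
        if product > best_product:
            best_x, best_product = x, product
    return best_x

# Symmetric disagreement points -> the surplus is split evenly.
print(nash_bargaining_split(10.0))          # 5.0
# A better outside option for player 1 shifts the split in its favour.
print(nash_bargaining_split(10.0, d1=4.0))  # 7.0
```

In a multi-stakeholder waste management setting, the disagreement points would encode each party's fallback option (e.g. unilateral disposal costs), which is what makes the bargaining outcome sensitive to economic and social aspects.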

  16. Toward a Theoretical Model of Employee Turnover: A Human Resource Development Perspective

    Science.gov (United States)

    Peterson, Shari L.

    2004-01-01

    This article sets forth the Organizational Model of Employee Persistence, influenced by traditional turnover models and a student attrition model. The model was developed to clarify the impact of organizational practices on employee turnover from a human resource development (HRD) perspective and to provide a theoretical foundation for research on…

  17. A Comparative Study of Theoretical Graph Models for Characterizing Structural Networks of Human Brain

    Directory of Open Access Journals (Sweden)

    Xiaojin Li

    2013-01-01

    Full Text Available Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of the human brain? In this paper, we perform a comparative study to address this problem. Firstly, large-scale cortical regions of interest (ROIs are localized by the recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL to address the limitations in the identification of brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using state-of-the-art graph analysis algorithms and tools and are further compared with seven popular theoretical graph models. In addition, we compare the topological properties of two graph models, namely, the stickiness-index-based model (STICKY and the scale-free gene duplication model (SF-GD, which show higher similarity with the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that, among the seven theoretical graph models compared in this study, the STICKY and SF-GD models perform better in characterizing the structural human brain network.

  18. Measuring and Managing Value Co-Creation Process: Overview of Existing Theoretical Models

    Directory of Open Access Journals (Sweden)

    Monika Skaržauskaitė

    2013-08-01

    Full Text Available Purpose — the aim of this article is to provide a holistic view of the concept of value co-creation and of existing models for measuring and managing it, through a theoretical analysis of scientific literature sources that integrates various approaches. The most important and relevant results of the literature study are presented, with a focus on the changed roles of organizations and consumers. This article aims to contribute theoretically to the research stream on measuring co-creation of value, in order to gain knowledge for improving organizational performance and enabling new and innovative means of value creation. Design/methodology/approach. The nature of this research is exploratory: a theoretical analysis and synthesis of scientific literature sources integrating various approaches was performed. This approach was chosen due to the absence of an established theory on models of co-creation, their possible uses in organizations, and a systematic overview of tools for measuring co-creation. Findings. While the principles of managing and measuring co-creation with regard to consumer motivation and involvement are widely researched, little attempt has been made to identify critical factors and create models dealing with organizational capabilities and the managerial implications of value co-creation. Systematic analysis of the literature revealed a gap not only in empirical research concerning the organization’s role in the co-creation process, but also at the theoretical and conceptual levels. Research limitations/implications. The limitations of this work as a literature review lie in its nature: the complete reliance on previously published research papers and the availability of these studies. For a deeper understanding of co-creation management and for developing models that can be used in real-life organizations, broader theoretical as well as empirical research is necessary. Practical implications. Analysis of the

  19. Theoretical model for the mechanical behavior of prestressed beams under torsion

    Directory of Open Access Journals (Sweden)

    Sérgio M.R. Lopes

    2014-12-01

    Full Text Available In this article, a global theoretical model previously developed and validated by the authors for reinforced concrete beams under torsion is reviewed and corrected in order to predict the global behavior of beams under torsion with uniform longitudinal prestress. These corrections are based on the introduction of prestress factors and on the modification of the equilibrium equations in order to incorporate the contribution of the prestressing reinforcement. The theoretical results obtained with the new model are compared with some available results of prestressed concrete (PC beams under torsion found in the literature. The results obtained in this study validate the proposed computing procedure to predict the overall behavior of PC beams under torsion.

  20. Quantitative comparison between theoretical predictions and experimental results for the BCS-BEC crossover

    International Nuclear Information System (INIS)

    Perali, A.; Pieri, P.; Strinati, G.C.

    2004-01-01

    Theoretical predictions for the Bardeen-Cooper-Schrieffer to Bose-Einstein condensation crossover of trapped Fermi atoms are compared with recent experimental results for the density profiles of 6Li. The calculations rest on a single theoretical approach that includes pairing fluctuations beyond mean field. Excellent agreement with experimental results is obtained. Theoretical predictions for the zero-temperature chemical potential and gap at the unitarity limit are also found to compare extremely well with quantum Monte Carlo simulations and with recent experimental results.

  1. The demand-induced strain compensation model : renewed theoretical considerations and empirical evidence

    NARCIS (Netherlands)

    de Jonge, J.; Dormann, C.; van den Tooren, M.; Näswall, K.; Hellgren, J.; Sverke, M.

    2008-01-01

    This chapter presents a recently developed theoretical model of job-related stress and performance, the so-called Demand-Induced Strain Compensation (DISC) model. The DISC model predicts in general that the adverse health effects of high job demands can best be compensated for by matching job resources.

  2. Nursing management of sensory overload in psychiatry – Theoretical densification and modification of the framework model

    Science.gov (United States)

    Scheydt, Stefan; Needham, Ian; Behrens, Johann

    2017-01-01

    Background: Within the scope of the research project on sensory overload and stimulus regulation, a theoretical framework model of the nursing care of patients with sensory overload in psychiatry was developed. In a second step, this theoretical model should now be theoretically densified and, if necessary, modified. Aim: Empirical verification as well as modification, enhancement and theoretical densification of the framework model of the nursing care of patients with sensory overload in psychiatry. Method: Analysis of 8 expert interviews by summarizing and structuring content analysis methods based on Meuser and Nagel (2009) as well as Mayring (2010). Results: The developed framework model (Scheydt et al., 2016b) could be empirically verified, theoretically densified and extended by one category (perception modulation). Thus, four categories of the nursing care of patients with sensory overload can be described in inpatient psychiatry: removal from stimuli, modulation of environmental factors, perception modulation, and support for coping and helping patients to help themselves. Conclusions: Based on the methodological approach, a relatively well-saturated, credible conceptualization of a theoretical model describing the nursing care of patients with sensory overload in inpatient psychiatry could be worked out. In further steps, these measures have to be further developed, implemented and evaluated with regard to their efficacy.

  3. Experimental and theoretical analysis of integrated circuit (IC) chips on flexible substrates subjected to bending

    Science.gov (United States)

    Chen, Ying; Yuan, Jianghong; Zhang, Yingchao; Huang, Yonggang; Feng, Xue

    2017-10-01

    The interfacial failure of integrated circuit (IC) chips integrated on flexible substrates under bending deformation has been studied theoretically and experimentally. A compressive buckling test is used to impose the bending deformation quantitatively onto the interface between the IC chip and the flexible substrate, after which the failed interface is investigated using scanning electron microscopy. A theoretical model is established based on beam theory and a bi-layer interface model, from which an analytical expression for the critical curvature associated with interfacial failure is obtained. The relationships between the critical curvature and the material and geometric parameters of the device are discussed in detail, providing guidance for the future optimization of flexible circuits based on IC chips.

  4. Anticipatory Cognitive Systems: a Theoretical Model

    Science.gov (United States)

    Terenzi, Graziano

    This paper deals with the problem of understanding anticipation in biological and cognitive systems. It is argued that a physical theory can be considered biologically plausible only if it incorporates the ability to describe systems which exhibit anticipatory behaviors. The paper introduces a cognitive-level description of anticipation and provides a simple theoretical characterization of anticipatory systems on this level. Specifically, a simple model of a formal anticipatory neuron and a model (i.e. the τ-mirror architecture) of an anticipatory neural network based on the former are introduced and discussed. The basic feature of this architecture is that a part of the network learns to represent the behavior of the other part over time, thus constructing an implicit model of its own functioning. As a consequence, the network is capable of self-representation; anticipation on a macroscopic level is nothing but a consequence of anticipation on a microscopic level. Some learning algorithms are also discussed, together with related experimental tasks and possible integrations. The outcome of the paper is a formal characterization of anticipation in cognitive systems which aims at being incorporated in a comprehensive and more general physical theory.

  5. The theoretical aspects of UrQMD & AMPT models

    Energy Technology Data Exchange (ETDEWEB)

    Saini, Abhilasha, E-mail: kashvini.abhi@gmail.com [Research Scholar, Department of Physics, Suresh Gyan vihar University, Jaipur (India); Bhardwaj, Sudhir, E-mail: sudhir.hep@gmail.com [Assistant professor, Govt. College of Engineering & Technology, Bikaner (India)

    2016-05-06

    High energy physics faces considerable theoretical and experimental challenges in unlocking the secrets of heavy ion collisions, which are still not completely understood. Many theoretical questions remain open: some arise from inherent causes such as the non-perturbative nature of QCD in the strong coupling limit, and others from the multi-particle production and evolution during heavy ion collisions, which increase the complexity of the phenomena. To understand these phenomena, a variety of theories and ideas have been developed, usually implemented in the form of Monte-Carlo codes. The UrQMD model and the AMPT model, both useful for modeling nuclear collisions, are discussed here in detail.

  6. Modeling Logistic Performance in Quantitative Microbial Risk Assessment

    NARCIS (Netherlands)

    Rijgersberg, H.; Tromp, S.O.; Jacxsens, L.; Uyttendaele, M.

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage

  7. Modeling goals and functions of control and safety systems - theoretical foundations and extensions of MFM

    International Nuclear Information System (INIS)

    Lind, M.

    2005-10-01

    Multilevel Flow Modeling (MFM) has proven to be an effective modeling tool for reasoning about plant failure and control strategies and is currently exploited for operator support in diagnosis and on-line alarm analysis. Previous MFM research focused on representing goals and functions of process plants which generate, transform and distribute mass and energy. However, only limited consideration has been given to the problems of modeling the control systems. Control functions are indispensable for operating any industrial plant, but modeling of control system functions has proven to be a more challenging problem than modeling functions of energy and mass processes. The problems were discussed by Lind, and tentative solutions were proposed but not investigated in depth until recently, partly due to the lack of an appropriate theoretical foundation. The purposes of the present report are to show that such a theoretical foundation for modeling goals and functions of control systems can be built from concepts and theories of action developed by Von Wright, and to show how this foundation can be used to extend MFM with concepts for modeling control systems. The theoretical foundations have been presented in detail elsewhere by the present author, without the particular focus on modeling control actions and MFM adopted here. (au)

  8. Modeling goals and functions of control and safety systems -theoretical foundations and extensions of MFM

    Energy Technology Data Exchange (ETDEWEB)

    Lind, M. [Oersted - DTU, Kgs. Lyngby (Denmark)

    2005-10-01

    Multilevel Flow Modeling (MFM) has proven to be an effective modeling tool for reasoning about plant failure and control strategies and is currently exploited for operator support in diagnosis and on-line alarm analysis. Previous MFM research focused on representing goals and functions of process plants which generate, transform and distribute mass and energy. However, only limited consideration has been given to the problems of modeling the control systems. Control functions are indispensable for operating any industrial plant, but modeling of control system functions has proven to be a more challenging problem than modeling functions of energy and mass processes. The problems were discussed by Lind, and tentative solutions were proposed but not investigated in depth until recently, partly due to the lack of an appropriate theoretical foundation. The purposes of the present report are to show that such a theoretical foundation for modeling goals and functions of control systems can be built from concepts and theories of action developed by Von Wright, and to show how this foundation can be used to extend MFM with concepts for modeling control systems. The theoretical foundations have been presented in detail elsewhere by the present author, without the particular focus on modeling control actions and MFM adopted here. (au)

  9. Modeling Organizational Design - Applying A Formalism Model From Theoretical Physics

    Directory of Open Access Journals (Sweden)

    Robert Fabac

    2008-06-01

    Full Text Available Modern organizations are exposed to diverse external environment influences. Currently accepted concepts of organizational design take into account structure, its interaction with strategy, processes, people, etc. Organization design and planning aims to align these key organizational design variables. At the higher conceptual level, however, no completely satisfactory formulation for this alignment exists. We develop an approach originating from the application of concepts of theoretical physics to social systems. Under this approach, the allocation of organizational resources is analyzed in terms of social entropy, social free energy and social temperature. This allows us to formalize the dynamic relationship between organizational design variables. In this paper we relate this model to Galbraith's Star Model and also suggest improvements in the procedure of the complex analytical method in organizational design.

  10. Lack of quantitative training among early-career ecologists: a survey of the problem and potential solutions

    Directory of Open Access Journals (Sweden)

    Frédéric Barraquand

    2014-03-01

    Full Text Available Proficiency in mathematics and statistics is essential to modern ecological science, yet few studies have assessed the level of quantitative training received by ecologists. To do so, we conducted an online survey. The 937 respondents were mostly early-career scientists who studied biology as undergraduates. We found a clear self-perceived lack of quantitative training: 75% were not satisfied with their understanding of mathematical models; 75% felt that the level of mathematics was “too low” in their ecology classes; 90% wanted more mathematics classes for ecologists; and 95% wanted more statistics classes. Respondents thought that 30% of classes in ecology-related degrees should be focused on quantitative disciplines, which is likely higher than in most existing programs. The main suggestion to improve quantitative training was to relate theoretical and statistical modeling to applied ecological problems. Improving quantitative training will require dedicated quantitative classes for ecology-related degrees that contain good mathematical and statistical practice.

  11. The neural mediators of kindness-based meditation: a theoretical model

    Directory of Open Access Journals (Sweden)

    Jennifer Streiffer Mascaro

    2015-02-01

    Full Text Available Although kindness-based contemplative practices are increasingly employed by clinicians and cognitive researchers to enhance prosocial emotions, social cognitive skills, and well-being, and as a tool to understand the basic workings of the social mind, we lack a coherent theoretical model with which to test the mechanisms by which kindness-based meditation may alter the brain and body. Here we link contemplative accounts of compassion and loving-kindness practices with research from social cognitive neuroscience and social psychology to generate predictions about how diverse practices may alter brain structure and function and related aspects of social cognition. Contingent on the nuances of the practice, kindness-based meditation may enhance the neural systems related to faster and more basic perceptual or motor simulation processes, simulation of another’s affective body state, slower and higher-level perspective-taking, modulatory processes such as emotion regulation and self/other discrimination, and combinations thereof. This theoretical model will be discussed alongside best practices for testing such a model and potential implications and applications of future work.

  12. A Theoretical Model for the Prediction of Siphon Breaking Phenomenon

    International Nuclear Information System (INIS)

    Bae, Youngmin; Kim, Young-In; Seo, Jae-Kwang; Kim, Keung Koo; Yoon, Juhyeon

    2014-01-01

    A siphon phenomenon or siphoning often refers to the movement of liquid from a higher elevation to a lower one through a tube in an inverted U shape (whose top is typically located above the liquid surface) under the action of gravity, and has been used in a variety of real-life applications such as the toilet bowl and the greedy cup. However, liquid drainage due to siphoning sometimes needs to be prevented. For example, a siphon breaker, which is designed to limit the siphon effect by allowing gas entrainment into a siphon line, is installed in order to maintain the pool water level above the reactor core when a loss of coolant accident (LOCA) occurs in an open-pool type research reactor. In this paper, a theoretical model to predict the siphon breaking phenomenon is developed. It is shown that the present model predicts well the fundamental features of the siphon breaking phenomenon and the undershooting height.
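
The record does not reproduce its model equations, but the gravity-driven flow it describes can be illustrated with the textbook frictionless-siphon relation: the ideal discharge velocity follows Torricelli's law, v = sqrt(2·g·Δh), where Δh is the elevation drop between the free surface and the outlet. The following sketch is a back-of-envelope illustration under that idealization, not the paper's siphon-breaking model.

```python
import math

def siphon_velocity(drop_m: float, g: float = 9.81) -> float:
    """Ideal (frictionless) siphon discharge velocity in m/s, from
    Torricelli's law v = sqrt(2*g*drop). Illustrative only; a real
    siphon-breaking analysis must also track gas entrainment."""
    if drop_m <= 0:
        return 0.0  # no elevation drop, no siphoning
    return math.sqrt(2.0 * g * drop_m)

for dh in (0.5, 2.0, 8.0):
    print(f"drop {dh:4.1f} m -> v = {siphon_velocity(dh):5.2f} m/s")
```

A siphon breaker works by interrupting exactly this gravity-driven flow: admitting gas at the top of the inverted U destroys the continuous liquid column on which the relation above depends.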

  13. A Theoretical Model for the Prediction of Siphon Breaking Phenomenon

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Youngmin; Kim, Young-In; Seo, Jae-Kwang; Kim, Keung Koo; Yoon, Juhyeon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    A siphon phenomenon or siphoning often refers to the movement of liquid from a higher elevation to a lower one through a tube in an inverted U shape (whose top is typically located above the liquid surface) under the action of gravity, and has been used in a variety of real-life applications such as the toilet bowl and the greedy cup. However, liquid drainage due to siphoning sometimes needs to be prevented. For example, a siphon breaker, which is designed to limit the siphon effect by allowing gas entrainment into a siphon line, is installed in order to maintain the pool water level above the reactor core when a loss of coolant accident (LOCA) occurs in an open-pool type research reactor. In this paper, a theoretical model to predict the siphon breaking phenomenon is developed. It is shown that the present model predicts well the fundamental features of the siphon breaking phenomenon and the undershooting height.

  14. Quantitative models for sustainable supply chain management

    DEFF Research Database (Denmark)

    Brandenburg, M.; Govindan, Kannan; Sarkis, J.

    2014-01-01

    Sustainability, the consideration of environmental factors and social aspects, in supply chain management (SCM) has become a highly relevant topic for researchers and practitioners. The application of operations research methods and related models, i.e. formal modeling, for closed-loop SCM and reverse logistics has been effectively reviewed in previously published research. This situation is in contrast to the understanding and review of mathematical models that focus on environmental or social factors in forward supply chains (SC), which has seen less investigation. To evaluate developments and directions of this research area, this paper provides a content analysis of 134 carefully identified papers on quantitative, formal models that address sustainability aspects in the forward SC. It was found that a preponderance of the publications and models appeared in a limited set of six journals…

  15. Physical aspects of quantitative particles analysis by X-ray fluorescence and electron microprobe techniques

    International Nuclear Information System (INIS)

    Markowicz, A.

    1986-01-01

    The aim of this work is to present both physical fundamentals and recent advances in quantitative particles analysis by X-ray fluorescence (XRF) and electron microprobe (EPXMA) techniques. A method of correction for the particle-size effect in XRF analysis is described and theoretically evaluated. New atomic number- and absorption-correction procedures in EPXMA of individual particles are proposed. The applicability of these two correction methods is evaluated for a wide range of elemental composition, X-ray energy and sample thickness. Also, a theoretical model for the composition and thickness dependence of the Bremsstrahlung background generated in multielement bulk specimens, as well as thin films and particles, is presented and experimentally evaluated. Finally, the limitations and further possible improvements in quantitative particles analysis by XRF and EPXMA are discussed. 109 refs. (author)

  16. Performance Theories for Sentence Coding: Some Quantitative Models

    Science.gov (United States)

    Aaronson, Doris; And Others

    1977-01-01

    This study deals with the patterns of word-by-word reading times over a sentence when the subject must code the linguistic information sufficiently for immediate verbatim recall. A class of quantitative models is considered that would account for reading times at phrase breaks. (Author/RM)

  17. Imitative Modeling as a Theoretical Base for Instructing Language-Disordered Children

    Science.gov (United States)

    Courtright, John A.; Courtright, Illene C.

    1976-01-01

    A modification of A. Bandura's social learning theory (imitative modeling) was employed as a theoretical base for language instruction with eight language disordered children (5 to 10 years old). (Author/SBH)

  18. Methodologies for quantitative systems pharmacology (QSP) models : Design and Estimation

    NARCIS (Netherlands)

    Ribba, B.; Grimm, Hp; Agoram, B.; Davies, M.R.; Gadkar, K.; Niederer, S.; van Riel, N.; Timmis, J.; van der Graaf, Ph.

    2017-01-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early

  19. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation

    NARCIS (Netherlands)

    Ribba, B.; Grimm, H. P.; Agoram, B.; Davies, M. R.; Gadkar, K.; Niederer, S.; van Riel, N.; Timmis, J.; van der Graaf, P. H.

    2017-01-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early

  20. Recent evolution of theoretical models in inner shell photoionization

    International Nuclear Information System (INIS)

    Combet Farnoux, F.

    1978-01-01

    This paper is a brief review of various atomic theoretical models recently developed to calculate photoionization cross sections in the low energy range (from the far ultraviolet to the soft X ray region). For both the inner and outer shells concerned, we emphasize the necessity to go beyond independent particle models by means of the introduction of correlation effects in both initial and final states. The basic physical ideas of such elaborate models as the Random Phase Approximation with exchange, Many Body Perturbation Theory and R-matrix Theory are outlined and summarized. As examples, the results of some calculations are shown and compared with experiment

  1. Theoretical Basis for the CE-QUAL-W2 River Basin Model

    National Research Council Canada - National Science Library

    Wells, Scott

    2000-01-01

    This report describes the theoretical development for CE-QUAL-W2, Version 3, that will allow the application of the model to entire water basins including multiple reservoirs, steeply sloping rivers, and estuaries...

  2. A utility-theoretic model for QALYs and willingness to pay.

    Science.gov (United States)

    Klose, Thomas

    2003-01-01

    Despite the widespread use of quality-adjusted life years (QALY) in economic evaluation studies, their utility-theoretic foundation remains unclear. A model for preferences over health, money, and time is presented in this paper. Under the usual assumptions of the original QALY-model, an additive separable representation of the utilities in different periods exists. In contrast to the usual assumption that QALY-weights depend solely on aspects of health-related quality of life, wealth-standardized QALY-weights might vary with the wealth level in the presented extension of the original QALY-model, resulting in an inconsistent measurement of QALYs. Further assumptions are presented to make the measurement of QALYs consistent with lifetime preferences over health and money. Even under these strict assumptions, QALYs and WTP (which can also be defined in this utility-theoretic model) are not equivalent preference-based measures of the effects of health technologies on an individual level. The results suggest that the individual WTP per QALY can depend on the magnitude of the QALY-gain as well as on the disease burden, when health influences the marginal utility of wealth. Further research seems to be indicated on this structural aspect of preferences over health and wealth, and on quantifying its impact. Copyright 2002 John Wiley & Sons, Ltd.
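
The additive separable structure mentioned in this record is the standard QALY aggregation: total QALYs are a (possibly discounted) sum of per-period quality weights. The sketch below shows only that textbook arithmetic, not the paper's extension to wealth-dependent weights; the 3% discount rate and the example weights are illustrative assumptions.

```python
# Standard QALY arithmetic: discounted sum of per-period quality weights
# q_t in [0, 1]. Discount rate and example weights are illustrative only.

def qalys(weights, discount_rate=0.03):
    """Discounted quality-adjusted life years over successive periods."""
    return sum(q / (1.0 + discount_rate) ** t for t, q in enumerate(weights))

# five years in full health vs. five years at quality weight 0.7
full = qalys([1.0] * 5)
impaired = qalys([0.7] * 5)
print(f"full health: {full:.3f} QALYs, impaired: {impaired:.3f} QALYs")
```

The paper's point can be restated against this sketch: if the weights q_t are allowed to vary with wealth, the same health profile no longer maps to a unique QALY total, which is the inconsistency the abstract describes.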

  3. Theoretical investigation of the decay of an SF6 gas-blast arc using a two-temperature hydrodynamic model

    International Nuclear Information System (INIS)

    Wang Weizong; Rong Mingzhe; Yan, Joseph D; Spencer, Joseph W; Murphy, Anthony B

    2013-01-01

    The behaviour of a decaying SF6 arc, which is representative of the approach to the final current-zero state of switching arcs in a high-voltage circuit breaker, is theoretically investigated by a two-temperature hydrodynamic model, taking into account the possible departure of the plasma state from local thermodynamic equilibrium (LTE). The model couples the plasma flow with electromagnetic fields in a self-consistent manner. The electrons and heavy species are assumed to have different temperatures. The species composition, thermodynamic properties and transport coefficients of the plasma under non-LTE conditions are calculated from fundamental theory. The model is then applied to a two-dimensional axisymmetric SF6 arc burning in a supersonic nozzle under well-controlled conditions; for this configuration, experimental results are available for comparison. The effect of turbulence is considered using the Prandtl mixing-length model. The edge absorption of the radiation emitted by the arc core is taken into account by a modified net emission coefficient approach. The complete set of conservation equations is discretized and solved using the finite volume method. The evolution of electron and heavy-particle temperatures and the total arc resistance, along with other physical quantities, is carefully analysed and compared with those of the LTE case. It is demonstrated that the electron and heavy-particle temperatures diverge at all times in the plasma-cold-flow interaction region, in which strong gas flow exists, and also in the transient current-zero period, when collisional energy exchange is ineffective. This study quantitatively analyses the energy exchange mechanisms between electrons and heavy particles in high-pressure supersonic SF6 arcs and provides the foundation for further theoretical investigation of transient SF6 arc behaviour as the current ramps down to zero in gas-blast circuit breakers.

  4. SOCIOLOGICAL UNDERSTANDING OF INTERNET: THEORETICAL APPROACHES TO THE NETWORK ANALYSIS

    Directory of Open Access Journals (Sweden)

    D. E. Dobrinskaya

    2016-01-01

    Full Text Available The network is an efficient way of social structure analysis for contemporary sociologists. It gives broad opportunities for detailed and fruitful research of different patterns of ties and social relations by quantitative analytical methods and visualization of network models. The network metaphor is used as the most representative tool for description of a new type of society, characterized by flexibility, decentralization and individualization. The network organizational form became the dominant form in modern societies. The network is also used as a mode of inquiry. Three theoretical network approaches are most relevant to Internet research: social network analysis, “network society” theory and actor-network theory. Every theoretical approach has its own notion of the network, and their particular methodological and theoretical features contribute to Internet studies in different ways. The article presents a brief overview of these network approaches. This overview demonstrates the absence of a unified semantic space for the notion of the “network” category. This fact, in turn, points out the need for detailed analysis of these approaches to reveal their theoretical and empirical possibilities in application to Internet studies.

  5. Application of a two fluid theoretical plasma transport model on current tokamak reactor designs

    International Nuclear Information System (INIS)

    Ibrahim, E.; Fowler, T.K.

    1987-06-01

    In this work, the application of new theoretical transport models to TIBER II design calculations is described and the results are compared with recent experimental data from large tokamaks (TFTR, JET). Tang's method is extended to a two-fluid model treating ions and electrons separately. This allows for different ion and electron temperatures, as in recent low-density experiments in TFTR, and in the TIBER II design itself. The discussion is divided into two parts: (1) development of the theoretical transport model and (2) calibration against experiments and application to TIBER II

  6. Redesigning Orientation in an Intensive Care Unit Using 2 Theoretical Models.

    Science.gov (United States)

    Kozub, Elizabeth; Hibanada-Laserna, Maribel; Harget, Gwen; Ecoff, Laurie

    2015-01-01

    To accommodate a higher demand for critical care nurses, an orientation program in a surgical intensive care unit was revised and streamlined. Two theoretical models served as a foundation for the revision and resulted in clear clinical benchmarks for orientation progress evaluation. The purpose of the project was to integrate theoretical frameworks into practice to improve the unit orientation program. Performance improvement methods served as a framework for the revision, and outcomes were measured before and after implementation. The revised orientation program increased 1- and 2-year nurse retention and decreased turnover. Critical care knowledge increased after orientation for both the preintervention and postintervention groups. Incorporating a theoretical basis for orientation has been shown to be successful in increasing the number of nurses completing orientation and improving retention, turnover rates, and knowledge gained.

  7. An Emerging Theoretical Model of Music Therapy Student Development.

    Science.gov (United States)

    Dvorak, Abbey L; Hernandez-Ruiz, Eugenia; Jang, Sekyung; Kim, Borin; Joseph, Megan; Wells, Kori E

    2017-07-01

    Music therapy students negotiate a complex relationship with music and its use in clinical work throughout their education and training. This distinct, pervasive, and evolving relationship suggests a developmental process unique to music therapy. The purpose of this grounded theory study was to create a theoretical model of music therapy students' developmental process, beginning with a study within one large Midwestern university. Participants (N = 15) were music therapy students who completed one 60-minute intensive interview, followed by a 20-minute member check meeting. Recorded interviews were transcribed, analyzed, and coded using open and axial coding. The theoretical model that emerged was a six-step sequential developmental progression that included the following themes: (a) Personal Connection, (b) Turning Point, (c) Adjusting Relationship with Music, (d) Growth and Development, (e) Evolution, and (f) Empowerment. The first three steps are linear; development continues in a cyclical process among the last three steps. As the cycle continues, music therapy students continue to grow and develop their skills, leading to increased empowerment, and more specifically, increased self-efficacy and competence. Further exploration of the model is needed to inform educators' and other key stakeholders' understanding of student needs and concerns as they progress through music therapy degree programs. © the American Music Therapy Association 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  8. Theoretical modelling of semiconductor surfaces microscopic studies of electrons and photons

    CERN Document Server

    Srivastava, G P

    1999-01-01

    The state-of-the-art theoretical studies of ground state properties, electronic states and atomic vibrations for bulk semiconductors and their surfaces by the application of the pseudopotential method are discussed. Studies of bulk and surface phonon modes have been extended by the application of the phenomenological bond charge model. The coverage of the material, especially of the rapidly growing and technologically important topics of surface reconstruction and chemisorption, is up-to-date and beyond what is currently available in book form. Although theoretical in nature, the book provides

  9. Determination of cognitive development: postnonclassical theoretical model

    Directory of Open Access Journals (Sweden)

    Irina N. Pogozhina

    2015-09-01

    Full Text Available The aim of this research is to develop a postnonclassical model of the content determination of cognitive processes, in which mental processes are considered as open, self-developing, self-organizing systems. Three types of systems (dynamic, statistical, developing) were analysed and compared on the basis of the description of the external and internal characteristics of causation, the types of causal chains (dependent, independent) and their interactions, as well as the nature of the relationships between the elements of the system (hard, probabilistic, mixed). Mechanisms of open non-equilibrium nonlinear (dissipative) systems and four conditions for the emergence of dissipative structures are described. Determination models of the formation and development of mind and behaviour that were developed under various theoretical approaches (associationism, behaviorism, gestaltism, Piaget's psychology of intelligence, Vygotsky's cultural-historical approach, the activity approach and others) are mapped onto each other as models that describe the behaviour of the three system types mentioned above. The development models of the mental sphere are shown to differ by the following criteria: (1) the number of determinants allocated; (2) the presence or absence of the system's own activity, which determines whether the model selects not only external but also internal determinants; (3) the types of causal chains (dependent, independent, blended); (4) the types of relationships between the causal chains, which ultimately determine the subsequent type of system determination as either decisive (a rigid dynamic pattern) or stochastic (statistical regularity). The continuity of postnonclassical, classical and non-classical models of mental development determination is described. The process of gradual refinement, growing complexity and «absorption» of mental determination by the latter models is characterized. The human mind can be regarded as the functioning of an open, developing, non-equilibrium nonlinear (dissipative) system. The mental sphere is

  10. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic fields, and measurement noise make MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect the hidden crack earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. So K_vs is the suitable MMM parameter to establish a reliability model of welded joints. At last, a quantitative MMM reliability model is presented for the first time, based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with the decreasing of the residual life ratio T, and the maximal error between the prediction reliability degree R_1 and the verification reliability degree R_2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
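
The record builds its reliability model on stress-strength interference with Gaussian-distributed quantities. The classical (unimproved) form of that theory is easy to state: for independent strength S ~ N(μs, σs) and load L ~ N(μl, σl), reliability is R = P(S > L) = Φ(β) with β = (μs − μl)/sqrt(σs² + σl²). The sketch below shows only this textbook version with made-up numbers; the paper's improved variant and its Q235 data are not reproduced.

```python
import math

# Classical stress-strength interference reliability for independent
# Gaussian strength and load: R = Phi((mu_s - mu_l)/sqrt(sd_s^2 + sd_l^2)).
# This is the textbook formula, not the paper's "improved" version.

def interference_reliability(mu_s, sd_s, mu_l, sd_l):
    beta = (mu_s - mu_l) / math.hypot(sd_s, sd_l)
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

# illustrative numbers only (MPa-scale strength vs. load)
R = interference_reliability(400.0, 25.0, 300.0, 20.0)
print(f"R = {R:.4f}")
```

As the abstract's trend suggests, shrinking the strength-load margin (here, μs − μl) drives β down and the reliability degree R toward 0.5.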

  11. Quantitative Phase Imaging Using Hard X Rays

    International Nuclear Information System (INIS)

    Nugent, K.A.; Gureyev, T.E.; Cookson, D.J.; Paganin, D.; Barnea, Z.

    1996-01-01

    The quantitative imaging of a phase object using 16 keV x rays is reported. The theoretical basis of the technique is presented along with its implementation using a synchrotron x-ray source. We find that our phase image is in quantitative agreement with independent measurements of the object. copyright 1996 The American Physical Society

  12. A Quantitative Study of the Relationship between Leadership Practice and Strategic Intentions to Use Cloud Computing

    Science.gov (United States)

    Castillo, Alan F.

    2014-01-01

    The purpose of this quantitative correlational cross-sectional research study was to examine a theoretical model consisting of leadership practice, attitudes of business process outsourcing, and strategic intentions of leaders to use cloud computing and to examine the relationships between each of the variables respectively. This study…

  13. Quantitative chemogenomics: machine-learning models of protein-ligand interaction.

    Science.gov (United States)

    Andersson, Claes R; Gustafsson, Mats G; Strömbergsson, Helena

    2011-01-01

    Chemogenomics is an emerging interdisciplinary field that lies at the interface of biology, chemistry, and informatics. Most of the currently used drugs are small molecules that interact with proteins. Understanding protein-ligand interaction is therefore central to drug discovery and design. In the subfield of chemogenomics known as proteochemometrics, protein-ligand-interaction models are induced from data matrices that consist of both protein and ligand information along with some experimentally measured variable. The two general aims of this quantitative multi-structure-property-relationship modeling (QMSPR) approach are to exploit sparse/incomplete information sources and to obtain more general models covering larger parts of the protein-ligand space than traditional approaches that focus mainly on specific targets or ligands. The data matrices, usually obtained from multiple sparse/incomplete sources, typically contain series of proteins and ligands together with quantitative information about their interactions. A useful model should ideally be easy to interpret and generalize well to new unseen protein-ligand combinations. Resolving this requires sophisticated machine-learning methods for model induction, combined with adequate validation. This review is intended to provide a guide to methods and data sources suitable for this kind of protein-ligand-interaction modeling. An overview of the modeling process is presented, including data collection, protein and ligand descriptor computation, data preprocessing, machine-learning-model induction and validation. Concerns and issues specific for each step in this kind of data-driven modeling will be discussed. © 2011 Bentham Science Publishers
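
The core proteochemometric move described in this record — one model fitted over joint protein + ligand descriptors so it can score unseen protein-ligand pairs — can be sketched in a few lines. Everything below is synthetic: the descriptors, the affinities, and the choice of ridge regression are illustrative assumptions, not methods from the review.

```python
import numpy as np

# Minimal proteochemometric sketch: each training row concatenates a
# protein descriptor vector with a ligand descriptor vector; a single
# ridge-regression model is fitted over all pairs. Data are synthetic.

rng = np.random.default_rng(42)
n_prot, n_lig, dp, dl = 6, 8, 4, 3
prot = rng.normal(size=(n_prot, dp))   # protein descriptors (made up)
lig = rng.normal(size=(n_lig, dl))     # ligand descriptors (made up)

# one row per protein-ligand pair: concatenated descriptors
X = np.array([np.concatenate([p, l]) for p in prot for l in lig])
true_w = rng.normal(size=dp + dl)
y = X @ true_w + 0.01 * rng.normal(size=len(X))   # "measured" affinities

# ridge regression, closed form: w = (X^T X + alpha*I)^-1 X^T y
alpha = 1e-3
w = np.linalg.solve(X.T @ X + alpha * np.eye(dp + dl), X.T @ y)

# the joint model scores any protein-ligand combination, seen or not
pair = np.concatenate([prot[0], lig[0]])
print(f"predicted affinity: {pair @ w:.3f}")
```

The point of the concatenated design matrix is exactly the generality the abstract emphasizes: a target-specific QSAR would need a separate model per protein, whereas here one weight vector spans the whole protein-ligand space covered by the descriptors.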

  14. A Game-Theoretic Model of Grounding for Referential Communication Tasks

    Science.gov (United States)

    Thompson, William

    2009-01-01

    Conversational grounding theory proposes that language use is a form of rational joint action, by which dialog participants systematically and collaboratively add to their common ground of shared knowledge and beliefs. Following recent work applying "game theory" to pragmatics, this thesis develops a game-theoretic model of grounding that…

  15. The laminar flow tube reactor as a quantitative tool for nucleation studies: Experimental results and theoretical analysis of homogeneous nucleation of dibutylphthalate

    International Nuclear Information System (INIS)

    Mikheev, Vladimir B.; Laulainen, Nels S.; Barlow, Stephan E.; Knott, Michael; Ford, Ian J.

    2000-01-01

    A laminar flow tube reactor was designed and constructed to provide accurate, quantitative measurements of the nucleation rate as a function of supersaturation and temperature. Nucleation of supersaturated dibutylphthalate vapor was measured over the temperature range from -30.3 to +19.1 °C. A thorough analysis is given of the possible sources of experimental uncertainty, such as defining the correct value of the initial vapor concentration, the temperature boundary conditions on the reactor walls, the accuracy of the calculated thermodynamic parameters of the nucleation zone, and the particle concentration measurement. Both isothermal and isobaric nucleation rates were measured. The experimental data were compared with the measurements of other experimental groups and with theoretical predictions made on the basis of self-consistency-corrected nucleation theory. A theoretical analysis based on the first and second nucleation theorems is also presented, from which the critical cluster size and the excess internal energy of the critical cluster are obtained. (c) 2000 American Institute of Physics
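    The first nucleation theorem invoked in this record relates the isothermal slope of the measured rate to the critical cluster size, (d ln J / d ln S)_T ≈ n* + 1. A minimal sketch of recovering n* from rate data (the numbers are invented, not the paper's measurements):

```python
import numpy as np

# Synthetic isothermal rate data J(S) generated with an assumed critical
# cluster size of 30 molecules (illustrative values only).
n_star = 30
S = np.linspace(4.0, 6.0, 10)                # supersaturation
lnJ = (n_star + 1) * np.log(S) + 2.0         # ln J = (n* + 1) ln S + const

# First nucleation theorem: the isothermal slope of ln J versus ln S is
# approximately n* + 1, so the critical size falls out of a linear fit.
slope = np.polyfit(np.log(S), lnJ, 1)[0]
estimated_n_star = slope - 1.0
```

    With real data the fit would be over noisy (S, J) pairs at fixed temperature; the second nucleation theorem applies the analogous temperature derivative to obtain the excess internal energy.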

  16. A theoretical model for prediction of deposition efficiency in cold spraying

    International Nuclear Information System (INIS)

    Li Changjiu; Li Wenya; Wang Yuyue; Yang Guanjun; Fukanuma, H.

    2005-01-01

    The deposition behavior of a spray particle stream with a particle size distribution was theoretically examined for cold spraying, in terms of deposition efficiency as a function of particle parameters and spray angle. A theoretical relation was established between deposition efficiency and spray angle. Experiments were conducted by measuring the deposition efficiency of gas-atomized copper powder at different driving-gas conditions and different spray angles. The theoretically estimated results agreed reasonably well with the experimental ones. Based on the theoretical model and experimental results, it was revealed that the distribution of particle velocity resulting from the particle size distribution significantly influences the deposition efficiency in cold spraying. To improve the deposition efficiency, the majority of particles must achieve a velocity higher than the critical velocity. The normal component of particle velocity contributes to particle deposition under off-normal spray conditions; the deposition efficiency of sprayed particles therefore decreased, owing to the reduced normal velocity component, when spraying was performed at an off-normal angle.

  17. Quantitative Model for Economic Analyses of Information Security Investment in an Enterprise Information System

    Directory of Open Access Journals (Sweden)

    Bojanc Rok

    2012-11-01

    The paper presents a mathematical model for optimal security-technology investment evaluation and decision-making, based on quantitative analysis of security risks and digital asset assessments in an enterprise. The model uses quantitative analysis of the different security measures that counteract individual risks, identifying the enterprise's information system processes and the potential threats. It comprises the target security levels for all identified business processes and the probability of a security incident, together with the possible loss the enterprise may suffer. The selection of security technology is based on the efficiency of the selected security measures, with economic metrics applied for the efficiency assessment and comparative analysis of different protection technologies. Unlike existing models for evaluating security investment, the proposed model allows direct comparison and quantitative assessment of different security measures. It supports detailed analyses and computations that provide quantitative assessments of different investment options, which translate into recommendations facilitating the selection of the best solution. The model was tested on empirical examples with data from a real business environment.

  18. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model.

    Science.gov (United States)

    Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S; Breen, Lauren J; Witt, Regina R; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin

    2016-01-01

    Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework, the Health Services Workplace Environmental Resilience Model (HSWERM), is presented to explain the workplace environmental factors that promote nurses' resilience. The framework builds on a previously published theoretical model of individual resilience, which identified the key constructs of psychological resilience as self-efficacy, coping and mindfulness, but did not examine workplace environmental factors that promote nurses' resilience. The unified framework was developed using a literary synthesis drawing on data from international studies and literature reviews on the hospital nursing workforce. The most frequent workplace environmental factors were identified, extracted and clustered in alignment with the key constructs of psychological resilience. Six major organizational concepts emerged that relate to a positive resilience-building workplace and form the foundation of the theoretical model: three concern nursing staff support (professional, practice, personal) and three concern nursing staff development (professional, practice, personal) within the workplace environment. The unified theoretical model incorporates these concepts within the workplace context, linking them to the nurse and in turn to personal resilience and workplace outcomes; its use has the potential to increase staff retention and the quality of patient care.

  19. A THEORETICAL MODEL OF SUPPORTING OPEN SOURCE FRONT END INNOVATION THROUGH IDEA MANAGEMENT

    DEFF Research Database (Denmark)

    Aagaard, Annabeth

    2013-01-01

    to overcome these various challenges companies are looking for new models to support FEI. This theoretical paper explores in what way idea management may be applied as a tool in facilitation of front end innovation and how this facilitation may be captured in a conceptual model. First, I show through...... a literature study, how idea management and front end innovation are related and how they may support each other. Secondly, I present a theoretical model of how idea management may be applied in support of the open source front end of new product innovations. Thirdly, I present different venues of further...... exploration of active facilitation of open source front end innovation through idea management....

  20. On the usability of quantitative modelling in operations strategy decision making

    NARCIS (Netherlands)

    Akkermans, H.A.; Bertrand, J.W.M.

    1997-01-01

    Quantitative modelling seems admirably suited to help managers in their strategic decision making on operations management issues, but in practice models are rarely used for this purpose. The paper investigates the reasons why, based on a detailed cross-case analysis of six cases of modelling-supported

  1. Meta-Theoretical Contributions to the Constitution of a Model-Based Didactics of Science

    Science.gov (United States)

    Ariza, Yefrin; Lorenzano, Pablo; Adúriz-Bravo, Agustín

    2016-10-01

    There is nowadays consensus in the community of didactics of science (i.e. science education understood as an academic discipline) regarding the need to include the philosophy of science in didactical research, science teacher education, curriculum design, and the practice of science education at all educational levels. Some authors have identified an ever-increasing use of the concept of 'theoretical model', stemming from the so-called semantic view of scientific theories. However, it can be recognised that, in didactics of science, there are over-simplified transpositions of the idea of model (and of other meta-theoretical ideas). In this sense, contemporary philosophy of science is often blurred or distorted in the science education literature. In this paper, we address the discussion around some meta-theoretical concepts that are introduced into didactics of science because of their perceived educational value. We argue for the existence of a 'semantic family', and we characterise four different versions of semantic views existing within the family. In particular, we seek to contribute to establishing a model-based didactics of science mainly supported by this semantic family.

  2. Quantitative evaluation of flow systems, groundwater recharge and transmissivities using environmental tracers

    Energy Technology Data Exchange (ETDEWEB)

    Adar, E M [Ben-Gurion Univ. of Negev, Sede Boker Campus (Israel). Water Resources Center

    1996-10-01

    This chapter provides an overview of the basic concepts and formulations of the compartmental (mixing-cell) approach for interpreting isotope and natural tracer data to arrive at quantitative estimates related to groundwater systems. The theoretical basis of the models and the specific solution algorithms used are described. Applications of this approach to field cases are described as illustrative examples. Results of sensitivity analyses of the model to different parameters are provided. (author). 81 refs, 13 figs, 8 tabs.

  3. Quantitative evaluation of flow systems, groundwater recharge and transmissivities using environmental tracers

    International Nuclear Information System (INIS)

    Adar, E.M.

    1996-01-01

    This chapter provides an overview of the basic concepts and formulations of the compartmental (mixing-cell) approach for interpreting isotope and natural tracer data to arrive at quantitative estimates related to groundwater systems. The theoretical basis of the models and the specific solution algorithms used are described. Applications of this approach to field cases are described as illustrative examples. Results of sensitivity analyses of the model to different parameters are provided. (author). 81 refs, 13 figs, 8 tabs
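    The compartmental (mixing-cell) approach balances water and tracer fluxes over well-mixed cells and solves the resulting overdetermined linear system for the unknown flow rates. A minimal steady-state sketch for a single cell with two candidate inflows, using invented tracer data:

```python
import numpy as np

# One well-mixed cell with two candidate inflows of known tracer composition.
# Rows: water balance, then three tracers (all numbers invented for
# illustration, not field data).
C = np.array([
    [1.0,   1.0],   # water balance: q1 + q2 = total outflow
    [120.0, 40.0],  # chloride concentration in each source (mg/L)
    [8.0,   2.0],   # a second environmental tracer
    [30.0,  90.0],  # sulfate (mg/L)
])
q_out = 10.0                          # total outflow from the cell
c_cell = np.array([64.0, 3.8, 72.0])  # observed mixed concentrations

# Flux balances: C @ q = [q_out, c_cell * q_out].
b = np.concatenate([[q_out], c_cell * q_out])

# Overdetermined system (4 equations, 2 unknown inflows) solved by least
# squares, as in the mixing-cell formulation.
q, *_ = np.linalg.lstsq(C, b, rcond=None)
```

    With more cells and tracers the same structure repeats per compartment, and the least-squares residuals give a consistency check on the assumed flow pattern.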

  4. Theoretical model for plasma expansion generated by hypervelocity impact

    International Nuclear Information System (INIS)

    Ju, Yuanyuan; Zhang, Qingming; Zhang, Dongjiang; Long, Renrong; Chen, Li; Huang, Fenglei; Gong, Zizheng

    2014-01-01

    Hypervelocity impact experiments with a spherical LY12 aluminum projectile (diameter 6.4 mm) on an LY12 aluminum target (thickness 23 mm) were conducted using a two-stage light gas gun, at projectile impact velocities of 5.2, 5.7, and 6.3 km/s. The experimental results show that a plasma phase transition appears under these conditions and that the plasma expansion consists of accumulation, equilibrium, and attenuation stages. The plasma characteristic parameters decrease as the plasma expands outward and are proportional to the third power of the impact velocity, i.e., (T_e, n_e) ∝ v_p^3. Based on the experimental results, a theoretical model of the plasma expansion is developed, and the theoretical results are consistent with the experimental data

  5. Theoretical model for plasma expansion generated by hypervelocity impact

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Yuanyuan; Zhang, Qingming, E-mail: qmzhang@bit.edu.cn; Zhang, Dongjiang; Long, Renrong; Chen, Li; Huang, Fenglei [State Key Laboratory of Explosion Science and Technology, Beijing Institute of Technology, Beijing 100081 (China); Gong, Zizheng [National Key Laboratory of Science and Technology on Reliability and Environment Engineering, Beijing Institute of Spacecraft Environment Engineering, Beijing 100094 (China)

    2014-09-15

    Hypervelocity impact experiments with a spherical LY12 aluminum projectile (diameter 6.4 mm) on an LY12 aluminum target (thickness 23 mm) were conducted using a two-stage light gas gun, at projectile impact velocities of 5.2, 5.7, and 6.3 km/s. The experimental results show that a plasma phase transition appears under these conditions and that the plasma expansion consists of accumulation, equilibrium, and attenuation stages. The plasma characteristic parameters decrease as the plasma expands outward and are proportional to the third power of the impact velocity, i.e., (T_e, n_e) ∝ v_p^3. Based on the experimental results, a theoretical model of the plasma expansion is developed, and the theoretical results are consistent with the experimental data.

  6. STRUCTURAL AND METHODICAL MODEL OF INCREASING THE LEVEL OF THEORETICAL TRAINING OF CADETS USING INFORMATION AND COMMUNICATION TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    Vladislav V. Bulgakov

    2018-03-01

    The specifics of training in higher educational institutions of the EMERCOM of Russia system demand the introduction of new educational techniques and technical means that intensify the educational process, give cadets the opportunity to study independently at any time, and improve the quality of their theoretical knowledge. The authors have developed a structural and methodological model for increasing the level of theoretical training of cadets using information and communication technologies. The proposed model, which includes elements that stimulate and enhance cognitive activity, makes it possible to generate a trajectory of theoretical training for the entire period of study at the university, to organize systematic independent work, and to provide objective current and final control of theoretical knowledge. The model consists of three main elements: a base of theoretical questions and the functional modules "teacher" and "cadet". The base of theoretical questions, the foundation of the model, covers all disciplines of specialty 20.05.01 (fire safety). The "teacher" module allows instructors to create theoretical questions of various kinds, to edit or delete them as necessary, and to create tests and monitor their completion. The "cadet" module provides ample opportunities for theoretical training through independent work, testing for current and final control, a game-based duel mode, and the presentation of cadets' results in the form of statistics and rankings. The structural and methodological model for increasing the level of theoretical training of cadets is implemented in practice in the form of a multi-level automated system

  7. FDTD-based quantitative analysis of terahertz wave detection for multilayered structures.

    Science.gov (United States)

    Tu, Wanli; Zhong, Shuncong; Shen, Yaochun; Zhou, Qing; Yao, Ligang

    2014-10-01

    Experimental investigations have shown that terahertz pulsed imaging (TPI) is able to quantitatively characterize a range of multilayered media (e.g., biological tissues, pharmaceutical tablet coatings, and layered polymer composites). Advanced modeling of the interaction of terahertz radiation with a multilayered medium is required to enable the wide application of terahertz technology in a number of emerging fields, including nondestructive testing. Many theoretical analyses have already been performed on the propagation of terahertz radiation in various multilayered media; however, to date, most of these studies used 1D or 2D models, and the dispersive nature of the dielectric layers was not considered or was simplified. In the present work, the theoretical framework for using terahertz waves to quantitatively characterize multilayered media was established. A 3D model based on the finite difference time domain (FDTD) method is proposed. A batch of pharmaceutical tablets with a single coating layer of different coating thicknesses and different refractive indices was modeled. The reflected terahertz wave from such a sample was computed using the FDTD method, assuming a broadband incident terahertz wave covering frequencies up to 3.5 THz. The simulated results for all of the pharmaceutical-coated tablets considered were found to be in good agreement with the experimental results obtained using a commercial TPI system. In addition, a three-layered medium was studied to mimic the occurrence of defects in the sample.
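    The FDTD method at the core of this model advances electric and magnetic fields in leapfrog fashion on a staggered grid. The paper's model is 3D and accounts for dispersion; the following is only a non-dispersive 1D sketch of the update scheme, with a dielectric slab standing in for a coating layer whose interface reflections TPI would measure:

```python
import numpy as np

# 1D grid with a dielectric slab standing in for a tablet coating layer.
nz, nt = 400, 800
eps = np.ones(nz)
eps[200:260] = 2.56            # slab with refractive index 1.6 (illustrative)

Ex = np.zeros(nz)              # electric field at integer grid points
Hy = np.zeros(nz - 1)          # magnetic field at staggered half points
S = 0.5                        # Courant number, dt*c/dz (stable for S <= 1)

for t in range(nt):
    Hy += S * np.diff(Ex)                        # leapfrog H update
    Ex[1:-1] += (S / eps[1:-1]) * np.diff(Hy)    # leapfrog E update
    Ex[50] += np.exp(-(((t - 60) / 15.0) ** 2))  # soft Gaussian source pulse

# Reflections from the slab faces travel back toward the source side; their
# arrival times encode the layer thickness, which is what TPI exploits.
reflected = np.max(np.abs(Ex[:150]))
```

    A production simulation would add absorbing boundaries and a dispersive material model (e.g. Debye or Lorentz terms), which is exactly the refinement the abstract argues for.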

  8. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
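    The sensitivity to implementation choices discussed above is easy to reproduce in miniature: vertex positions advanced by an explicit forward-Euler step, x ← x + dt·F(x), converge or diverge depending on the time step alone. The example below is a toy linear relaxation, not a full vertex model with area and perimeter energies:

```python
import numpy as np

def relax(dt, steps=200, k=1.0):
    """Forward-Euler relaxation of polygon vertices, a toy stand-in for the
    overdamped vertex-model update x <- x + dt * F(x)."""
    ang = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
    target = np.stack([np.cos(ang), np.sin(ang)], axis=1)  # preferred shape
    x = 1.5 * target                                       # perturbed hexagon
    for _ in range(steps):
        force = -k * (x - target)   # linear restoring force on each vertex
        x = x + dt * force          # explicit Euler step
    return np.linalg.norm(x - target, axis=1)

err_small = relax(dt=0.1)   # stable: vertices settle onto the target shape
err_large = relax(dt=2.5)   # identical forces, larger step: the update diverges
```

    The same forces give qualitatively different "tissues" purely because of the numerical parameter dt, mirroring the paper's point that implementation details must be reported and checked alongside the physical model.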

  9. Control Theoretic Modeling and Generated Flow Patterns of a Fish-Tail Robot

    Science.gov (United States)

    Massey, Brian; Morgansen, Kristi; Dabiri, Dana

    2003-11-01

    Many real-world engineering problems involve understanding and manipulating fluid flows. One of the challenges to further progress in the area of active flow control is the lack of appropriate models that are amenable to control-theoretic studies and algorithm design and also incorporate reasonably realistic fluid dynamic effects. We focus here on modeling and model-verification of bio-inspired actuators (fish-fin type structures) used to control fluid dynamic artifacts that will affect speed, agility, and stealth of Underwater Autonomous Vehicles (UAVs). Vehicles using fish-tail type systems are more maneuverable, can turn in much shorter and more constrained spaces, have lower drag, are quieter and potentially more efficient than those using propellers. We will present control-theoretic models for a simple prototype coupled fluid and mechanical actuator where fluid effects are crudely modeled by assuming only lift, drag, and added mass, while neglecting boundary effects. These models will be tested with different control input parameters on an experimental fish-tail robot with the resulting flow captured with DPIV. Relations between the model, the control function choices, the obtained thrust and drag, and the corresponding flow patterns will be presented and discussed.

  10. "Why am I a volunteer?": building a quantitative scale

    Directory of Open Access Journals (Sweden)

    Carlos Eduardo Cavalcante

    This paper aims to analyze the validity of a quantitative instrument for identifying what attracts someone to volunteer work, what makes them stay, and what makes them quit such an activity. The theoretical framework covers aspects of volunteer work, followed by a discussion of models for analyzing volunteer motivation. In its objectives this research is descriptive, since it presents the validity analysis of a quantitative instrument that seeks to understand and describe the reasons for volunteering at Pastoral da Criança, a Brazilian NGO. The instrument is based on theoretical ideas by Souza, Medeiros and Fernandes (2006). Reliability (Cronbach's alpha) reached values between 0.7 and 0.8, and a good index was also obtained for the Kaiser-Meyer-Olkin measure of sampling adequacy: 0.74. Despite the good reliability and sampling-adequacy results of the factor analysis, none of the variables resulted in the expected combination of indicators versus profile. It is necessary to improve the semantic meaning of certain factors, or even to increase the number of indicators, so as to generate additional correlations among them.
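    The reliability statistic quoted above, Cronbach's alpha, compares the sum of per-item variances with the variance of the total score. A minimal sketch with invented response data (not the Pastoral da Criança sample):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1.0 - item_var / total_var)

# Invented responses: five items loading on one latent trait plus noise.
rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))
data = trait + 0.8 * rng.normal(size=(200, 5))
alpha = cronbach_alpha(data)   # lands in the acceptable-reliability range
```

    Values in the 0.7-0.8 band reported by the paper are conventionally read as acceptable internal consistency for a new scale.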

  11. Theoretical Modeling of Magnesium Ion Imprints in the Raman Scattering of Water

    Czech Academy of Sciences Publication Activity Database

    Kapitán, J.; Dračínský, Martin; Kaminský, Jakub; Benda, Ladislav; Bouř, Petr

    2010-01-01

    Vol. 114, No. 10 (2010), pp. 3574-3582 ISSN 1520-6106 R&D Projects: GA ČR GA202/07/0732; GA AV ČR IAA400550702; GA AV ČR IAA400550701; GA ČR GPP208/10/P356 Grant - others: AV ČR (CZ) M200550902 Institutional research plan: CEZ:AV0Z40550506 Keywords: Raman spectroscopy * theoretical modelling * CPMD Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 3.603, year: 2010

  12. Tesla Coil Theoretical Model and its Experimental Verification

    OpenAIRE

    Voitkans Janis; Voitkans Arnis

    2014-01-01

    In this paper a theoretical model of Tesla coil operation is proposed. Tesla coil is described as a long line with distributed parameters in a single-wire form, where the line voltage is measured across electrically neutral space. By applying the principle of equivalence of single-wire and two-wire schemes an equivalent two-wire scheme can be found for a single-wire scheme and the already known long line theory can be applied to the Tesla coil. A new method of multiple re...

  13. Improving the theoretical foundations of the multi-mode transport model

    International Nuclear Information System (INIS)

    Bateman, G.; Kritz, A.H.; Redd, A.J.; Erba, M.; Rewoldt, G.; Weiland, J.; Strand, P.; Kinsey, J.E.; Scott, B.

    1999-01-01

    A new version of the Multi-Mode transport model, designated MMM98, is being developed with improved theoretical foundations, in an ongoing effort to predict the temperature and density profiles in tokamaks. For transport near the edge of the plasma, MMM98 uses a new model based on 3-D nonlinear simulations of drift Alfven mode turbulence. Flow shear stabilization effects have been added to the Weiland model for Ion Temperature Gradient and Trapped Electron Modes, which usually dominate in most of the plasma core. For transport near the magnetic axis at high beta, a new kinetic ballooning mode model has been constructed based on FULL stability code computations. (author)

  14. Improving the theoretical foundations of the multi-mode transport model

    International Nuclear Information System (INIS)

    Bateman, G.; Kritz, A.H.; Redd, A.J.; Erba, M.; Rewoldt, G.; Weiland, J.; Strand, P.; Kinsey, J.E.; Scott, B.

    2001-01-01

    A new version of the Multi-Mode transport model, designated MMM98, is being developed with improved theoretical foundations, in an ongoing effort to predict the temperature and density profiles in tokamaks. For transport near the edge of the plasma, MMM98 uses a new model based on 3-D nonlinear simulations of drift Alfven mode turbulence. Flow shear stabilization effects have been added to the Weiland model for Ion Temperature Gradient and Trapped Electron Modes, which usually dominate in most of the plasma core. For transport near the magnetic axis at high beta, a new kinetic ballooning mode model has been constructed based on FULL stability code computations. (author)

  15. Tesla Coil Theoretical Model and its Experimental Verification

    Directory of Open Access Journals (Sweden)

    Voitkans Janis

    2014-12-01

    In this paper a theoretical model of Tesla coil operation is proposed. The Tesla coil is described as a long line with distributed parameters in single-wire form, where the line voltage is measured with respect to electrically neutral space. By applying the principle of equivalence between single-wire and two-wire schemes, an equivalent two-wire scheme can be found for the single-wire scheme, and the established long-line theory can then be applied to the Tesla coil. A new method of multiple reflections is developed to characterize a signal in a long line. Formulas are proposed for calculating the voltage along the Tesla coil as a function of coordinate and for calculating its resonance frequencies. The theoretical calculations are verified experimentally: resonance frequencies of the Tesla coil are measured, and voltage standing-wave characteristics are obtained for different output capacitances in the single-wire mode. The wave impedance and phase coefficient of the Tesla coil are obtained. The experimental measurements show good agreement with the proposed theory. The formulas obtained in this paper are also applicable to a regular two-wire long line with distributed parameters.
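    For a uniform line grounded at one end and open at the other, long-line theory of this kind predicts quarter-wave resonances at odd multiples of v/(4l). The sketch below uses that generic standing-wave formula with invented parameters; it is not the paper's own derivation:

```python
def quarter_wave_resonances(length_m, phase_velocity_mps, n_modes=3):
    """Resonances of a line grounded at one end and open at the other:
    f_n = (2n - 1) * v / (4 * l), n = 1, 2, 3, ..."""
    return [(2 * n - 1) * phase_velocity_mps / (4.0 * length_m)
            for n in range(1, n_modes + 1)]

# Invented example: a 1 m winding with a phase velocity well below c,
# since the helical winding slows the wave considerably.
freqs = quarter_wave_resonances(1.0, 1.2e8)   # 30, 90, 150 MHz
```

    Measuring where the standing-wave maxima actually fall, as the paper does, is then a direct test of the assumed phase velocity and wave impedance of the line.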

  16. Achievement Goals and Discrete Achievement Emotions: A Theoretical Model and Prospective Test

    Science.gov (United States)

    Pekrun, Reinhard; Elliot, Andrew J.; Maier, Markus A.

    2006-01-01

    A theoretical model linking achievement goals to discrete achievement emotions is proposed. The model posits relations between the goals of the trichotomous achievement goal framework and 8 commonly experienced achievement emotions organized in a 2 (activity/outcome focus) x 2 (positive/negative valence) taxonomy. Two prospective studies tested…

  17. Quantitative analysis of a wind energy conversion model

    International Nuclear Information System (INIS)

    Zucker, Florian; Gräbner, Anna; Strunz, Andreas; Meyn, Jan-Peter

    2015-01-01

    A rotor of 12 cm diameter is attached to a precision electric motor, used as a generator, to make a model wind turbine. The output power of the generator is measured in a wind tunnel at air velocities up to 15 m s^-1. The maximum power is 3.4 W, and the power conversion factor from kinetic to electric energy is c_p = 0.15. The v^3 power law is confirmed. The model quantitatively illustrates several technically important features of industrial wind turbines. (paper)
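    The quoted figures are mutually consistent: taking the kinetic power of the air stream through the rotor disc as P = ½ρAv³, the measured 3.4 W at 15 m/s corresponds to c_p ≈ 0.15. A quick check (the air density is an assumption, not given in the record):

```python
import math

# Values from the record; air density is an assumed standard value.
rho = 1.2            # kg/m^3, air density (assumption)
diameter = 0.12      # m, rotor diameter
v = 15.0             # m/s, maximum air velocity in the wind tunnel
P_electric = 3.4     # W, maximum measured electric output

A = math.pi * (diameter / 2.0) ** 2     # rotor disc area
P_wind = 0.5 * rho * A * v ** 3         # kinetic power of the air stream
c_p = P_electric / P_wind               # power conversion factor
```

    The result, roughly 23 W of wind power yielding c_p near 0.15, sits well below the Betz limit of 16/27 ≈ 0.59, as expected for a small model turbine.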

  18. Quantitative Systems Pharmacology: A Case for Disease Models.

    Science.gov (United States)

    Musante, C J; Ramanujan, S; Schmidt, B J; Ghobrial, O G; Lu, J; Heatherington, A C

    2017-01-01

    Quantitative systems pharmacology (QSP) has emerged as an innovative approach in model-informed drug discovery and development, supporting program decisions from exploratory research through late-stage clinical trials. In this commentary, we discuss the unique value of disease-scale "platform" QSP models that are amenable to reuse and repurposing to support diverse clinical decisions in ways distinct from other pharmacometrics strategies. © 2016 The Authors Clinical Pharmacology & Therapeutics published by Wiley Periodicals, Inc. on behalf of The American Society for Clinical Pharmacology and Therapeutics.

  19. Theoretical modeling and experimental analyses of laminated wood composite poles

    Science.gov (United States)

    Cheng Piao; Todd F. Shupe; Vijaya Gopu; Chung Y. Hse

    2005-01-01

    Wood laminated composite poles consist of trapezoid-shaped wood strips bonded with synthetic resin. The thick-walled hollow poles had adequate strength and stiffness properties and were a promising substitute for solid wood poles. It was necessary to develop theoretical models to facilitate the manufacture and future installation and maintenance of this novel...

  20. Organizational Learning and Product Design Management: Towards a Theoretical Model.

    Science.gov (United States)

    Chiva-Gomez, Ricardo; Camison-Zornoza, Cesar; Lapiedra-Alcami, Rafael

    2003-01-01

    Case studies of four Spanish ceramics companies were used to construct a theoretical model of 14 factors essential to organizational learning. One set of factors is related to the conceptual-analytical phase of the product design process and the other to the creative-technical phase. All factors contributed to efficient product design management…

  1. Theoretical Models of Deliberative Democracy: A Critical Analysis

    Directory of Open Access Journals (Sweden)

    Tutui Viorel

    2015-07-01

    My paper focuses on presenting and analyzing some of the most important theoretical models of deliberative democracy and on emphasizing their limits. Firstly, I mention James Fishkin's account of deliberative democracy and its relations to other democratic models. He differentiates between four democratic theories: competitive democracy, elite deliberation, participatory democracy and deliberative democracy. Each of these theories makes an explicit commitment to two of the following four "principles": political equality, participation, deliberation, nontyranny. Deliberative democracy is committed to political equality and deliberation. Secondly, I present Philip Pettit's view concerning the main constraints of deliberative democracy: the inclusion constraint, the judgmental constraint and the dialogical constraint. Thirdly, I refer to Amy Gutmann and Dennis Thompson's conception of the "requirements" or characteristics of deliberative democracy: the reason-giving requirement, the accessibility of reasons, the binding character of the decisions and the dynamic nature of the deliberative process. Finally, I discuss Joshua Cohen's "ideal deliberative procedure", which has the following features: it is free, it is reasoned, the parties are substantively equal, and the procedure aims to arrive at a rationally motivated consensus. After presenting these models I provide a critical analysis of each of them with the purpose of revealing their virtues and limits, and I make some suggestions for combining the virtues of these models, transcending their limitations, and offering a more systematic account of deliberative democracy. In the next four sections I take into consideration four main strategies for combining political and epistemic values ("optimistic", "deliberative", "democratic" and "pragmatic") and the main objections they have to face. In the concluding section

  2. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
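The blending idea can be sketched in a few lines of Python. This is an illustrative toy, not the ChemCam pipeline: ordinary 1-D least-squares lines stand in for PLS models of full spectra, and the data, split point, and blending width are all invented.

```python
# Illustrative "sub-model" blending (toy): simple least-squares lines stand in
# for PLS models; data, split point, and blending width are invented.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def predict(model, x):
    a, b = model
    return a * x + b

# Synthetic feature-vs-concentration data whose slope changes between the
# low and high ranges (a stand-in for matrix effects).
low = [(x, 2.0 * x + 1.0) for x in range(0, 10)]
high = [(x, 0.5 * x + 16.0) for x in range(10, 20)]

full_model = fit_line([x for x, _ in low + high], [y for _, y in low + high])
low_model = fit_line([x for x, _ in low], [y for _, y in low])
high_model = fit_line([x for x, _ in high], [y for _, y in high])

def blended_predict(x, y_split=19.0, y_width=4.0):
    """Use the full-range model's first estimate to weight the sub-models."""
    first = predict(full_model, x)
    w = min(1.0, max(0.0, (first - (y_split - y_width)) / (2.0 * y_width)))
    return (1.0 - w) * predict(low_model, x) + w * predict(high_model, x)
```

In the published method the models are multivariate and the blending ranges are tuned per element; the clamped linear mix above is just the simplest version of the idea.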

  3. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    Science.gov (United States)

    Wu, Zujian; Pang, Wei; Coghill, George M

    2015-01-01

Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework makes it feasible to learn the relationships between biochemical reactants qualitatively and to make the models replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned with the proposed framework, biologists can then perform further experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
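The quantitative stage (tuning kinetic rates with simulated annealing) can be sketched for a single hypothetical reaction A → B with rate constant k; the target rate, cooling schedule, and cost function below are invented for illustration and are not taken from the paper.

```python
import math
import random

random.seed(1)

def simulate(k, x0=1.0, dt=0.01, steps=200):
    """Euler integration of a single reaction A -> B with rate constant k."""
    x, traj = x0, []
    for _ in range(steps):
        x += -k * x * dt
        traj.append(x)
    return traj

target = simulate(0.8)  # pretend this is the observed behaviour (true k = 0.8)

def cost(k):
    """Sum of squared deviations between model and target trajectories."""
    return sum((a - b) ** 2 for a, b in zip(simulate(k), target))

# Simulated annealing over the rate constant, starting far from the optimum.
k = best_k = 2.0
T = 1.0
while T > 1e-3:
    for _ in range(5):  # a few proposals per temperature level
        cand = k + random.gauss(0.0, 0.1)
        if cand <= 0:
            continue
        delta = cost(cand) - cost(k)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / T):
            k = cand
            if cost(k) < cost(best_k):
                best_k = k
    T *= 0.95  # geometric cooling
```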

  4. Regional differences of outpatient physician supply as a theoretical economic and empirical generalized linear model.

    Science.gov (United States)

    Scholz, Stefan; Graf von der Schulenburg, Johann-Matthias; Greiner, Wolfgang

    2015-11-17

Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for physicians' decisions on office allocation, covering demand-side factors and a consumption-time function. To test the propositions following from the theoretical model, generalized linear models were estimated to explain differences across 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. Evidence in favor of the first three propositions of the theoretical model could be found. Specialists show a stronger association with more highly populated districts than GPs. Although indicators of regional preferences are significantly correlated with physician density, their coefficients are not as high as that of population density. If regional disparities are to be addressed by political action, the focus should be on counteracting those parameters representing physicians' preferences in over- and undersupplied regions.

  5. 4. Valorizations of Theoretical Models of Giftedness and Talent in Defining of Artistic Talent

    OpenAIRE

    Anghel Ionica Ona

    2016-01-01

Artistic talent has been defined in various contexts and registers a variety of meanings, more or less operational. From the perspective of pedagogical intervention, it is imperative to understand artistic talent through the theoretical models of giftedness and talent. The aim of the study is therefore to provide a review of the most popular theoretical models of giftedness and talent, with identification of the place of artistic talent and the new meanings that artistic talent has in each on...

  6. Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity

    International Nuclear Information System (INIS)

    Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J

    2008-01-01

A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict the crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with those obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, these preliminary results are encouraging, and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle.
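As a rough sketch of the fitting step, the code below fits a Kelvin–Voigt viscoelastic model (a common choice in crawling-wave studies; the record does not specify the paper's exact model) to synthetic dispersive shear wave speed data. A grid search stands in for nonlinear least squares, and the density and parameter values are invented.

```python
import math

RHO = 1000.0  # assumed tissue density, kg/m^3 (illustrative)

def voigt_speed(omega, mu, eta):
    """Shear wave speed c(omega) for a Kelvin-Voigt material with shear
    modulus mu (Pa) and viscosity eta (Pa*s)."""
    m = math.sqrt(mu ** 2 + (omega * eta) ** 2)
    return math.sqrt(2.0 * m * m / (RHO * (mu + m)))

# Synthetic dispersion data at several vibration frequencies.
true_mu, true_eta = 4000.0, 2.0
omegas = [2.0 * math.pi * f for f in (100, 150, 200, 250, 300)]
speeds = [voigt_speed(w, true_mu, true_eta) for w in omegas]

# Grid search minimizing squared error (a stand-in for nonlinear least squares).
best = None
for mu in range(2000, 6001, 250):
    for eta10 in range(5, 41, 5):  # eta = 0.5 ... 4.0 Pa*s
        eta = eta10 / 10.0
        err = sum((voigt_speed(w, mu, eta) - c) ** 2
                  for w, c in zip(omegas, speeds))
        if best is None or err < best[0]:
            best = (err, mu, eta)

_, mu_hat, eta_hat = best  # frequency-independent estimates
```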

  7. Information-Theoretic Perspectives on Geophysical Models

    Science.gov (United States)

    Nearing, Grey

    2016-04-01

To test any hypothesis about any dynamic system, it is necessary to build a model that places that hypothesis into the context of everything else that we know about the system: initial and boundary conditions and interactions between various governing processes (Hempel and Oppenheim, 1948; Cartwright, 1983). No hypothesis can be tested in isolation, and no hypothesis can be tested without a model (for a geoscience-related discussion see Clark et al., 2011). Science is (currently) fundamentally reductionist in the sense that we seek some small set of governing principles that can explain all phenomena in the universe, and such laws are ontological in the sense that they describe the object under investigation (Davies, 1990 gives several competing perspectives on this claim). However, since we cannot build perfect models of complex systems, any model that does not also contain an epistemological component (i.e., a statement, like a probability distribution, that refers directly to the quality of the information from the model) is falsified immediately (in the sense of Popper, 2002) given only a small number of observations. Models necessarily contain both ontological and epistemological components, and what this means is that the purpose of any robust scientific method is to measure the amount and quality of information provided by models. I believe that any viable philosophy of science must be reducible to this statement. The first step toward a unified theory of scientific models (and therefore a complete philosophy of science) is a quantitative language that applies to both ontological and epistemological questions. Information theory is one such language: Cox's (1946) theorem (see Van Horn, 2003) tells us that probability theory is the (only) calculus that is consistent with Classical Logic (Jaynes, 2003; chapter 1), and information theory is simply the integration of convex transforms of probability ratios (integration reduces density functions to scalar

  8. A Primer on Theoretically Exploring the Field of Business Model Innovation

    OpenAIRE

    Gassmann, Oliver; Frankenberger, Karolin; Sauer, Roman

    2017-01-01

    Companies like Amazon, Uber, and Skype have become business strategy icons and the way they transformed industries can hardly be explained with classic strategy research. This article explores the topic of Business Model Innovation, which has become the cornerstone for the competitiveness of many successful firms, from a theoretical perspective. It gives an overview and introduction to the book "Exploring the Field of Business Model Innovation".

  9. Theoretical analysis of magnetic sensor output voltage

    International Nuclear Information System (INIS)

    Liu Haishun; Dun Chaochao; Dou Linming; Yang Weiming

    2011-01-01

The output voltage is an important parameter for determining the stress state in magnetic stress measurement. The relationship between the output voltage and the difference in the principal stresses was investigated through a comprehensive application of magnetic circuit theory, magnetization theory, stress analysis and the law of electromagnetic induction, and a corresponding quantitative equation was derived. It is shown that the output voltage is proportional to the difference in the principal stresses and related to the angle between the principal stress and the direction of the sensor. This investigation provides a theoretical basis for principal stress measurement from the output voltage. - Research highlights: → A comprehensive investigation of the magnetic stress signal. → Derived a quantitative equation relating the output voltage and the principal stresses. → The output voltage is proportional to the difference of the principal stresses. → Provides a theoretical basis for principal stress measurement.

  10. Quantitative MFM on superconducting thin films

    Energy Technology Data Exchange (ETDEWEB)

    Stopfel, Henry; Vock, Silvia; Shapoval, Tetyana; Neu, Volker; Wolff, Ulrike; Haindl, Silvia; Engelmann, Jan; Schaefer, Rudolf; Holzapfel, Bernhard; Schultz, Ludwig [IFW Dresden, Institute for Metallic Material (Germany); Inosov, Dmytro S. [Max Planck Institute for Solid State Research, Stuttgart (Germany)

    2012-07-01

Quantitative interpretation of magnetic force microscopy (MFM) data is a challenge, because the measured signal is a convolution of the magnetization of the tip with the stray field emanating from the sample. It was established theoretically that the field distribution just above the surface of a superconductor can be well approximated by the stray field of a magnetic monopole. The description of the MFM tip, however, needs a second approximation. The temperature-dependent vortex-distribution images on a NbN thin film were fitted using two different tip models. Firstly, the magnetic tip was assumed to be a monopole, which leads to the simple monopole-monopole model for the tip-sample interaction force. Performing a 2D fit of the data with this model, we extracted λ, Δ and the vortex pinning force. Secondly, a geometrical model was applied to calculate the tip transfer function of the MFM tip using the numerical boundary element method (BEM).

  11. Evaluating the Theoretic Adequacy and Applied Potential of Computational Models of the Spacing Effect.

    Science.gov (United States)

    Walsh, Matthew M; Gluck, Kevin A; Gunzelmann, Glenn; Jastrzembski, Tiffany; Krusmark, Michael

    2018-03-02

The spacing effect is among the most widely replicated empirical phenomena in the learning sciences, and its relevance to education and training is readily apparent. Yet successful applications of spacing effect research to education and training are rare. Computational modeling can provide the crucial link between a century of accumulated experimental data on the spacing effect and the emerging interest in using that research to enable adaptive instruction. In this paper, we review relevant literature and identify 10 criteria for rigorously evaluating computational models of the spacing effect. Five relate to evaluating the theoretic adequacy of a model, and five relate to evaluating its application potential. We use these criteria to evaluate a novel computational model of the spacing effect called the Predictive Performance Equation (PPE). The PPE combines elements of earlier models of learning and memory, including the General Performance Equation, Adaptive Control of Thought-Rational, and the New Theory of Disuse, giving rise to a novel computational account of the spacing effect that performs favorably across the complete sets of theoretic and applied criteria. We implemented two other previously published computational models of the spacing effect and compared them to the PPE using the theoretic and applied criteria as guides. © 2018 Cognitive Science Society, Inc.

  12. Improving statistical reasoning theoretical models and practical implications

    CERN Document Server

    Sedlmeier, Peter

    1999-01-01

This book focuses on how statistical reasoning works and on training programs that can exploit people's natural cognitive capabilities to improve their statistical reasoning. Training programs that take into account findings from evolutionary psychology and instructional theory are shown to have substantially larger effects that are also more stable over time than those of previous training regimens. The theoretical implications are traced in a neural network model of human performance on statistical reasoning problems. This book appeals to judgment and decision making researchers and other cognitive scientists, as well as to teachers of statistics and probabilistic reasoning.

  13. Predicting Freshman Persistence and Voluntary Dropout Decisions from a Theoretical Model.

    Science.gov (United States)

    Pascarella, Ernest T.; Terenzini, Patrick T.

    1980-01-01

    A five-scale instrument developed from a theoretical model of college attrition correctly identified the persistence/voluntary withdrawal decisions of 78.5 percent of 773 freshmen in a large, residential university. Findings showed that student relationships with faculty were particularly important. (Author/PHR)

  14. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model

    OpenAIRE

    Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S.; Breen, Lauren J.; Witt, Regina R.; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin

    2016-01-01

Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework titled the Health Services Workplace Environmental Resilience Model (HSWERM) is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously-published theoretical model of individual resilience, which identified the key constructs of p...

  15. Multicomponent quantitative spectroscopic analysis without reference substances based on ICA modelling.

    Science.gov (United States)

    Monakhova, Yulia B; Mushtakova, Svetlana P

    2017-05-01

    A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries generally between 95% and 105% were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on analysis of vitamins and caffeine in energy drinks and aromatic hydrocarbons in motor fuel with 10% error. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence/inaccessibility of reference materials.

  16. Field-theoretic approach to fluctuation effects in neural networks

    International Nuclear Information System (INIS)

    Buice, Michael A.; Cowan, Jack D.

    2007-01-01

    A well-defined stochastic theory for neural activity, which permits the calculation of arbitrary statistical moments and equations governing them, is a potentially valuable tool for theoretical neuroscience. We produce such a theory by analyzing the dynamics of neural activity using field theoretic methods for nonequilibrium statistical processes. Assuming that neural network activity is Markovian, we construct the effective spike model, which describes both neural fluctuations and response. This analysis leads to a systematic expansion of corrections to mean field theory, which for the effective spike model is a simple version of the Wilson-Cowan equation. We argue that neural activity governed by this model exhibits a dynamical phase transition which is in the universality class of directed percolation. More general models (which may incorporate refractoriness) can exhibit other universality classes, such as dynamic isotropic percolation. Because of the extremely high connectivity in typical networks, it is expected that higher-order terms in the systematic expansion are small for experimentally accessible measurements, and thus, consistent with measurements in neocortical slice preparations, we expect mean field exponents for the transition. We provide a quantitative criterion for the relative magnitude of each term in the systematic expansion, analogous to the Ginsburg criterion. Experimental identification of dynamic universality classes in vivo is an outstanding and important question for neuroscience

  17. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to non equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.

  18. Symbolic interactionism as a theoretical perspective for multiple method research.

    Science.gov (United States)

    Benzies, K M; Allen, M N

    2001-02-01

    Qualitative and quantitative research rely on different epistemological assumptions about the nature of knowledge. However, the majority of nurse researchers who use multiple method designs do not address the problem of differing theoretical perspectives. Traditionally, symbolic interactionism has been viewed as one perspective underpinning qualitative research, but it is also the basis for quantitative studies. Rooted in social psychology, symbolic interactionism has a rich intellectual heritage that spans more than a century. Underlying symbolic interactionism is the major assumption that individuals act on the basis of the meaning that things have for them. The purpose of this paper is to present symbolic interactionism as a theoretical perspective for multiple method designs with the aim of expanding the dialogue about new methodologies. Symbolic interactionism can serve as a theoretical perspective for conceptually clear and soundly implemented multiple method research that will expand the understanding of human health behaviour.

  19. Theoretical Assessment of the Impact of Climatic Factors in a Vibrio Cholerae Model.

    Science.gov (United States)

    Kolaye, G; Damakoa, I; Bowong, S; Houe, R; Békollè, D

    2018-05-04

A mathematical model for Vibrio cholerae (V. cholerae) in a closed environment is considered, with the aim of investigating the impact of climatic factors, which exert a direct influence on the bacterial metabolism and on the bacterial reservoir capacity. We first propose a V. cholerae mathematical model in a closed environment. A sensitivity analysis using the eFAST method was performed to identify the most important parameters of the model. We then extend this V. cholerae model by taking into account climatic factors that influence the bacterial reservoir capacity. We present the theoretical analysis of the model. More precisely, we compute equilibria and study their stabilities. The stability of equilibria was investigated using the theory of periodic cooperative systems with a concave nonlinearity. Theoretical results are supported by numerical simulations, which further suggest the necessity of implementing sanitation campaigns in aquatic environments by using suitable products against the bacteria during the periods of growth of the aquatic reservoirs.
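As an illustration of how a climate-driven reservoir capacity can be modelled, the sketch below integrates a logistic growth law whose carrying capacity follows an annual cycle. This is not the paper's model; every parameter value is invented.

```python
import math

def simulate_reservoir(days=3 * 365, dt=0.1, r=0.3, k0=1e6, amp=0.5, b0=1e3):
    """Euler integration of logistic bacterial growth db/dt = r*b*(1 - b/K(t)),
    where the carrying capacity K(t) oscillates with an annual climate cycle.
    All parameter values are illustrative."""
    b, t, series = b0, 0.0, []
    for _ in range(int(days / dt)):
        K = k0 * (1.0 + amp * math.sin(2.0 * math.pi * t / 365.0))
        b += r * b * (1.0 - b / K) * dt
        t += dt
        series.append(b)
    return series

series = simulate_reservoir()
last_year = series[-3650:]  # final year: population tracks the seasonal K(t)
```

Because the logistic relaxation time (1/r, a few days) is much shorter than the annual period, the reservoir closely tracks the seasonal carrying capacity, producing the periodic growth phases mentioned in the abstract.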

  20. Theoretical modelling of photoactive molecular systems: insights using the Density Functional Theory

    Energy Technology Data Exchange (ETDEWEB)

    Ciofini, I.; Adamo, C. [Ecole Nationale Superieure de Chimie de Paris, Lab. d' Electrochimie et Chimie Analytique, CNRS UMR 7575, 75 - Paris (France); Laine, Ph.P. [Universite Rene-Descartes, Lab. de Chimie et Biochimie Pharmacologiques et Toxicologiques, CNRS UMR 8601, 75 - Paris (France); Bedioui, F. [Ecole Nationale Superieure de Chimie de Paris, Lab. de Pharmacologie Chimique et Genetique, CNRS FRE 2463 and INSERM U 640, 75 - Paris (France); Daul, C.A. [Fribourg Univ., Dept. de Chimie (Switzerland)

    2006-02-15

An account is given of the performance of a modern and efficient approach to Density Functional Theory (DFT) for the prediction of the photophysical behavior of a series of Ru(II) and Os(II) complexes. The time-dependent DFT method was used to interpret their electronic spectra. Two different types of compounds have been analyzed: (1) a complex undergoing a light-induced isomerization of one of its coordination bonds; (2) an inorganic dyad expected to undergo intramolecular photoinduced electron transfer to form a charge-separated (CS) state. Besides the noticeable quantitative agreement between computed and experimental absorption spectra, our results allow us to clarify, from first principles, both the nature of the excited states and the photochemical behavior of these complex systems, thus underlining the predictive character of the theoretical approach. (authors)

  1. 77 FR 41985 - Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A...

    Science.gov (United States)

    2012-07-17

    ... models to generate quantitative estimates of the benefits and risks of influenza vaccination. The public...] Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A... Influenza Disease Models to Quantitatively Evaluate the Benefits and Risks of Vaccines: A Technical Workshop...

  2. Game Theoretic Modeling of Water Resources Allocation Under Hydro-Climatic Uncertainty

    Science.gov (United States)

    Brown, C.; Lall, U.; Siegfried, T.

    2005-12-01

Typical hydrologic and economic modeling approaches rely on assumptions of climate stationarity and economic conditions of ideal markets and rational decision-makers. In this study, we incorporate hydroclimatic variability with a game theoretic approach to simulate and evaluate common water allocation paradigms. Game theory may be particularly appropriate for modeling water allocation decisions. First, a game theoretic approach allows economic analysis in situations where price theory does not apply, which is typically the case in water resources, where markets are thin, players are few, and rules of exchange are highly constrained by legal or cultural traditions. Previous studies confirm that game theory is applicable to water resources decision problems, yet applications and modeling based on these principles are only rarely observed in the literature. Second, there are numerous existing theoretical and empirical studies of specific games and human behavior that may be applied in the development of predictive water allocation models. With this framework, one can evaluate alternative orderings and rules regarding the fraction of available water that one is allowed to appropriate. Specific attributes of the players involved in water resources management complicate the determination of solutions to game theory models. While an analytical approach will be useful for providing general insights, the variety of preference structures of individual players in a realistic water scenario will likely require a simulation approach. We propose a simulation approach incorporating the rationality, self-interest and equilibrium concepts of game theory with an agent-based modeling framework that allows the distinct properties of each player to be expressed and allows the performance of the system to manifest the integrative effect of these factors. Underlying this framework, we apply a realistic representation of spatio-temporal hydrologic variability and incorporate the impact of
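A minimal sketch of the game-theoretic ingredient: two water users choosing withdrawals, with best-response iteration converging to the Nash equilibrium of an invented quadratic payoff. This is not the authors' simulation framework, just the simplest instance of the equilibrium concept it builds on.

```python
def best_response(A, other_take):
    """Best response in a toy water-appropriation game where each of two
    users withdraws x_i and earns x_i * (A - x_1 - x_2) (illustrative payoff:
    the per-unit value of water falls as total withdrawal rises)."""
    return max(0.0, (A - other_take) / 2.0)

A = 12.0            # value parameter of the shared resource (invented)
x1, x2 = 0.0, 0.0
for _ in range(50):  # iterate best responses until they settle
    x1 = best_response(A, x2)
    x2 = best_response(A, x1)
# Nash equilibrium of this game: x1 = x2 = A / 3
```

At equilibrium the joint withdrawal (2A/3 = 8) exceeds the joint-welfare-maximizing total (A/2 = 6), the classic over-appropriation result that such allocation models are used to study.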

  3. Multiscale modeling of complex materials phenomenological, theoretical and computational aspects

    CERN Document Server

    Trovalusci, Patrizia

    2014-01-01

The papers in this volume deal with materials science, theoretical mechanics, and experimental and computational techniques at multiple scales, providing a sound base and a framework for many applications which are hitherto treated in a phenomenological sense. The basic principles of multiscale modeling strategies are formulated for modern complex multiphase materials subjected to various types of mechanical and thermal loading and to environmental effects. The focus is on problems where mechanics is highly coupled with other concurrent physical phenomena. Attention is also focused on the historical origins of multiscale modeling and on the foundations of continuum mechanics currently adopted to model non-classical continua with substructure, for which internal length scales play a crucial role.

  4. Expert judgement models in quantitative risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Rosqvist, T. [VTT Automation, Helsinki (Finland); Tuominen, R. [VTT Automation, Tampere (Finland)

    1999-12-01

Expert judgement is a valuable source of information in risk management. In particular, risk-based decision making relies significantly on quantitative risk assessment, which requires numerical data describing the initiator event frequencies and conditional probabilities in the risk model. This data is seldom found in databases and has to be elicited from qualified experts. In this report, we discuss some modelling approaches to expert judgement in risk modelling. A classical and a Bayesian expert model is presented and applied to real case expert judgement data. The cornerstone in the models is the log-normal distribution, which is argued to be a satisfactory choice for modelling degree-of-belief type probability distributions with respect to the unknown parameters in a risk model. Expert judgements are qualified according to bias, dispersion, and dependency, which are treated differently in the classical and Bayesian approaches. The differences are pointed out and related to the application task. Differences in the results obtained from the different approaches, as applied to real case expert judgement data, are discussed. Also, the role of a degree-of-belief type probability in risk decision making is discussed.
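One simple classical pooling scheme consistent with the log-normal assumption can be sketched as follows. The expert judgements, the error-factor convention, and the equal-weight pooling rule below are invented for illustration and are not taken from the report.

```python
import math

# Each hypothetical expert gives a median and an error factor EF for an event
# frequency; under a log-normal model, ln(x) ~ N(ln(median), sigma^2) with
# sigma = ln(EF) / 1.645, taking EF as the 95th/50th percentile ratio.
experts = [(1e-4, 3.0), (3e-4, 10.0), (5e-5, 3.0)]

def lognormal_pool(judgements):
    """Equal-weight pooling in log space: the pooled median is the geometric
    mean of the expert medians, and the log-variances are averaged (one
    simple classical scheme; real schemes also weight for bias/dependency)."""
    mus = [math.log(m) for m, _ in judgements]
    sigmas = [math.log(ef) / 1.645 for _, ef in judgements]
    mu = sum(mus) / len(mus)
    var = sum(s * s for s in sigmas) / len(sigmas)
    return math.exp(mu), math.exp(1.645 * math.sqrt(var))

pooled_median, pooled_ef = lognormal_pool(experts)
```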

  5. Expert judgement models in quantitative risk assessment

    International Nuclear Information System (INIS)

    Rosqvist, T.; Tuominen, R.

    1999-01-01

Expert judgement is a valuable source of information in risk management. In particular, risk-based decision making relies significantly on quantitative risk assessment, which requires numerical data describing the initiator event frequencies and conditional probabilities in the risk model. This data is seldom found in databases and has to be elicited from qualified experts. In this report, we discuss some modelling approaches to expert judgement in risk modelling. A classical and a Bayesian expert model is presented and applied to real case expert judgement data. The cornerstone in the models is the log-normal distribution, which is argued to be a satisfactory choice for modelling degree-of-belief type probability distributions with respect to the unknown parameters in a risk model. Expert judgements are qualified according to bias, dispersion, and dependency, which are treated differently in the classical and Bayesian approaches. The differences are pointed out and related to the application task. Differences in the results obtained from the different approaches, as applied to real case expert judgement data, are discussed. Also, the role of a degree-of-belief type probability in risk decision making is discussed.

  6. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    Science.gov (United States)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.

  7. Theoretical-empirical model of the steam-water cycle of the power unit

    Directory of Open Access Journals (Sweden)

    Grzegorz Szapajko

    2010-06-01

    Full Text Available The diagnostics of the operation of energy conversion systems is realised by collecting, processing, evaluating and analysing the measurement signals. The result of the analysis is the determination of the process state, which requires the use of thermal process models. Construction of an analytical model with built-in auxiliary empirical functions brings satisfying results. The paper presents a theoretical-empirical model of the steam-water cycle. The worked-out mathematical simulation model contains partial models of the turbine, the regenerative heat exchangers and the condenser. Statistical verification of the model is presented.

  8. Quantitative model of New Zealand's energy supply industry

    Energy Technology Data Exchange (ETDEWEB)

    Smith, B. R. [Victoria Univ., Wellington, (New Zealand); Lucas, P. D. [Ministry of Energy Resources (New Zealand)

    1977-10-15

    A mathematical model is presented to assist in an analysis of the energy policy options available. The model is based on an engineering-oriented description of New Zealand's energy supply and distribution system. The system is cast as a linear program, in which energy demand is satisfied at least cost. The capacities and operating modes of process plant (such as power stations, oil refinery units, and LP-gas extraction plants) are determined by the model, as well as the optimal mix of fuels supplied to the final consumers. Policy analysis with the model enables a wide-ranging assessment of the alternatives and uncertainties within a consistent quantitative framework. It is intended that the model be used as a tool to investigate the relative effects of various policy options, rather than to present a definitive plan for satisfying the nation's energy requirements.
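
    A minimal sketch of the least-cost linear-programming formulation the abstract describes, with two hypothetical supply sources and invented costs, capacities and demand (the actual model covers the full national supply and distribution system).

```python
from scipy.optimize import linprog

# Toy least-cost supply problem: choose how much energy (PJ) to supply from
# two hypothetical sources, hydro and gas, to meet demand subject to plant
# capacities. All numbers are illustrative.
cost = [2.0, 5.0]            # unit cost of hydro, gas
A_ub = [[1, 0], [0, 1]]      # per-source capacity limits
b_ub = [60, 100]             # hydro capped at 60 PJ, gas at 100 PJ
A_eq = [[1, 1]]              # total supply must meet...
b_eq = [120]                 # ...demand of 120 PJ

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 2)
print(res.x)  # cheapest mix: exhaust cheap hydro first, fill the rest with gas
```

    Policy questions (e.g. a new capacity limit or a fuel tax) become changes to `b_ub` or `cost`, after which the program is simply re-solved.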

  9. A general mixture model for mapping quantitative trait loci by using molecular markers

    NARCIS (Netherlands)

    Jansen, R.C.

    1992-01-01

    In a segregating population a quantitative trait may be considered to follow a mixture of (normal) distributions, the mixing proportions being based on Mendelian segregation rules. A general and flexible mixture model is proposed for mapping quantitative trait loci (QTLs) by using molecular markers.
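
    A hedged sketch of the mixture idea for a backcross population, where Mendelian segregation fixes the mixing proportions at 1/2:1/2 and EM estimates the two genotype means and a common variance; the data and all parameter values are invented.

```python
import math
import random

random.seed(1)

# Backcross toy example: at a putative QTL the trait follows a mixture of two
# normals with Mendelian mixing proportions fixed at 1/2 : 1/2.
data = [random.gauss(10.0, 1.0) for _ in range(200)] + \
       [random.gauss(14.0, 1.0) for _ in range(200)]

def em(data, iters=50):
    m1, m2, var = min(data), max(data), 1.0
    for _ in range(iters):
        # E-step: posterior probability of genotype 1, prior weights 1/2 each
        r = []
        for x in data:
            p1 = math.exp(-(x - m1) ** 2 / (2 * var))
            p2 = math.exp(-(x - m2) ** 2 / (2 * var))
            r.append(p1 / (p1 + p2))
        # M-step: weighted means and pooled variance
        w1 = sum(r)
        w2 = sum(1 - ri for ri in r)
        m1 = sum(ri * x for ri, x in zip(r, data)) / w1
        m2 = sum((1 - ri) * x for ri, x in zip(r, data)) / w2
        var = sum(ri * (x - m1) ** 2 + (1 - ri) * (x - m2) ** 2
                  for ri, x in zip(r, data)) / len(data)
    return m1, m2, var

m1, m2, var = em(data)
print(m1, m2, var)
```

    In the full method the mixing proportions would come from marker genotypes and recombination fractions rather than being fixed a priori.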

  10. Theoretical models of DNA flexibility

    Czech Academy of Sciences Publication Activity Database

    Dršata, Tomáš; Lankaš, Filip

    2013-01-01

    Vol. 3, No. 4 (2013), pp. 355-363 ISSN 1759-0876 Institutional support: RVO:61388963 Keywords: molecular dynamics simulations * base pair level * indirect readout Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 9.041, year: 2013

  11. A theoretical model of the M87 jet

    International Nuclear Information System (INIS)

    Falle, S.A.E.G.; Wilson, M.J.

    1985-01-01

    This paper describes a theoretical model of the knots in the M87 jet based on the idea that it is a steady fluid jet propagating through a non-uniform atmosphere. It is argued that knots D, E and F can be explained by the jet being underexpanded as it emerges from the central source, while knot A is due to reconfinement of the jet. Very high resolution numerical calculations are used to show that good agreement with the observed positions of the knots can be obtained with reasonable jet parameters and an atmosphere consistent with the X-ray observations. (author)

  12. Theoretical model of the SOS effect

    Energy Technology Data Exchange (ETDEWEB)

    Darznek, S A; Mesyats, G A; Rukin, S N; Tsiranov, S N [Russian Academy of Sciences, Ural Division, Ekaterinburg (Russian Federation). Institute of Electrophysics

    1997-12-31

    Physical principles underlying the operation of semiconductor opening switches (SOS) are highlighted. The SOS effect occurs at a current density of up to 60 kA/cm² in silicon p⁺-p-n-n⁺ structures filled with residual electron-hole plasma. Using a theoretical model developed for plasma dynamic calculations, the mechanism by which current passes through the structure at the stage of high conduction and the processes that take place at the stage of current interruption were analyzed. The dynamics of the processes taking place in the structure was calculated with allowance for both diffusive and drift mechanisms of carrier transport. In addition, two recombination types, viz. recombination via impurities and impact Auger recombination, were included in the model. The effect of the structure on the pumping-circuit current and voltage was also taken into account. The real distribution of the doped impurity in the structure and the avalanche mechanism of carrier multiplication were considered. The results of calculations of a typical SOS are presented. The dynamics of the electron-hole plasma is analyzed. It is shown that the SOS effect represents a qualitatively new mechanism of current interruption in semiconductor structures. (author). 4 figs., 7 refs.

  13. A theoretical model of speed-dependent steering torque for rolling tyres

    Science.gov (United States)

    Wei, Yintao; Oertel, Christian; Liu, Yahui; Li, Xuebing

    2016-04-01

    It is well known that tyre steering torque is highly dependent on the tyre rolling speed. In the limiting case, i.e. the parking manoeuvre, the steering torque approaches its maximum; with increasing tyre speed, the steering torque decreases rapidly. Accurate modelling of the speed-dependent behaviour of the tyre steering torque is a key factor in calibrating the electric power steering (EPS) system and tuning the handling performance of vehicles. However, no satisfactory theoretical model can be found in the existing literature to explain this phenomenon. This paper proposes a new theoretical framework to model this important tyre behaviour, which includes three key factors: (1) tyre three-dimensional transient rolling kinematics with turn-slip; (2) dynamical force and moment generation; and (3) the mixed Lagrange-Euler method for solving the contact deformation. A nonlinear finite-element code has been developed to implement the proposed approach. It is found that the main mechanism behind the speed-dependent steering torque is the turn-slip-related kinematics. This paper provides a theory to explain the complex mechanism of tyre steering torque generation, which helps in understanding the speed-dependent tyre steering torque, tyre road feeling and EPS calibration.

  14. Theoretical Model of Pricing Behavior on the Polish Wholesale Fuel Market

    Directory of Open Access Journals (Sweden)

    Bejger Sylwester

    2016-12-01

    Full Text Available In this paper, we constructed a theoretical model of strategic pricing behavior of the players in the Polish wholesale fuel market. This model is consistent with the characteristics of the industry, the wholesale market, and the players. The model is based on the standard methodology of repeated games with a built-in adjustment to a focal price, which resembles the Import Parity Pricing (IPP) mechanism. From the equilibrium of the game, we conclude that the focal price policy implies parallel pricing strategic behavior on the market.

  15. Sound transmission through lightweight double-leaf partitions: theoretical modelling

    Science.gov (United States)

    Wang, J.; Lu, T. J.; Woodhouse, J.; Langley, R. S.; Evans, J.

    2005-09-01

    This paper presents theoretical modelling of the sound transmission loss through double-leaf lightweight partitions stiffened with periodically placed studs. First, by assuming that the effect of the studs can be replaced with elastic springs uniformly distributed between the sheathing panels, a simple smeared model is established. Second, periodic structure theory is used to develop a more accurate model taking account of the discrete placing of the studs. Both models treat incident sound waves in the horizontal plane only, for simplicity. The predictions of the two models are compared, to reveal the physical mechanisms determining sound transmission. The smeared model predicts relatively simple behaviour, in which the only conspicuous features are associated with coincidence effects with the two types of structural wave allowed by the partition model, and internal resonances of the air between the panels. In the periodic model, many more features are evident, associated with the structure of pass- and stop-bands for structural waves in the partition. The models are used to explain the effects of incidence angle and of the various system parameters. The predictions are compared with existing test data for steel plates with wooden stiffeners, and good agreement is obtained.

  16. Highly pressurized partially miscible liquid-liquid flow in a micro-T-junction. II. Theoretical justifications and modeling

    Science.gov (United States)

    Qin, Ning; Wen, John Z.; Ren, Carolyn L.

    2017-04-01

    This is the second part of a two-part study on a partially miscible liquid-liquid flow (carbon dioxide and deionized water) that is highly pressurized and confined in a microfluidic T-junction. In the first part of this study, we reported experimental observations of the development of flow regimes under various flow conditions and the quantitative characteristics of the drop flow including the drop length, after-generation drop speed, and periodic spacing development between an emerging drop and the newly produced one. Here in part II we provide theoretical justifications to our quantitative studies on the drop flow by considering (1) CO2 hydration at the interface with water, (2) the diffusion-controlled dissolution of CO2 molecules in water, and (3) the diffusion distance of the dissolved CO2 molecules. Our analyses show that (1) the CO2 hydration at the interface is overall negligible, (2) a saturation scenario of the dissolved CO2 molecules in the vicinity of the interface will not be reached within the contact time between the two fluids, and (3) molecular diffusion does play a role in transferring the dissolved molecules, but the diffusion distance is very limited compared with the channel geometry. In addition, mathematical models for the drop length and the drop spacing are developed based on the observations in part I, and their predictions are compared to our experimental results.

  17. Exploring patient satisfaction predictors in relation to a theoretical model.

    Science.gov (United States)

    Grøndahl, Vigdis Abrahamsen; Hall-Lord, Marie Louise; Karlsson, Ingela; Appelgren, Jari; Wilde-Larsson, Bodil

    2013-01-01

    The aim is to describe patients' care quality perceptions and satisfaction and to explore potential patient satisfaction predictors (person-related conditions, external objective care conditions and patients' perception of actual care received, "PR") in relation to a theoretical model. A cross-sectional design was used. Data were collected using one questionnaire combining questions from four instruments: Quality from patients' perspective; Sense of coherence; Big Five personality traits; and the Emotional stress reaction questionnaire (ESRQ), together with questions from previous research. In total, 528 patients (83.7 per cent response rate) from eight medical, three surgical and one medical/surgical ward in five Norwegian hospitals participated. Answers from 373 respondents with complete ESRQ questionnaires were analysed. Sequential multiple regression analysis with ESRQ as the dependent variable was run in three steps: person-related conditions, external objective care conditions, and PR. Step 1 (person-related conditions) explained 51.7 per cent of the ESRQ variance. Step 2 (external objective care conditions) explained an additional 2.4 per cent. Step 3 (PR) gave no significant additional explanation (0.05 per cent). Steps 1 and 2 contributed statistical significance to the model. Patients rated both quality of care and satisfaction highly. The paper shows that the theoretical model, using an emotion-oriented approach to assess patient satisfaction, can explain 54 per cent of patient satisfaction in a statistically significant manner.
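
    The three-step sequential (hierarchical) regression can be sketched on synthetic data, reporting the R² increment contributed by each block of predictors; the variable structure and effect sizes below are invented to mimic the reported pattern (block 1 dominant, block 3 negligible).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300

# Synthetic stand-ins for the three predictor blocks: person-related
# conditions (two variables), external objective care conditions (one),
# and perceived care received, PR (one). The outcome loads mainly on the
# first block, as in the study's findings.
person = rng.normal(size=(n, 2))
external = rng.normal(size=(n, 1))
pr = rng.normal(size=(n, 1))
y = 1.2 * person[:, 0] + 0.8 * person[:, 1] + 0.3 * external[:, 0] \
    + rng.normal(size=n)

def r_squared(X, y):
    """R-squared of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_step1 = r_squared(person, y)
r2_step2 = r_squared(np.hstack([person, external]), y)
r2_step3 = r_squared(np.hstack([person, external, pr]), y)
print(r2_step1, r2_step2 - r2_step1, r2_step3 - r2_step2)
```

    Because the models are nested, each step's R² can only grow; the question, as in the study, is whether each increment is large and significant.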

  18. [Self-Determination in Medical Rehabilitation - Development of a Conceptual Model for Further Theoretical Discussion].

    Science.gov (United States)

    Senin, Tatjana; Meyer, Thorsten

    2018-01-22

    The aim was to gather theoretical knowledge about self-determination and to develop a conceptual model for medical rehabilitation, which serves as a basis for discussion. We performed a literature search in electronic databases. Various theories and research results were adopted and transferred to the context of medical rehabilitation and into a conceptual model. The conceptual model of self-determination reflects, on a continuum, which forms of self-determination may be present in situations of medical rehabilitation treatment. The location on the continuum depends theoretically on the manifestation of certain internal and external factors that may influence each other. The model provides a first conceptualization of self-determination focused on medical rehabilitation, which should be further refined and tested empirically. © Georg Thieme Verlag KG Stuttgart · New York.

  19. Theoretical model of polar cap auroral arcs

    International Nuclear Information System (INIS)

    Kan, J.R.; Burke, W.J. (USAF, Bedford, MA)

    1985-01-01

    A theory of the polar cap auroral arcs is proposed under the assumption that the magnetic field reconnection occurs in the cusp region on tail field lines during northward interplanetary magnetic field (IMF) conditions. Requirements of a convection model during northward IMF are enumerated based on observations and fundamental theoretical considerations. The theta aurora can be expected to occur on the closed field lines convecting sunward in the central polar cap, while the less intense regular polar cap arcs can occur either on closed or open field lines. The dynamo region for the polar cap arcs is required to be on closed field lines convecting tailward in the plasma sheet which is magnetically connected to the sunward convection in the central polar cap. 43 references

  20. A new approach to developing and optimizing organization strategy based on stochastic quantitative model of strategic performance

    Directory of Open Access Journals (Sweden)

    Marko Hell

    2014-03-01

    Full Text Available This paper presents a highly formalized approach to strategy formulation and optimization of strategic performance through proper resource allocation. A stochastic quantitative model of strategic performance (SQMSP) is used to evaluate the efficiency of the strategy developed. The SQMSP follows the theoretical notions of the balanced scorecard (BSC) and strategy map methodologies, initially developed by Kaplan and Norton. Parameters of the SQMSP are suggested to be random variables, evaluated by experts who give two-point (optimistic and pessimistic) and three-point (optimistic, most probable and pessimistic) evaluations. The Monte-Carlo method is used to simulate strategic performance. Having been implemented within a computer application and applied to a real problem (planning of an IT strategy at the Faculty of Economics, University of Split), the proposed approach demonstrated its high potential as a basis for development of decision support tools related to strategic planning.
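
    A minimal sketch of the Monte-Carlo step: three-point expert evaluations are treated as triangular random variables (one simple choice; the SQMSP's actual distributions may differ) and simulated to obtain the distribution of a strategic performance score. All KPI names and numbers are invented.

```python
import random

random.seed(42)

# Three-point expert evaluations (optimistic, most probable, pessimistic)
# of hypothetical KPI contributions, each on a 0-1 scale.
kpis = {
    "customer": (0.5, 0.7, 0.9),
    "internal": (0.4, 0.6, 0.8),
    "learning": (0.3, 0.5, 0.9),
}

def simulate(n=10000):
    """Sample each KPI from a triangular distribution and average them."""
    totals = []
    for _ in range(n):
        score = sum(random.triangular(lo, hi, mode)
                    for lo, mode, hi in kpis.values())
        totals.append(score / len(kpis))
    totals.sort()
    return totals[n // 2], totals[int(0.05 * n)]  # median and 5th percentile

median, p5 = simulate()
print(median, p5)
```

    The spread between the median and the 5th percentile is what turns expert uncertainty into a quantified risk statement about the strategy.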

  1. Quantitative modeling of gene networks of biological systems using fuzzy Petri nets and fuzzy sets

    Directory of Open Access Journals (Sweden)

    Raed I. Hamed

    2018-01-01

    Full Text Available Quantitative modelling of biological systems has become an essential computational approach in the design of novel and the analysis of existing biological systems. However, kinetic data that describe the system's dynamics need to be known in order to obtain relevant results with conventional modelling techniques, and such data are often hard or even impossible to obtain. Here, we present a quantitative fuzzy logic modelling approach that can cope with unknown kinetic data and thus produce relevant results even though kinetic data are incomplete or only vaguely defined. Moreover, the approach can be used in combination with existing state-of-the-art quantitative modelling techniques in selected parts of the system, i.e., where data are missing. The case study of the methodology proposed in this paper is performed on a model of a nine-gene network. We propose a kind of FPN (fuzzy Petri net) model based on fuzzy sets to handle the quantitative modelling of biological systems. Tests of our model show that it is feasible and effective for data imitation and reasoning in fuzzy expert systems.

  2. The Empirical Measurement of a Theoretical Concept: Tracing Social Exclusion among Racial Minority and Migrant Groups in Canada

    Directory of Open Access Journals (Sweden)

    Luann Good Gingrich

    2015-07-01

    Full Text Available This paper provides an in-depth description and case application of a conceptual model of social exclusion: aiming to advance existing knowledge on how to conceive of and identify this complex idea, evaluate the methodologies used to measure it, and reconsider what is understood about its social realities toward a meaningful and measurable conception of social inclusion. Drawing on Pierre Bourdieu’s conceptual tools of social fields and systems of capital, our research posits and applies a theoretical framework that permits the measurement of social exclusion as dynamic, social, relational, and material. We begin with a brief review of existing social exclusion research literature, and specifically examine the difficulties and benefits inherent in quantitatively operationalizing a necessarily multifarious theoretical concept. We then introduce our conceptual model of social exclusion and inclusion, which is built on measurable constructs. Using our ongoing program of research as a case study, we briefly present our approach to the quantitative operationalization of social exclusion using secondary data analysis in the Canadian context. Through the development of an Economic Exclusion Index, we demonstrate how our statistical and theoretical analyses evidence intersecting processes of social exclusion which produce consequential gaps and uneven trajectories for migrant individuals and groups compared with Canadian-born, and racial minority groups versus white individuals. To conclude, we consider some methodological implications to advance the empirical measurement of social inclusion.

  3. Nuclear medicine and imaging research (instrumentation and quantitative methods of evaluation)

    International Nuclear Information System (INIS)

    Beck, R.N.; Cooper, M.; Chen, C.T.

    1992-07-01

    This document is the annual progress report for the project entitled "Instrumentation and Quantitative Methods of Evaluation". Progress is reported in separate sections, individually abstracted and indexed for the database. Subject areas reported include theoretical studies of imaging systems and methods, hardware developments, quantitative methods of evaluation, and knowledge transfer: education in quantitative nuclear medicine imaging.

  4. Comparison of conventional, model-based quantitative planar, and quantitative SPECT image processing methods for organ activity estimation using In-111 agents

    International Nuclear Information System (INIS)

    He, Bin; Frey, Eric C

    2006-01-01

    Accurate quantification of organ radionuclide uptake is important for patient-specific dosimetry. The quantitative accuracy of conventional conjugate view methods is limited by overlap of projections from different organs and background activity, and by attenuation and scatter. In this work, we propose and validate a quantitative planar (QPlanar) processing method based on maximum likelihood (ML) estimation of organ activities using 3D organ VOIs and a projector that models the image-degrading effects. Both a physical phantom experiment and Monte Carlo simulation (MCS) studies were used to evaluate the new method. In these studies, the accuracies and precisions of organ activity estimates for the QPlanar method were compared with those from conventional planar (CPlanar) processing methods with various corrections for scatter, attenuation and organ overlap, and a quantitative SPECT (QSPECT) processing method. Experimental planar and SPECT projections and registered CT data from an RSD Torso phantom were obtained using a GE Millenium VH/Hawkeye system. The MCS data were obtained from the 3D NCAT phantom with organ activity distributions that modelled the uptake of In-111 ibritumomab tiuxetan. The simulations were performed using parameters appropriate for the same system used in the RSD torso phantom experiment. The organ activity estimates obtained from the CPlanar, QPlanar and QSPECT methods from both experiments were compared. From the results of the MCS experiment, even with ideal organ overlap correction and background subtraction, CPlanar methods provided limited quantitative accuracy. The QPlanar method with accurate modelling of the physical factors increased the quantitative accuracy at the cost of requiring estimates of the organ VOIs in 3D. The accuracy of QPlanar approached that of QSPECT, but required much less acquisition and computation time. Similar results were obtained from the physical phantom experiment. We conclude that the QPlanar method, based on ML estimation with accurate modelling of the image-degrading effects, provides quantitative accuracy approaching that of QSPECT while requiring much less acquisition and computation time.

  5. Physics of human cooperation: experimental evidence and theoretical models

    Science.gov (United States)

    Sánchez, Angel

    2018-02-01

    In recent years, many physicists have used evolutionary game theory combined with a complex systems perspective in an attempt to understand social phenomena and challenges. Prominent among such phenomena is the issue of the emergence and sustainability of cooperation in a networked world of selfish or self-focused individuals. The vast majority of research done by physicists on these questions is theoretical, and is almost always posed in terms of agent-based models. Unfortunately, more often than not such models ignore a number of facts that are well established experimentally, and are thus rendered irrelevant to actual social applications. I here summarize some of the facts that any realistic model should incorporate and take into account, discuss important aspects underlying the relation between theory and experiments, and discuss future directions for research based on the available experimental knowledge.

  6. A theoretical model of rain–wind–induced in-plane galloping on overhead transmission tower-lines system

    Directory of Open Access Journals (Sweden)

    Chao Zhou

    2015-09-01

    Full Text Available The rain–wind–induced galloping phenomenon often occurs on overhead transmission tower-line systems, just as icing galloping and vortex-excited vibration do; this kind of unstable oscillation can cause power-line breakage or tower failure. However, the existing theoretical models of rain–wind–induced galloping are mainly based on the hypothesis of overhead power-lines with fixed ends, which is inconsistent with the actual operating situation. This article therefore presents a preliminary theoretical study and proposes a new theoretical model that takes into account the effect of tower excitations on the in-plane galloping of the overhead power-line and on the motion of the upper rain-line. The theoretical model is solved by the Galerkin method and verified by comparison with test data from the available literature on overhead power-lines with fixed or moving towers. It turns out that the tower excitations may intensify the in-plane galloping amplitude of the overhead power-line within a certain range of frequency ratio, and the model enables better comprehension of the rain–wind–induced galloping mechanism.

  7. A game theoretic framework for evaluation of the impacts of hackers diversity on security measures

    International Nuclear Information System (INIS)

    Zare Moayedi, Behzad; Azgomi, Mohammad Abdollahi

    2012-01-01

    Game theoretical methods offer new insights into quantitative evaluation of dependability and security. Currently, there is a wide range of useful game theoretic approaches to model the behaviour of intelligent agents. However, it is necessary to revise these approaches if there is a community of hackers with significant diversity in their behaviours. In this paper, we introduce a novel approach to extend the basic ideas of applying game theory in stochastic modelling. The proposed method classifies the community of hackers based on two main criteria used widely in hacker classifications, which are motivation and skill. We use Markov chains to model the system and compute the transition rates between the states based on the preferences and the skill distributions of hacker classes. The resulting Markov chains can be solved to obtain the desired security measures. We also present the results of an illustrative example using the proposed approach, which examines the relation between the attributes of the community of hackers and the security measures.
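
    A toy version of the approach, with invented hacker classes, rates, and a three-state chain (the paper's models are richer): class shares, skills and motivations set the transition rates of a Markov chain, whose steady state yields an availability-style security measure.

```python
import numpy as np

# Two hypothetical hacker classes differing in motivation (attack rate)
# and skill (success probability); all rates are illustrative.
classes = [
    {"share": 0.8, "attack_rate": 0.2, "success_prob": 0.1},   # low skill
    {"share": 0.2, "attack_rate": 0.05, "success_prob": 0.6},  # high skill
]
lam = sum(c["share"] * c["attack_rate"] for c in classes)        # Good -> Attacked
p_success = sum(c["share"] * c["attack_rate"] * c["success_prob"]
                for c in classes) / lam                          # attack succeeds
detect, repair = 0.5, 0.1

# Generator matrix for states [Good, Attacked, Compromised]; rows sum to 0.
Q = np.array([
    [-lam,                      lam,      0.0],
    [detect * (1 - p_success), -detect,   detect * p_success],
    [repair,                    0.0,     -repair],
])

# Steady state: pi @ Q = 0 with pi summing to 1 (solved as least squares).
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # long-run state probabilities; pi[2] is the compromise probability
```

    Changing the class mix (e.g. more skilled attackers) changes `lam` and `p_success`, and the steady state quantifies the resulting shift in the security measure.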

  8. [Establishment of the mathematic model of total quantum statistical moment standard similarity for application to medical theoretical research].

    Science.gov (United States)

    He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian

    2013-09-01

    The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate the application of the model to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, then validated and illustrated by the pharmacokinetics of three ingredients in Buyanghuanwu decoction and three data-analytical methods for them, and by analysis of the chromatographic fingerprints of various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract. The established model consists of the following main parameters: (1) total quantum statistical moment similarity ST, the overlapped area of two normal distribution probability density curves in conversion of the two TQSM parameters; (2) total variability DT, a confidence limit of standard normal accumulation probability equal to the absolute difference between the two normal accumulation probabilities within the integration of their curve intersection; (3) total variable probability 1-Ss, the standard normal distribution probability within the interval of DT; (4) total variable probability (1-beta)alpha; and (5) stable confident probability beta(1-alpha): the correct probability for making positive and negative conclusions under confidence coefficient alpha. With the model, we analyzed the TQSMSS similarities of the pharmacokinetics of three ingredients in Buyanghuanwu decoction and of three data-analytical methods for them, which were in the range 0.3852-0.9875, illuminating their different pharmacokinetic behaviours; the TQSMSS similarities (ST) of the chromatographic fingerprints of various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract were in the range 0.6842-0.9992, showing different constituents.
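
    The ST parameter, an overlapped area of two normal probability density curves, can be sketched numerically; the midpoint-rule integration below is an assumption for illustration, not the authors' implementation.

```python
import math

def normal_pdf(x, mu, sigma):
    return (math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
            / (sigma * math.sqrt(2 * math.pi)))

def overlap_similarity(mu1, s1, mu2, s2, n=20000):
    """Numerically integrate min(f1, f2): the overlapped area of two normal
    probability density curves, analogous to the ST parameter."""
    lo = min(mu1 - 6 * s1, mu2 - 6 * s2)
    hi = max(mu1 + 6 * s1, mu2 + 6 * s2)
    dx = (hi - lo) / n
    return sum(min(normal_pdf(lo + (i + 0.5) * dx, mu1, s1),
                   normal_pdf(lo + (i + 0.5) * dx, mu2, s2))
               for i in range(n)) * dx

print(overlap_similarity(0, 1, 0, 1))   # identical distributions: area near 1
print(overlap_similarity(0, 1, 3, 1))   # well-separated distributions: small area
```

    The measure runs from 0 (no overlap) to 1 (identical distributions), matching the 0-1 similarity ranges reported in the abstract.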

  9. The Safety Culture Enactment Questionnaire (SCEQ): Theoretical model and empirical validation.

    Science.gov (United States)

    de Castro, Borja López; Gracia, Francisco J; Tomás, Inés; Peiró, José M

    2017-06-01

    This paper presents the Safety Culture Enactment Questionnaire (SCEQ), designed to assess the degree to which safety is an enacted value in the day-to-day running of nuclear power plants (NPPs). The SCEQ is based on a theoretical safety culture model that is manifested in three fundamental components of the functioning and operation of any organization: strategic decisions, human resources practices, and daily activities and behaviors. The extent to which the importance of safety is enacted in each of these three components provides information about the pervasiveness of the safety culture in the NPP. To validate the SCEQ and the model on which it is based, two separate studies were carried out, with data collection in 2008 and 2014, respectively. In Study 1, the SCEQ was administered to the employees of two Spanish NPPs (N=533) belonging to the same company. Participants in Study 2 included 598 employees from the same NPPs, who completed the SCEQ and other questionnaires measuring different safety outcomes (safety climate, safety satisfaction, job satisfaction and risky behaviors). Study 1 comprised item formulation and examination of the factorial structure and reliability of the SCEQ. Study 2 tested internal consistency and provided evidence of factorial validity, validity based on relationships with other variables, and discriminant validity between the SCEQ and safety climate. Exploratory Factor Analysis (EFA) carried out in Study 1 revealed a three-factor solution corresponding to the three components of the theoretical model. Reliability analyses showed strong internal consistency for the three scales of the SCEQ, and each of the 21 items on the questionnaire contributed to the homogeneity of its theoretically developed scale. Confirmatory Factor Analysis (CFA) carried out in Study 2 supported the internal structure of the SCEQ; internal consistency of the scales was also supported. Furthermore, the three scales of the SCEQ showed the expected correlations with the other safety outcomes measured.

  10. Theoretical model of the density of states of random binary alloys

    International Nuclear Information System (INIS)

    Zekri, N.; Brezini, A.

    1991-09-01

    A theoretical formulation of the density of states for random binary alloys is examined based on a mean field treatment. The present model includes both diagonal and off-diagonal disorder and also short-range order. Extensive results are reported for various concentrations and compared to other calculations. (author). 22 refs, 6 figs

  11. Developing a theoretical maintenance model for disordered eating in Type 1 diabetes.

    Science.gov (United States)

    Treasure, J; Kan, C; Stephenson, L; Warren, E; Smith, E; Heller, S; Ismail, K

    2015-12-01

    According to the literature, eating disorders are an increasing problem for more than a quarter of people with Type 1 diabetes and they are associated with accentuated diabetic complications. The clinical outcomes in this group when given standard eating disorder treatments are disappointing. The Medical Research Council guidelines for developing complex interventions suggest that the first step is to develop a theoretical model. To review existing literature to build a theoretical maintenance model for disordered eating in people with Type 1 diabetes. The literature in diabetes relating to models of eating disorder (Fairburn's transdiagnostic model and the dual pathway model) and food addiction was examined and assimilated. The elements common to all eating disorder models include weight/shape concern and problems with mood regulation. The predisposing traits of perfectionism, low self-esteem and low body esteem and the interpersonal difficulties from the transdiagnostic model are also relevant to diabetes. The differences include the use of insulin mismanagement to compensate for breaking eating rules and the consequential wide variations in plasma glucose that may predispose to 'food addiction'. Eating disorder symptoms elicit emotionally driven reactions and behaviours from others close to the individual affected and these are accentuated in the context of diabetes. The next stage is to test the assumptions within the maintenance model with experimental medicine studies to facilitate the development of new technologies aimed at increasing inhibitory processes and moderating environmental triggers. © 2015 The Authors. Diabetic Medicine © 2015 Diabetes UK.

  12. A Theoretical Model for Estimation of Yield Strength of Fiber Metal Laminate

    Science.gov (United States)

    Bhat, Sunil; Nagesh, Suresh; Umesh, C. K.; Narayanan, S.

    2017-08-01

    The paper presents a theoretical model for estimating the yield strength of a fiber metal laminate. Principles of elasticity and the formulation of residual stress are employed to determine the stress state in the metal layer of the laminate, which is found to be higher than the stress applied over the laminate, resulting in a yield strength of the laminate lower than that of the metal layer. The model is tested on a 4A-3/2 Glare laminate comprising three thin aerospace 2014-T6 aluminum alloy layers alternately bonded adhesively with two prepregs, each prepreg built up of three uni-directional glass fiber layers laid in longitudinal and transverse directions. Laminates with prepregs of E-Glass and S-Glass fibers are investigated separately under uni-axial tension. Yield strengths of both Glare variants are found to be lower than that of the aluminum alloy, with the use of S-Glass fiber resulting in a higher laminate yield strength than the use of E-Glass fiber. Results from finite element analysis and tensile tests conducted on the laminates substantiate the theoretical model.

  13. Theoretical study of evaporation heat transfer in horizontal microfin tubes: stratified flow model

    Energy Technology Data Exchange (ETDEWEB)

    Honda, H; Wang, Y S [Kyushu Univ., Inst. for Materials Chemistry and Engineering, Kasuga, Fukuoka (Japan)

    2004-08-01

    The stratified flow model of evaporation heat transfer in helically grooved, horizontal microfin tubes has been developed. The profile of stratified liquid was determined by a theoretical model previously developed for condensation in horizontal microfin tubes. For the region above the stratified liquid, the meniscus profile in the groove between adjacent fins was determined by a force balance between the gravity and surface tension forces. The thin film evaporation model was applied to predict heat transfer in the thin film region of the meniscus. Heat transfer through the stratified liquid was estimated by using an empirical correlation proposed by Mori et al. The theoretical predictions of the circumferential average heat transfer coefficient were compared with available experimental data for four tubes and three refrigerants. A good agreement was obtained for the region of Fr₀ < 2.5 as long as partial dryout of the tube surface did not occur. (Author)

  14. Universe in the theoretical model «Evolving matter»

    Directory of Open Access Journals (Sweden)

    Bazaluk Oleg

    2013-04-01

    The article critically examines a modern model of the evolution of the Universe constructed by the efforts of a group of scientists (mathematicians, physicists and cosmologists) from the world's leading universities (Oxford, Cambridge, Yale, Columbia, New York, Rutgers and UC Santa Cruz). The author notes its strengths, but also points to shortcomings: the model does not take into account the most important achievements in biochemistry and biology (molecular, physical, developmental, etc.), nor in neuroscience and psychology. The author believes that, in constructing a model of the evolution of the Universe, scientists must take into account (with great reservations) the influence of living and intelligent matter on cosmic processes. As an example, the author gives his theoretical model "Evolving matter". In this model, he shows not only the general interdependence of cosmic processes and inert, living and intelligent matter, but also attempts to show the direct influence of systems of living and intelligent matter on the acceleration of the Universe's expansion.

  15. Pragmatic impact of workplace ostracism: toward a theoretical model

    Directory of Open Access Journals (Sweden)

    Amer Ali Al-Atwi

    2017-07-01

    Purpose - The purpose of this paper is to extend the ostracism literature by exploring the pragmatic impact of ostracism on performance. Design/methodology/approach - Workplace ostracism, social relations and empowerment structures are discussed. The paper then develops a theoretical framework that explains why and under what conditions workplace ostracism undermines employees’ performance. The author proposes that empowerment structures mediate the link between ostracism and in-role and extra-role performance. In addition, it is proposed that relational links buffer the negative relationship between ostracism and empowerment structures and weaken the negative indirect effect of ostracism on performance. Findings - The theoretical arguments support the model, showing that empowerment structures mediate the relationship between ostracism and performance, and that the mediation effect occurs only when external links are high, not when they are low. Originality/value - The author expands the extant literature by answering recent calls for research exploring the pragmatic impact of workplace ostracism, where past research has typically focused solely on psychological impacts such as psychological needs.

  16. Modelling the evolution and consequences of mate choice

    OpenAIRE

    Tazzyman, S. J.

    2010-01-01

    This thesis considers the evolution and the consequences of mate choice across a variety of taxa, using game theoretic, population genetic, and quantitative genetic modelling techniques. Part I is about the evolution of mate choice. In chapter 2, a population genetic model shows that mate choice is even beneficial in self-fertilising species such as Saccharomyces yeast. In chapter 3, a game theoretic model shows that female choice will be strongly dependent upon whether the benefi...

  17. Development Mechanism of an Integrated Model for Training of a Specialist and Conceptual-Theoretical Activity of a Teacher

    Science.gov (United States)

    Marasulov, Akhmat; Saipov, Amangeldi; ?rymbayeva, Kulimkhan; Zhiyentayeva, Begaim; Demeuov, Akhan; Konakbaeva, Ulzhamal; Bekbolatova, Akbota

    2016-01-01

    The aim of the study is to examine the methodological-theoretical construction bases for development mechanism of an integrated model for a specialist's training and teacher's conceptual-theoretical activity. Using the methods of generalization of teaching experience, pedagogical modeling and forecasting, the authors determine the urgent problems…

  18. Experimental and theoretical study of the energy loss of C and O in Zn

    Energy Technology Data Exchange (ETDEWEB)

    Cantero, E. D.; Lantschner, G. H.; Arista, N. R. [Centro Atomico Bariloche and Instituto Balseiro, Comision Nacional de Energia Atomica, 8400 San Carlos de Bariloche (Argentina); Montanari, C. C.; Miraglia, J. E. [Instituto de Astronomia y Fisica del Espacio (CONICET-UBA), Buenos Aires (Argentina); Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Buenos Aires (Argentina); Behar, M.; Fadanelli, R. C. [Instituto de Fisica, Universidade Federal do Rio Grande do Sul, Avenida Bento Goncalves 9500, Porto Alegre-RS (Brazil)

    2011-07-15

    We present a combined experimental-theoretical study of the energy loss of C and O ions in Zn in the energy range 50-1000 keV/amu. This contribution has a double purpose, experimental and theoretical. On the experimental side, we present stopping power measurements that fill a gap in the literature for these projectile-target combinations and cover an extended energy range, including the stopping maximum. On the theoretical side, we make a quantitative test on the applicability of various theoretical approaches to calculate the energy loss of heavy swift ions in solids. The description is performed using different models for valence and inner-shell electrons: a nonperturbative scattering calculation based on the transport cross section formalism to describe the Zn valence electron contribution, and two different models for the inner-shell contribution: the shellwise local plasma approximation (SLPA) and the convolution approximation for swift particles (CasP). The experimental results indicate that C is the limit for the applicability of the SLPA approach, which previously was successfully applied to projectiles from H to B. We find that this model clearly overestimates the stopping data for O ions. The origin of these discrepancies is related to the perturbative approximation involved in the SLPA. This shortcoming has been solved by using the nonperturbative CasP results to describe the inner-shell contribution, which yields a very good agreement with the experiments for both C and O ions.

  19. Theoretical model of gravitational perturbation of current collector axisymmetric flow field

    Science.gov (United States)

    Walker, John S.; Brown, Samuel H.; Sondergaard, Neal A.

    1990-05-01

    Some designs of liquid-metal current collectors in homopolar motors and generators are essentially rotating liquid-metal fluids in cylindrical channels with free surfaces and will, at critical rotational speeds, become unstable. An investigation at David Taylor Research Center is being performed to understand the role of gravity in modifying this ejection instability. Some gravitational effects can be theoretically treated by perturbation techniques on the axisymmetric base flow of the liquid metal. This leads to a modification of previously calculated critical-current-collector ejection values neglecting gravity effects. The purpose of this paper is to document the derivation of the mathematical model which determines the perturbation of the liquid-metal base flow due to gravitational effects. Since gravity is a small force compared with the centrifugal effects, the base flow solutions can be expanded in inverse powers of the Froude number and modified liquid-flow profiles can be determined as a function of the azimuthal angle. This model will be used in later work to theoretically study the effects of gravity on the ejection point of the current collector.
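
    The expansion described above can be sketched in formulas. The following form, including the definition of the Froude number, is an illustrative assumption based on the abstract's description, not the paper's actual notation:

```latex
% Base flow plus gravitational correction, expanded in inverse powers of
% the Froude number (definition of Fr assumed here for illustration):
\mathbf{v}(r,\theta) \;=\; \mathbf{v}_0(r) \;+\; \mathrm{Fr}^{-1}\,\mathbf{v}_1(r,\theta) \;+\; O\!\left(\mathrm{Fr}^{-2}\right),
\qquad \mathrm{Fr} = \frac{\Omega^2 R}{g},
```

    where v₀ is the axisymmetric base flow, v₁ the gravity-driven correction varying with the azimuthal angle θ, Ω the rotation rate, R the collector radius, and g the gravitational acceleration. Because gravity is weak relative to centrifugal effects, Fr is large and the correction is a small perturbation of the base flow.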

  20. Theoretical models to predict the mechanical behavior of thick composite tubes

    Directory of Open Access Journals (Sweden)

    Volnei Tita

    2012-02-01

    This paper presents theoretical models (analytical formulations) to predict the mechanical behavior of thick composite tubes and shows how certain parameters influence this behavior. First, analytical formulations were developed for a pressurized tube made of composite material with a single thick ply and only one lamination angle. For this case, the stress distribution and displacement fields are investigated as functions of lamination angle and reinforcement volume fraction. The results obtained by the theoretical model are physically consistent and coherent with the literature. The formulations are then extended in order to predict the mechanical behavior of a thick laminated tube. Both analytical formulations are implemented as a computational tool via Matlab code. The results obtained by the computational tool are compared to finite element analyses, and the stress distributions agree well. Moreover, the engineering computational tool is used to perform failure analysis, using different types of failure criteria, which identifies the damaged ply and the mode of failure.
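
    For orientation, the classical isotropic Lamé solution for a thick-walled tube under internal pressure (the special case that anisotropic laminate formulations of this kind generalize) reads:

```latex
\sigma_r(r) = \frac{p\,a^2}{b^2-a^2}\left(1-\frac{b^2}{r^2}\right),\qquad
\sigma_\theta(r) = \frac{p\,a^2}{b^2-a^2}\left(1+\frac{b^2}{r^2}\right),\qquad
a \le r \le b,
```

    for inner radius a, outer radius b and internal pressure p, satisfying the boundary conditions σ_r(a) = -p and σ_r(b) = 0. In the composite case the stress distribution additionally depends on the ply stiffness matrix and the lamination angle, which is what the paper's formulations provide.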

  1. Theoretical study on the inverse modeling of deep body temperature measurement

    International Nuclear Information System (INIS)

    Huang, Ming; Chen, Wenxi

    2012-01-01

    We evaluated the theoretical aspects of monitoring the deep body temperature distribution with the inverse modeling method. A two-dimensional model was built based on anatomical structure to simulate the human abdomen. By integrating biophysical and physiological information, the deep body temperature distribution was estimated from cutaneous surface temperature measurements using an inverse quasilinear method. Simulations were conducted with and without the heat effect of blood perfusion in the muscle and skin layers. The results of the simulations showed consistently that the noise characteristics and arrangement of the temperature sensors were the major factors affecting the accuracy of the inverse solution. With temperature sensors of 0.05 °C systematic error and an optimized 16-sensor arrangement, the inverse method could estimate the deep body temperature distribution with an average absolute error of less than 0.20 °C. The results of this theoretical study suggest that it is possible to reconstruct the deep body temperature distribution with the inverse method and that this approach merits further investigation. (paper)
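
    The inverse problem described above is ill-posed and is typically stabilized by regularization. The sketch below is a minimal Tikhonov-regularized linear inverse solve on a toy one-dimensional stand-in; the Gaussian forward model, geometry, regularization weight and noise level are illustrative assumptions, not the authors' abdominal model:

```python
import numpy as np

def tikhonov_inverse(A, y, alpha):
    """Solve min ||A x - y||^2 + alpha ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

rng = np.random.default_rng(0)

# Hypothetical forward model: each surface sensor sees a Gaussian-weighted
# average of the deep temperature profile (a stand-in for the bioheat model).
deep = np.linspace(0.0, 1.0, 8)       # deep tissue node positions
surf = np.linspace(0.0, 1.0, 16)      # 16 surface sensors, as in the study
A = np.exp(-0.5 * ((surf[:, None] - deep[None, :]) / 0.1) ** 2)

x_true = 37.0 + 0.5 * np.sin(np.pi * deep)          # deep profile in °C
y = A @ x_true + rng.normal(0.0, 0.05, surf.size)   # sensors with 0.05 °C noise

# Estimate the deviation from a 37 °C core reference, then add it back;
# regularizing the deviation rather than the raw temperature avoids biasing
# the estimate toward zero.
baseline = np.full(deep.size, 37.0)
dx = tikhonov_inverse(A, y - A @ baseline, alpha=1e-3)
x_est = baseline + dx
```

    With the 0.05 °C sensor noise assumed here, the regularized estimate typically tracks the true deep profile to within a few tenths of a degree, the same order of accuracy the study reports for its optimized sensor arrangement.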

  2. Entropic and Electrostatic Effects on the Folding Free Energy of a Surface-Attached Biomolecule: An Experimental and Theoretical Study

    Science.gov (United States)

    Watkins, Herschel M.; Vallée-Bélisle, Alexis; Ricci, Francesco; Makarov, Dmitrii E.; Plaxco, Kevin W.

    2012-01-01

    Surface-tethered biomolecules play key roles in many biological processes and biotechnologies. However, while the physical consequences of such surface attachment have seen significant theoretical study, to date the issue has seen relatively little experimental investigation. In response we present here a quantitative experimental and theoretical study of the extent to which attachment to a charged (but otherwise apparently inert) surface alters the folding free energy of a simple biomolecule. Specifically, we have measured the folding free energy of a DNA stem loop both in solution and when site-specifically attached to a negatively charged, hydroxyl-alkane-coated gold surface. We find that, whereas surface attachment is destabilizing at low ionic strength, it becomes stabilizing at ionic strengths above ~130 mM. This behavior presumably reflects two competing mechanisms: excluded-volume effects, which stabilize the folded conformation by reducing the entropy of the unfolded state, and electrostatics, which, at lower ionic strengths, destabilizes the more compact folded state via repulsion from the negatively charged surface. To test this hypothesis we have employed existing theories of the electrostatics of surface-bound polyelectrolytes and the entropy of surface-bound polymers to model both effects. Despite lacking any fitted parameters, these theoretical models quantitatively fit our experimental results, suggesting that, for this system, current knowledge of both surface electrostatics and excluded-volume effects is reasonably complete and accurate. PMID:22239220
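
    The ionic-strength dependence above can be put in context with the Debye screening length, which sets the range of the electrostatic repulsion from the charged surface. A minimal sketch using the standard room-temperature approximation for a 1:1 aqueous electrolyte (the ~130 mM crossover itself is the paper's experimental result, not a prediction of this formula):

```python
import math

def debye_length_nm(ionic_strength_molar):
    """Debye screening length in water at 25 °C for a 1:1 electrolyte,
    using the standard approximation lambda_D ≈ 0.304 / sqrt(I) nm."""
    return 0.304 / math.sqrt(ionic_strength_molar)
```

    At the reported ~130 mM crossover the screening length is already below 1 nm, so the destabilizing surface repulsion is short-ranged compared with the dimensions of the tethered stem loop, whereas at 10 mM it extends to roughly 3 nm.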

  3. Theoretical study of solvent effects on the coil-globule transition

    Science.gov (United States)

    Polson, James M.; Opps, Sheldon B.; Abou Risk, Nicholas

    2009-06-01

    The coil-globule transition of a polymer in a solvent has been studied using Monte Carlo simulations of a single chain subject to intramolecular interactions as well as a solvent-mediated effective potential. This solvation potential was calculated using several different theoretical approaches for two simple polymer/solvent models, each employing hard-sphere chains and hard-sphere solvent particles as well as attractive square-well potentials between some interaction sites. For each model, collapse is driven by variation in a parameter which changes the energy mismatch between monomers and solvent particles. The solvation potentials were calculated using two fundamentally different methodologies, each designed to predict the conformational behavior of polymers in solution: (1) the polymer reference interaction site model (PRISM) theory and (2) a many-body solvation potential (MBSP) based on scaled particle theory introduced by Grayce [J. Chem. Phys. 106, 5171 (1997)]. For the PRISM calculations, two well-studied solvation monomer-monomer pair potentials were employed, each distinguished by the closure relation used in its derivation: (i) a hypernetted-chain (HNC)-type potential and (ii) a Percus-Yevick (PY)-type potential. The theoretical predictions were each compared to results obtained from explicit-solvent discontinuous molecular dynamics simulations on the same polymer/solvent model systems [J. Chem. Phys. 125, 194904 (2006)]. In each case, the variation in the coil-globule transition properties with solvent density is mostly qualitatively correct, though the quantitative agreement between theory and simulation is typically poor. The HNC-type potential yields results that are more qualitatively consistent with simulation. The conformational behavior of the polymer upon collapse predicted by the MBSP approach is quantitatively correct for low and moderate solvent densities but is increasingly less accurate for higher densities. At high solvent densities

  4. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    International Nuclear Information System (INIS)

    Bindschadler, Michael; Alessio, Adam M; Modgil, Dimple; La Riviere, Patrick J; Branch, Kelley R

    2014-01-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹; cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5%, while the quantitative models provided estimates with less than 6.5% average bias and variance that increased with increasing dose reduction. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This
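
    The compartment-model fitting that the quantitative methods share can be illustrated with a deliberately simplified one-compartment (Kety-type) sketch: simulate a tissue time-attenuation curve as the arterial input convolved with a flow-dependent impulse response, then recover the flow by least squares. All curves, units and parameter values below are illustrative assumptions, not the paper's simulation framework:

```python
import numpy as np

def tissue_curve(flow, aif, t, vd=0.15):
    """One-compartment (Kety) model: C(t) = flow * [AIF conv exp(-flow/vd * t)].
    A simplified stand-in for the paper's two-compartment and distributed models."""
    dt = t[1] - t[0]
    irf = np.exp(-(flow / vd) * t)                # flow-dependent impulse response
    return flow * np.convolve(aif, irf)[: t.size] * dt

t = np.arange(0.0, 60.0, 1.0)                     # 1 s sampling over 60 s
aif = np.exp(-0.5 * ((t - 15.0) / 4.0) ** 2)      # idealized arterial input (a.u.)

true_flow = 2.0 / 60.0                            # 2 ml/(min g), expressed per second
c_meas = tissue_curve(true_flow, aif, t)          # "measured" tissue curve

# Grid-search the flow value that best reproduces the measured curve.
grid = np.linspace(0.5, 3.0, 251) / 60.0
errs = [float(np.sum((tissue_curve(f, aif, t) - c_meas) ** 2)) for f in grid]
flow_est = grid[int(np.argmin(errs))] * 60.0      # back to ml/(min g)
```

    On this noiseless toy curve the grid search recovers the simulated 2 ml (min g)⁻¹ flow; the paper's contribution is quantifying how such estimates degrade as the sampling interval lengthens and the tube current is reduced.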

  5. Three General Theoretical Models in Sociology: An Articulated “(Dis)unity”?

    Directory of Open Access Journals (Sweden)

    Thaís García-Pereiro

    2015-01-01

    After a brief comparative reconstruction of the three most general theoretical models underlying contemporary Sociology (atomic, systemic, and fluid), it becomes necessary to review the question of the unity or plurality of Sociology, which is the main objective of this paper. To do so, the basic terms of the question are first updated by following the hegemonic trends in current studies of science. Secondly, the convergences and divergences among the three models discussed are shown. Following some additional discussion, the conclusion is reached that contemporary Sociology is not unitary, and need not be so. It is plural, but its plurality is limited and articulated by those very models. It may therefore be portrayed as integrated and commensurable, to the extent that a partial and unstable (dis)unity may be said to exist in Sociology, which is not too far off from what happens in the natural sciences.

  6. Theoretical Aspects of Erroneous Actions During the Process of Decision Making by Air Traffic Control

    Directory of Open Access Journals (Sweden)

    Andersone Silva

    2017-08-01

    This article evaluates the factors affecting the operational decision-making of a human air traffic controller interacting in a dynamic environment with the flight crew, surrounding aircraft traffic and the environmental conditions of the airspace. It reviews the challenges of air traffic control under conditions ranging from normal and complex to emergency and catastrophic. Workload factors and operating conditions affect air traffic controllers’ decision-making. The proposed model compares various operating conditions within an assumed air traffic control environment against a theoretically “perfect” air traffic control system. A mathematical model of flight safety assessment is proposed for the quantitative assessment of the various hazards arising during air traffic control. The model assumes events of varying severity and probability, ranging from high-frequency, low-severity occurrences to less likely, catastrophic ones. Certain limitations of the model are recognised and further improvements for effective hazard evaluation are suggested.
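
    The severity/probability structure described can be sketched as a conventional risk-index matrix in the spirit of ICAO-style safety assessment; the category names, weights and acceptability threshold below are illustrative assumptions, not the authors' model:

```python
# Ordinal severity and probability categories mapped to illustrative weights.
SEVERITY = {"negligible": 1, "minor": 2, "major": 3, "hazardous": 4, "catastrophic": 5}
PROBABILITY = {"extremely improbable": 1, "improbable": 2, "remote": 3,
               "occasional": 4, "frequent": 5}

def risk_index(severity, probability):
    """Combined risk score: severity weight times probability weight."""
    return SEVERITY[severity] * PROBABILITY[probability]

def acceptable(severity, probability, threshold=10):
    """Events scoring at or above the threshold require mitigation."""
    return risk_index(severity, probability) < threshold
```

    Under these assumed weights a "major" event that is "frequent" scores 15 and requires mitigation, while a "minor" event that is merely "remote" scores 6 and is acceptable; a "catastrophic" event is tolerated only when "extremely improbable".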

  7. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    International Nuclear Information System (INIS)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel; Cuevas, Sergio; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal probability density functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the scalar PDFs to be predicted in agreement with numerical and experimental results. This model also indicates that the scalar PDFs are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.

  8. Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan; Vatrapu, Ravi

    2016-01-01

    Recent advancements in set theory and readily available software have enabled social science researchers to bridge the variable-centered quantitative and case-based qualitative methodological paradigms in order to analyze multi-dimensional associations beyond linearity assumptions, aggregate effects, unicausal reduction, and case specificity. Based on the developments in set-theoretical thinking in the social sciences and employing methods like Qualitative Comparative Analysis (QCA), Necessary Condition Analysis (NCA), and set visualization techniques, in this position paper we propose and demonstrate a new approach to maturity models in the domain of Information Systems. The paper describes the set-theoretical approach to maturity models, presents current results and outlines future research work.
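
    One of the set-theoretic building blocks mentioned, the consistency of a necessity claim in fuzzy-set QCA (Ragin's measure), is compact enough to sketch; the membership scores below are invented for illustration:

```python
def necessity_consistency(condition, outcome):
    """Ragin's consistency for the claim 'condition is necessary for outcome':
    sum over cases of min(x_i, y_i), divided by the sum of y_i."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(outcome)

# Hypothetical fuzzy membership scores for four organizations.
maturity_capability = [0.9, 0.8, 0.7, 1.0]   # condition memberships
high_performance = [0.8, 0.6, 0.7, 0.9]      # outcome memberships
```

    In this invented data set the condition membership is at least the outcome membership in every case, so the necessity consistency is 1.0, supporting (for these cases) the claim that the capability is necessary for high performance; scores well below 1 would undermine it.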

  9. Recent trends in social systems quantitative theories and quantitative models

    CERN Document Server

    Hošková-Mayerová, Šárka; Soitu, Daniela-Tatiana; Kacprzyk, Janusz

    2017-01-01

    The papers collected in this volume focus on new perspectives on individuals, society, and science, specifically in the field of socio-economic systems. The book is the result of a scientific collaboration among experts from “Alexandru Ioan Cuza” University of Iaşi (Romania), “G. d’Annunzio” University of Chieti-Pescara (Italy), "University of Defence" of Brno (Czech Republic), and "Pablo de Olavide" University of Sevilla (Spain). The heterogeneity of the contributions presented in this volume reflects the variety and complexity of social phenomena. The book is divided into four Sections as follows. The first Section deals with recent trends in social decisions. Specifically, it aims to understand which are the driving forces of social decisions. The second Section focuses on the social and public sphere. Indeed, it is oriented on recent developments in social systems and control. Trends in quantitative theories and models are described in Section 3, where many new formal, mathematical-statistical to...

  10. Theoretical investigations of the new Cokriging method for variable-fidelity surrogate modeling

    DEFF Research Database (Denmark)

    Zimmermann, Ralf; Bertram, Anna

    2018-01-01

    Cokriging is a variable-fidelity surrogate modeling technique which emulates a target process based on the spatial correlation of sampled data of different levels of fidelity. In this work, we address two theoretical questions associated with the so-called new Cokriging method for variable fidelity...

  11. Theoretical Models and Operational Frameworks in Public Health Ethics

    Science.gov (United States)

    Petrini, Carlo

    2010-01-01

    The article is divided into three sections: (i) an overview of the main ethical models in public health (theoretical foundations); (ii) a summary of several published frameworks for public health ethics (practical frameworks); and (iii) a few general remarks. Rather than maintaining the superiority of one position over the others, the main aim of the article is to summarize the basic approaches proposed thus far concerning the development of public health ethics by describing and comparing the various ideas in the literature. With this in mind, an extensive list of references is provided. PMID:20195441

  12. Linking agent-based models and stochastic models of financial markets.

    Science.gov (United States)

    Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H Eugene

    2012-05-29

    It is well known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that "fat" tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting.
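
    The paper's mechanism, traders sharing strategies and therefore trading as correlated blocks, can be caricatured in a few lines: compare returns aggregated from independent ±1 decisions with returns driven by a single correlated cluster per step. The Zipf-distributed cluster sizes and all other parameters are illustrative assumptions, not the authors' calibrated model:

```python
import numpy as np

rng = np.random.default_rng(42)

def excess_kurtosis(x):
    """Sample excess kurtosis; zero for a Gaussian, positive for fat tails."""
    x = x - x.mean()
    return float(np.mean(x**4) / np.mean(x**2) ** 2 - 3.0)

n_traders, steps = 1024, 4000

# Baseline: independent +/-1 decisions aggregate to near-Gaussian returns.
indep = rng.choice([-1.0, 1.0], size=(steps, n_traders)).sum(axis=1)
indep /= np.sqrt(n_traders)

# Herding caricature: each step, one cluster of traders shares a strategy and
# trades as a block; heavy-tailed (Zipf) cluster sizes are an assumption made
# here to stand in for endogenous strategy-sharing.
sizes = np.minimum(rng.zipf(1.8, size=steps), n_traders)
signs = rng.choice([-1.0, 1.0], size=steps)
herd = signs * sizes / np.sqrt(n_traders)
```

    The independent baseline has excess kurtosis near zero, while the clustered version produces the pronounced fat tails that the abstract attributes to shared trading strategies.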

  13. Health Professionals' Explanations of Suicidal Behaviour: Effects of Professional Group, Theoretical Intervention Model, and Patient Suicide Experience.

    Science.gov (United States)

    Rothes, Inês Areal; Henriques, Margarida Rangel

    2017-12-01

    In a help relation with a suicidal person, the theoretical models of suicidality can be essential to guide the health professional's comprehension of the client/patient. The objectives of this study were to identify health professionals' explanations of suicidal behaviors and to study the effects of professional group, theoretical intervention models, and patient suicide experience in professionals' representations. Two hundred and forty-two health professionals filled out a self-report questionnaire. Exploratory principal components analysis was used. Five explanatory models were identified: psychological suffering, affective cognitive, sociocommunicational, adverse life events, and psychopathological. Results indicated that the psychological suffering and psychopathological models were the most valued by the professionals, while the sociocommunicational was seen as the least likely to explain suicidal behavior. Differences between professional groups were found. We concluded that training and reflection on theoretical models in general and in communicative issues in particular are needed in the education of health professionals.

  14. A game theoretic model of the Northwestern European electricity market-market power and the environment

    NARCIS (Netherlands)

    Lise, W.; Linderhof, V.G.M.; Kuik, O.; Kemfert, C.; Ostling, R.; Heinzow, T.

    2006-01-01

    This paper develops a static computational game theoretic model. Illustrative results for the liberalising European electricity market are given to demonstrate the type of economic and environmental results that can be generated with the model. The model is empirically calibrated to eight

  15. Kinetic Adsorption Study of Silver Nanoparticles on Natural Zeolite: Experimental and Theoretical Models

    Directory of Open Access Journals (Sweden)

    Alvaro Ruíz-Baltazar

    2015-12-01

    In this research, the adsorption capacity of Ag nanoparticles on natural zeolite from Oaxaca is presented. In order to describe the adsorption mechanism of silver nanoparticles on zeolite, adsorption experiments for Ag ions and Ag nanoparticles were carried out. The experimental data, obtained by the atomic absorption spectrophotometry technique, were compared with theoretical models such as the Lagergren first-order, pseudo-second-order, Elovich, and intraparticle diffusion models. Correlation factors R² of the order of 0.99 were observed. Analysis by transmission electron microscopy describes the distribution of the silver nanoparticles on the zeolite outer surface. Additionally, a chemical characterization of the material was carried out through a dilution process with lithium metaborate. An average value of 9.3 for the Si/Al ratio was observed. Factors such as the adsorption behavior of the silver ions and the Si/Al ratio of the zeolite are very important to support the theoretical models and establish the adsorption mechanism of Ag nanoparticles on natural zeolite.
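
    Of the kinetic models compared above, the pseudo-second-order model is the easiest to sketch: its linearized form t/q(t) = 1/(k·qe²) + t/qe turns parameter estimation into a straight-line fit. The uptake values below are synthetic (assumed qe and k), not the paper's measurements:

```python
import numpy as np

def pso_uptake(t, qe, k):
    """Pseudo-second-order kinetics: q(t) = qe^2 k t / (1 + qe k t)."""
    return qe**2 * k * t / (1.0 + qe * k * t)

# Synthetic uptake data with assumed parameters (units taken as mg/g and
# g/(mg min) for illustration).
qe_true, k_true = 12.0, 0.015
t = np.linspace(1.0, 120.0, 25)          # minutes
q = pso_uptake(t, qe_true, k_true)

# Linearized form: t/q = 1/(k qe^2) + t/qe  ->  fit a straight line.
slope, intercept = np.polyfit(t, t / q, 1)
qe_est = 1.0 / slope
k_est = slope**2 / intercept
```

    On noiseless synthetic data the linear fit recovers qe and k essentially exactly; with real measurements, the near-unity R² values reported above are what justify selecting one kinetic model over the others.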

  16. A consistent meson-theoretic description of the NN-interaction

    International Nuclear Information System (INIS)

    Machleidt, R.

    1985-01-01

    In this paper, the meson theory of the NN-interaction is developed consistently. All irreducible diagrams up to a total exchanged mass of about 1 GeV (i.e. up to the cutoff region) are taken into account. These diagrams contain in particular an explicit field-theoretic model for the 2π-exchange, taking into account virtual Δ-excitation and the direct ππ-interaction. This part of the model agrees quantitatively with results obtained from dispersion theory, which in turn are based on the analysis of πN- and ππ-scattering data. A detailed description of the lower partial-wave phase shifts of NN-scattering requires the introduction of irreducible diagrams containing also heavy-boson exchange, in particular the combination of π and ρ. In the framework of this consistent meson theory an accurate description of the NN-scattering data below 300 MeV laboratory energy as well as the deuteron data is achieved; the numerical results are superior to those of simplified boson-exchange models

  17. The interrogation decision-making model: A general theoretical framework for confessions.

    Science.gov (United States)

    Yang, Yueran; Guyll, Max; Madon, Stephanie

    2017-02-01

    This article presents a new model of confessions referred to as the interrogation decision-making model. This model provides a theoretical umbrella with which to understand and analyze suspects' decisions to deny or confess guilt in the context of a custodial interrogation. The model draws upon expected utility theory to propose a mathematical account of the psychological mechanisms that underlie suspects' decisions to deny or confess guilt at any specific point during an interrogation, and of how confession decisions can change over time. Findings from the extant literature pertaining to confessions are considered to demonstrate how the model offers a comprehensive and integrative framework for organizing a range of effects within a limited set of model parameters. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
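The expected-utility comparison at the heart of such a model can be sketched in minimal form. All probabilities and utilities below are purely hypothetical placeholders for illustration, not parameters from the model itself:

```python
def expected_utility(probabilities, utilities):
    """Expected utility of an action: sum of probability-weighted outcome utilities."""
    return sum(p * u for p, u in zip(probabilities, utilities))

# Hypothetical numbers: denying risks a harsher sentence if convicted at trial,
# while confessing yields a certain but lighter penalty.
eu_deny = expected_utility([0.6, 0.4], [-10.0, 0.0])   # convicted / acquitted
eu_confess = expected_utility([1.0], [-4.0])           # certain lighter penalty
decision = "confess" if eu_confess > eu_deny else "deny"
```

Because the perceived probabilities and utilities can shift as an interrogation unfolds, re-evaluating this comparison over time captures how a decision to deny can turn into a decision to confess.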

  18. Hardening of particle/oil/water suspensions due to capillary bridges: Experimental yield stress and theoretical interpretation.

    Science.gov (United States)

    Danov, Krassimir D; Georgiev, Mihail T; Kralchevsky, Peter A; Radulova, Gergana M; Gurkov, Theodor D; Stoyanov, Simeon D; Pelan, Eddie G

    2018-01-01

    Suspensions of colloid particles possess the remarkable property of solidifying upon the addition of a minimal amount of a second liquid that preferentially wets the particles. The hardening is due to the formation of capillary bridges (pendular rings) that connect the particles. Here, we review works on the mechanical properties of such suspensions and related works on the capillary-bridge force, and present new rheological data for the weakly studied concentration range of 30-55 vol% particles. The mechanical strength of the solidified capillary suspensions, characterized by the yield stress Y, is measured at the elastic limit for various volume fractions of the particles and the preferentially wetting liquid. A quantitative theoretical model is developed, which relates Y to the maximum of the capillary-bridge force, projected on the shear plane. A semi-empirical expression for the mean number of capillary bridges per particle is proposed. The model agrees very well with the experimental data and gives a quantitative description of the yield stress, which increases with the rise of interfacial tension and with the volume fractions of particles and capillary bridges, but decreases with the rise of particle radius and contact angle. The quantitative description of the capillary force is based on the exact theory and numerical calculation of the capillary-bridge profile at various bridge volumes and contact angles. An analytical formula for Y is also derived. The comparison of the theoretical and experimental strain at the elastic limit reveals that the fluidization of the capillary suspension takes place only in a deformation zone of thickness up to several hundred particle diameters, which is adjacent to the rheometer's mobile plate. The reported experimental results refer to a water-continuous suspension with hydrophobic particles and oily capillary bridges. The comparison of data for bridges from soybean oil and hexadecane surprisingly indicates that the yield strength is
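The qualitative trends reported here (Y grows with interfacial tension and particle fraction, falls with particle radius and contact angle) are reproduced by a classical Rumpf-type scaling combined with the toroidal approximation for the pendular-bridge force, F = 2πRγcosθ. This is a simplified textbook sketch, not the authors' exact model, and the numbers are illustrative:

```python
import math

def bridge_force(R, gamma, theta_deg):
    """Pendular-bridge force at particle contact, toroidal approximation:
    F = 2*pi*R*gamma*cos(theta)."""
    return 2.0 * math.pi * R * gamma * math.cos(math.radians(theta_deg))

def yield_stress(phi, z, R, gamma, theta_deg):
    """Rumpf-type estimate: Y = phi * z * F / (pi * d^2), with d = 2R and
    z the mean number of capillary bridges per particle."""
    F = bridge_force(R, gamma, theta_deg)
    d = 2.0 * R
    return phi * z * F / (math.pi * d * d)

# Illustrative numbers: micron-scale particles, oil-water tension ~30 mN/m
Y = yield_stress(phi=0.45, z=6, R=1.0e-6, gamma=0.030, theta_deg=30.0)  # Pa
```

With these inputs Y comes out in the tens of kilopascals, and the formula makes the parameter dependences explicit: Y scales as γcosθ/R times the particle fraction and bridge coordination number.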

  19. Group theoretical construction of two-dimensional models with infinite sets of conservation laws

    International Nuclear Information System (INIS)

    D'Auria, R.; Regge, T.; Sciuto, S.

    1980-01-01

    We explicitly construct some classes of field-theoretical 2-dimensional models associated with symmetric spaces G/H according to a general scheme proposed in an earlier paper. We treat the SO(n + 1)/SO(n) and SU(n + 1)/U(n) cases, giving their relationship with the O(n) sigma-models and the CP(n) models. Moreover, we present a new class of models associated with the SU(n)/SO(n) case. All these models are shown to possess an infinite set of local conservation laws. (orig.)

  20. A quantitative risk-based model for reasoning over critical system properties

    Science.gov (United States)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many of the critical properties such as security, safety, survivability, fault tolerance, and real-time.

  1. Photoluminescence of crystalline and disordered BTO:Mn powder: Experimental and theoretical modeling

    International Nuclear Information System (INIS)

    Gurgel, M.F.C.; Espinosa, J.W.M.; Campos, A.B.; Rosa, I.L.V.; Joya, M.R.; Souza, A.G.; Zaghete, M.A.; Pizani, P.S.; Leite, E.R.; Varela, J.A.; Longo, E.

    2007-01-01

    Disordered and crystalline Mn-doped BaTiO3 (BTO:Mn) powders were synthesized by the polymeric precursor method. After heat treatment, the nature of the visible photoluminescence (PL) at room temperature in amorphous BTO:Mn was discussed, considering results of experimental and theoretical studies. X-ray diffraction (XRD), PL, and UV-vis spectroscopy were used to characterize this material. Rietveld refinement of the BTO:Mn XRD data was used to build two models, which represent the crystalline (BTO:Mn-c) and disordered (BTO:Mn-d) structures. These models were analyzed by periodic ab initio quantum mechanical calculations using the CRYSTAL98 package within the framework of density functional theory at the B3LYP level. The experimental and theoretical results indicate that the PL is related to the degree of disorder in the BTO:Mn powders and also suggest the presence of localized states in the disordered structure.

  2. Tip-Enhanced Raman Voltammetry: Coverage Dependence and Quantitative Modeling.

    Science.gov (United States)

    Mattei, Michael; Kang, Gyeongwon; Goubert, Guillaume; Chulhai, Dhabih V; Schatz, George C; Jensen, Lasse; Van Duyne, Richard P

    2017-01-11

    Electrochemical atomic force microscopy tip-enhanced Raman spectroscopy (EC-AFM-TERS) was employed for the first time to observe nanoscale spatial variations in the formal potential, E0', of a surface-bound redox couple. TERS cyclic voltammograms (TERS CVs) of single Nile Blue (NB) molecules were acquired at different locations spaced 5-10 nm apart on an indium tin oxide (ITO) electrode. Analysis of TERS CVs at different coverages was used to verify the observation of single-molecule electrochemistry. The resulting TERS CVs were fit to the Laviron model for surface-bound electroactive species to quantitatively extract the formal potential E0' at each spatial location. Histograms of single-molecule E0' at each coverage indicate that the electrochemical behavior of the cationic oxidized species is less sensitive to local environment than the neutral reduced species. This information is not accessible using purely electrochemical methods or ensemble spectroelectrochemical measurements. We anticipate that quantitative modeling and measurement of site-specific electrochemistry with EC-AFM-TERS will have a profound impact on our understanding of the role of nanoscale electrode heterogeneity in applications such as electrocatalysis, biological electron transfer, and energy production and storage.

  3. Quantitative ion implantation

    International Nuclear Information System (INIS)

    Gries, W.H.

    1976-06-01

    This is a report of the study of the implantation of heavy ions at medium keV-energies into electrically conducting mono-elemental solids, at ion doses too small to cause significant loss of the implanted ions by resputtering. The study has been undertaken to investigate the possibility of accurate portioning of matter in submicrogram quantities, with some specific applications in mind. The problem is extensively investigated both on a theoretical level and in practice. A mathematical model is developed for calculating the loss of implanted ions by resputtering as a function of the implanted ion dose and the sputtering yield. Numerical data are produced therefrom which permit a good order-of-magnitude estimate of the loss for any ion/solid combination in which the ions are heavier than the solid atoms, and for any ion energy from 10 to 300 keV. The implanted ion dose is measured by integration of the ion beam current, and equipment and techniques are described which make possible the accurate integration of an ion current in an electromagnetic isotope separator. The methods are applied to two sample cases, one being a stable isotope, the other a radioisotope. In both cases independent methods are used to show that the implantation is indeed quantitative, as predicted. At the same time the sample cases are used to demonstrate two possible applications for quantitative ion implantation, viz. firstly for the manufacture of calibration standards for instrumental micromethods of elemental trace analysis in metals, and secondly for the determination of the half-lives of long-lived radioisotopes by a specific activity method. It is concluded that the present study has advanced quantitative ion implantation to the state where it can be successfully applied to the solution of problems in other fields
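When resputtering losses are negligible, the dose bookkeeping described above reduces to converting the integrated beam charge into a number of ions and then into a mass. The beam parameters below are hypothetical values chosen only to show the submicrogram scale:

```python
E_CHARGE = 1.602176634e-19   # elementary charge, C
AVOGADRO = 6.02214076e23     # Avogadro constant, 1/mol

def implanted_mass_ng(current_a, time_s, molar_mass_g, charge_state=1):
    """Mass implanted (ng) from the integrated beam current,
    ignoring resputtering losses (valid at small doses)."""
    n_ions = current_a * time_s / (charge_state * E_CHARGE)
    return n_ions * molar_mass_g / AVOGADRO * 1e9

# e.g. a 0.5 uA beam of singly charged Ag-107 ions integrated over 100 s
mass_ng = implanted_mass_ng(0.5e-6, 100.0, 107.0)
```

The result is a few tens of nanograms, which is exactly the submicrogram portioning regime the study targets; at larger doses the resputtering correction from the report's mathematical model would have to be applied.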

  4. Theoretical Background for the Decision-Making Process Modelling under Controlled Intervention Conditions

    OpenAIRE

    Bakanauskienė Irena; Baronienė Laura

    2017-01-01

    This article is intended to theoretically justify the decision-making process model for the cases when active participation of investing entities in controlling the activities of an organisation and their results is noticeable. Based on scientific literature analysis, a concept of controlled conditions is formulated, and using a rational approach to the decision-making process, a model of the 11-step decision-making process under controlled intervention is presented. Also, there have been u...

  5. Genomic value prediction for quantitative traits under the epistatic model

    Directory of Open Access Journals (Sweden)

    Xu Shizhong

    2011-01-01

    Full Text Available Abstract Background Most quantitative traits are controlled by multiple quantitative trait loci (QTL). The contribution of each locus may be negligible but the collective contribution of all loci is usually significant. Genome selection, which uses markers of the entire genome to predict the genomic values of individual plants or animals, can be more efficient than selection on phenotypic values and pedigree information alone for genetic improvement. When a quantitative trait is contributed by epistatic effects, using all markers (main effects) and marker pairs (epistatic effects) to predict the genomic values of plants can achieve the maximum efficiency for genetic improvement. Results In this study, we created 126 recombinant inbred lines of soybean and genotyped 80 markers across the genome. We applied the genome selection technique to predict the genomic value of somatic embryo number (a quantitative trait) for each line. Cross-validation analysis showed that the squared correlation coefficient between the observed and predicted embryo numbers was 0.33 when only main (additive) effects were used for prediction. When the interaction (epistatic) effects were also included in the model, the squared correlation coefficient reached 0.78. Conclusions This study provides an excellent example of the application of genome selection to plant breeding.
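The gain from adding epistatic (marker-pair) predictors can be illustrated with ridge regression on simulated recombinant-inbred-line genotypes. The dimensions, effect sizes, and penalty below are toy values chosen for the sketch, not the soybean data or the paper's estimation method:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 120, 12
X = rng.choice([-1.0, 1.0], size=(n, m))   # RIL marker codes, toy scale

# Simulated trait: two additive effects plus one epistatic pair, plus noise
y = 1.0 * X[:, 0] - 0.8 * X[:, 3] + 1.5 * X[:, 1] * X[:, 2] + rng.normal(0.0, 0.5, n)

def ridge_predict(Xtr, ytr, Xte, lam=1.0):
    """Ridge solution beta = (X'X + lam*I)^-1 X'y, then predict on Xte."""
    p = Xtr.shape[1]
    beta = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(p), Xtr.T @ ytr)
    return Xte @ beta

def add_epistasis(Z):
    """Append all pairwise marker products as epistatic predictors."""
    k = Z.shape[1]
    pairs = [Z[:, i] * Z[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack([Z] + pairs)

train, test = np.arange(90), np.arange(90, n)
results = {}
for name, Z in (("main", X), ("main+epistatic", add_epistasis(X))):
    pred = ridge_predict(Z[train], y[train], Z[test])
    results[name] = np.corrcoef(pred, y[test])[0, 1] ** 2
```

As in the study's cross-validation, the model including marker-pair terms recovers the epistatic variance that the main-effects-only model misses, so its predictive R² is substantially higher.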

  6. [A quantitative risk assessment model of salmonella on carcass in poultry slaughterhouse].

    Science.gov (United States)

    Zhang, Yu; Chen, Yuzhen; Hu, Chunguang; Zhang, Huaning; Bi, Zhenwang; Bi, Zhenqiang

    2015-05-01

    To construct a quantitative risk assessment model of Salmonella on carcasses in a poultry slaughterhouse and to find effective interventions to reduce Salmonella contamination, we constructed a modular process risk model (MPRM) from evisceration to chilling in an Excel sheet, using data on the process parameters in poultry and the Salmonella concentration surveillance of Jinan in 2012. The MPRM was simulated with @Risk software. The concentration of Salmonella on carcasses after chilling, as calculated by the model, was 1.96 MPN/g. The sensitivity analysis indicated that the correlation coefficients of the concentration of Salmonella after defeathering and in the chilling pool were 0.84 and 0.34, respectively, making these the primary factors determining the concentration of Salmonella on carcasses after chilling. The study provides a quantitative assessment model structure for Salmonella on carcasses in a poultry slaughterhouse. Risk managers could control the contamination of Salmonella on carcasses after chilling by reducing the concentration of Salmonella after defeathering and in the chilling pool.
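The modular structure of an MPRM can be sketched as a Monte Carlo chain in which each processing step shifts the log10 concentration on the carcass. All module means and spreads below are made-up placeholders for illustration, not the Jinan surveillance data:

```python
import random
import statistics

random.seed(42)

# Hypothetical log10 levels and per-module log10 changes (mean, sd).
START = (0.5, 0.3)                        # log10 MPN/g after defeathering
MODULES = {"evisceration": (0.2, 0.1),    # contamination added
           "washing": (-0.4, 0.15),       # reduction
           "chilling": (-0.3, 0.2)}       # reduction

def simulate_once():
    """One random carcass: start level plus the change from each module."""
    log_c = random.gauss(*START)
    for mu, sd in MODULES.values():
        log_c += random.gauss(mu, sd)
    return log_c

samples = [10 ** simulate_once() for _ in range(10_000)]
mean_mpn = statistics.fmean(samples)      # mean concentration after chilling, MPN/g
```

A sensitivity analysis like the paper's then follows from correlating the sampled module inputs with the final concentration across iterations, which is what @Risk automates on top of the spreadsheet model.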

  7. Modeling opinion dynamics: Theoretical analysis and continuous approximation

    International Nuclear Information System (INIS)

    Pinasco, Juan Pablo; Semeshenko, Viktoriya; Balenzuela, Pablo

    2017-01-01

    Highlights: • We study a simple model of persuasion dynamics with long-range pairwise interactions. • The continuous limit of the master equation is a nonlinear, nonlocal, first-order partial differential equation. • We compute the analytical solutions to this equation and compare them with simulations of the dynamics. - Abstract: Frequently we revise our first opinions after talking with other individuals, because we get convinced. Argumentation is a verbal and social process aimed at convincing; it includes conversation and persuasion, and agreement is reached because new arguments are incorporated. Despite the wide range of mathematical approaches to opinion formation, there are no analytically solvable models of opinion dynamics with nonlocal pair interactions. In this paper we present a novel analytical framework developed to solve master equations with non-local kernels. For this we used a simple model of opinion formation in which individuals become more similar after each interaction, no matter their opinion differences, giving rise to a nonlinear differential master equation with non-local terms. Simulation results show excellent agreement with the theoretical estimates.
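The interaction rule described (agents always become more similar after meeting, regardless of how far apart their opinions are) can be simulated directly. The population size, convergence rate, and step count below are arbitrary illustrative choices:

```python
import random
import statistics

random.seed(0)
N, MU, STEPS = 200, 0.1, 20_000
opinions = [random.uniform(-1.0, 1.0) for _ in range(N)]
initial_mean = statistics.mean(opinions)
initial_spread = statistics.pstdev(opinions)

for _ in range(STEPS):
    i, j = random.sample(range(N), 2)
    xi, xj = opinions[i], opinions[j]
    opinions[i] = xi + MU * (xj - xi)   # each agent moves toward the other,
    opinions[j] = xj + MU * (xi - xj)   # no matter how far apart they are

final_spread = statistics.pstdev(opinions)
```

Because every pairwise interaction conserves the pair's mean while shrinking its difference by a factor (1 - 2μ), the population mean is conserved and the opinion spread decays toward consensus, the behavior the master-equation analysis captures in the continuum limit.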

  8. A Production Model for Construction: A Theoretical Framework

    Directory of Open Access Journals (Sweden)

    Ricardo Antunes

    2015-03-01

    Full Text Available The building construction industry faces challenges such as increasing project complexity and scope requirements, but shorter deadlines. Additionally, economic uncertainty and rising business competition, with a subsequent decrease in profit margins for the industry, demand the development of new approaches to construction management. However, the building construction sector relies on practices based on intuition and experience, overlooking the dynamics of its production system. Furthermore, researchers maintain that the construction industry has no history of the application of mathematical approaches to model and manage production. Much work has been carried out on how manufacturing practices apply to construction projects, mostly lean principles. Nevertheless, there has been little research to understand the fundamental mechanisms of production in construction. This study develops an in-depth literature review to examine the existing knowledge about production models and their characteristics in order to establish a foundation for dynamic production systems management in construction. As a result, a theoretical framework is proposed, which will be instrumental in the future development of mathematical production models aimed at predicting the performance and behaviour of dynamic project-based systems in construction.

  9. Theoretical background and experimental measurements of human brain noise intensity in perception of ambiguous images

    International Nuclear Information System (INIS)

    Runnova, Anastasiya E.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Kurovskaya, Maria K.; Pisarchik, Alexander N.

    2016-01-01

    We propose a theoretical approach associated with an experimental technique to quantitatively characterize cognitive brain activity in the perception of ambiguous images. Based on the developed theoretical background and the obtained experimental data, we introduce the concept of effective noise intensity characterizing cognitive brain activity and propose the experimental technique for its measurement. The developed theory, using the methods of statistical physics, provides a solid experimentally approved basis for further understanding of brain functionality. The rather simple way to measure the proposed quantitative characteristic of the brain activity related to the interpretation of ambiguous images will hopefully become a powerful tool for physicists, physiologists and medics. Our theoretical and experimental findings are in excellent agreement with each other.

  10. A semi-quantitative model for risk appreciation and risk weighing

    DEFF Research Database (Denmark)

    Bos, Peter M.J.; Boon, Polly E.; van der Voet, Hilko

    2009-01-01

    Risk managers need detailed information on (1) the type of effect, (2) the size (severity) of the expected effect(s) and (3) the fraction of the population at risk to decide on well-balanced risk reduction measures. A previously developed integrated probabilistic risk assessment (IPRA) model...... provides quantitative information on these three parameters. A semi-quantitative tool is presented that combines information on these parameters into easy-readable charts that will facilitate risk evaluations of exposure situations and decisions on risk reduction measures. This tool is based on a concept...... detailed information on the estimated health impact in a given exposure situation. These graphs will facilitate the discussions on appropriate risk reduction measures to be taken....

  11. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    Science.gov (United States)

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data, and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to environmental research centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article. © 2013 Elsevier B.V. All rights reserved.

  12. Functional linear models for association analysis of quantitative traits.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY
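A minimal fixed-effect functional linear model can be sketched by summarizing each individual's genotype "function" with Fourier basis scores and applying an overall F-test for association. The sample sizes, effect shape, and basis choice below are illustrative assumptions, not the paper's specification or the Trinity Students Study data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 300, 50                          # individuals, variants in the region
pos = np.linspace(0.0, 1.0, m)          # variant positions rescaled to [0, 1]
G = rng.binomial(2, 0.3, (n, m)).astype(float)   # additive genotype codes

# Smooth genetic effect beta(t), concentrated near the middle of the region
beta_t = 0.3 * np.exp(-((pos - 0.5) ** 2) / 0.01)
y = G @ beta_t + rng.normal(0.0, 1.0, n)

# Fourier basis on [0, 1]: constant + 3 sine + 3 cosine functions
Phi = np.column_stack(
    [np.ones(m)]
    + [np.sin(2 * np.pi * k * pos) for k in (1, 2, 3)]
    + [np.cos(2 * np.pi * k * pos) for k in (1, 2, 3)]
)
W = G @ Phi / m                         # n x 7 functional predictor scores

# Overall F-test: full model (intercept + scores) vs intercept only
X = np.column_stack([np.ones(n), W])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
sse = float(np.sum((y - X @ beta_hat) ** 2))
sst = float(np.sum((y - y.mean()) ** 2))
p = W.shape[1]
F = ((sst - sse) / p) / (sse / (n - p - 1))
```

Reducing many (possibly rare) variants to a handful of smooth basis scores is what lets the F-test pool linkage and LD information across the region instead of testing variants one at a time.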

  13. THEORETICAL FLOW MODEL THROUGH A CENTRIFUGAL PUMP USED FOR WATER SUPPLY IN AGRICULTURE IRRIGATION

    Directory of Open Access Journals (Sweden)

    SCHEAUA Fanel Dorel

    2017-05-01

    motion of the rotor. A theoretical model for calculating the flow of the working fluid through the interior of a centrifugal pump model is presented in this paper as well as the numerical analysis on the virtual model performed with the ANSYS CFX software in order to highlight the flow parameters and flow path-lines that are formed during centrifugal pump operation.

  14. A theoretical derivation of the Hoek–Brown failure criterion for rock materials

    Directory of Open Access Journals (Sweden)

    Jianping Zuo

    2015-08-01

    Full Text Available This study uses a three-dimensional crack model to theoretically derive the Hoek–Brown rock failure criterion based on linear elastic fracture theory. Specifically, we argue that a failure characteristic factor needs to exceed a critical value when macro-failure occurs. This factor is a product of the micro-failure orientation angle (characterizing the density and orientation of damaged micro-cracks) and the changing rate of the angle with respect to the major principal stress (characterizing the microscopic stability of damaged cracks). We further demonstrate that the factor mathematically leads to the empirical Hoek–Brown rock failure criterion. Thus, the proposed factor is able to successfully relate the evolution of microscopic damaged crack characteristics to macro-failure. Based on this theoretical development, we also propose a quantitative relationship between the brittle–ductile transition point and confining pressure, which is consistent with experimental observations.
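The empirical criterion that the derivation recovers can be stated compactly. Below is the original Hoek–Brown form for intact rock (s = 1), quoted as the standard formula rather than as the authors' derivation; the strength values are illustrative:

```python
import math

def hoek_brown_sigma1(sigma3, sigma_ci, m, s=1.0):
    """Hoek-Brown failure criterion:
    sigma1 = sigma3 + sigma_ci * sqrt(m * sigma3 / sigma_ci + s),
    where sigma_ci is the uniaxial compressive strength of the intact rock."""
    return sigma3 + sigma_ci * math.sqrt(m * sigma3 / sigma_ci + s)

# At zero confinement the criterion returns the uniaxial compressive strength
ucs = hoek_brown_sigma1(0.0, sigma_ci=100.0, m=10.0)   # 100.0 MPa
```

The curvature of sigma1 versus sigma3 is what the derivation ties to micro-crack damage, and the growth of strength with confining pressure underlies the proposed brittle–ductile transition relationship.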

  15. Using Mathematics, Mathematical Applications, Mathematical Modelling, and Mathematical Literacy: A Theoretical Study

    Science.gov (United States)

    Mumcu, Hayal Yavuz

    2016-01-01

    The purpose of this theoretical study is to explore the relationships between the concepts of using mathematics in the daily life, mathematical applications, mathematical modelling, and mathematical literacy. As these concepts are generally taken as independent concepts in the related literature, they are confused with each other and it becomes…

  16. Theoretical Models and Operational Frameworks in Public Health Ethics

    Directory of Open Access Journals (Sweden)

    Carlo Petrini

    2010-01-01

    Full Text Available The article is divided into three sections: (i) an overview of the main ethical models in public health (theoretical foundations); (ii) a summary of several published frameworks for public health ethics (practical frameworks); and (iii) a few general remarks. Rather than maintaining the superiority of one position over the others, the main aim of the article is to summarize the basic approaches proposed thus far concerning the development of public health ethics by describing and comparing the various ideas in the literature. With this in mind, an extensive list of references is provided.

  17. Oxidation of organics in water in microfluidic electrochemical reactors: Theoretical model and experiments

    International Nuclear Information System (INIS)

    Scialdone, Onofrio; Guarisco, Chiara; Galia, Alessandro

    2011-01-01

    The electrochemical oxidation of organics in water performed in microreactors on a boron-doped diamond (BDD) anode was investigated both theoretically and experimentally in order to find the influence of various operative parameters on the conversion and the current efficiency (CE) of the process. The electrochemical oxidation of formic acid (FA) was selected as a model case. High conversions for a single passage of the electrolytic solution through the cell were obtained by operating with proper residence times and small distances between cathode and anode. The effect of initial concentration, flow rate and current density was investigated in detail. Theoretical predictions were in very good agreement with experimental results for mass transfer control, oxidation reaction control and mixed kinetic regimes, in spite of the fact that no adjustable parameters were used. The mass transfer process was successfully modelled by considering, for simplicity, a constant Sh number (i.e., a constant mass transfer coefficient k_m) for a process performed at current intensities low enough to minimize the effect of gas bubbling on the flow pattern. For mixed kinetic regimes, two different modelling approaches were used. In the first one, the oxidation of organics at BDD was assumed to be mass transfer controlled and to occur with an intrinsic 100% CE when the applied current density is higher than the limiting current density. In the second, the CE of the process was modelled assuming that the competition between organic and water oxidation depends only on the electrode material and on the nature and concentration of the organic. In the latter case a better agreement between experimental data and theoretical predictions was observed.
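Under pure mass-transfer control with a constant k_m, the single-pass conversion in such a cell follows X = 1 − exp(−k_m·A/Q). The numbers below (Sherwood number, diffusivity, channel gap, electrode area, flow rate) are illustrative assumptions for a microreactor geometry, not the paper's values:

```python
import math

def single_pass_conversion(k_m, area, flow_rate):
    """Mass-transfer-limited conversion per pass: X = 1 - exp(-k_m * A / Q)."""
    return 1.0 - math.exp(-k_m * area / flow_rate)

Sh, D, gap = 3.0, 1.0e-9, 100e-6   # Sherwood number, diffusivity (m^2/s), channel gap (m)
k_m = Sh * D / gap                 # constant mass transfer coefficient, ~3e-5 m/s
Q = 0.1e-6 / 60.0                  # 0.1 mL/min expressed in m^3/s
X = single_pass_conversion(k_m, area=1.0e-4, flow_rate=Q)
```

The exponent k_m·A/Q grows as the interelectrode gap shrinks (larger k_m) and as the residence time lengthens (smaller Q), which is why high single-pass conversions are attainable in microreactors.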

  18. Delayed hydride cracking: theoretical model testing to predict cracking velocity

    International Nuclear Information System (INIS)

    Mieza, Juan I.; Vigna, Gustavo L.; Domizzi, Gladys

    2009-01-01

    Pressure tubes from CANDU nuclear reactors, like any other component manufactured from Zr alloys, are prone to delayed hydride cracking (DHC). That is why it is important to be able to predict the cracking velocity during the component lifetime from parameters that are easy to measure, such as hydrogen concentration and mechanical and microstructural properties. Two of the theoretical models reported in the literature to calculate the DHC velocity were chosen and combined; with the appropriate variables, this allowed a comparison with experimental results from samples of Zr-2.5 Nb tubes with different mechanical and structural properties. In addition, velocities measured by other authors in irradiated materials could be reproduced using the model described above. (author)

  19. Quantitative aspects and dynamic modelling of glucosinolate metabolism

    DEFF Research Database (Denmark)

    Vik, Daniel

    . This enables comparison of transcript and protein levels across mutants and upon induction. I find that unchallenged plants show good correspondence between protein and transcript, but that treatment with methyljasmonate results in significant differences (chapter 1). Functional genomics are used to study......). The construction of a dynamic quantitative model of GLS hydrolysis is described. Simulations reveal potential effects on auxin signalling that could reflect defensive strategies (chapter 4). The results presented grant insights into, not only the dynamics of GLS biosynthesis and hydrolysis, but also the relationship...

  20. Modeling cognitive behavior in nuclear power plants: An overview of contributing theoretical traditions

    International Nuclear Information System (INIS)

    Woods, D.D.; Roth, E.M.

    1986-01-01

    This paper reviews the major theoretical literatures that are relevant to modeling human cognitive activities important to nuclear power plant safety. The traditions considered include control theory, communication theory, statistical decision theory, information processing models and symbolic processing models. The review reveals a gradual convergence towards models that incorporate elements from multiple traditions. Models from the control theory tradition have gradually evolved to include rich knowledge representations borrowed from the symbolic processing work. At the same time, theorists in the symbolic processing tradition are beginning to grapple with some of the critical issues involved in modeling complex real-world domains.

  1. [Social determinants of odontalgia in epidemiological studies: theoretical review and proposed conceptual model].

    Science.gov (United States)

    Bastos, João Luiz Dornelles; Gigante, Denise Petrucci; Peres, Karen Glazer; Nedel, Fúlvio Borges

    2007-01-01

    The epidemiological literature has been limited by the absence of a theoretical framework reflecting the complexity of the causal mechanisms for the occurrence of health phenomena/disease conditions. In the field of oral epidemiology, such lack of theory also prevails, since dental caries, the leading topic in oral research, has often been studied from a biological and reductionist viewpoint. One of the most important consequences of dental caries is dental pain (odontalgia), which has received little attention in studies with sophisticated theoretical models and powerful designs to establish causal relationships. The purpose of this study is to review the scientific literature on the determinants of odontalgia and to discuss theories proposed for the explanation of the phenomenon. Conceptual models and emerging theories on the social determinants of oral health are revised, in an attempt to build links with the bio-psychosocial pain model, proposing a more elaborate causal model for odontalgia. The framework suggests causal pathways between social structure and oral health through material, psychosocial and behavioral pathways. Aspects of the social structure are highlighted in order to relate them to odontalgia, stressing their importance in discussions of causal relationships in oral health research.

  2. By-product mutualism and the ambiguous effects of harsher environments - A game-theoretic model

    NARCIS (Netherlands)

    De Jaegher, Kris; Hoyer, Britta

    2016-01-01

    We construct two-player two-strategy game-theoretic models of by-product mutualism, where our focus lies on the way in which the probability of cooperation among players is affected by the degree of adversity facing the players. In our first model, cooperation consists of the production of a public

  3. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.

    Science.gov (United States)

    Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H

    2017-08-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. We here report in a perspective article a summary of presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  4. Theoretical models for Type I and Type II supernova

    International Nuclear Information System (INIS)

    Woosley, S.E.; Weaver, T.A.

    1985-01-01

    Recent theoretical progress in understanding the origin and nature of Type I and Type II supernovae is discussed. New Type II presupernova models characterized by a variety of iron core masses at the time of collapse are presented and the sensitivity to the reaction rate 12C(α,γ)16O explained. Stars heavier than about 20 M/sub solar/ must explode by a "delayed" mechanism not directly related to the hydrodynamical core bounce, and a subset is likely to leave black hole remnants. The isotopic nucleosynthesis expected from these massive stellar explosions is in striking agreement with the sun. Type I supernovae result when an accreting white dwarf undergoes a thermonuclear explosion. The critical role of the velocity of the deflagration front in determining the light curve, spectrum, and, especially, isotopic nucleosynthesis in these models is explored. 76 refs., 8 figs.

  5. Exploring the relationship between volunteering and hospice sustainability in the UK: a theoretical model.

    Science.gov (United States)

    Scott, Ros; Jindal-Snape, Divya; Manwaring, Gaye

    2018-05-02

    To explore the relationship between volunteering and the sustainability of UK voluntary hospices, a narrative literature review was conducted to inform the development of a theoretical model. Eight databases were searched: CINAHL (EBSCO), British Nursing Index, Intute: Health and Life Sciences, ERIC, SCOPUS, ASSIA (CSA), Cochrane Library and Google Scholar. A total of 90 documents were analysed. Emerging themes included the importance of volunteering to the hospice economy and workforce, the quality of services, and public and community support. Findings suggest that hospice sustainability is dependent on volunteers; however, the supply and retention of volunteers are affected by internal and external factors. A theoretical model was developed to illustrate the relationship between volunteering and hospice sustainability. It demonstrates the factors necessary for hospice sustainability and the reciprocal impact that these factors and volunteering have on each other. The model has a practical application as an assessment framework and strategic planning tool.

  6. Comparison of semi-quantitative and quantitative dynamic contrast-enhanced MRI evaluations of vertebral marrow perfusion in a rat osteoporosis model.

    Science.gov (United States)

    Zhu, Jingqi; Xiong, Zuogang; Zhang, Jiulong; Qiu, Yuyou; Hua, Ting; Tang, Guangyu

    2017-11-14

    This study aims to investigate the technical feasibility of semi-quantitative and quantitative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in the assessment of longitudinal changes of marrow perfusion in a rat osteoporosis model, using bone mineral density (BMD) measured by micro-computed tomography (micro-CT) and histopathology as the gold standards. Fifty rats were randomly assigned to the control group (n=25) and the ovariectomy (OVX) group, whose bilateral ovaries were excised (n=25). Semi-quantitative and quantitative DCE-MRI, micro-CT, and histopathological examinations were performed on lumbar vertebrae at baseline and 3, 6, 9, and 12 weeks after operation. The differences between the two groups in terms of the semi-quantitative DCE-MRI parameter (maximum enhancement, Emax), the quantitative DCE-MRI parameters (volume transfer constant, Ktrans; interstitial volume, Ve; and efflux rate constant, Kep), the micro-CT parameter (BMD), and the histopathological parameter (microvessel density, MVD) were compared at each of the time points using an independent-sample t test. The differences in these parameters between baseline and the other time points in each group were assessed via Bonferroni's multiple comparison test. A Pearson correlation analysis was applied to assess the relationships between DCE-MRI, micro-CT, and histopathological parameters. In the OVX group, the Emax values decreased significantly compared with those of the control group at weeks 6 and 9 (p=0.003 and 0.004, respectively). The Ktrans values decreased significantly compared with those of the control group from week 3 onwards. Compared with the semi-quantitative parameters, the quantitative DCE-MRI parameter Ktrans is a more sensitive and accurate index for detecting early reduced perfusion in osteoporotic bone.
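
    The quantitative parameters mentioned above (Ktrans, Kep, Ve) are obtained by fitting a tracer kinetic model to the measured tissue concentration curve. As a rough illustration of that fitting step (not the specific protocol of this study), the sketch below simulates and refits the standard Tofts model; the biexponential arterial input function and all numeric values are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 5.0, 200)  # time in minutes

def aif(t):
    # Assumed biexponential population arterial input function (illustrative only)
    return 5.0 * (np.exp(-0.5 * t) + 0.3 * np.exp(-0.05 * t))

def tofts(t, ktrans, ve):
    # Standard Tofts model: Ct(t) = Ktrans * int_0^t Cp(tau) exp(-kep (t - tau)) dtau
    kep = ktrans / ve
    dt = t[1] - t[0]
    # Discrete convolution approximates the integral on the uniform time grid
    return ktrans * np.convolve(aif(t), np.exp(-kep * t))[: t.size] * dt

# Simulate a noisy "measured" curve, then recover Ktrans and Ve by least squares
rng = np.random.default_rng(0)
ct = tofts(t, 0.25, 0.40) + rng.normal(0.0, 0.01, t.size)
popt, _ = curve_fit(tofts, t, ct, p0=[0.1, 0.2], bounds=(1e-6, [5.0, 1.0]))
print(f"fitted Ktrans = {popt[0]:.3f} /min, Ve = {popt[1]:.3f}")
```

    The same fit, applied voxel-wise or region-wise at each imaging session, is what produces the Ktrans values whose group differences the study compares.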

  7. Developing a theoretical model to investigate thermal performance of a thin membrane heat-pipe solar collector

    International Nuclear Information System (INIS)

    Riffat, S.B.; Zhao, X.; Doherty, P.S.

    2005-01-01

    A thin membrane heat-pipe solar collector was designed and constructed to allow heat from solar radiation to be collected at a relatively high efficiency while keeping the capital cost low. A theoretical model incorporating a set of heat balance equations was developed to analyse the heat transfer processes occurring in separate regions of the collector, i.e., the top cover, absorber and condenser/manifold areas, and to examine their relationship. The thermal performance of the collector was investigated using the theoretical model. The modelling predictions were validated using experimental data from a referenced source. The test efficiency was found to be in the range 40-70%, which is a little lower than the values predicted by the modelling. The factors influencing these results were investigated.

  8. Developing a theoretical model and questionnaire survey instrument to measure the success of electronic health records in residential aged care.

    Science.gov (United States)

    Yu, Ping; Qian, Siyu

    2018-01-01

    Electronic health records (EHR) are introduced into healthcare organizations worldwide to improve patient safety, healthcare quality and efficiency. A rigorous evaluation of this technology is important to reduce potential negative effects on patients and staff, to provide decision makers with accurate information for system improvement and to ensure return on investment. Therefore, this study develops a theoretical model and questionnaire survey instrument to assess the success of organizational EHR in routine use from the viewpoint of nursing staff in residential aged care homes. The proposed research model incorporates the six variables of the reformulated DeLone and McLean information systems success model: system quality, information quality, service quality, use, user satisfaction and net benefits. Two further variables, training and self-efficacy, were also incorporated into the model. A questionnaire survey instrument was designed to measure the eight variables in the model. After a pilot test, the measurement scale was used to collect data from 243 nursing staff members in 10 residential aged care homes belonging to three management groups in Australia. Partial least squares path modeling was conducted to validate the model. The validated EHR systems success model predicts the impact of the four antecedent variables (training, self-efficacy, system quality and information quality) on the net benefits, the indicator of EHR systems success, through the mediating variables use and user satisfaction. A 24-item measurement scale was developed to quantitatively evaluate the performance of an EHR system. The parsimonious EHR systems success model and the measurement scale can be used to benchmark EHR systems success across organizations and units and over time.

  9. A Quantitative Study on Packing Density and Pozzolanic Activity of Cementitious Materials Based on the Compaction Packing Model

    International Nuclear Information System (INIS)

    Gong, Jianqing; Chou, Kai; Huang, Zheng Yu; Zhao, Minghua

    2014-01-01

    A brief introduction to the theoretical basis of the compaction packing model (CPM) and an overview of the principle of the specific strength method provide the starting point of this study. Research on quantitative relations was then carried out to find the correlation between the contribution rate of the pozzolanic activity and the contribution value of the packing density when CPM is applied to fine powder mixture systems. The concept that the contribution value of the packing density is in direct correspondence with the contribution rate was proved by the compressive strength results and SEM images. The results indicated that the variation rule of the contribution rate of the pozzolanic activity is similar to that of the contribution value of the packing density as calculated by CPM. This means the contribution value of the packing density can approximately simulate the change tendency of the contribution rate of the pozzolanic activity, which is of significant value for future mix designs for high and ultra-high performance concrete.

  10. Quantitative modelling of HDPE spurt experiments using wall slip and generalised Newtonian flow

    NARCIS (Netherlands)

    Doelder, den C.F.J.; Koopmans, R.J.; Molenaar, J.

    1998-01-01

    A quantitative model to describe capillary rheometer experiments is presented. The model can generate 'two-branched' discontinuous flow curves and the associated pressure oscillations. Polymer compressibility in the barrel, incompressible axisymmetric generalised Newtonian flow in the die, and a

  11. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    Science.gov (United States)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and of the 1 MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning-of-life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJCs with high base resistivity (greater than 10 ohm-cm).

  12. Analysis of the theoretical bias in dark matter direct detection

    International Nuclear Information System (INIS)

    Catena, Riccardo

    2014-01-01

    Fitting a model "A" to dark matter direct detection data, when the model that underlies the data is "B", introduces a theoretical bias in the fit. We perform a quantitative study of the theoretical bias in dark matter direct detection, with a focus on assumptions regarding the dark matter interactions and velocity distribution. We address this problem within the effective theory of isoscalar dark matter-nucleon interactions mediated by a heavy spin-1 or spin-0 particle. We analyze 24 benchmark points in the parameter space of the theory, using frequentist and Bayesian statistical methods. First, we simulate the data of future direct detection experiments assuming a momentum/velocity-dependent dark matter-nucleon interaction and an anisotropic dark matter velocity distribution. Then, we fit a constant scattering cross section and an isotropic Maxwell-Boltzmann velocity distribution to the simulated data, thereby introducing a bias in the analysis. The best fit values of the dark matter particle mass differ from their benchmark values by up to 2 standard deviations. The best fit values of the dark matter-nucleon coupling constant differ from their benchmark values by up to several standard deviations. We conclude that common assumptions in dark matter direct detection are a source of potentially significant bias.
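
    The bias mechanism studied here can be reproduced in miniature outside the dark matter context: generate pseudo-data from one model ("B"), fit them with a simpler one ("A"), and compare the best-fit parameter with its true value. The sketch below is a generic toy (an energy-dependent event rate fitted with a purely exponential rate), not the effective theory or the statistical machinery of the paper; all numbers are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
E = np.linspace(1.0, 50.0, 40)             # recoil energy bins (arbitrary units)
E0_true = 12.0
rate_B = 100.0 * E * np.exp(-E / E0_true)  # "true" model B: energy-dependent rate
counts = rng.poisson(rate_B)               # Poisson-fluctuated pseudo-data

def model_A(E, norm, E0):
    # Simpler fitting model A: constant coupling, purely exponential spectrum
    return norm * np.exp(-E / E0)

popt, pcov = curve_fit(model_A, E, counts, p0=[500.0, 10.0],
                       sigma=np.sqrt(counts + 1.0), maxfev=10000)
E0_fit, E0_err = popt[1], np.sqrt(pcov[1, 1])
print(f"best-fit E0 = {E0_fit:.1f} +/- {E0_err:.1f} (true value {E0_true})")
```

    Because model A cannot reproduce the low-energy rise of model B, the best-fit spectral parameter is systematically pulled away from its true value, which is the essence of the theoretical bias the paper quantifies for dark matter mass and couplings.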

  13. The Roy Adaptation Model: A Theoretical Framework for Nurses Providing Care to Individuals With Anorexia Nervosa.

    Science.gov (United States)

    Jennings, Karen M

    Using a nursing theoretical framework to understand, elucidate, and propose nursing research is fundamental to knowledge development. This article presents the Roy Adaptation Model as a theoretical framework to better understand individuals with anorexia nervosa during acute treatment, and the role of nursing assessments and interventions in the promotion of weight restoration. Nursing assessments and interventions situated within the Roy Adaptation Model take into consideration how weight restoration does not occur in isolation but rather reflects an adaptive process within external and internal environments, and has the potential for more holistic care.

  14. Allostatic load: A theoretical model for understanding the relationship between maternal posttraumatic stress disorder and adverse birth outcomes.

    Science.gov (United States)

    Li, Yang; Rosemberg, Marie-Anne Sanon; Seng, Julia S

    2018-07-01

    Adverse birth outcomes such as preterm birth and low birth weight are significant public health concerns and contribute to neonatal morbidity and mortality. Studies have increasingly been exploring the predictive effects of maternal posttraumatic stress disorder (PTSD) on adverse birth outcomes. However, the biological mechanisms by which maternal PTSD affects birth outcomes are not well understood. Allostatic load refers to the cumulative dysregulation of multiple physiological systems in response to chronic stress at multiple social-ecological levels. Allostatic load has been well documented in relation to both chronic stress and adverse health outcomes in non-pregnant populations. However, the mediating role of allostatic load is less well understood when it comes to maternal PTSD and adverse birth outcomes. Our aim is to propose a theoretical model that depicts how allostatic load could mediate the impact of maternal PTSD on birth outcomes. We followed the procedures of the theory synthesis approach described by Walker and Avant (2011), including specifying focal concepts, identifying related factors and relationships, and constructing an integrated representation. We first present a theoretical overview of the allostatic load theory and four other relevant theoretical models. Then we provide a brief narrative review of the literature that empirically supports the propositions of the integrated model. Finally, we describe our theoretical model. The synthesized theoretical model has the potential to advance perinatal research by delineating multiple biomarkers to be used in future studies. Once it is well validated, it could be utilized as the theoretical basis for health care professionals to identify high-risk women by evaluating their experiences of psychosocial and traumatic stress, and to develop and evaluate service delivery and clinical interventions that might modify maternal perceptions or experiences of stress and eliminate their impacts on adverse birth outcomes.

  15. The Janus fluid: a theoretical perspective

    CERN Document Server

    Fantoni, Riccardo

    2013-01-01

    The state-of-the-art in the theoretical statistical physics treatment of the Janus fluid is reported, with a bridge between new research results published in journal articles and a contextual literature review. Recent Monte Carlo simulations on the Kern and Frenkel model of the Janus fluid have revealed that in the vapor phase, below the critical point, preferred inert clusters made up of a well-defined number of particles form: the micelles and the vesicles. This is responsible for a re-entrant gas branch of the gas-liquid binodal. A detailed account of these findings is given in the first chapter, where the Janus fluid is introduced as a product of new sophisticated laboratory synthesis techniques. In the second chapter a cluster theory is developed to approximate the exact clustering properties stemming from the simulations. It is shown that the theory is able to reproduce the micellization phenomenon semi-quantitatively.

  16. A quantitative and dynamic model of the Arabidopsis flowering time gene regulatory network.

    Directory of Open Access Journals (Sweden)

    Felipe Leal Valentim

    Full Text Available Various environmental signals integrate into a network of floral regulatory genes leading to the final decision on when to flower. Although a wealth of qualitative knowledge is available on how flowering time genes regulate each other, only a few studies have incorporated this knowledge into predictive models. Such models are invaluable because they make it possible to investigate how various types of inputs are combined to give a quantitative readout. To investigate the effect of gene expression disturbances on flowering time, we developed a dynamic model for the regulation of flowering time in Arabidopsis thaliana. Model parameters were estimated based on expression time-courses for relevant genes and a consistent set of flowering times for plants of various genetic backgrounds. Validation was performed by predicting changes in expression level in mutant backgrounds and comparing these predictions with independent expression data, and by comparing predicted and experimental flowering times for several double mutants. Remarkably, the model predicts that a disturbance in a particular gene does not necessarily have the largest impact on directly connected genes. For example, the model predicts that a SUPPRESSOR OF OVEREXPRESSION OF CONSTANS 1 (SOC1) mutation has a larger impact on APETALA1 (AP1), which is not directly regulated by SOC1, than on LEAFY (LFY), which is under direct control of SOC1. This was confirmed by expression data. Another model prediction involves the importance of cooperativity in the regulation of AP1 by LFY, a prediction supported by experimental evidence. In conclusion, our model for flowering time gene regulation makes it possible to address how different quantitative inputs are combined into one quantitative output, flowering time.
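
    Dynamic models of this kind are typically systems of ODEs with Hill-type regulation terms, and the cooperativity mentioned above corresponds to a Hill coefficient greater than one. The sketch below is a hypothetical three-gene cascade loosely inspired by the SOC1/LFY/AP1 relationships in the abstract; the equations, rate constants, and the cooperative (n=2) activation of AP1 by LFY are invented for illustration and are not the paper's model or parameter estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, soc1_prod):
    # Hypothetical cascade SOC1 -> LFY -> AP1 with Hill-type regulation;
    # all rate constants are illustrative, not estimated from data.
    soc1, lfy, ap1 = y
    d_soc1 = soc1_prod - 0.5 * soc1
    d_lfy = 1.0 * soc1 / (1.0 + soc1) - 0.5 * lfy       # direct activation by SOC1
    d_ap1 = 2.0 * lfy**2 / (0.25 + lfy**2) - 0.5 * ap1  # cooperative activation (n=2)
    return [d_soc1, d_lfy, d_ap1]

def steady_state(soc1_prod):
    # Integrate long enough for all transients (decay rates 0.5) to die out
    sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0, 0.0],
                    args=(soc1_prod,), rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]  # [SOC1, LFY, AP1] at the end of the run

# Compare a "wild type" with a soc1 "mutant" (reduced SOC1 production)
wt = steady_state(1.0)
mut = steady_state(0.2)
print(f"AP1 steady state: wild type {wt[2]:.2f}, soc1 mutant {mut[2]:.2f}")
```

    Fitting the rate constants of such a system to expression time-courses and measured flowering times, and then perturbing individual equations to mimic mutants, is the general workflow the abstract describes.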

  17. Theoretical Background for the Decision-Making Process Modelling under Controlled Intervention Conditions

    Directory of Open Access Journals (Sweden)

    Bakanauskienė Irena

    2017-12-01

    Full Text Available This article is intended to theoretically justify a decision-making process model for cases in which the active participation of investing entities in controlling the activities of an organisation and their results is noticeable. Based on an analysis of the scientific literature, a concept of controlled conditions is formulated, and, using a rational approach to the decision-making process, a model of an 11-step decision-making process under controlled intervention is presented. The conditions describing the case of controlled interventions have also been unified, providing preconditions to ensure the adequacy of the proposed decision-making process model.

  18. Theoretical modeling of a new structure of III-V tandem solar cells by ...

    African Journals Online (AJOL)

    The III-V tandem junction solar cell is theoretically investigated by optimizing the thicknesses of the GaAs and GaInP layers and by using a new optical model to separate the junction between the two solar cells, in order to solve the problems of the tunnel junction and the difficulties of fabrication.

  19. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 3

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Corum, J.M.; Bryson, J.W.

    1975-06-01

    The third in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: the experimental data provide design information directly applicable to nozzles in cylindrical vessels; and the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 3 had a 10 in. OD and the nozzle had a 1.29 in. OD, giving a d0/D0 ratio of 0.129. The OD/thickness ratios for the cylinder and the nozzle were 50 and 7.68, respectively. Thirteen separate loading cases were analyzed. In each, one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for all the loadings were obtained using 158 three-gage strain rosettes located on the inner and outer surfaces. The loading cases were also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  20. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 4

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-06-01

    The last in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models in the series are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: (1) the experimental data provide design information directly applicable to nozzles in cylindrical vessels, and (2) the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 4 had an outside diameter of 10 in., and the nozzle had an outside diameter of 1.29 in., giving a d0/D0 ratio of 0.129. The OD/thickness ratios were 50 and 20.2 for the cylinder and nozzle, respectively. Thirteen separate loading cases were analyzed. For each loading condition one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for each of the 13 loadings were obtained using 157 three-gage strain rosettes located on the inner and outer surfaces. Each of the 13 loading cases was also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  1. Slow dynamics at critical points: the field-theoretical perspective

    International Nuclear Information System (INIS)

    Gambassi, Andrea

    2006-01-01

    The dynamics at a critical point provides a simple instance of slow collective evolution, characterised by aging phenomena and by a violation of the fluctuation-dissipation relation even for long times. By virtue of the universality in critical phenomena it is possible to provide quantitative predictions for some aspects of these behaviours by field-theoretical methods. We review some of the theoretical results that have been obtained in recent years for the relevant (universal) quantities, such as the fluctuation-dissipation ratio, associated with the non-equilibrium critical dynamics

  2. Mechanisms of plasma-assisted catalyzed growth of carbon nanofibres: a theoretical modeling

    Science.gov (United States)

    Gupta, R.; Sharma, S. C.; Sharma, R.

    2017-02-01

    A theoretical model is developed to study the nucleation and catalytic growth of carbon nanofibers (CNFs) in a plasma environment. The model includes the charging of CNFs, the kinetics of the plasma species (neutrals, ions and electrons), plasma pretreatment of the catalyst film, and various processes unique to a plasma-exposed catalyst surface such as adsorption of neutrals, thermal dissociation of neutrals, ion induced dissociation, interaction between neutral species, stress exerted by the growing graphene layers and the growth of CNFs. Numerical calculations are carried out for typical glow discharge plasma parameters. It is found that the growth rate of CNFs decreases with the catalyst nanoparticle size. In addition, the effect of hydrogen on the catalyst nanoparticle size, CNF tip diameter, CNF growth rate, and the tilt angle of the graphene layers to the fiber axis are investigated. Moreover, it is also found that the length of CNFs increases with hydrocarbon number density. Our theoretical findings are in good agreement with experimental observations and can be extended to enhance the field emission characteristics of CNFs.

  3. A theoretical intellectual capital model applied to cities

    Directory of Open Access Journals (Sweden)

    José Luis Alfaro Navarro

    2013-06-01

    Full Text Available New Management Information Systems (MIS) are necessary at the local level, as cities are the main source of wealth creation. Therefore, tools and approaches that provide a full future vision of any organization should be a strategic priority for economic development. In this line, cities are "centers of knowledge and sources of growth and innovation", and integrated urban development policies are necessary. These policies support communication networks and optimize location structures, as strategies that provide opportunities for social and democratic participation by the citizens. This paper proposes a theoretical model to measure and evaluate cities' intellectual capital, which makes it possible to determine what must be taken into account to make cities a source of wealth, prosperity, welfare and future growth. Furthermore, local intellectual capital provides a long-run vision. Thus, in this paper we develop and explain how to implement a model to estimate intellectual capital in cities. In this sense, our proposal is to provide a model for measuring and managing intellectual capital using socio-economic indicators for cities. These indicators offer a long-term picture supported by a comprehensive strategy for those who occupy the local space, infrastructure for implementation, and management of the environment for its development.

  4. Hartree-Fock-Bogoliubov model: a theoretical and numerical perspective

    International Nuclear Information System (INIS)

    Paul, S.

    2012-01-01

    This work is devoted to the theoretical and numerical study of the Hartree-Fock-Bogoliubov (HFB) theory for attractive quantum systems, which is one of the main methods in nuclear physics. We first present the model and its main properties, and then explain how to obtain numerical solutions. We prove some convergence results, in particular for the simple fixed-point algorithm (sometimes called the Roothaan algorithm). We show that it either converges or oscillates between two states, neither of which is a solution. This generalizes to the HFB case previous results of Cancès and Le Bris for the simpler Hartree-Fock model in the repulsive case. Following these authors, we also propose a relaxed constraint algorithm for which convergence is guaranteed. In the last part of the thesis, we illustrate the behavior of these algorithms with some numerical experiments. We first consider a system in which the particles interact only through the Newton potential. Our numerical results show that the pairing matrix never vanishes, a fact that has not yet been proved rigorously. We then study a very simplified model for protons and neutrons in a nucleus. (author)

  5. Comparison of Quantitative and Qualitative Research Traditions: Epistemological, Theoretical, and Methodological Differences

    Science.gov (United States)

    Yilmaz, Kaya

    2013-01-01

    There has been much discussion about quantitative and qualitative approaches to research in different disciplines. In the behavioural and social sciences, these two paradigms are compared to reveal their relative strengths and weaknesses. But the debate about both traditions has commonly taken place in academic books. It is hard to find an article…

  6. Wires in the soup: quantitative models of cell signaling

    Science.gov (United States)

    Cheong, Raymond; Levchenko, Andre

    2014-01-01

    Living cells are capable of extracting information from their environments and mounting appropriate responses to a variety of associated challenges. The underlying signal transduction networks enabling this can be quite complex, and their unraveling necessitates sophisticated computational modeling coupled with precise experimentation. Although we are still at the beginning of this process, some recent examples of integrative analysis of cell signaling are very encouraging. This review highlights the case of the NF-κB pathway in order to illustrate how a quantitative model of a signaling pathway can be gradually constructed through continuous experimental validation, and what lessons one might learn from such exercises. PMID:18291655

  7. An assessment of some theoretical models used for the calculation of the refractive index of InXGa1-xAs

    Science.gov (United States)

    Engelbrecht, J. A. A.

    2018-04-01

    Theoretical models used for the determination of the refractive index of InxGa1-xAs are reviewed and compared. Attention is drawn to some problems experienced with some of the models. The models are also extended to the mid-infrared region of the electromagnetic spectrum, and the theoretical results in this region are then compared with previously published experimental results.

  8. A quantitative analysis of instabilities in the linear chiral sigma model

    International Nuclear Information System (INIS)

    Nemes, M.C.; Nielsen, M.; Oliveira, M.M. de; Providencia, J. da

    1990-08-01

    We present a method to construct a complete set of stationary states corresponding to small amplitude motion which naturally includes the continuum solution. The energy-weighted sum rule (EWSR) is shown to provide a quantitative criterion for the importance of the instabilities which are known to occur in non-asymptotically free theories. Our results for the linear σ model should be valid for a large class of models. A unified description of baryon and meson properties in terms of the linear σ model is also given. (author)

  9. Evaluating quantitative and qualitative models: An application for nationwide water erosion assessment in Ethiopia

    NARCIS (Netherlands)

    Sonneveld, B.G.J.S.; Keyzer, M.A.; Stroosnijder, L

    2011-01-01

    This paper tests the candidacy of one qualitative response model and two quantitative models for a nationwide water erosion hazard assessment in Ethiopia. After a descriptive comparison of model characteristics the study conducts a statistical comparison to evaluate the explanatory power of the

  11. A New Theoretical Approach to Postsecondary Student Disability: Disability-Diversity (Dis)Connect Model

    Science.gov (United States)

    Aquino, Katherine C.

    2016-01-01

    Disability is often viewed as an obstacle to postsecondary inclusion, but not a characteristic of student diversity. Additionally, current theoretical frameworks isolate disability from other student diversity characteristics. In response, a new conceptual framework, the Disability-Diversity (Dis)Connect Model (DDDM), was created to address…

  12. A Theoretical Model for Meaning Construction through Constructivist Concept Learning

    DEFF Research Database (Denmark)

    Badie, Farshad

    The central focus of this Ph.D. research is on ‘Logic and Cognition’ and, more specifically, this research covers the quintuple (Logic and Logical Philosophy, Philosophy of Education, Educational Psychology, Cognitive Science, Computer Science). The most significant contributions of this Ph.D. dissertation are conceptual, logical, terminological, and semantic analysis of Constructivist Concept Learning (specifically, in the context of humans’ interactions with their environment and with other agents). This dissertation is concerned with the specification of the conceptualisation of the phenomena of ‘learning’, ‘mentoring’, and ‘knowledge’ within learning and knowledge acquisition systems. Constructivism as an epistemology and as a model of knowing and, respectively, as a theoretical model of learning builds up the central framework of this research.

  13. The quantitative modelling of human spatial habitability

    Science.gov (United States)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the uses of space as a medium of social communication).

  14. Droplet size in flow: Theoretical model and application to polymer blends

    Science.gov (United States)

    Fortelný, Ivan; Jůza, Josef

    2017-05-01

    The paper is focused on prediction of the average droplet radius, R, in flowing polymer blends, where the droplet size is determined by a dynamic equilibrium between droplet breakup and coalescence. Expressions for the droplet breakup frequency in systems with low and high contents of the dispersed phase are derived using available theoretical and experimental results for model blends. Dependences of the coalescence probability, Pc, on system parameters, following from recent theories, are considered, and an approximate equation for Pc in a system with low polydispersity in the droplet size is proposed. Equations for R in systems with low and high contents of the dispersed phase are derived. Combination of these equations predicts a realistic dependence of R on the volume fraction of dispersed droplets, φ. The theoretical prediction of the ratio of R to the critical droplet radius at breakup agrees fairly well with experimental values for steadily mixed polymer blends.

  15. INTRODUCTION: Theoretical Models as Mass Media Practice: Perspectives from the West

    DEFF Research Database (Denmark)

    Thomsen, Line

    2007-01-01

    What is journalism? How does it exist and why? How does journalism define itself and in what ways can we make use of looking theoretically at the practice of it? These were the central themes of our workshop, Theoretical Models as Mass Media Practice, held at the ‘Minding the Gap’ conference at Reuters Institute in May 2007, from which this collection of papers has been selected. As with the other workshops during the conference, the majority of our panellists were themselves once media practitioners. It is my opinion that this background and inside knowledge of the field in itself can provide an exceptional framework for understanding the workings of mass media while helping the press reflect on these workings too. In a time of change for the journalistic profession, when media convergence is growing, the media is marked by deregulation, and fewer journalists are being asked to do more...

  16. Activity systems modeling as a theoretical lens for social exchange studies

    Directory of Open Access Journals (Sweden)

    Ernest Jones

    2016-01-01

    Full Text Available The social exchange perspective seeks to acknowledge, understand and predict the dynamics of social interactions. Empirical research involving social exchange constructs has grown to be highly technical, including confirmatory factor analysis to assess construct distinctiveness and structural equation modeling to assess construct causality. Each study seemingly strives to assess how underlying social exchange theoretic constructs interrelate. Yet despite this methodological depth and resultant explanatory and predictive power, a significant number of studies report findings that, once synthesized, suggest an underlying persistent threat to conceptual or construct validity brought about by a search for epistemological parsimony. Further, it is argued that a methodological approach that embraces inherent complexity, such as activity systems modeling, facilitates the search for simplified models while not ignoring contextual factors.

  17. Advanced quantitative measurement methodology in physics education research

    Science.gov (United States)

    Wang, Jing

    The ultimate goal of physics education research (PER) is to develop a theoretical framework to understand and improve the learning process. In this journey of discovery, assessment serves as our headlamp and alpenstock. It sometimes detects signals in student mental structures, and sometimes presents the difference between expert understanding and novice understanding. Quantitative assessment is an important area in PER. Developing research-based effective assessment instruments and making meaningful inferences based on these instruments have always been important goals of the PER community. Quantitative studies are often conducted to provide bases for test development and result interpretation. Statistics are frequently used in quantitative studies. The selection of statistical methods and the interpretation of the results obtained by these methods should be connected to the educational background. In this connecting process, the issues of educational models are often raised. Many widely used statistical methods do not make assumptions about the mental structure of subjects, nor do they provide explanations tailored to the educational audience. There are also other methods that consider the mental structure and are tailored to provide strong connections between statistics and education. These methods often involve model assumption and parameter estimation, and are complicated mathematically. The dissertation provides a practical view of some advanced quantitative assessment methods. The common feature of these methods is that they all make educational/psychological model assumptions beyond the minimum mathematical model. The purpose of the study is to provide a comparison between these advanced methods and the pure mathematical methods. The comparison is based on the performance of the two types of methods under physics education settings. In particular, the comparison uses both physics content assessments and scientific ability assessments.
The dissertation includes three
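    Methods that make explicit psychological model assumptions, in the sense described above, include item response theory. As a hypothetical illustration (the one-parameter Rasch model is a standard example of this class, not necessarily the method used in the dissertation):

    ```python
    import math

    def rasch_probability(theta: float, difficulty: float) -> float:
        """One-parameter (Rasch) IRT model: the probability of a correct
        answer is a logistic function of ability minus item difficulty."""
        return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

    # A student whose ability matches the item difficulty succeeds half the time.
    p_matched = rasch_probability(theta=0.5, difficulty=0.5)

    # Higher ability raises the probability; a harder item lowers it.
    p_able = rasch_probability(theta=2.0, difficulty=0.5)
    p_hard = rasch_probability(theta=0.5, difficulty=2.0)
    ```

    Unlike a raw percent-correct score, the fitted ability and difficulty parameters carry a psychological interpretation, which is the distinction the abstract draws between model-based and purely mathematical methods.
    
    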

  18. A theoretical and empirical evaluation and extension of the Todaro migration model.

    Science.gov (United States)

    Salvatore, D

    1981-11-01

    "This paper postulates that it is theoretically and empirically preferable to base internal labor migration on the relative difference in rural-urban real income streams and rates of unemployment, taken as separate and independent variables, rather than on the difference in the expected real income streams as postulated by the very influential and often quoted Todaro model. The paper goes on to specify several important ways of extending the resulting migration model and improving its empirical performance." The analysis is based on Italian data. excerpt

  19. A reduced theoretical model for estimating condensation effects in combustion-heated hypersonic tunnel

    Science.gov (United States)

    Lin, L.; Luo, X.; Qin, F.; Yang, J.

    2018-03-01

    As one of the combustion products of hydrocarbon fuels in a combustion-heated wind tunnel, water vapor may condense during the rapid expansion process, which will lead to a complex two-phase flow inside the wind tunnel and even change the design flow conditions at the nozzle exit. The coupling of the phase transition and the compressible flow makes the estimation of the condensation effects in such wind tunnels very difficult and time-consuming. In this work, a reduced theoretical model is developed to approximately compute the nozzle-exit conditions of a flow including real-gas and homogeneous condensation effects. Specifically, the conservation equations of the axisymmetric flow are first approximated in the quasi-one-dimensional way. Then, the complex process is split into two steps, i.e., a real-gas nozzle flow but excluding condensation, resulting in supersaturated nozzle-exit conditions, and a discontinuous jump at the end of the nozzle from the supersaturated state to a saturated state. Compared with two-dimensional numerical simulations implemented with a detailed condensation model, the reduced model predicts the flow parameters with good accuracy except for some deviations caused by the two-dimensional effect. Therefore, this reduced theoretical model can provide a fast, simple but also accurate estimation of the condensation effect in combustion-heated hypersonic tunnels.

  20. Review of progress in quantitative nondestructive evaluation

    International Nuclear Information System (INIS)

    Thompson, D.O.; Chimenti, D.E.

    1983-01-01

    A comprehensive review of the current state of quantitative nondestructive evaluation (NDE), this volume brings together papers by researchers working in government, private industry, and university laboratories. Their papers cover a wide range of interests and concerns for researchers involved in theoretical and applied aspects of quantitative NDE. Specific topics examined include: reliability; probability of detection (ultrasonics and eddy currents); weldments; closure effects in fatigue cracks; technology transfer; ultrasonic scattering theory; acoustic emission; ultrasonic scattering, reliability and penetrating radiation; metal matrix composites; ultrasonic scattering from near-surface flaws; and ultrasonic multiple scattering.

  1. A quantitative evaluation of multiple biokinetic models using an assembled water phantom: A feasibility study.

    Directory of Open Access Journals (Sweden)

    Da-Ming Yeh

    Full Text Available This study examined the feasibility of quantitatively evaluating multiple biokinetic models and established the validity of the different compartment models using an assembled water phantom. Most commercialized phantoms are made to survey the imaging system, since this is essential to increase the diagnostic accuracy for quality assurance. In contrast, few customized phantoms are specifically made to represent multi-compartment biokinetic models, because the complicated calculations required to solve the biokinetic models and the time-consuming verification of the obtained solutions have greatly impeded progress over the past decade. Nevertheless, in this work, five biokinetic models were separately defined by five groups of simultaneous differential equations to obtain the time-dependent radioactive concentration changes inside the water phantom. The water phantom was assembled from seven acrylic boxes in four different sizes, and the boxes were linked by varying combinations of hoses to represent the multiple biokinetic models from the biomedical perspective. The boxes connected by hoses were then regarded as a closed water loop with only one infusion and one drain. 129.1±24.2 MBq of Tc-99m labeled methylene diphosphonate (MDP) solution was thoroughly infused into the water boxes before gamma scanning; then the water was replaced with de-ionized water to simulate the biological removal rate among the boxes. The water was driven by an automatic infusion pump at 6.7 c.c./min, while the biological half-lives of the four different-sized boxes (64, 144, 252, and 612 c.c.) were 4.8, 10.7, 18.8, and 45.5 min, respectively. The five models of derived time-dependent concentrations for the boxes were estimated either by a self-developed program run in MATLAB or by scanning via a gamma camera facility. Agreement or disagreement between the practical scanning and the theoretical prediction in the five models is thoroughly discussed. The
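    The interplay of physical decay and biological washout in each box can be sketched with the standard effective-half-life relation. The snippet below is a minimal single-compartment illustration using the biological half-lives quoted in the abstract and a Tc-99m physical half-life of 6.01 h; it is not the authors' five-model MATLAB program:

    ```python
    import math

    T_PHYS_MIN = 6.01 * 60.0  # Tc-99m physical half-life, in minutes

    def effective_half_life(t_bio_min: float, t_phys_min: float = T_PHYS_MIN) -> float:
        """Effective half-life from 1/Te = 1/Tp + 1/Tb."""
        return (t_phys_min * t_bio_min) / (t_phys_min + t_bio_min)

    def concentration(c0: float, t_min: float, t_bio_min: float) -> float:
        """Concentration in a single well-mixed box after time t, combining
        physical decay and first-order biological washout."""
        te = effective_half_life(t_bio_min)
        return c0 * math.exp(-math.log(2.0) * t_min / te)

    # Biological half-lives reported for the four box sizes (minutes);
    # the effective half-life is always shorter than either component.
    effective = {t_bio: effective_half_life(t_bio) for t_bio in (4.8, 10.7, 18.8, 45.5)}
    ```

    A full multi-compartment model replaces the single washout term with coupled first-order transfer terms between boxes, which is what the five groups of simultaneous differential equations in the study encode.
    
    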

  2. Theoretical Framework and Model Design for Beautiful Countryside Construction in China

    Directory of Open Access Journals (Sweden)

    ZHENG Xiang-qun

    2015-04-01

    Full Text Available In the context of China today, the process of beautiful countryside construction mainly imitates the patterns of ‘urbanization’ construction. However, this approach leads to the loss of countryside characteristics and separation from agricultural culture. Therefore, it is urgent to carry out research on the theoretical framework and model design for beautiful countryside construction. In this paper, based on an analysis of the connotation of beautiful countryside construction, the basic theory of beautiful countryside construction was summarized in three aspects: a rural complex ecosystem model, a rural multi-functionality model, and a sustainable development evaluation model. The basic idea of the beautiful countryside construction model was studied, and its design method was proposed at three levels: planning, scheming, and evaluating. The research results might offer a scientific reference for improving the scientific and operational nature of beautiful countryside construction.

  3. Quantitative Models of Imperfect Deception in Network Security using Signaling Games with Evidence

    OpenAIRE

    Pawlick, Jeffrey; Zhu, Quanyan

    2017-01-01

    Deception plays a critical role in many interactions in communication and network security. Game-theoretic models called "cheap talk signaling games" capture the dynamic and information asymmetric nature of deceptive interactions. But signaling games inherently model undetectable deception. In this paper, we investigate a model of signaling games in which the receiver can detect deception with some probability. This model nests traditional signaling games and complete information Stackelberg ...

  4. Experimental and theoretical study of magnetohydrodynamic ship models.

    Science.gov (United States)

    Cébron, David; Viroulet, Sylvain; Vidal, Jérémie; Masson, Jean-Paul; Viroulet, Philippe

    2017-01-01

    Magnetohydrodynamic (MHD) ships represent a clear demonstration of the Lorentz force in fluids, which explains the number of student practicals and exercises described on the web. However, the related literature is rather specific and no complete comparison between theory and typical small-scale experiments is currently available. This work provides, in a self-consistent framework, a detailed presentation of the relevant theoretical equations for small MHD ships and experimental measurements for future benchmarks. Theoretical results from the literature are adapted to these simple battery/magnet-powered ships moving on salt water. Comparisons between theory and experiment are performed to validate each theoretical step, such as the Tafel and Kohlrausch laws, or the predicted ship speed. A successful agreement is obtained without any adjustable parameter. Finally, based on these results, an optimal design is deduced from the theory. This work therefore provides a solid theoretical and experimental ground for small-scale MHD ships, by presenting in detail several approximations and how they affect the boat efficiency. Moreover, the theory is general enough to be adapted to other contexts, such as large-scale ships or industrial flow measurement techniques.
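    One of the laws validated in the paper, Kohlrausch's law, relates the molar conductivity of the salt-water electrolyte to concentration. A minimal sketch follows; the numeric values are typical textbook figures for an NaCl-like electrolyte, not measurements from the paper:

    ```python
    import math

    def molar_conductivity(lambda_0: float, kohlrausch_k: float, conc: float) -> float:
        """Kohlrausch's law for a strong electrolyte:
        Lambda_m = Lambda_0 - K * sqrt(c)."""
        return lambda_0 - kohlrausch_k * math.sqrt(conc)

    # Illustrative values (S cm^2/mol for Lambda_0 and K-scale, mol/L for c).
    lam_dilute = molar_conductivity(lambda_0=126.4, kohlrausch_k=80.0, conc=0.01)
    lam_strong = molar_conductivity(lambda_0=126.4, kohlrausch_k=80.0, conc=0.04)
    ```

    The square-root dependence means conductivity, and hence the current driven through the water by the batteries, falls off sub-linearly as the electrolyte becomes more concentrated in interionic effects.
    
    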

  5. Experimental and theoretical study of magnetohydrodynamic ship models.

    Directory of Open Access Journals (Sweden)

    David Cébron

    Full Text Available Magnetohydrodynamic (MHD) ships represent a clear demonstration of the Lorentz force in fluids, which explains the number of student practicals and exercises described on the web. However, the related literature is rather specific and no complete comparison between theory and typical small-scale experiments is currently available. This work provides, in a self-consistent framework, a detailed presentation of the relevant theoretical equations for small MHD ships and experimental measurements for future benchmarks. Theoretical results from the literature are adapted to these simple battery/magnet-powered ships moving on salt water. Comparisons between theory and experiment are performed to validate each theoretical step, such as the Tafel and Kohlrausch laws, or the predicted ship speed. A successful agreement is obtained without any adjustable parameter. Finally, based on these results, an optimal design is deduced from the theory. This work therefore provides a solid theoretical and experimental ground for small-scale MHD ships, by presenting in detail several approximations and how they affect the boat efficiency. Moreover, the theory is general enough to be adapted to other contexts, such as large-scale ships or industrial flow measurement techniques.

  6. Status of molten fuel coolant interaction studies and theoretical modelling work at IGCAR

    International Nuclear Information System (INIS)

    Rao, P.B.; Singh, Om Pal; Singh, R.S.

    1994-01-01

    The status of Molten Fuel Coolant Interaction (MFCI) studies is reviewed and some of the important observations made are presented. A new model for MFCI, developed at IGCAR by considering the various mechanisms in detail, is described. The model is validated and compared with the available experimental data and theoretical work at different stages of its development. Several parametric studies carried out using this model are described. The predictions from this model have been found to be satisfactory, considering the complexity of MFCI. The need for more comprehensive and MFCI-specific experimental tests is brought out. (author)

  7. Praxis and reflexivity for interprofessional education: towards an inclusive theoretical framework for learning.

    Science.gov (United States)

    Hutchings, Maggie; Scammell, Janet; Quinney, Anne

    2013-09-01

    While there is growing evidence of theoretical perspectives adopted in interprofessional education, learning theories tend to foreground the individual, focusing on psycho-social aspects of individual differences and professional identity to the detriment of considering social-structural factors at work in social practices. Conversely socially situated practice is criticised for being context-specific, making it difficult to draw generalisable conclusions for improving interprofessional education. This article builds on a theoretical framework derived from earlier research, drawing on the dynamics of Dewey's experiential learning theory and Archer's critical realist social theory, to make a case for a meta-theoretical framework enabling social-constructivist and situated learning theories to be interlinked and integrated through praxis and reflexivity. Our current analysis is grounded in an interprofessional curriculum initiative mediated by a virtual community peopled by health and social care users. Student perceptions, captured through quantitative and qualitative data, suggest three major disruptive themes, creating opportunities for congruence and disjuncture and generating a model of zones of interlinked praxis associated with professional differences and identity, pedagogic strategies and technology-mediated approaches. This model contributes to a framework for understanding the complexity of interprofessional learning and offers bridges between individual and structural factors for engaging with the enablements and constraints at work in communities of practice and networks for interprofessional education.

  8. Representing general theoretical concepts in structural equation models: The role of composite variables

    Science.gov (United States)

    Grace, J.B.; Bollen, K.A.

    2008-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically-based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables are often of interest. © Springer Science+Business Media, LLC 2007.

  9. 137Cs applicability to soil erosion assessment: theoretical and empirical model

    International Nuclear Information System (INIS)

    Andrello, Avacir Casanova

    2004-02-01

    The acceleration of soil erosion processes and the increase of soil erosion rates due to anthropogenic perturbation of the soil-weather-vegetation equilibrium have influenced soil quality and the environment. The possibility of assessing the amplitude and severity of the impact of soil erosion on soil productivity and quality is therefore important at local as well as regional and global scales. Several models have been developed to assess soil erosion both qualitatively and quantitatively. 137 Cs, an anthropogenic radionuclide, has been widely used to assess superficial soil erosion processes. Empirical and theoretical models were developed on the basis of 137 Cs redistribution as an indicator of soil movement by erosive processes. These models incorporate many parameters that can influence the soil erosion rates quantified from 137 Cs redistribution. Statistical analysis was performed on the models recommended by the IAEA to determine the influence that each parameter has on the calculated soil redistribution. It was verified that the most important parameter is the 137 Cs redistribution itself, indicating the need for a good determination of the 137 Cs inventory values with a minimum deviation associated with these values. A 10% deviation was then associated with the reference value of the 137 Cs inventory and 5% with the 137 Cs inventory of the sample, and the resulting deviation in the soil redistribution calculated by the models was determined. The soil redistribution results were compared to verify whether there were differences between the models, but no difference was found in the results, except above 70% of 137 Cs loss. Analyzing three native forests and an area of undisturbed pasture in the Londrina region, it was verified that the 137 Cs spatial variability on a local scale was 15%. Comparing the 137 Cs inventory values determined in the three native forests with the 137 Cs inventory value determined in the area of undisturbed pasture in the
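    The deviation analysis described above can be sketched with the simple proportional comparison of sample and reference inventories and quadrature error propagation. This is an illustration of the procedure, not the exact IAEA model set, and the inventory values in the example are hypothetical:

    ```python
    import math

    def inventory_change_percent(a_sample: float, a_ref: float) -> float:
        """Percent 137Cs inventory change relative to the reference site;
        negative values indicate erosion, positive values deposition."""
        return 100.0 * (a_sample - a_ref) / a_ref

    def ratio_rel_uncertainty(rel_u_sample: float = 0.05, rel_u_ref: float = 0.10) -> float:
        """Relative uncertainty of the sample/reference inventory ratio,
        combining the two stated deviations in quadrature."""
        return math.sqrt(rel_u_sample ** 2 + rel_u_ref ** 2)

    # Hypothetical inventories in Bq/m^2: a 25% depletion at the sample site.
    change = inventory_change_percent(a_sample=1800.0, a_ref=2400.0)
    rel_u = ratio_rel_uncertainty()  # ~0.112 for the 5% and 10% deviations
    ```

    Because the redistribution estimate is driven by the inventory ratio, the quadrature-combined relative uncertainty of that ratio dominates the deviation in the calculated soil redistribution, consistent with the abstract's finding that the 137Cs redistribution is the most important parameter.
    
    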

  10. Modeling logistic performance in quantitative microbial risk assessment.

    Science.gov (United States)

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain with its queues (storages, shelves) and mechanisms for ordering products is usually not taken into account. As a consequence, storage times, which are mutually dependent in successive steps of the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
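    A minimal discrete-event sketch of the idea: storage times emerge from the interplay of arrival and demand events on a FIFO shelf rather than being drawn independently, so their distribution (including its tail) depends on the logistics. The rates and the FIFO rule here are illustrative assumptions, not the parameters of the authors' lettuce-chain model:

    ```python
    import heapq
    import random

    def simulate_storage_times(n_products=1000, arrival_rate=2.0,
                               demand_rate=1.8, seed=42):
        """Products arrive at a storage (Poisson arrivals) and leave in
        FIFO order when Poisson demand events occur; returns the realised
        storage time of each product."""
        rng = random.Random(seed)
        t = 0.0
        arrivals = []
        for _ in range(n_products):
            t += rng.expovariate(arrival_rate)
            arrivals.append(t)

        storage_times, shelf = [], []
        i, t_dem = 0, 0.0
        while len(storage_times) < n_products:
            t_dem += rng.expovariate(demand_rate)
            # Put every product that has arrived by now onto the shelf.
            while i < n_products and arrivals[i] <= t_dem:
                heapq.heappush(shelf, arrivals[i])
                i += 1
            if shelf:
                # FIFO: the oldest arrival is taken first.
                storage_times.append(t_dem - heapq.heappop(shelf))
            elif i < n_products:
                # Demand before any stock: wait for the next arrival.
                t_dem = max(t_dem, arrivals[i])
        return storage_times
    ```

    Because demand arrives slightly slower than supply in this sketch, the shelf occasionally backs up and the storage-time distribution grows a heavy tail, which is exactly the feature the article argues matters for small food-safety risks.
    
    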

  11. Using the realist perspective to link theory from qualitative evidence synthesis to quantitative studies: Broadening the matrix approach.

    Science.gov (United States)

    van Grootel, Leonie; van Wesel, Floryt; O'Mara-Eves, Alison; Thomas, James; Hox, Joop; Boeije, Hennie

    2017-09-01

    This study describes an approach for the use of a specific type of qualitative evidence synthesis in the matrix approach, a mixed studies reviewing method. The matrix approach compares quantitative and qualitative data on the review level by juxtaposing concrete recommendations from the qualitative evidence synthesis against interventions in primary quantitative studies. However, types of qualitative evidence syntheses that are associated with theory building generate theoretical models instead of recommendations. Therefore, the output from these types of qualitative evidence syntheses cannot directly be used for the matrix approach but requires transformation. This approach allows for the transformation of these types of output. The approach enables the inference of moderation effects instead of direct effects from the theoretical model developed in a qualitative evidence synthesis. Recommendations for practice are formulated on the basis of interactional relations inferred from the qualitative evidence synthesis. In doing so, we apply the realist perspective to model variables from the qualitative evidence synthesis according to the context-mechanism-outcome configuration. A worked example shows that it is possible to identify recommendations from a theory-building qualitative evidence synthesis using the realist perspective. We created subsets of the interventions from primary quantitative studies based on whether they matched the recommendations or not and compared the weighted mean effect sizes of the subsets. The comparison shows a slight difference in effect sizes between the groups of studies. The study concludes that the approach enhances the applicability of the matrix approach. Copyright © 2017 John Wiley & Sons, Ltd.
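    The subset comparison at the end of the abstract rests on pooling effect sizes within each subset. A standard way to do this, sketched below with hypothetical effect sizes and variances (not the study's data), is the inverse-variance weighted mean:

    ```python
    def weighted_mean_effect_size(effects, variances):
        """Inverse-variance weighted mean effect size: studies with
        smaller variance (more precision) receive larger weights."""
        weights = [1.0 / v for v in variances]
        return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

    # Hypothetical subsets: primary studies whose interventions match the
    # recommendations from the qualitative evidence synthesis vs. those
    # that do not.
    matching = weighted_mean_effect_size([0.42, 0.35, 0.50], [0.02, 0.05, 0.04])
    non_matching = weighted_mean_effect_size([0.30, 0.28], [0.03, 0.06])
    difference = matching - non_matching
    ```

    Comparing the two pooled values is the quantitative half of the matrix approach; the qualitative evidence synthesis supplies the rule that assigns studies to the subsets.
    
    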

  12. Quantitative and Functional Requirements for Bioluminescent Cancer Models.

    Science.gov (United States)

    Feys, Lynn; Descamps, Benedicte; Vanhove, Christian; Vermeulen, Stefan; Vandesompele, J O; Vanderheyden, Katrien; Messens, Kathy; Bracke, Marc; De Wever, Olivier

    2016-01-01

    Bioluminescent cancer models are widely used but detailed quantification of the luciferase signal and functional comparison with a non-transfected control cell line are generally lacking. In the present study, we provide quantitative and functional tests for luciferase-transfected cells. We quantified the luciferase expression in BLM and HCT8/E11 transfected cancer cells, and examined the effect of long-term luciferin exposure. The present study also investigated functional differences between parental and transfected cancer cells. Our results showed that quantification of different single-cell-derived populations are superior with droplet digital polymerase chain reaction. Quantification of luciferase protein level and luciferase bioluminescent activity is only useful when there is a significant difference in copy number. Continuous exposure of cell cultures to luciferin leads to inhibitory effects on mitochondrial activity, cell growth and bioluminescence. These inhibitory effects correlate with luciferase copy number. Cell culture and mouse xenograft assays showed no significant functional differences between luciferase-transfected and parental cells. Luciferase-transfected cells should be validated by quantitative and functional assays before starting large-scale experiments. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  13. Comparison between theoretical and experimental results of the 1/6 scale concrete model under internal pressure

    International Nuclear Information System (INIS)

    Riviere, J.; Barbe, B.; Millard, A.; Koundy, V.

    1988-01-01

    The prediction of the behavior of the 1/6 scale concrete model under internal pressure was carried out by means of two computations, the first with an infinite soil rigidity, the second with a soil rigidity equal to 61.26 MPa/m. These two computations, which assumed a perfectly axisymmetric structure, gave theoretical and experimental results in good agreement, except for the raft, whose theoretical uplift was three times higher than the experimental one. The main conclusions of this study are as follows: the soil stiffness has no influence on the ultimate behavior of the model; the dead concrete rigidity decreases the raft uplift in an important way; the model is destroyed when the hoop stress reaches the ultimate strength

  14. Integration of CFD codes and advanced combustion models for quantitative burnout determination

    Energy Technology Data Exchange (ETDEWEB)

    Javier Pallares; Inmaculada Arauzo; Alan Williams [University of Zaragoza, Zaragoza (Spain). Centre of Research for Energy Resources and Consumption (CIRCE)

    2007-10-15

    CFD codes and advanced kinetics combustion models are extensively used to predict coal burnout in large utility boilers. Modelling approaches based on CFD codes can accurately solve the fluid dynamics equations involved in the problem, but this is usually achieved by including simple combustion models. On the other hand, advanced kinetics combustion models can give a detailed description of the coal combustion behaviour by using a simplified description of the flow field, usually obtained from a zone-method approach. Both approximations describe general trends in coal burnout correctly, but fail to predict quantitative values. In this paper a new methodology which takes advantage of both approximations is described. In the first instance, CFD solutions were obtained for the combustion conditions in the furnace of the Lamarmora power plant (ASM Brescia, Italy) for a number of different conditions and for three coals. Then, these furnace conditions were used as inputs for a more detailed chemical combustion model to predict coal burnout. In this, devolatilization was modelled using a commercial macromolecular network pyrolysis model (FG-DVC). For char oxidation an intrinsic reactivity approach including thermal annealing, ash inhibition and maceral effects was used. Results from the simulations were compared against plant experimental values, showing a reasonable agreement in trends and quantitative values. 28 refs., 4 figs., 4 tabs.

  15. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

    Directory of Open Access Journals (Sweden)

    Cobbs Gary

    2012-08-01

    Full Text Available Abstract Background Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Results Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature.

  16. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    Science.gov (United States)

    Cobbs, Gary

    2012-08-16

    Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. 
They also give better estimates of
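
The simultaneous curve-fitting scheme described above, shared rate constants across curves with a per-curve initial concentration, can be sketched for the simple exponential-phase case. This toy uses a constant-efficiency model rather than the equilibrium models of the paper, and all data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

cycles = np.arange(1, 26)
rng = np.random.default_rng(1)
true_eff = 0.9            # amplification efficiency (1.0 = perfect doubling)
true_n0 = [1e-6, 1e-8]    # two dilutions of the same sample

# Simulated exponential-phase signals with multiplicative noise.
curves = [n0 * (1.0 + true_eff) ** cycles * rng.lognormal(0.0, 0.02, cycles.size)
          for n0 in true_n0]

def residuals(p):
    # p = [ln(1 + eff), ln n0_a, ln n0_b]: one shared efficiency for all
    # curves, one initial amount per curve -- the joint fit described above.
    slope, ln_a, ln_b = p
    return np.concatenate([np.log(c) - (ln_n0 + cycles * slope)
                           for ln_n0, c in zip((ln_a, ln_b), curves)])

fit = least_squares(residuals, x0=[0.5, -10.0, -14.0])
eff_hat = float(np.exp(fit.x[0]) - 1.0)   # recovered shared efficiency
```

Fitting both curves jointly pins down the shared efficiency, so the per-curve initial amounts (and hence their ratio) are estimated more reliably than from either curve alone.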

  17. Predictive value of EEG in postanoxic encephalopathy: A quantitative model-based approach.

    Science.gov (United States)

    Efthymiou, Evdokia; Renzel, Roland; Baumann, Christian R; Poryazova, Rositsa; Imbach, Lukas L

    2017-10-01

    The majority of comatose patients after cardiac arrest do not regain consciousness due to severe postanoxic encephalopathy. Early and accurate outcome prediction is therefore essential in determining further therapeutic interventions. The electroencephalogram (EEG) is a standardized and commonly available tool used to estimate prognosis in postanoxic patients. The identification of pathological EEG patterns with poor prognosis relies, however, primarily on visual EEG scoring by experts. We introduced a model-based approach of EEG analysis (state space model) that allows for an objective and quantitative description of spectral EEG variability. We retrospectively analyzed standard EEG recordings in 83 comatose patients after cardiac arrest between 2005 and 2013 in the intensive care unit of the University Hospital Zürich. Neurological outcome was assessed one month after cardiac arrest using the Cerebral Performance Category. For a dynamic and quantitative EEG analysis, we implemented a model-based approach (state space analysis) to quantify EEG background variability independently from visual scoring of EEG epochs. Spectral variability was compared between groups and correlated with clinical outcome parameters and visual EEG patterns. Quantitative assessment of spectral EEG variability (state space velocity) revealed significant differences between patients with poor and good outcome after cardiac arrest: lower mean velocity in temporal electrodes (T4 and T5) was significantly associated with poor prognostic outcome, as were pathological EEG patterns such as generalized periodic discharges. Model-based EEG analysis (state space analysis) thus provides a novel, complementary marker for prognosis in postanoxic encephalopathy. Copyright © 2017 Elsevier B.V. All rights reserved.
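
The "state space velocity" idea, how fast the spectral content of the EEG moves from epoch to epoch, can be sketched generically. This is an illustration of the concept, not the authors' exact implementation: velocity here is the Euclidean distance between normalized power spectra of consecutive epochs.

```python
import numpy as np

def state_space_velocity(eeg, fs, epoch_s=1.0):
    """Mean 'velocity' through spectral state space: the Euclidean distance
    between normalized power spectra of consecutive epochs.  A sketch of
    the idea, not the paper's exact state space model."""
    n = int(fs * epoch_s)
    spectra = []
    for i in range(0, len(eeg) - n + 1, n):
        p = np.abs(np.fft.rfft(eeg[i:i + n] * np.hanning(n))) ** 2
        spectra.append(p / p.sum())               # normalize each spectrum
    diffs = np.diff(np.array(spectra), axis=0)    # steps between epochs
    return float(np.mean(np.linalg.norm(diffs, axis=1)))
```

A monotonous (low-variability) recording yields near-zero velocity, while a recording whose spectral content changes between epochs yields a large one, mirroring the low-velocity/poor-outcome association reported above.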

  18. A novel game theoretic approach for modeling competitive information diffusion in social networks with heterogeneous nodes

    Science.gov (United States)

    Agha Mohammad Ali Kermani, Mehrdad; Fatemi Ardestani, Seyed Farshad; Aliahmadi, Alireza; Barzinpour, Farnaz

    2017-01-01

    Influence maximization deals with identifying the most influential nodes in a social network given an influence model. In this paper, a game theoretic framework is developed that models a competitive influence maximization problem. A novel competitive influence model is additionally proposed that incorporates user heterogeneity, message content, and network structure. The proposed game-theoretic model is solved for Nash equilibrium on a real-world dataset. It is shown that none of the well-known strategies is stable: at least one player always has an incentive to deviate from the proposed strategy. Moreover, any player who deviates from the Nash equilibrium strategy reduces their own payoff. Contrary to previous works, our results demonstrate that graph topology, as well as the nodes' sociability and initial tendency measures, affects the determination of the influential nodes in the network.
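
The equilibrium check underlying such analyses, verifying that no player gains by a unilateral deviation, is easy to state in code. Below is a generic pure-strategy check on a toy 2x2 bimatrix game (a prisoner's dilemma), not the paper's influence game.

```python
import numpy as np

def is_nash(payoff_a, payoff_b, i, j):
    """True if the pure-strategy profile (i, j) is a Nash equilibrium of
    the bimatrix game (payoff_a, payoff_b): neither player can gain by a
    unilateral deviation."""
    return bool(payoff_a[:, j].max() <= payoff_a[i, j]
                and payoff_b[i, :].max() <= payoff_b[i, j])

# Toy prisoner's dilemma (strategy 0 = cooperate, 1 = defect).
A = np.array([[3, 0], [5, 1]])   # row player's payoffs
B = A.T                          # column player's payoffs (symmetric game)
```

Here (defect, defect) passes the check while (cooperate, cooperate) does not, which is the same deviation argument the paper applies to its influence strategies.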

  19. Accuracy Analysis of a Box-wing Theoretical SRP Model

    Science.gov (United States)

    Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui

    2016-07-01

    For the Beidou satellite navigation system (BDS), a high-accuracy solar radiation pressure (SRP) model is necessary for precise applications, especially with the future establishment of the global BDS, and the accuracy of the BDS broadcast ephemeris also needs to be improved. We therefore established a box-wing theoretical SRP model with a fine structural description, including the conical shadow factors of the Earth and Moon. We verified this SRP model with the GPS Block IIF satellites, using data from satellites PRN 1, 24, 25 and 27. The results show that the physical SRP model yields higher accuracy for precise orbit determination (POD) and orbit prediction of the GPS IIF satellites than the Bern empirical model; the 3D RMS of the orbit is about 20 cm. The POD accuracy of the two models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day and 7-day orbit predictions; the longer the prediction arc, the more significant the improvement. The orbit prediction accuracies with the physical SRP model for 1-day, 3-day and 7-day arcs are 0.4 m, 2.0 m and 10.0 m respectively, against 0.9 m, 5.5 m and 30 m with the Bern empirical model. We then applied this approach to the BDS, derived an SRP model for the Beidou satellites, and tested the model with one month of Beidou data. Initial results show the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that of our empirical force model, which estimates forces only in the along-track and across-track directions plus a y-bias, but the orbit overlap and SLR observation evaluations show some improvement; the remaining empirical force is reduced significantly for the present Beidou constellation.
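
At its core, a box-wing SRP model sums, over each flat surface of the bus and the solar panels, an acceleration proportional to the solar flux, the illuminated area-to-mass ratio, and the surface's optical properties, scaled by an eclipse shadow factor. A minimal single-plate sketch with nominal values (not BDS parameters):

```python
SOLAR_PRESSURE = 4.56e-6   # solar radiation pressure at 1 AU, N/m^2

def plate_srp_accel(area_m2, mass_kg, reflectivity, shadow_factor=1.0):
    """SRP acceleration magnitude (m/s^2) for one Sun-facing flat plate.
    shadow_factor is 1 in full sunlight, 0 in umbra, and fractional in
    penumbra (the conical Earth/Moon shadow mentioned above).  A full
    box-wing model sums such terms over all faces and panels."""
    return shadow_factor * SOLAR_PRESSURE * (1.0 + reflectivity) * area_m2 / mass_kg

# Illustrative GNSS-like values: 20 m^2 cross-section, 1000 kg, reflectivity 0.2.
a_sun = plate_srp_accel(20.0, 1000.0, 0.2)
```

The resulting acceleration is of order 1e-7 m/s²; integrated over a multi-day prediction arc this accumulates to meters of orbit error, which is why SRP modelling dominates the prediction accuracies quoted above.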

  20. The relationship between structural and functional connectivity: graph theoretical analysis of an EEG neural mass model

    NARCIS (Netherlands)

    Ponten, S.C.; Daffertshofer, A.; Hillebrand, A.; Stam, C.J.

    2010-01-01

    We investigated the relationship between structural network properties and both synchronization strength and functional characteristics in a combined neural mass and graph theoretical model of the electroencephalogram (EEG). Thirty-two neural mass models (NMMs), each representing the lumped activity

  1. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

    Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates of invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/

  2. Theoretical model for ultracold molecule formation via adaptive feedback control

    International Nuclear Information System (INIS)

    Poschinger, Ulrich; Salzmann, Wenzel; Wester, Roland; Weidemueller, Matthias; Koch, Christiane P; Kosloff, Ronnie

    2006-01-01

    We theoretically investigate pump-dump photoassociation of ultracold molecules with amplitude- and phase-modulated femtosecond laser pulses. For this purpose, a perturbative model for light-matter interaction is developed and combined with a genetic algorithm for adaptive feedback control of the laser pulse shapes. The model is applied to the formation of ⁸⁵Rb₂ molecules in a magneto-optical trap. We find that optimized pulse shapes may maximize the formation of ground state molecules in a specific vibrational state at a pump-dump delay time for which unshaped pulses lead to a minimum of the formation rate. Compared to the maximum formation rate obtained for unshaped pulses at the optimum pump-dump delay, the optimized pulses lead to a significant improvement of about 40% for the target level population. Since our model yields the spectral amplitudes and phases of the optimized pulses, the results are directly applicable in pulse shaping experiments.
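
The adaptive feedback loop, evaluate a pulse shape, select the best, recombine and mutate, can be sketched with a minimal genetic algorithm. The fitness function below is a toy stand-in that rewards spectral phase alignment, not the perturbative light-matter model of the paper; population size, mutation width and generation count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(phases):
    # Toy stand-in for the simulated molecule-formation yield: rewards
    # spectral components that interfere constructively (equal phases).
    return float(abs(np.exp(1j * phases).sum()) / phases.size)

def evolve(pop_size=40, n_genes=16, generations=60, sigma=0.3):
    """Minimal genetic algorithm: truncation selection, one-point
    crossover and Gaussian mutation, keeping the best half (elitism)."""
    pop = rng.uniform(0.0, 2.0 * np.pi, (pop_size, n_genes))
    init_best = max(fitness(ind) for ind in pop)
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, n_genes))              # crossover point
            children.append(np.concatenate([a[:cut], b[cut:]])
                            + rng.normal(0.0, sigma, n_genes))  # mutation
        pop = np.vstack([parents] + children)
    return init_best, max(fitness(ind) for ind in pop)
```

Because the best individuals are always retained, the best fitness is non-decreasing over generations, which is the property the feedback-control experiments rely on.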

  3. Quantitation in planar renal scintigraphy: which μ value should be used?

    International Nuclear Information System (INIS)

    Hindie, E.; Jeanguillaume, C.; Galle, P.; Prigent, A.

    1999-01-01

    The attenuation coefficient value μ used by different authors for quantitation in planar renal scintigraphy varies greatly, from the theoretical value of 0.153 cm⁻¹ (appropriate for scatter-free data) down to 0.099 cm⁻¹ (empirical value assumed to compensate for both scatter and attenuation). For a 6-cm-deep kidney, such variations introduce up to 30% differences in absolute measurement of kidney activity. Using technetium-99m phantom studies, we determined the μ values that would yield accurate kidney activity quantitation for different energy windows corresponding to different amounts of scatter, and when using different image analysis approaches similar to those used in renal quantitation. With the 20% energy window, it was found that the μ value was strongly dependent on the size of the region of interest (ROI) and on whether background subtraction was performed: the μ value thus varied from 0.119 cm⁻¹ (loose ROI, no background subtraction) to 0.150 cm⁻¹ (kidney ROI and background subtraction). When using data from an energy window that could be considered scatter-free, the μ value became almost independent of the image analysis scheme. It is concluded that: (1) when performing background subtraction, which implicitly reduces the effect of scatter, the μ value to be used for accurate quantitation is close to the theoretical μ value; (2) if the acquired data were initially corrected for scatter, the appropriate μ value would then be the theoretical μ value, whatever the image analysis scheme. (orig.)
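
The size of the discrepancy quoted above follows directly from the exponential attenuation law: dividing measured counts by exp(-μd) recovers the kidney activity, so two μ values differing by 0.054 cm⁻¹ disagree by a factor exp(0.054 d). A quick check for a 6-cm-deep kidney (the factor is about 1.38, i.e. roughly 30% relative to the larger estimate):

```python
import math

def recovered_activity(counts, depth_cm, mu_per_cm):
    """Attenuation-corrected activity estimate from planar counts:
    divide by the transmission factor exp(-mu * d)."""
    return counts / math.exp(-mu_per_cm * depth_cm)

counts = 1000.0                                 # arbitrary measured counts
high = recovered_activity(counts, 6.0, 0.153)   # scatter-free coefficient
low = recovered_activity(counts, 6.0, 0.099)    # empirical coefficient
ratio = high / low                              # ~1.38 for a 6-cm depth
```

The same arithmetic shows why the discrepancy grows with kidney depth: at 3 cm the factor is only exp(0.162) ≈ 1.18.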

  4. Computational complexity a quantitative perspective

    CERN Document Server

    Zimand, Marius

    2004-01-01

    There has been a common perception that computational complexity is a theory of "bad news" because its most typical results assert that various real-world and innocent-looking tasks are infeasible. In fact, "bad news" is a relative term, and, indeed, in some situations (e.g., in cryptography), we want an adversary to not be able to perform a certain task. However, a "bad news" result does not automatically become useful in such a scenario. For this to happen, its hardness features have to be quantitatively evaluated and shown to manifest extensively. The book undertakes a quantitative analysis of some of the major results in complexity that regard either classes of problems or individual concrete problems. The sizes of some important classes are studied using resource-bounded topological and measure-theoretical tools. In the case of individual problems, the book studies relevant quantitative attributes such as approximation properties or the number of hard inputs at each length. One chapter is dedicated to abs...

  5. Quantitative modeling of the ionospheric response to geomagnetic activity

    Directory of Open Access Journals (Sweden)

    T. J. Fuller-Rowell

    Full Text Available A physical model of the coupled thermosphere and ionosphere has been used to determine the accuracy of model predictions of the ionospheric response to geomagnetic activity, and assess our understanding of the physical processes. The physical model is driven by empirical descriptions of the high-latitude electric field and auroral precipitation, as measures of the strength of the magnetospheric sources of energy and momentum to the upper atmosphere. Both sources are keyed to the time-dependent TIROS/NOAA auroral power index. The output of the model is the departure of the ionospheric F region from the normal climatological mean. A 50-day interval towards the end of 1997 has been simulated with the model for two cases. The first simulation uses only the electric fields and auroral forcing from the empirical models, and the second has an additional source of random electric field variability. In both cases, output from the physical model is compared with F-region data from ionosonde stations. Quantitative model/data comparisons have been performed to move beyond the conventional "visual" scientific assessment, in order to determine the value of the predictions for operational use. For this study, the ionosphere at two ionosonde stations has been studied in depth, one each from the northern and southern mid-latitudes. The model clearly captures the seasonal dependence in the ionospheric response to geomagnetic activity at mid-latitude, reproducing the tendency for decreased ion density in the summer hemisphere and increased densities in winter. In contrast to the "visual" success of the model, the detailed quantitative comparisons, which are necessary for space weather applications, are less impressive. The accuracy, or value, of the model has been quantified by evaluating the daily standard deviation, the root-mean-square error, and the correlation coefficient between the data and model predictions. 
The modeled quiet-time variability, or standard

  6. Quantitative modeling of the ionospheric response to geomagnetic activity

    Directory of Open Access Journals (Sweden)

    T. J. Fuller-Rowell

    2000-07-01

    Full Text Available A physical model of the coupled thermosphere and ionosphere has been used to determine the accuracy of model predictions of the ionospheric response to geomagnetic activity, and assess our understanding of the physical processes. The physical model is driven by empirical descriptions of the high-latitude electric field and auroral precipitation, as measures of the strength of the magnetospheric sources of energy and momentum to the upper atmosphere. Both sources are keyed to the time-dependent TIROS/NOAA auroral power index. The output of the model is the departure of the ionospheric F region from the normal climatological mean. A 50-day interval towards the end of 1997 has been simulated with the model for two cases. The first simulation uses only the electric fields and auroral forcing from the empirical models, and the second has an additional source of random electric field variability. In both cases, output from the physical model is compared with F-region data from ionosonde stations. Quantitative model/data comparisons have been performed to move beyond the conventional "visual" scientific assessment, in order to determine the value of the predictions for operational use. For this study, the ionosphere at two ionosonde stations has been studied in depth, one each from the northern and southern mid-latitudes. The model clearly captures the seasonal dependence in the ionospheric response to geomagnetic activity at mid-latitude, reproducing the tendency for decreased ion density in the summer hemisphere and increased densities in winter. In contrast to the "visual" success of the model, the detailed quantitative comparisons, which are necessary for space weather applications, are less impressive. The accuracy, or value, of the model has been quantified by evaluating the daily standard deviation, the root-mean-square error, and the correlation coefficient between the data and model predictions. 
The modeled quiet-time variability, or standard
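
The quantitative scores used above to measure model value, standard deviation, root-mean-square error, and correlation coefficient, are simple to compute for any pair of model and observation time series; a generic sketch:

```python
import numpy as np

def skill_scores(model, obs):
    """Skill scores of a model series against observations: standard
    deviation of the model, RMSE, and Pearson correlation -- the three
    quantities used in the study above."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    std = float(np.std(model))
    rmse = float(np.sqrt(np.mean((model - obs) ** 2)))
    corr = float(np.corrcoef(model, obs)[0, 1])
    return std, rmse, corr
```

Note that correlation and RMSE answer different questions: a model with a constant bias keeps perfect correlation while its RMSE equals the bias, which is why the study reports both.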

  7. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  8. Experimental, computational and theoretical studies of δ′ phase coarsening in Al–Li alloys

    International Nuclear Information System (INIS)

    Pletcher, B.A.; Wang, K.G.; Glicksman, M.E.

    2012-01-01

    Experimental characterization of microstructure evolution in three binary Al–Li alloys provides critical tests of both diffusion screening theory and multiparticle diffusion simulations, which predict late-stage phase-coarsening kinetics. Particle size distributions, growth kinetics and maximum particle sizes obtained using quantitative, centered dark-field transmission electron microscopy are compared quantitatively with theoretical and computational predictions. We also demonstrate the dependence on δ′ precipitate volume fraction of the rate constant for coarsening and the microstructure’s maximum particle size, both of which remained undetermined for this alloy system for nearly a half century. Our experiments show quantitatively that the diffusion-screening theoretical description of phase coarsening yields reasonable kinetic predictions, and that useful simulations of microstructure evolution are obtained via multiparticle diffusion. The tested theory and simulation method will provide useful tools for future design of two-phase alloys for elevated temperature applications.

  9. Patients’ Acceptance of Smartphone Health Technology for Chronic Disease Management: A Theoretical Model and Empirical Test

    Science.gov (United States)

    Dou, Kaili; Yu, Ping; Liu, Fang; Guan, YingPing; Li, Zhenye; Ji, Yumeng; Du, Ningkai; Lu, Xudong; Duan, Huilong

    2017-01-01

    Background Chronic disease patients often face multiple challenges from difficult comorbidities. Smartphone health technology can be used to help them manage their conditions only if they accept and use the technology. Objective The aim of this study was to develop and test a theoretical model to predict and explain the factors influencing patients’ acceptance of smartphone health technology for chronic disease management. Methods Multiple theories and factors that may influence patients’ acceptance of smartphone health technology have been reviewed. A hybrid theoretical model was built based on the technology acceptance model, dual-factor model, health belief model, and the factors identified from interviews that might influence patients’ acceptance of smartphone health technology for chronic disease management. Data were collected from patient questionnaire surveys and computer log records about 157 hypertensive patients’ actual use of a smartphone health app. The partial least square method was used to test the theoretical model. Results The model accounted for .412 of the variance in patients’ intention to adopt the smartphone health technology. Intention to use accounted for .111 of the variance in actual use and had a significant weak relationship with the latter. Perceived ease of use was affected by patients’ smartphone usage experience, relationship with doctor, and self-efficacy. Although without a significant effect on intention to use, perceived ease of use had a significant positive influence on perceived usefulness. Relationship with doctor and perceived health threat had significant positive effects on perceived usefulness, countering the negative influence of resistance to change. Perceived usefulness, perceived health threat, and resistance to change significantly predicted patients’ intentions to use the technology. Age and gender had no significant influence on patients’ acceptance of smartphone technology. The study also

  10. Quantitative computational models of molecular self-assembly in systems biology.

    Science.gov (United States)

    Thomas, Marcus; Schwartz, Russell

    2017-05-23

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.
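
The simplest quantitative self-assembly model is mass-action dimerization, 2M ⇌ D, whose rate equations already show the concentration-dependent relaxation behavior that complicates larger assembly models. A sketch with illustrative (not measured) rate constants:

```python
from scipy.integrate import solve_ivp

def dimerization(t, y, k_on=1.0, k_off=0.1):
    """Mass-action kinetics for 2 M <-> D.  y = [monomer, dimer]."""
    m, d = y
    rate = k_on * m * m - k_off * d   # net forward assembly rate
    return [-2.0 * rate, rate]        # two monomers consumed per dimer

# Start from pure monomer at unit concentration and integrate to equilibrium.
sol = solve_ivp(dimerization, (0.0, 50.0), [1.0, 0.0], rtol=1e-8)
m_eq, d_eq = sol.y[:, -1]
```

At equilibrium the concentrations satisfy k_on m² = k_off d together with mass conservation m + 2d = 1, giving m = 0.2 and d = 0.4 for these constants; realistic assembly networks chain many such reactions, which is where the stiffness and combinatorial challenges discussed above arise.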

  11. Statistical analysis of probabilistic models of software product lines with quantitative constraints

    DEFF Research Database (Denmark)

    Beek, M.H. ter; Legay, A.; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra ... of certain behaviour to the expected average cost of products. This is supported by a Maude implementation of QFLan, integrated with the SMT solver Z3 and the distributed statistical model checker MultiVeStA. Our approach is illustrated with a bikes product line case study.

  12. Theoretical modelling of physiologically stretched vessel in magnetisable stent assisted magnetic drug targeting application

    International Nuclear Information System (INIS)

    Mardinoglu, Adil; Cregg, P.J.; Murphy, Kieran; Curtin, Maurice; Prina-Mello, Adriele

    2011-01-01

    The magnetisable stent assisted magnetic targeted drug delivery system in a physiologically stretched vessel is considered theoretically. The changes in the mechanical behaviour of the vessel are analysed under the influence of mechanical forces generated by blood pressure. In this 2D mathematical model a ferromagnetic, coiled wire stent is implanted to aid collection of magnetic drug carrier particles in an elastic tube, which has similar mechanical properties to the blood vessel. A cyclic mechanical force is applied to the elastic tube to mimic the mechanical stress and strain of both the stent and vessel while in the body due to pulsatile blood circulation. The magnetic dipole-dipole and hydrodynamic interactions for multiple particles are included, and agglomeration of particles is also modelled. The resulting collection efficiency of the mathematical model shows that the system performance can decrease by as much as 10% due to the effects of the pulsatile blood circulation. Research highlights: theoretical modelling of magnetic drug targeting in a physiologically stretched stent-vessel system; a cyclic mechanical force is applied to mimic the mechanical stress and strain of both stent and vessel; the magnetic dipole-dipole and hydrodynamic interactions for multiple particles are modelled; the collection efficiency is calculated for different physiological blood flows and magnetic field strengths.

  13. Quantitative (real-time) PCR

    International Nuclear Information System (INIS)

    Denman, S.E.; McSweeney, C.S.

    2005-01-01

    Many nucleic acid-based probe and PCR assays have been developed for the detection and tracking of specific microbes within the rumen ecosystem. Conventional PCR assays detect PCR products at the end stage of each PCR reaction, where exponential amplification is no longer being achieved. This approach can result in different end product (amplicon) quantities being generated. In contrast, using quantitative, or real-time, PCR, quantification of the amplicon is performed not at the end of the reaction but rather during exponential amplification, where theoretically each cycle results in a doubling of product. For real-time PCR, the cycle at which fluorescence is deemed detectable above the background during the exponential phase is termed the cycle threshold (Ct). The Ct values obtained are then used for quantitation, which will be discussed later
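
The Ct determination described above can be sketched as a threshold crossing with linear interpolation between cycles. With perfect doubling, a 10-fold dilution shifts Ct by log2(10) ≈ 3.32 cycles, which is the basis of quantitation; the curves below are idealized, noise-free illustrations.

```python
import numpy as np

def cycle_threshold(fluorescence, threshold):
    """First (interpolated) cycle at which fluorescence crosses `threshold`;
    None if it never does.  A common way to extract Ct from a qPCR curve."""
    f = np.asarray(fluorescence, float)
    above = np.nonzero(f >= threshold)[0]
    if above.size == 0:
        return None
    i = int(above[0])
    if i == 0:
        return 0.0
    # Linear interpolation between the cycles bracketing the crossing.
    return (i - 1) + (threshold - f[i - 1]) / (f[i] - f[i - 1])

# Perfect doubling from two starting amounts differing 10-fold:
cycles = np.arange(40)
ct_a = cycle_threshold(1e-6 * 2.0 ** cycles, 1e-2)
ct_b = cycle_threshold(1e-7 * 2.0 ** cycles, 1e-2)
```

In practice the efficiency is below 1 per cycle, so calibration against a dilution series of known concentrations replaces the ideal log2 spacing.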

  14. Quantitative modelling and analysis of a Chinese smart grid: a stochastic model checking case study

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    2014-01-01

    Cyber-physical systems integrate information and communication technology with the physical elements of a system, mainly for monitoring and controlling purposes. The conversion of the traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues that require novel methods and applications. One of the important issues in this context is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese smart grid implementation as a case study and address the verification problem for performance and energy consumption. We employ a stochastic model checking approach and present our modelling and analysis study using the PRISM model checker.

  15. A Quantitative Reasoning Approach to Algebra Using Inquiry-Based Learning

    Directory of Open Access Journals (Sweden)

    Victor I. Piercey

    2017-07-01

    Full Text Available In this paper, I share a hybrid quantitative reasoning/algebra two-course sequence that challenges the common assumption that quantitative literacy and reasoning are less rigorous mathematics alternatives to algebra and illustrates that a quantitative reasoning framework can be used to teach traditional algebra. The presentation is made in two parts. In the first part, which is somewhat philosophical and theoretical, I explain my personal perspective of what I mean by “algebra” and “doing algebra.” I contend that algebra is a form of communication whose value is precision, which allows us to perform algebraic manipulations in the form of simplification and solving moves. A quantitative reasoning approach to traditional algebraic manipulations rests on intentional and purposeful use of simplification and solving moves within contextual situations. In part 2, I describe a 6-week instructional module intended for undergraduate business students that was delivered to students who had placed into beginning algebra. The perspective described in part 1 heavily informed the design of this module. The course materials, which involve the use of Excel in multiple authentic contexts, are built around the use of inquiry-based learning. Upon completion of this module, the percentage of students who successfully complete model problems in an assessment is in the same range as surveyed students in precalculus and calculus, approximately two “grade levels” ahead of their placement.

  16. Typological Structure of German Phraseology Outside Germany. Quantitative Parameters

    Directory of Open Access Journals (Sweden)

    О. Ya. Ostapovych

    2016-12-01

    Full Text Available The article deals with a modern theoretical concept in the study of the variation of German phraseology outside Germany. It is based on a synthesis of the theory of equal-rights pluricentrism with new achievements of cognitive linguistics. As a result, the national-state linguistic variant is considered as distinct from regional, normatively non-codified and dialectal variation - a kind of cluster variant idiomatic thesaurus. The hypothesis of the structural isomorphy of the variant phraseology with common German phraseology has been empirically verified; conversely, the hypothesis of a quantitative predominance in Austrian phraseology of the structural model Adj+Sub under Slavic linguistic influence has been falsified.

  17. Model of twelve properties of a set of organic solvents with graph-theoretical and/or experimental parameters.

    Science.gov (United States)

    Pogliani, Lionello

    2010-01-30

    Twelve properties of a highly heterogeneous class of organic solvents have been modeled with a graph-theoretical molecular connectivity modified (MC) method, which makes it possible to encode the core electrons and the hydrogen atoms. The graph-theoretical method uses the concepts of simple, general, and complete graphs, where the last type of graph is used to encode the core electrons. The hydrogen atoms have been encoded with the aid of a graph-theoretical perturbation parameter, which contributes to the definition of the valence delta, delta(v), a key parameter in molecular connectivity studies. The model of the twelve properties, done with a stepwise search algorithm, is always satisfactory, and it makes it possible to check the influence of the hydrogen content of the solvent molecules on the choice of the type of descriptor. A similar argument holds for the influence of the halogen atoms on the type of core-electron representation. In some cases the molar mass and, to a lesser extent, special "ad hoc" parameters have been used to improve the model. A very good model of the surface tension could be obtained with the aid of five experimental parameters. A mixed model method based on experimental parameters plus molecular connectivity indices managed, instead, to consistently improve the model quality of five properties. Worth underlining is the importance of the boiling point temperatures as descriptors in these last two model methodologies. Copyright 2009 Wiley Periodicals, Inc.
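Molecular connectivity descriptors of this family are easy to compute on a molecular graph. As a hedged sketch, the code below evaluates only the classical first-order Randic connectivity index on a hydrogen-suppressed graph of n-butane; the paper's modified method (core-electron encoding via complete graphs and the hydrogen perturbation of the valence delta) builds on, but goes beyond, this simple case.

```python
import math

# Illustrative sketch: the classical first-order Randic connectivity
# index on a hydrogen-suppressed molecular graph. The paper's modified
# MC method extends descriptors of this family; this is the simplest case.

# n-butane as a bond list: C1-C2-C3-C4
bonds = [(0, 1), (1, 2), (2, 3)]

# delta_i = vertex degree (number of bonded heavy-atom neighbours)
degree = {}
for a, b in bonds:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

# 1-chi = sum over bonds of 1 / sqrt(delta_i * delta_j)
chi = sum(1.0 / math.sqrt(degree[a] * degree[b]) for a, b in bonds)
print(round(chi, 4))  # → 1.9142
```

For n-butane the degrees are (1, 2, 2, 1), giving 2/sqrt(2) + 1/2 = 1.9142, the textbook value for this index.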

  18. Theoretical model of two-phase drift flow on natural circulation

    International Nuclear Information System (INIS)

    Yang Xingtuan; Jiang Shengyao; Zhang Youjie

    2002-01-01

    Some expressions, such as sub-cooled boiling in the heating section, condensation near the riser inlet, flashing in the riser, and pressure balance in the steam space, have been theoretically deduced from the physical model of the 5 MW heating reactor test loop. Thermodynamic non-equilibrium has also been considered. An entire drift model with four equations has been formed, which can be applied to natural circulation systems with low pressure and low steam quality. By introducing the concept of a condensation layer, the condensation of bubbles in the sub-cooled liquid has been formulated for the first time. The restrictive equations of the steam-space pressure and liquid level have been offered. The equations can be solved by means of an integral method; the final results are then obtained using the Runge-Kutta-Verner method.
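The final integration step mentioned here is a standard explicit Runge-Kutta march. As a hedged sketch (the actual four-equation drift model is far richer), the code below implements the classical 4th-order Runge-Kutta step and applies it to a toy decay ODE whose exact solution lets us verify the result; the Runge-Kutta-Verner scheme is an embedded variant of the same idea.

```python
import math

# Illustrative sketch: a classical 4th-order Runge-Kutta integrator of
# the kind used (in Runge-Kutta-Verner form) to march the balance
# equations, applied here to a toy ODE dy/dt = -k*y with known exact
# solution y(t) = y0 * exp(-k*t).

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

k = 1.5
f = lambda t, y: -k * y

t, y, h = 0.0, 1.0, 0.01
for _ in range(100):          # integrate from t = 0 to t = 1
    y = rk4_step(f, t, y, h)
    t += h

print(y, math.exp(-k))  # numerical vs. exact solution at t = 1
```

With h = 0.01 the global error is far below 1e-6, which is why fixed-step RK4 (or adaptive Verner pairs) is a common workhorse for such systems.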

  19. Rolling force prediction for strip casting using theoretical model and artificial intelligence

    Institute of Scientific and Technical Information of China (English)

    CAO Guang-ming; LI Cheng-gang; ZHOU Guo-ping; LIU Zhen-yu; WU Di; WANG Guo-dong; LIU Xiang-hua

    2010-01-01

    Rolling force for strip casting of 1Cr17 ferritic stainless steel was predicted using a theoretical model and artificial intelligence. The solution zone was divided into two parts by the kiss point position during strip casting. The Navier-Stokes equation of fluid mechanics and the stream function were introduced to analyze the rheological properties of the liquid zone and mushy zone, and to deduce the analytic equation of the unit compression stress distribution. The traditional hot rolling model was still used in the solid zone. A neural network based on a feedforward training algorithm with Bayesian regularization was introduced to build a model for the kiss point position. The results show that 94.67% of the verification data fall within ±7.0%, which indicates that the predictive accuracy of this model is very high.

  20. How Do Trading Firms Upgrade Skills and Technology: A Theoretical Model

    Directory of Open Access Journals (Sweden)

    Mojca Lindic

    2015-12-01

    Full Text Available This paper studies the mechanisms of skill upgrading in trading firms by developing a theoretical model that relates the individual’s incentives for acquiring higher skills to the profit-maximizing behaviour of trading firms. The model shows that only high-ability individuals have incentives for acquiring higher skills, as long as they are compensated with higher wages after entering employment. Furthermore, high-productivity firms have incentives to invest in higher technology, to employ high-skilled labour, and to engage in international trade. The decisions for technology upgrading and skill upgrading coincide with the firm's decisions to start importing and exporting, as the latter requires higher technology and high-skilled labour. The contributions of the paper are twofold: gaining new insights by combining fragments of models of individual and firm behaviour, and broadening the content of the Melitz (2003) model by introducing importers and controlling for skilled and unskilled labour.

  1. Theoretical thermal dosimetry produced by an annular phased array system in CT-based patient models

    International Nuclear Information System (INIS)

    Paulsen, K.D.; Strohbehn, J.W.; Lynch, D.R.

    1984-01-01

    Theoretical calculations for the specific absorption rate (SAR) and the resulting temperature distributions produced by an annular phased array (APA) type system are made. The finite element numerical method is used in the formulation of both the electromagnetic (EM) and the thermal boundary value problems. A number of detailed patient models based on CT-scan data from the pelvic, visceral, and thoracic regions are generated to simulate a variety of tumor locations and surrounding normal tissues. The SAR values from the EM solution are input into the bioheat transfer equation, and steady-state temperature distributions are calculated for a wide variety of blood flow rates. Based on theoretical modeling, the APA shows no preferential heating of superficial over deep-seated tumors. However, in most cases satisfactory thermal profiles (therapeutic volume near 60%) are obtained in all three regions of the human trunk only for tumors with little or no blood flow. Unsatisfactory temperature patterns (therapeutic volume <50%) are found for tumors with moderate to high perfusion rates. These theoretical calculations should aid the clinician in the evaluation of the effectiveness of APA type devices in heating tumors located in the trunk region.
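The SAR-to-temperature step can be illustrated in one dimension. The sketch below (toy numbers, not the paper's finite-element patient models) solves the steady-state Pennes bioheat equation, k*T'' - w_b*(T - T_art) + rho*SAR = 0, on a perfused tissue slab by finite differences with the Thomas tridiagonal algorithm, showing how a uniform SAR deposit maps to a temperature rise limited by perfusion cooling.

```python
# Illustrative 1-D sketch (toy values): steady-state Pennes bioheat
# equation discretised by finite differences, solved with the Thomas
# (tridiagonal) algorithm. Dirichlet boundaries hold both faces at
# arterial temperature.

N, L = 51, 0.05            # grid nodes, slab thickness [m]
dx = L / (N - 1)
k = 0.5                    # tissue thermal conductivity [W/m/K]
wb = 2000.0                # volumetric perfusion term [W/m^3/K]
T_art = 37.0               # arterial blood temperature [degC]
SAR, rho = 50.0, 1000.0    # absorbed power [W/kg], density [kg/m^3]
q = rho * SAR              # volumetric heating [W/m^3]

# Interior-node stencil: a*T[i-1] + b*T[i] + c*T[i+1] = d
a = c = k / dx**2
b = -(2 * k / dx**2 + wb)
d = -(q + wb * T_art)

n = N - 2                  # interior unknowns
rhs = [d - a * T_art] + [d] * (n - 2) + [d - c * T_art]

# Thomas algorithm: forward elimination, then back substitution
cp, dp = [0.0] * n, [0.0] * n
cp[0], dp[0] = c / b, rhs[0] / b
for i in range(1, n):
    m = b - a * cp[i - 1]
    cp[i] = c / m
    dp[i] = (rhs[i] - a * dp[i - 1]) / m
Ti = [0.0] * n
Ti[-1] = dp[-1]
for i in range(n - 2, -1, -1):
    Ti[i] = dp[i] - cp[i] * Ti[i + 1]

T = [T_art] + Ti + [T_art]
print(round(max(T), 2))    # peak tissue temperature [degC]
```

With these toy values the peak sits near 52 degC, well below the perfusion-free limit T_art + q/wb = 62 degC, illustrating why well-perfused tumors are hard to heat.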

  2. Theoretical modelling, analysis and validation of the shaft motion and dynamic forces during rotor–stator contact

    DEFF Research Database (Denmark)

    Lahriri, Said; Santos, Ilmar

    2013-01-01

    and stator. Expressions for the restoring magnetic forces are derived using the Biot-Savart law for uniformly magnetised bar magnets, and the contact forces are derived by use of a compliant contact force model. The theoretical mathematical model is verified with experimental results, and shows good agreement...

  3. Energy transport in ASDEX in relation to theoretical and semi-empirical transport coefficients

    International Nuclear Information System (INIS)

    Gruber, O.; Wunderlich, R.; Lackner, K.; Schneider, W.

    1989-09-01

    A comparison of measurements with theoretically predicted energy transport coefficients has been made for Ohmic and NBI-heated discharges using both analysis and simulation codes. The contribution of strong electrostatic turbulence from the η_i-driven modes to the ion heat conductivity is very successful in explaining the observed response of confinement to density profile changes and is even found to be in good quantitative agreement. Regarding the electron branch, a combination of trapped-electron-driven turbulence and resistive ballooning modes might be a promising model to explain both the correct power and density dependence of the confinement time and the observed radial dependence of the electron heat conductivity. (orig.)

  4. Doing Quantitative Grounded Theory: A theory of trapped travel consumption

    Directory of Open Access Journals (Sweden)

    Mark S. Rosenbaum, Ph.D.

    2008-11-01

    Full Text Available All is data. Grounded theorists employ this sentence in their quest to create original theoretical frameworks. Yet researchers typically interpret the word “data” to mean qualitative data or, more specifically, interview data collected from respondents. This is not to say that qualitative data is deficient; however, grounded theorists may be missing vast opportunities to create pioneering theories from quantitative data. Indeed, Glaser and Strauss (1967) argued that researchers would use qualitative and/or quantitative data to fashion original frameworks and related hypotheses, and Glaser's (2008) recently published book, titled Doing Quantitative Grounded Theory, is an attempt to help researchers understand how to use quantitative data for grounded theory (GT).

  5. Green accounts for sulphur and nitrogen deposition in Sweden. Implementation of a theoretical model in practice

    International Nuclear Information System (INIS)

    Ahlroth, S.

    2001-01-01

    This licentiate thesis tries to bridge the gap between the theoretical and the practical studies in the field of environmental accounting. In the paper, I develop an optimal control theory model for adjusting NDP for the effects of SO2 and NOx emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulfur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner.
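The bookkeeping behind such an adjustment is simple arithmetic once the damage categories are valued. The toy sketch below uses entirely invented figures (not the thesis's data) just to show how correction entries of the kinds listed are netted off conventional NDP.

```python
# Toy arithmetic sketch (all figures invented, not the thesis's data):
# adjusting NDP with correction entries for sulphur and nitrogen
# deposition, as the accounting model does with empirically valued
# damage categories.

ndp = 2000.0  # conventional net domestic product, e.g. billion SEK

# valued damages per category (production losses via dose-response
# functions, welfare losses via contingent valuation, capital wear)
damages = {
    "crop_and_forest_losses": 3.2,
    "recreation_and_welfare": 5.1,
    "health_effects": 2.4,
    "capital_depreciation": 1.8,
}

green_ndp = ndp - sum(damages.values())
print(green_ndp)  # environmentally adjusted NDP
```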

  6. Green accounts for sulphur and nitrogen deposition in Sweden. Implementation of a theoretical model in practice

    Energy Technology Data Exchange (ETDEWEB)

    Ahlroth, S.

    2001-01-01

    This licentiate thesis tries to bridge the gap between the theoretical and the practical studies in the field of environmental accounting. In the paper, I develop an optimal control theory model for adjusting NDP for the effects of SO{sub 2} and NO{sub x} emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulfur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner.

  7. Theoretical calculations of physico-chemical and spectroscopic properties of bioinorganic systems: current limits and perspectives.

    Science.gov (United States)

    Rokob, Tibor András; Srnec, Martin; Rulíšek, Lubomír

    2012-05-21

    In the last decade, we have witnessed substantial progress in the development of quantum chemical methodologies. Simultaneously, robust solvation models and various combined quantum and molecular mechanical (QM/MM) approaches have become an integral part of quantum chemical programs. Along with the steady growth of computer power and, more importantly, the dramatic increase of the computer performance to price ratio, this has led to a situation where computational chemistry, when exercised with the proper amount of diligence and expertise, reproduces, predicts, and complements the experimental data. In this perspective, we review some of the latest achievements in the field of theoretical (quantum) bioinorganic chemistry, concentrating mostly on accurate calculations of the spectroscopic and physico-chemical properties of open-shell bioinorganic systems by wave-function (ab initio) and DFT methods. In our opinion, the one-to-one mapping between the calculated properties and individual molecular structures represents a major advantage of quantum chemical modelling since this type of information is very difficult to obtain experimentally. Once (and only once) the physico-chemical, thermodynamic and spectroscopic properties of complex bioinorganic systems are quantitatively reproduced by theoretical calculations may we consider the outcome of theoretical modelling, such as reaction profiles and the various decompositions of the calculated parameters into individual spatial or physical contributions, to be reliable. In an ideal situation, agreement between theory and experiment may imply that the practical problem at hand, such as the reaction mechanism of the studied metalloprotein, can be considered as essentially solved.

  8. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  9. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and are frequently only available in the form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates of invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270
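The core idea of model invalidation can be conveyed with a toy example. The sketch below is not ADMIT's method (ADMIT uses convex relaxations to obtain guarantees; this version merely grid-searches the parameter box, so it can miss feasible points): a first-order decay hypothesis is declared invalidated only if no parameter in the admissible box reproduces every uncertain measurement interval.

```python
import math

# Toy sketch of the model-invalidation idea (ADMIT itself uses convex
# relaxations with guarantees; this grid search is only illustrative).
# Hypothesis: x(t) = x0 * exp(-p * t). Invalidate if no p in the box
# is consistent with all interval-valued measurements.

def consistent(p, x0, data):
    """Does parameter p reproduce every measurement interval?"""
    return all(lo <= x0 * math.exp(-p * t) <= hi for t, lo, hi in data)

x0 = 1.0
p_box = (0.1, 2.0)               # a priori parameter bounds
data = [(1.0, 0.30, 0.40),       # (time, lower bound, upper bound)
        (2.0, 0.08, 0.16)]

grid = [p_box[0] + i * (p_box[1] - p_box[0]) / 1000 for i in range(1001)]
feasible = [p for p in grid if consistent(p, x0, data)]

if feasible:
    print("not invalidated; feasible p in [%.3f, %.3f]"
          % (min(feasible), max(feasible)))
else:
    print("model invalidated by the data")
```

Here the two intervals overlap for p roughly in [0.92, 1.20], so the hypothesis survives; tightening the second interval to exclude that range would invalidate it.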

  10. Use of Graph-Theoretic Models in Technological Preparation of Assembly Plant

    Directory of Open Access Journals (Sweden)

    Peter Franzevich Yurchik

    2015-05-01

    Full Text Available The article examines the existing ways of describing the structural and technological properties of a product in the process of its assembly and repair. It turns out that the main body of work on the technological preparation of assembly production uses graph-theoretic models of the product. It is shown that, in general, the structural variety of connections and relations on the set of components cannot be adequately described by binary structures such as graphs, networks or trees.

  11. A quantitative theory of solid tumor growth, metabolic rate and vascularization.

    Directory of Open Access Journals (Sweden)

    Alexander B Herman

    Full Text Available The relationships between cellular, structural and dynamical properties of tumors have traditionally been studied separately. Here, we construct a quantitative, predictive theory of solid tumor growth, metabolic rate, vascularization and necrosis that integrates the relationships between these properties. To accomplish this, we develop a comprehensive theory that describes the interface and integration of the tumor vascular network and resource supply with the cardiovascular system of the host. Our theory enables a quantitative understanding of how cells, tissues, and vascular networks act together across multiple scales by building on recent theoretical advances in modeling both healthy vasculature and the detailed processes of angiogenesis and tumor growth. The theory explicitly relates tumor vascularization and growth to metabolic rate, and yields extensive predictions for tumor properties, including growth rates, metabolic rates, degree of necrosis, blood flow rates and vessel sizes. Besides these quantitative predictions, we explain how growth rates depend on capillary density and metabolic rate, and why similar tumors grow slower and occur less frequently in larger animals, shedding light on Peto's paradox. Various implications for potential therapeutic strategies and further research are discussed.
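Growth laws of the metabolic-scaling family the theory builds on are easy to integrate numerically. As a hedged toy illustration (the coefficients a and b and the m^(3/4) supply law below are generic to this model family, not the paper's fitted tumor parameters), resource supply scaling sublinearly with mass while maintenance scales linearly makes growth saturate at an asymptotic mass.

```python
# Toy sketch of a metabolic-scaling growth law (coefficients are
# arbitrary illustrative values, not the paper's): supply ~ m^(3/4),
# maintenance ~ m, so dm/dt = a*m^0.75 - b*m saturates at M = (a/b)^4.

a, b = 1.0, 0.25
M = (a / b) ** 4                 # asymptotic mass = 256

m, dt = 1.0, 0.01
for _ in range(10000):           # forward-Euler integration to t = 100
    m += dt * (a * m ** 0.75 - b * m)

print(round(m, 1), M)            # mass approaches the asymptote
```

The sigmoidal approach to M is the qualitative behaviour the tumor theory refines by coupling the vascular network to the host circulation.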

  12. Hospital nurses' wellbeing at work: a theoretical model.

    Science.gov (United States)

    Utriainen, Kati; Ala-Mursula, Leena; Kyngäs, Helvi

    2015-09-01

    To develop a theoretical model of hospital nurses' wellbeing at work. The concept of wellbeing at work is presented without an exact definition and without considering different contents. A model was developed in a deductive manner and empirical data collected from nurses (n = 233) working in a university hospital. Explorative factor analysis was used. The main concepts were: patients' experience of high-quality care; assistance and support among nurses; nurses' togetherness and cooperation; fluent practical organisation of work; challenging and meaningful work; freedom to express diverse feelings in the work community; well-conducted everyday nursing; status related to the work itself; fair and supportive leadership; opportunities for professional development; fluent communication with other professionals; and being together with other nurses in an informal way. Themes included: collegial relationships; enhancing high-quality patient care; supportive and fair leadership; challenging, meaningful and well organised work; and opportunities for professional development. Object-dependent wellbeing was supported. Managers should focus on strengthening the positive aspect of wellbeing at work, focusing on providing fluently organised work practices, fair and supportive leadership and togetherness while allowing nurses to implement their own ideas and promote the experience of meaningfulness. © 2014 John Wiley & Sons Ltd.

  13. Theoretical modelling of quantum circuit systems

    International Nuclear Information System (INIS)

    Stiffell, Peter Barry

    2002-01-01

    The work in this thesis concentrates on the interactions between circuit systems operating in the quantum regime. The main thrust of this work involves the use of a new model for investigating the way in which different components in such systems behave when coupled together. This is achieved by utilising the matrix representation of quantum mechanics, in conjunction with a number of other theoretical techniques (such as Wigner functions and entanglement entropies). With these tools in place it then becomes possible to investigate and review different quantum circuit systems. These investigations cover systems ranging from simple electromagnetic (em) field oscillators in isolation to coupled SQUID rings in more sophisticated multi-component arrangements. Primarily, we look at the way SQUID rings couple to em fields, and how the ring-field interaction can be mediated by the choice of external flux, Φ_x, applied to the SQUID ring. A lot of interest is focused on the transfer of energy between the system modes. However, we also investigate the statistical properties of the system, including squeezing, entropy and entanglement. Among the phenomena uncovered in this research we note the ability to control coupling in SQUID rings via the external flux, the capacity for entanglement between quantum circuit modes, frequency conversion of photons, flux squeezing and the existence of Schroedinger cat states. (author)

  14. Choice of theoretical model for beam scattering at accelerator output foil for particle energy determination

    International Nuclear Information System (INIS)

    Balagyra, V.S.; Ryabka, P.M.

    1999-01-01

    To measure charged-particle energy, mean-square angles of electron-beam multiple Coulomb scattering at the combined output target of the accelerator were calculated according to seven theoretical models. The Molière method showed the best agreement with experiment.
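One widely used closed form for the quantity these models predict (offered here as background; the record does not say it was among the seven) is the Highland/PDG approximation for the RMS multiple-scattering angle, theta_0 = (13.6 MeV / (beta*c*p)) * z * sqrt(x/X0) * (1 + 0.038*ln(x/X0)). The numbers below are toy values.

```python
import math

# Illustrative sketch: the Highland (PDG) approximation for the RMS
# projected multiple-Coulomb-scattering angle of a charged particle
# traversing a foil. Toy inputs, not the experiment's configuration.

def highland_theta0(p_MeV, beta, z, x_over_X0):
    """RMS projected scattering angle [rad]; p in MeV/c, z in units of e."""
    return (13.6 / (beta * p_MeV)) * z * math.sqrt(x_over_X0) * (
        1 + 0.038 * math.log(x_over_X0))

# ~10 MeV/c electron (beta ~ 1) through 0.1 mm of aluminium (X0 = 88.97 mm)
theta0 = highland_theta0(p_MeV=10.0, beta=1.0, z=1, x_over_X0=0.1 / 88.97)
print(math.degrees(theta0))  # RMS scattering angle in degrees
```

Because theta_0 scales as 1/p for fast particles, measuring the scattering angle behind a foil of known thickness gives the beam momentum, which is the measurement principle the record describes.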

  15. Developing a theoretical framework for complex community-based interventions.

    Science.gov (United States)

    Angeles, Ricardo N; Dolovich, Lisa; Kaczorowski, Janusz; Thabane, Lehana

    2014-01-01

    Applying existing theories to research, in the form of a theoretical framework, is necessary to advance knowledge from what is already known toward the next steps to be taken. This article proposes a guide on how to develop a theoretical framework for complex community-based interventions using the Cardiovascular Health Awareness Program as an example. Developing a theoretical framework starts with identifying the intervention's essential elements. Subsequent steps include the following: (a) identifying and defining the different variables (independent, dependent, mediating/intervening, moderating, and control); (b) postulating mechanisms how the independent variables will lead to the dependent variables; (c) identifying existing theoretical models supporting the theoretical framework under development; (d) scripting the theoretical framework into a figure or sets of statements as a series of hypotheses, if/then logic statements, or a visual model; (e) content and face validation of the theoretical framework; and (f) revising the theoretical framework. In our example, we combined the "diffusion of innovation theory" and the "health belief model" to develop our framework. Using the Cardiovascular Health Awareness Program as the model, we demonstrated a stepwise process of developing a theoretical framework. The challenges encountered are described, and an overview of the strategies employed to overcome these challenges is presented.

  16. Development of a theoretical model for measuring the perceived value of social responsibility of IPEN

    International Nuclear Information System (INIS)

    Mutarelli, Rita de Cassia; Lima, Ana Cecilia de Souza; Sabundjian, Gaiane

    2015-01-01

    Social responsibility has been one of the great discussions in institutional management, and it is an important variable in the strategy and performance of institutions. The Instituto de Pesquisas Energeticas e Nucleares (IPEN) has worked for the development of environmental and social issues, converging mainly to the benefit of the population. The theory that guides the social responsibility practices is always difficult to measure for several reasons. One reason for this difficulty is that social responsibility involves a variety of issues that are converted in rights, obligations and expectations of different audiences that could be internal and external to the organization. In addition, the different understanding of the institutions about social and environmental issues is another source of complexity. Based on the study context including: the topic being researched, the chosen institute and the questions resulting from the research, the aim of this paper is to propose a theoretical model to describe and analyze the social responsibility of IPEN. The main contribution of this study is to develop a model that integrates the dimensions of social responsibility. These dimensions - also called constructs - are composed of indexes and indicators that were previously used in various contexts of empirical research, combined with the theoretical and conceptual review of social responsibility. The construction of the proposed theoretical model was based on the research of various methodologies and various indicators for measuring social responsibility. This model was statistically tested, analyzed, adjusted, and the end result is a consistent model to measure the perceived value of social responsibility of IPEN. This work could also be applied to other institutions. Moreover, it may be improved and become a tool that will serve as a thermometer to measure social and environmental issues, and will support decision making in various management processes.
(author)

  17. Development of a theoretical model for measuring the perceived value of social responsibility of IPEN

    Energy Technology Data Exchange (ETDEWEB)

    Mutarelli, Rita de Cassia; Lima, Ana Cecilia de Souza; Sabundjian, Gaiane, E-mail: rmutarelli@gmail.com, E-mail: aclima@ipen.br, E-mail: gdjian@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Social responsibility has been one of the great discussions in institutional management, and it is an important variable in the strategy and performance of institutions. The Instituto de Pesquisas Energeticas e Nucleares (IPEN) has worked for the development of environmental and social issues, converging mainly to the benefit of the population. The theory that guides the social responsibility practices is always difficult to measure for several reasons. One reason for this difficulty is that social responsibility involves a variety of issues that are converted in rights, obligations and expectations of different audiences that could be internal and external to the organization. In addition, the different understanding of the institutions about social and environmental issues is another source of complexity. Based on the study context including: the topic being researched, the chosen institute and the questions resulting from the research, the aim of this paper is to propose a theoretical model to describe and analyze the social responsibility of IPEN. The main contribution of this study is to develop a model that integrates the dimensions of social responsibility. These dimensions - also called constructs - are composed of indexes and indicators that were previously used in various contexts of empirical research, combined with the theoretical and conceptual review of social responsibility. The construction of the proposed theoretical model was based on the research of various methodologies and various indicators for measuring social responsibility. This model was statistically tested, analyzed, adjusted, and the end result is a consistent model to measure the perceived value of social responsibility of IPEN. This work could also be applied to other institutions. Moreover, it may be improved and become a tool that will serve as a thermometer to measure social and environmental issues, and will support decision making in various management processes.
(author)

  18. Studying Economic Space: Synthesis of Balance and Game-Theoretic Methods of Modelling

    Directory of Open Access Journals (Sweden)

    Natalia Gennadyevna Zakharchenko

    2015-12-01

    Full Text Available The article introduces questions about the development of models used to study economic space. The author proposes a model that combines balance and game-theoretic methods for estimating the system effects of economic agents' interactions in a multi-level economic space. The model is applied to study interactions between economic agents that are spatially heterogeneous within the Russian Far East. In the model, the economic space of a region is considered in a territorial dimension (the first level of decomposing space) and also in territorial and product dimensions (the second level of decomposing space). The paper shows the mechanism of system-effect formation in the economic space of a region. The author estimates system effects, analyses the real allocation of these effects between economic agents and identifies three types of local industrial markets: with zero, positive and negative system effects.

  19. Satellite, climatological, and theoretical inputs for modeling of the diurnal cycle of fire emissions

    Science.gov (United States)

    Hyer, E. J.; Reid, J. S.; Schmidt, C. C.; Giglio, L.; Prins, E.

    2009-12-01

    The diurnal cycle of fire activity is crucial for accurate simulation of atmospheric effects of fire emissions, especially at finer spatial and temporal scales. Estimating diurnal variability in emissions is also a critical problem for construction of emissions estimates from multiple sensors with variable coverage patterns. An optimal diurnal emissions estimate will use as much information as possible from satellite fire observations, compensate known biases in those observations, and use detailed theoretical models of the diurnal cycle to fill in missing information. As part of ongoing improvements to the Fire Location and Monitoring of Burning Emissions (FLAMBE) fire monitoring system, we evaluated several different methods of integrating observations with different temporal sampling. We used geostationary fire detections from WF_ABBA, fire detection data from MODIS, empirical diurnal cycles from TRMM, and simple theoretical diurnal curves based on surface heating. Our experiments integrated these data in different combinations to estimate the diurnal cycles of emissions for each location and time. Hourly emissions estimates derived using these methods were tested using an aerosol transport model. We present results of this comparison, and discuss the implications of our results for the broader problem of multi-sensor data fusion in fire emissions modeling.
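One "simple theoretical diurnal curve based on surface heating" of the kind mentioned can be sketched as a Gaussian burst of fire activity peaking in early afternoon, normalized so the hourly weights redistribute a daily emission total. The peak hour and width below are toy assumptions, not FLAMBE's actual parameterisation.

```python
import math

# Illustrative sketch: a surface-heating-driven diurnal cycle used to
# distribute a daily fire-emission total over hours. Peak hour and
# width are toy assumptions, not FLAMBE's actual curve.

peak_hour, width = 13.5, 2.5        # local solar time [h], toy values
weights = [math.exp(-0.5 * ((h + 0.5 - peak_hour) / width) ** 2)
           for h in range(24)]      # weight at each hour's midpoint
total = sum(weights)

daily_emission = 1000.0             # daily total, e.g. kg of smoke
hourly = [daily_emission * w / total for w in weights]

peak = max(range(24), key=lambda h: hourly[h])
print(peak, round(hourly[peak], 1))  # peak hour and its hourly emission
```

Normalizing the weights guarantees that, however the shape parameters are chosen, the hourly series conserves the daily total estimated from satellite fire detections.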

  20. Patients' Acceptance of Smartphone Health Technology for Chronic Disease Management: A Theoretical Model and Empirical Test.

    Science.gov (United States)

    Dou, Kaili; Yu, Ping; Deng, Ning; Liu, Fang; Guan, YingPing; Li, Zhenye; Ji, Yumeng; Du, Ningkai; Lu, Xudong; Duan, Huilong

    2017-12-06

    Chronic disease patients often face multiple challenges from difficult comorbidities. Smartphone health technology can be used to help them manage their conditions only if they accept and use the technology. The aim of this study was to develop and test a theoretical model to predict and explain the factors influencing patients' acceptance of smartphone health technology for chronic disease management. Multiple theories and factors that may influence patients' acceptance of smartphone health technology have been reviewed. A hybrid theoretical model was built based on the technology acceptance model, dual-factor model, health belief model, and the factors identified from interviews that might influence patients' acceptance of smartphone health technology for chronic disease management. Data were collected from patient questionnaire surveys and computer log records about 157 hypertensive patients' actual use of a smartphone health app. The partial least square method was used to test the theoretical model. The model accounted for .412 of the variance in patients' intention to adopt the smartphone health technology. Intention to use accounted for .111 of the variance in actual use and had a significant weak relationship with the latter. Perceived ease of use was affected by patients' smartphone usage experience, relationship with doctor, and self-efficacy. Although without a significant effect on intention to use, perceived ease of use had a significant positive influence on perceived usefulness. Relationship with doctor and perceived health threat had significant positive effects on perceived usefulness, countering the negative influence of resistance to change. Perceived usefulness, perceived health threat, and resistance to change significantly predicted patients' intentions to use the technology. Age and gender had no significant influence on patients' acceptance of smartphone technology. 
The study also confirmed the positive relationship between intention to use
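The variance-explained figures quoted in this abstract can be illustrated with a toy calculation. The sketch below is not the study's analysis: it uses synthetic data, hypothetical construct names, and ordinary least squares as a simplified stand-in for the partial least squares path modelling the authors actually performed.

```python
# Illustrative sketch: estimating the variance explained (R^2) in
# "intention to use" from hypothetical construct scores. All data are
# synthetic; only the sample size (157) matches the study.
import numpy as np

rng = np.random.default_rng(0)
n = 157

# Hypothetical construct scores (e.g. averaged survey items).
perceived_usefulness = rng.normal(size=n)
perceived_health_threat = rng.normal(size=n)
resistance_to_change = rng.normal(size=n)

# Synthetic outcome driven by the three predictors plus noise.
intention = (0.5 * perceived_usefulness
             + 0.3 * perceived_health_threat
             - 0.4 * resistance_to_change
             + rng.normal(scale=1.0, size=n))

# OLS fit with an intercept column, then R^2 from the residuals.
X = np.column_stack([np.ones(n), perceived_usefulness,
                     perceived_health_threat, resistance_to_change])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
residuals = intention - X @ beta
r2 = 1.0 - residuals.var() / intention.var()
print(f"R^2 for intention to use: {r2:.3f}")
```

In the study itself the .412 figure comes from a PLS structural model over latent constructs, not a single OLS regression; the sketch only shows what "variance accounted for" means operationally.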

  1. Experimental and theoretical requirements for fuel modelling

    International Nuclear Information System (INIS)

    Gatesoupe, J.P.

    1979-01-01

    From a scientific point of view, it may be considered that any event in the life of a fuel pin under irradiation should be perfectly well understood and foreseen. From that deterministic point of view, the whole behaviour of the pin may be analysed and dismantled, with a specific function for every component part, and each component part related to one basic phenomenon which can be independently studied on pure physical grounds. When extracted from the code structure, the subroutine is studied for itself by specialists who try to keep as close as possible to the physics involved in the phenomenon; that often leads to an impressive wealth of detail and a subsequent need for many unavailable input data. It might seem more secure to follow that approach, since it tries to be firmly based on theoretical grounds. One could think so if the phenomenological situation in the pin were less complex than it is. The codes would not be adequate for off-normal operating conditions, since for accidental transient conditions the key phenomena would not be the same as for steady-state or slow transient conditions. The orientation given to fuel modelling is based on our main technological constraints, which are: no fuel melting; no cladding failure; no excessive cladding deformation. In this context, the only relevant models are those which have a significant influence on the maximum temperatures in the fuel or on the cladding damage; hence the selection between key models and irrelevant models which will next be made. A rather pragmatic view is kept on codification, with a special focus on a few determinant aspects of fuel behaviour and no attention to models which are nothing but decorative. Fuel modelling is merely considered as a link between elements of experimental knowledge; it serves as a guide for further improvements in fuel design and as such happens to be quite useful. On this basis, the main gaps in knowledge of fuel behaviour are described. 
These are mainly concerning: thermal transfer through

  2. Theoretical model of Orion gamma emission: acceleration, propagation and interaction of energetic particles in the interstellar medium

    International Nuclear Information System (INIS)

    Parizot, Etienne

    1997-01-01

    This research thesis reports the development of a general model for the study of the propagation and interaction of energetic particles (cosmic rays, and so on) in the interstellar medium (ISM). The first part addresses the development of theoretical and numerical tools. The author introduces cosmic rays and energetic particles, describes the various processes related to high-energy particles (matter ionisation, synchrotron and Bremsstrahlung radiation, Compton scattering, nuclear processes), addresses the transport and acceleration of energetic particles (plasmas, magnetic fields and energetic particles, elements of kinetic theory, transport and acceleration of energetic particles), and describes the general model of production of γ nuclear lines and of secondary nuclei. The second part addresses the gamma signature of a massive star in a dense medium: presentation and description of massive stars and of the circumstellar medium; life, death and gamma resurrection of a massive star at the heart of a cloud. The third part addresses the case of the gamma emission by Orion, and more particularly presents a theoretical model of this emission. Some generalities and perspectives (theoretical as well as observational) are then stated.

  3. Quantitative interpretations of Visible-NIR reflectance spectra of blood.

    Science.gov (United States)

    Serebrennikova, Yulia M; Smith, Jennifer M; Huffman, Debra E; Leparc, German F; García-Rubio, Luis H

    2008-10-27

    This paper illustrates the implementation of a new theoretical model for rapid quantitative analysis of the Vis-NIR diffuse reflectance spectra of blood cultures. This new model is based on the photon diffusion theory and Mie scattering theory that have been formulated to account for multiple scattering populations and absorptive components. This study stresses the significance of the thorough solution of the scattering and absorption problem in order to accurately resolve for optically relevant parameters of blood culture components. With advantages of being calibration-free and computationally fast, the new model has two basic requirements. First, wavelength-dependent refractive indices of the basic chemical constituents of blood culture components are needed. Second, multi-wavelength measurements or at least the measurements of characteristic wavelengths equal to the degrees of freedom, i.e. number of optically relevant parameters, of blood culture system are required. The blood culture analysis model was tested with a large number of diffuse reflectance spectra of blood culture samples characterized by an extensive range of the relevant parameters.

  4. Analysis of genetic effects of nuclear-cytoplasmic interaction on quantitative traits: genetic model for diploid plants.

    Science.gov (United States)

    Han, Lide; Yang, Jian; Zhu, Jun

    2007-06-01

    A genetic model was proposed for simultaneously analyzing genetic effects of nuclear, cytoplasm, and nuclear-cytoplasmic interaction (NCI) as well as their genotype by environment (GE) interaction for quantitative traits of diploid plants. In the model, the NCI effects were further partitioned into additive and dominance nuclear-cytoplasmic interaction components. Mixed linear model approaches were used for statistical analysis. On the basis of diallel cross designs, Monte Carlo simulations showed that the genetic model was robust for estimating variance components under several situations without specific effects. Random genetic effects were predicted by an adjusted unbiased prediction (AUP) method. Data on four quantitative traits (boll number, lint percentage, fiber length, and micronaire) in Upland cotton (Gossypium hirsutum L.) were analyzed as a worked example to show the effectiveness of the model.

  5. Theoretical cytotoxicity models for combined exposure of cells to different radiations

    International Nuclear Information System (INIS)

    Scott, B.R.

    1981-01-01

    Theoretical cytotoxicity models for predicting cell survival after sequential or simultaneous exposure of cells to high and low linear energy transfer (LET) radiation are discussed. Major findings are that (1) ordering of sequential exposures can influence the level of cell killing achieved; (2) synergism is unimportant at low doses; (3) effects at very low doses should be additive; (4) use of the conventional relative biological effectiveness approach for predicting combined effects of different radiations is unnecessary at very low doses and can lead to overestimation of risk at moderate and high doses

  6. A theoretical model to predict customer satisfaction in relation to service quality in selected university libraries in Sri Lanka

    Directory of Open Access Journals (Sweden)

    Chaminda Jayasundara

    2009-01-01

    University library administrators in Sri Lanka have begun to search for alternative ways to satisfy their clientele on the basis of service quality. This article aims at providing a theoretical model to facilitate the identification of service quality attributes and domains that may be used to predict customer satisfaction from a service quality perspective. The effectiveness of existing service quality models such as LibQUAL, SERVQUAL and SERVPERF has been questioned. In that regard, this study developed a theoretical model for academic libraries in Sri Lanka based on the disconfirmation and performance-only paradigms. These perspectives are considered by researchers to be the core mechanisms for developing service quality/customer satisfaction models. The attribute and domain identification of service quality was carried out with a stratified sample of 263 participants selected from postgraduate and undergraduate students and academic staff members from the faculties of Arts in four universities in Sri Lanka. The study established that responsiveness, supportiveness, building environment, collection and access, furniture and facilities, technology, Web services and service delivery were quality domains which can be used to predict customer satisfaction. The theoretical model is unique in its domain structure compared to the existing models. The model needs to be statistically tested to make it valid and parsimonious.

  7. A general nonlinear magnetomechanical model for ferromagnetic materials under a constant weak magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Pengpeng; Zheng, Xiaojing, E-mail: xjzheng@xidian.edu.cn [School of Mechano-Electronic Engineering, Xidian University, Xi' an 710071, Shaanxi (China); Jin, Ke [School of Aerospace Science and Technology, Xidian University, Xi' an 710071, Shaanxi (China)

    2016-04-14

    Weak magnetic nondestructive testing (e.g., the metal magnetic memory method) concerns the magnetization variation of ferromagnetic materials due to applied load and the weak magnetic field surrounding them. One key issue in these nondestructive technologies is the magnetomechanical effect, needed for quantitative evaluation of the magnetization state from the stress–strain condition. A representative phenomenological model of the magnetomechanical effect was proposed by Jiles in 1995. However, Jiles' model has some deficiencies in quantification; for instance, there is a visible difference between theoretical predictions and experimental measurements on the stress–magnetization curve, especially in the compression case. Based on thermodynamic relations and the approach law of irreversible magnetization, a nonlinear coupled model is proposed to improve the quantitative evaluation of the magnetomechanical effect. Excellent agreement has been achieved between the predictions of the present model and previous experimental results. In comparison with Jiles' model, the prediction accuracy is improved greatly by the present model, particularly for the compression case. A detailed study has also been performed to reveal the effects of initial magnetization status, cyclic loading, and the demagnetization factor on the magnetomechanical effect. Our theoretical model reveals that the stable weak magnetic signals observed in nondestructive testing after multiple load cycles are attributed to the first few cycles eliminating most of the irreversible magnetization. Remarkably, the demagnetization field can weaken the magnetomechanical effect and therefore significantly reduce the testing capability. This theoretical model can be adopted to quantitatively analyze magnetic memory signals and can then be applied in weak magnetic nondestructive testing.

  8. Theoretical studies in elementary particle physics

    International Nuclear Information System (INIS)

    Collins, J.

    1994-01-01

    This is a report on research conducted at Penn State University under grant number DE-FG02-90ER-40577, from November 1992 to the present. The author is a member of the CTEQ collaboration (Coordinated Theoretical and Experimental Project on Quantitative QCD). Some of the work in CTEQ is described in this report. Topics which the author's work has touched include: polarized hard scattering; hard diffraction; small x and perturbative pomeron physics; gauge-invariant operators; fundamental QCD; heavy quarks; instantons and deep inelastic scattering; non-perturbative corrections to τ decay

  9. Interactive 3D visualization for theoretical virtual observatories

    Science.gov (United States)

    Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-06-01

    Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  10. Interactive 3D Visualization for Theoretical Virtual Observatories

    Science.gov (United States)

    Dykes, Tim; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-04-01

    Virtual Observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of datasets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2d or volume rendering in 3d. We analyze the current state of 3d visualization for big theoretical astronomical datasets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3d visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based datasets allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  11. Quantitative multi-waves migration in elastic anisotropic media; Migration quantitative multi-ondes en milieu elastique anisotrope

    Energy Technology Data Exchange (ETDEWEB)

    Borgne, H.

    2004-12-01

    Seismic imaging is an important tool for oil exploration. From the filtered seismic traces and a subsurface velocity model, migration makes it possible to localize the reflectors and to estimate physical properties of these interfaces. The subsurface is split up into a reference medium, corresponding to the low spatial frequencies (a smooth medium), and a perturbation medium, corresponding to the high spatial frequencies. The propagation of elastic waves in the reference medium is modelled by ray theory. The association of this theory with a principle of diffraction or reflection makes it possible to take into account the high spatial frequencies: the Kirchhoff approach thus represents the perturbation medium with continuous surfaces, characterized by reflection coefficients. The target of quantitative migration is to reconstruct this reflection coefficient, notably its behaviour according to the incidence angle. This information will open the way to seismic characterization of the reservoir domain, with a stratigraphic inversion for instance. In order to improve qualitative and quantitative migration results, one of the current challenges is to take into account the anisotropy of the subsurface. Taking rock anisotropy into account in the imaging process of seismic data requires two improvements over the isotropic case. The first one roughly concerns the modelling aspect: an anisotropic propagator should be used to avoid mis-positioning or bad focusing of the imaged reflectors. The second correction concerns the migration aspect: as anisotropy affects the reflectivity of the subsurface, a specific anisotropic imaging formula should be applied in the migration kernel, in order to recover the correct AVA behaviour of the subsurface reflectors. While the first correction is now made in most so-called anisotropic imaging algorithms, the second one is currently ignored. The first part of my work concerns theoretical aspects. 
I first study the preservation of amplitudes in the

  12. A suite of models to support the quantitative assessment of spread in pest risk analysis

    NARCIS (Netherlands)

    Robinet, C.; Kehlenbeck, H.; Werf, van der W.

    2012-01-01

    In the frame of the EU project PRATIQUE (KBBE-2007-212459 Enhancements of pest risk analysis techniques) a suite of models was developed to support the quantitative assessment of spread in pest risk analysis. This dataset contains the model codes (R language) for the four models in the suite. Three

  13. Quantitative stress measurement of elastic deformation using mechanoluminescent sensor: An intensity ratio model

    Science.gov (United States)

    Cai, Tao; Guo, Songtao; Li, Yongzeng; Peng, Di; Zhao, Xiaofeng; Liu, Yingzheng

    2018-04-01

    The mechanoluminescent (ML) sensor is a newly developed non-invasive technique for stress/strain measurement. However, its application has been mostly restricted to qualitative measurement due to the lack of a well-defined relationship between ML intensity and stress. To achieve accurate stress measurement, an intensity ratio model was proposed in this study to establish a quantitative relationship between the stress condition and its ML intensity in elastic deformation. To verify the proposed model, experiments were carried out on a ML measurement system using resin samples mixed with the sensor material SrAl2O4:Eu2+, Dy3+. The ML intensity ratio was found to be dependent on the applied stress and strain rate, and the relationship acquired from the experimental results agreed well with the proposed model. The current study provided a physical explanation for the relationship between ML intensity and the stress condition. The proposed model is applicable to various SrAl2O4:Eu2+, Dy3+-based ML measurements in elastic deformation, and could provide a useful reference for quantitative stress measurement using ML sensors in general.
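The calibration idea behind such an intensity-ratio model can be sketched in a few lines: at a fixed strain rate, fit the measured intensity ratio against applied stress, then invert the fit to recover stress from a reading. The data, coefficients, and linear form below are illustrative assumptions, not the paper's measured relationship.

```python
# Hypothetical calibration sketch for an intensity-ratio stress sensor.
# Synthetic "measurements" follow an assumed linear law plus noise.
import numpy as np

rng = np.random.default_rng(1)
stress_mpa = np.linspace(5.0, 50.0, 10)      # applied stress levels, MPa
true_slope, true_intercept = 0.02, 0.1       # assumed calibration constants
intensity_ratio = (true_intercept + true_slope * stress_mpa
                   + rng.normal(scale=0.01, size=stress_mpa.size))

# Least-squares line through the calibration points.
slope, intercept = np.polyfit(stress_mpa, intensity_ratio, 1)

def stress_from_ratio(ratio):
    """Invert the calibration to estimate stress from a measured ratio."""
    return (ratio - intercept) / slope

print(f"fitted slope={slope:.4f}, intercept={intercept:.4f}")
```

In practice the paper reports a dependence on strain rate as well, so a real calibration would be repeated (or parametrized) across strain rates rather than fitted once.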

  14. Theoretical study of the aluminum melting curve to very high pressure

    International Nuclear Information System (INIS)

    Moriarty, J.A.; Young, D.A.; Ross, M.

    1984-01-01

    A detailed theoretical study of the Al melting curve from normal melting conditions to pressures in the vicinity of 2 Mbar is presented. The analysis is based on two parallel, but distinct, treatments of the metal: the first from rigorous generalized pseudopotential theory involving first-principles nonlocal pseudopotentials and the second from a parametrized local pseudopotential model which has been accurately fit to first-principles band-theory and experimental equation-of-state data. Both treatments utilize full lattice-dynamical calculations of the phonon free energy in the solid, within the harmonic approximation, and fluid variational theory to obtain the free energy of the liquid. Particular attention is focused on the choice of the reference system in implementing the fluid variational theory. It is shown that in Al the soft-sphere model of Ross produces a lower (and hence more accurate) liquid free energy than either the hard-sphere or one-component-plasma reference systems, and is, moreover, necessary to obtain a reasonable quantitative description of the melting properties. With the soft-sphere system, the two theoretical treatments give results in good overall agreement with each other and with experiment. In particular, melting on the shock Hugoniot is predicted to begin at about 1.2 Mbar and to end at about 1.55 Mbar, in excellent agreement with the recent preliminary measurements of McQueen

  15. Quantitative Modeling of Acid Wormholing in Carbonates- What Are the Gaps to Bridge

    KAUST Repository

    Qiu, Xiangdong; Zhao, Weishu; Chang, Frank; Dyer, Steve

    2013-01-01

    Conventional wormhole propagation models largely ignore the impact of reaction products. When implemented in a job design, this can result in significant errors in treatment fluid schedule, rate, and volume. A more accurate method to simulate carbonate matrix acid treatments would accommodate the effect of reaction products on reaction kinetics. It is the purpose of this work to properly account for these effects. This is an important step in achieving quantitative predictability of wormhole penetration during an acidizing treatment. This paper describes the laboratory procedures taken to obtain the reaction-product-impacted kinetics at downhole conditions using a rotating disk apparatus, and how this new set of kinetics data was implemented in a 3D wormholing model to predict wormhole morphology and penetration velocity. The model explains some of the differences in wormhole morphology observed in limestone core flow experiments where injection pressure impacts the mass transfer of hydrogen ions to the rock surface. The model uses a CT-scan-rendered porosity field to capture the finer details of the rock fabric and then simulates the fluid flow through the rock coupled with reactions. Such a validated model can serve as a base to scale up to the near-wellbore reservoir and 3D radial flow geometry, allowing a more quantitative acid treatment design.

  16. A quantitative model for estimating mean annual soil loss in cultivated land using 137Cs measurements

    International Nuclear Information System (INIS)

    Yang Hao; Zhao Qiguo; Du Mingyuan; Minami, Katsuyuki; Hatta, Tamao

    2000-01-01

    The radioisotope 137Cs has been widely used to determine rates of cultivated soil loss. Many calibration relationships (including both empirical relationships and theoretical models) have been employed to estimate erosion rates from the amount of 137Cs lost from the cultivated soil profile. However, there are important limitations which restrict the reliability of these models, as they consider only the uniform distribution of 137Cs in the plough layer and its depth. As a result, erosion rates may be overestimated or underestimated. This article presents a quantitative model for the relation between the amount of 137Cs lost from the cultivated soil profile and the rate of soil erosion. Starting from a mass balance model, during the construction of this model we considered the following parameters: the remaining fraction of the surface enrichment layer (F_R), the thickness of the surface enrichment layer (H_s), the depth of the plough layer (H_p), the input fraction of the total 137Cs fallout deposition during a given year t (F_t), the radioactive decay constant of 137Cs (k), and the sampling year (t). The simulation results showed that the erosion rates estimated using this model were very sensitive to changes in the values of the parameters F_R, H_s, and H_p. We also observed that the relationship between the rate of soil loss and 137Cs depletion is neither linear nor logarithmic, and is very complex. Although the model is an improvement over existing approaches to deriving calibration relationships for cultivated soil, it requires empirical information on local soil properties and the behavior of 137Cs in the soil profile. There is clearly still a need for more precise information on the latter aspect and, in particular, on the retention of 137Cs fallout in the top few millimeters of the soil profile and on the enrichment and depletion effects associated with soil redistribution (i.e. for determining accurate values of F_R and H_s). (author)
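The nonlinearity of the depletion-erosion relation noted in the abstract can be seen even in a stripped-down mass balance. The sketch below is a minimal illustration only: it assumes uniform mixing in the plough layer and ignores the surface enrichment layer (F_R, H_s) and annual fallout input (F_t) that the paper's model tracks, and all parameter values are illustrative.

```python
# Minimal mass-balance sketch: annual erosion removes a fraction of the
# plough-layer 137Cs inventory (uniform mixing assumed), and the whole
# inventory decays radioactively.
import math

LAMBDA = math.log(2) / 30.17   # 137Cs decay constant, 1/yr (30.17 yr half-life)

def cs137_depletion(erosion_cm_per_yr, plough_depth_cm=20.0, years=40):
    """Fractional 137Cs loss after `years` of cultivation, relative to an
    uneroded (decay-only) reference profile."""
    inventory = 1.0   # eroded profile
    reference = 1.0   # decay-only reference
    for _ in range(years):
        # Ploughing mixes 137Cs uniformly, so erosion removes the fraction
        # (eroded thickness / plough depth) of the current inventory.
        inventory *= (1.0 - erosion_cm_per_yr / plough_depth_cm)
        inventory *= math.exp(-LAMBDA)
        reference *= math.exp(-LAMBDA)
    return 1.0 - inventory / reference

# Doubling the erosion rate less than doubles the fractional 137Cs loss,
# so even this toy relation is not linear.
d1 = cs137_depletion(0.1)
d2 = cs137_depletion(0.2)
print(f"depletion at 0.1 cm/yr: {d1:.3f}, at 0.2 cm/yr: {d2:.3f}")
```

The paper's point is that calibration is even more complex once near-surface enrichment and fallout timing enter the balance, which this sketch deliberately omits.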

  17. A theoretical model for predicting the Peak Cutting Force of conical picks

    Directory of Open Access Journals (Sweden)

    Gao Kuidong

    2014-01-01

    In order to predict the PCF (Peak Cutting Force) of a conical pick in the rock cutting process, a theoretical model is established based on elastic fracture mechanics theory. The vertical fracture model of the rock cutting fragment is also established based on the maximum tensile criterion. The relation between the vertical fracture angle and the associated parameters (cutting parameter and ratio B of rock compressive strength to tensile strength) is obtained by numerical analysis and polynomial regression, and the correctness of the rock vertical fracture model is verified through experiments. The linear regression coefficient between the predicted and experimental PCF is 0.81, and a significance level less than 0.05 shows that the model for predicting the PCF is correct and reliable. A comparative analysis between the PCF obtained from this model and the Evans model reveals that the result of this prediction model is more reliable and accurate. The results of this work could provide some guidance for studying the rock cutting theory of the conical pick and designing the cutting mechanism.

  18. Experimental and theoretical investigation of the first-order hyperpolarizability of a class of triarylamine derivatives

    International Nuclear Information System (INIS)

    Silva, Daniel L.; Fonseca, Ruben D.; Mendonca, Cleber R.; De Boni, Leonardo; Vivas, Marcelo G.; Ishow, E.; Canuto, Sylvio

    2015-01-01

    This paper reports on the static and dynamic first-order hyperpolarizabilities of a class of push-pull octupolar triarylamine derivatives dissolved in toluene. We have combined the hyper-Rayleigh scattering experiment and the coupled perturbed Hartree-Fock method implemented at the Density Functional Theory (DFT) level of theory to determine the static and dynamic (at 1064 nm) first-order hyperpolarizability (β_HRS) of nine triarylamine derivatives with distinct electron-withdrawing groups. In four of these derivatives, an azoaromatic unit is inserted, and a pronounced increase of the first-order hyperpolarizability is reported. Based on the theoretical results, the dipolar/octupolar character of the derivatives is determined. By using a polarizable continuum model in combination with the DFT calculations, it was found that, although the derivatives are solvated in an aprotic solvent of low dielectric constant, the environment substantially affects their first-order hyperpolarizability through solvent-induced polarization and the frequency dispersion effect. This statement is supported by the fact that solvent effects are essential for the better agreement between theoretical results and experimental data concerning the dynamic first-order hyperpolarizability of the derivatives. The first-order hyperpolarizability of the derivatives was also modeled using the two- and three-level models, where the relationship between static and dynamic first hyperpolarizabilities is given by a frequency dispersion model. Using this approach, it was verified that the dynamic first hyperpolarizability of the derivatives is satisfactorily reproduced by the two-level model and that, in the case of the derivatives with an azoaromatic unit, the use of a damped few-level model is essential, considering also the molecular size of such derivatives, for a good quantitative agreement between theoretical results and experimental data.
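The two-level relation between static and dynamic hyperpolarizabilities mentioned here is commonly written in the undamped Oudar-Chemla form, β(ω) = β(0)·ω0⁴ / [(ω0² − ω²)(ω0² − 4ω²)], where ω0 is the transition frequency of the dominant excited state. The snippet below evaluates that dispersion factor; the 450 nm absorption maximum is an illustrative value, not one of the paper's chromophores, and no damping is included (the paper notes damping matters for the azoaromatic derivatives).

```python
# Undamped two-level dispersion factor beta(omega)/beta(0).
def two_level_factor(lambda_exc_nm, lambda_max_nm):
    """Dispersion enhancement of beta at excitation wavelength lambda_exc
    for a chromophore with absorption maximum lambda_max (no damping)."""
    w = 1.0 / lambda_exc_nm    # frequencies in units proportional to 1/lambda
    w0 = 1.0 / lambda_max_nm
    return w0**4 / ((w0**2 - w**2) * (w0**2 - 4 * w**2))

# e.g. a hypothetical chromophore absorbing at 450 nm probed at 1064 nm:
# the second harmonic (532 nm) approaches the absorption band, so the
# dynamic beta is enhanced over the static value.
factor = two_level_factor(1064.0, 450.0)
print(f"beta(1064 nm) / beta(0) ≈ {factor:.2f}")
```

Far from resonance (long excitation wavelength) the factor tends to 1, recovering the static limit.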

  19. RECENT DEVELOPMENTS OF THE FINANCIAL REPORTING MODEL: THEORETICAL STUDIES IN REVIEW

    Directory of Open Access Journals (Sweden)

    Bonaci Carmen Giorgiana

    2011-07-01

    Our paper analyzes the manner in which the financial reporting model evolved towards fair value accounting. After a brief introduction to the context of financial reporting at the international level, the analysis focuses on the accounting model of fair value. This is done by synthesizing the main studies in the accounting research literature that analyze fair value accounting through a theoretical approach. The analysis relies on literature review methodology. Its main purpose is to synthesize the main pros and cons as documented in the accounting research literature. Our findings underline both the advantages and the shortcomings of fair value accounting and of the recent mixed-attribute approach in today's financial reporting practices. The concluding remarks synthesize the obtained results and possible future developments of our analysis.

  20. Theoretical size distribution of fossil taxa: analysis of a null model

    Directory of Open Access Journals (Sweden)

    Hughes Barry D

    2007-03-01

    Background: This article deals with the theoretical size distribution (in number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model: New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition, new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion: The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. The distribution of the number of genera is also considered, along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
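The null model described above is straightforward to simulate. The sketch below is a discrete-time caricature with illustrative rates (the paper works analytically, in continuous time, and additionally models radical speciations founding new genera, which this sketch omits).

```python
# Monte Carlo caricature of the null model: each extant species in a
# genus speciates with probability p and goes extinct (background
# extinction) with probability q per time step. Rates are illustrative.
import random

def simulate_genus_size(p=0.10, q=0.05, steps=60, seed=None):
    """Number of extant species in a genus founded by one species."""
    rng = random.Random(seed)
    n = 1
    for _ in range(steps):
        births = sum(rng.random() < p for _ in range(n))
        deaths = sum(rng.random() < q for _ in range(n))
        n = max(n + births - deaths, 0)
        if n == 0:          # the genus has gone extinct
            break
    return n

sizes = [simulate_genus_size(seed=s) for s in range(1000)]
extinct = sum(s == 0 for s in sizes) / len(sizes)
monospecific = sum(s == 1 for s in sizes) / len(sizes)
print(f"P(extinct)={extinct:.2f}, P(monospecific)={monospecific:.2f}")
```

With p > q the process is supercritical, so a fraction of genera die out early while the survivors grow; the empirical size distribution from `sizes` is the quantity the paper characterizes analytically.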

  1. Theoretical chemistry advances and perspectives

    CERN Document Server

    Eyring, Henry

    1980-01-01

    Theoretical Chemistry: Advances and Perspectives, Volume 5 covers articles concerning all aspects of theoretical chemistry. The book discusses the mean spherical approximation for simple electrolyte solutions; the representation of lattice sums as Mellin-transformed products of theta functions; and the evaluation of two-dimensional lattice sums by number-theoretic means. The text also describes an application of contour integration; a lattice model of quantum fluid; as well as the computational aspects of chemical equilibrium in complex systems. Chemists and physicists will find the book useful.

  2. Theoretical Modelling Methods for Thermal Management of Batteries

    Directory of Open Access Journals (Sweden)

    Bahman Shabani

    2015-09-01

    The main challenge associated with renewable energy generation is the intermittency of the renewable source of power. Because of this, back-up generation sources fuelled by fossil fuels are required. In stationary applications, whether it is a back-up diesel generator or a connection to the grid, these systems are yet to be truly emissions-free. One solution to the problem is the utilisation of electrochemical energy storage systems (ESS) to store the excess renewable energy and then reuse this energy when the renewable energy source is insufficient to meet the demand. The performance of an ESS, amongst other things, is affected by the design, the materials used and the operating temperature of the system. The operating temperature is critical, since operating an ESS at low ambient temperatures affects its capacity and charge acceptance, while operating it at high ambient temperatures affects its lifetime and poses safety risks. Safety risks are magnified in renewable energy storage applications given the scale of the ESS required to meet the energy demand. This necessity has propelled significant effort to model the thermal behaviour of ESS. Understanding and modelling the thermal behaviour of these systems is a crucial consideration before designing an efficient thermal management system that would operate safely and extend the lifetime of the ESS. This is vital in order to eliminate intermittency and add value to renewable sources of power. This paper concentrates on reviewing the theoretical approaches used to simulate the operating temperatures of ESS and the subsequent endeavours of modelling thermal management systems for these systems. The intent of this review is to present some of the different methods of modelling the thermal behaviour of ESS, highlighting the advantages and disadvantages of each approach.
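The simplest of the theoretical approaches such a review covers is a lumped-capacitance model: the storage cell is treated as a single thermal mass with internal heat generation and convective loss to ambient. The sketch below integrates that balance with forward Euler; every parameter value is an illustrative assumption, not taken from the review.

```python
# Lumped-capacitance cell thermal model:
#   m * c_p * dT/dt = I^2 * R_int - h * A * (T - T_amb)
# integrated with a forward Euler step. All parameters are illustrative.
def simulate_cell_temperature(i_amps=10.0, r_int=0.05, h=5.0, area=0.05,
                              mass=0.5, c_p=900.0, t_amb=25.0,
                              dt=1.0, steps=3600):
    """Cell temperature (°C) after steps*dt seconds of constant-current
    operation, starting from ambient."""
    temp = t_amb
    q_gen = i_amps ** 2 * r_int              # ohmic heating, W
    for _ in range(steps):
        q_loss = h * area * (temp - t_amb)   # convective loss, W
        temp += dt * (q_gen - q_loss) / (mass * c_p)
    return temp

# Steady state is T_amb + q_gen / (h * A) = 25 + 5 / 0.25 = 45 °C;
# after one hour (two thermal time constants) the cell is most of the
# way there.
print(f"temperature after 1 h: {simulate_cell_temperature():.1f} °C")
```

More detailed approaches in the literature resolve temperature gradients within the cell and pack (distributed-parameter and CFD models), at the cost of the simplicity shown here.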

  3. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    Science.gov (United States)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied respectively to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that with all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. This study therefore demonstrates the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
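    As a sketch of the kind of multivariate model used in such studies, below is a minimal logistic regression trained by plain gradient descent on synthetic two-feature data. The features, labels and all parameter values are invented for illustration and are not the study's data or its exact models:

```python
import math
import random

def fit_logistic_regression(xs, ys, lr=0.1, epochs=2000):
    """Gradient-descent logistic regression: two features plus an intercept."""
    w = [0.0, 0.0, 0.0]  # [intercept, w_feature0, w_feature1]
    for _ in range(epochs):
        grad = [0.0, 0.0, 0.0]
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w[0] + w[1] * x[0] + w[2] * x[1])))
            err = p - y
            grad[0] += err
            grad[1] += err * x[0]
            grad[2] += err * x[1]
        w = [wi - lr * gi / len(xs) for wi, gi in zip(w, grad)]
    return w

def predict(w, x):
    return 1.0 / (1.0 + math.exp(-(w[0] + w[1] * x[0] + w[2] * x[1])))

# Synthetic "adiposity feature" data: the outcome depends only on feature 0
random.seed(0)
xs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
ys = [1 if x[0] + 0.3 * random.gauss(0, 1) > 0 else 0 for x in xs]
w = fit_logistic_regression(xs, ys)
```

    The fitted weight on the informative feature dominates, while the noise feature's weight stays near zero; with real data, a library implementation (and the Cox model for the survival endpoints) would be used instead.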

  4. Mapping quantitative trait loci in a selectively genotyped outbred population using a mixture model approach

    NARCIS (Netherlands)

    Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van

    1999-01-01

    A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by
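    The mixture-model idea can be illustrated with a stripped-down EM fit of a two-component Gaussian mixture (known, equal variance). This is a generic sketch on synthetic data, not the authors' selective-genotyping likelihood:

```python
import math
import random

def em_two_gaussians(data, iters=200):
    """EM for a two-component Gaussian mixture with known unit variance."""
    mu1, mu2, sigma, pi = min(data), max(data), 1.0, 0.5
    for _ in range(iters):
        # E-step: posterior probability that each point belongs to component 1
        r = []
        for x in data:
            p1 = pi * math.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            p2 = (1.0 - pi) * math.exp(-0.5 * ((x - mu2) / sigma) ** 2)
            r.append(p1 / (p1 + p2))
        # M-step: update the mixing weight and the component means
        n1 = sum(r)
        pi = n1 / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, data)) / (len(data) - n1)
    return mu1, mu2, pi

random.seed(1)
data = ([random.gauss(0.0, 1.0) for _ in range(300)] +
        [random.gauss(4.0, 1.0) for _ in range(300)])
mu1, mu2, pi = em_two_gaussians(data)
```

    The E-step/M-step alternation shown here is the same maximum-likelihood machinery the abstract refers to, just without the genotype-specific mixture components.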

  5. AI/OR computational model for integrating qualitative and quantitative design methods

    Science.gov (United States)

    Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor

    1990-01-01

    A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.

  6. A non-traditional fluid problem: transition between theoretical models from Stokes’ to turbulent flow

    Science.gov (United States)

    Salomone, Horacio D.; Olivieri, Néstor A.; Véliz, Maximiliano E.; Raviola, Lisandro A.

    2018-05-01

    In the context of fluid mechanics courses, it is customary to consider the problem of a sphere falling under the action of gravity inside a viscous fluid. Under suitable assumptions, this phenomenon can be modelled using Stokes’ law and is routinely reproduced in teaching laboratories to determine terminal velocities and fluid viscosities. In many cases, however, the measured physical quantities show important deviations with respect to the predictions deduced from the simple Stokes’ model, and the causes of these apparent ‘anomalies’ (for example, whether the flow is laminar or turbulent) are seldom discussed in the classroom. On the other hand, there are various variable-mass problems that students tackle during elementary mechanics courses and which are discussed in many textbooks. In this work, we combine both kinds of problems and analyse—both theoretically and experimentally—the evolution of a system composed of a sphere pulled by a chain of variable length inside a tube filled with water. We investigate the effects of different forces acting on the system such as weight, buoyancy, viscous friction and drag force. By means of a sequence of mathematical models of increasing complexity, we obtain a progressive fit that accounts for the experimental data. The contrast between the various models exposes the strengths and weaknesses of each one. The proposed experiment can be useful for integrating concepts of elementary mechanics and fluids, and is suitable as laboratory practice, stressing the importance of the experimental validation of theoretical models and showing the model-building processes in a didactic framework.
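    The first rung of such a model sequence (weight, buoyancy and Stokes drag on a falling sphere) can be sketched numerically and checked against the analytic terminal velocity. The ball and fluid properties below are illustrative assumptions, not the paper's apparatus:

```python
import math

def stokes_terminal_velocity(r, rho_s, rho_f, mu, g=9.81):
    """Analytic terminal velocity of a sphere in creeping flow (Re << 1)."""
    return 2.0 * r**2 * g * (rho_s - rho_f) / (9.0 * mu)

def simulate_fall(r, rho_s, rho_f, mu, dt=1e-5, t_end=0.5, g=9.81):
    """Integrate m dv/dt = (m - m_f) g - 6 pi mu r v
    (weight minus buoyancy minus Stokes drag), starting from rest."""
    m = rho_s * 4.0 / 3.0 * math.pi * r**3      # sphere mass
    m_f = rho_f * 4.0 / 3.0 * math.pi * r**3    # displaced fluid mass
    v = 0.0
    for _ in range(int(t_end / dt)):
        v += dt * ((m - m_f) * g - 6.0 * math.pi * mu * r * v) / m
    return v

# Hypothetical 1 mm steel ball in glycerine (values illustrative)
r, rho_s, rho_f, mu = 0.5e-3, 7800.0, 1260.0, 1.4
v_t = stokes_terminal_velocity(r, rho_s, rho_f, mu)
v_sim = simulate_fall(r, rho_s, rho_f, mu)
```

    The integration converges to the analytic terminal velocity; adding a quadratic drag term to the same loop would give the next model in the sequence.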

  7. Theoretical vibro-acoustic modeling of acoustic noise transmission through aircraft windows

    Science.gov (United States)

    Aloufi, Badr; Behdinan, Kamran; Zu, Jean

    2016-06-01

    In this paper, a fully vibro-acoustic model for sound transmission across a multi-pane aircraft window is developed. The proposed model is applied to a set of window models to perform extensive theoretical parametric studies. The studied window configurations generally simulate the passenger window designs of modern aircraft classes, which have an exterior multi-Plexiglas pane, an interior single acrylic glass pane and a dimmable glass ("smart" glass), all separated by thin air cavities. The sound transmission loss (STL) characteristics of three different models, triple-, quadruple- and quintuple-paned windows identical in size and surface density, are analyzed for improving the acoustic insulation performance. Typical results describing the influence of several system parameters, such as the thicknesses, number and spacing of the window panes, on the transmission loss are then investigated. In addition, a comparison study is carried out to evaluate the acoustic reduction capability of each window model. The STL results show that the sound transmission loss performance at higher frequencies can be improved by increasing the number of window panes; however, the low-frequency performance is decreased, particularly at the mass-spring resonances.
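    Two textbook ingredients of such STL analyses, the normal-incidence mass law and the mass-air-mass resonance of a double pane, can be sketched as follows. The pane properties are illustrative assumptions, not the paper's window designs:

```python
import math

RHO_AIR = 1.21   # air density, kg/m^3
C_AIR = 343.0    # speed of sound in air, m/s

def mass_law_tl(surface_density, freq):
    """Normal-incidence mass-law transmission loss of a single panel, dB."""
    z = math.pi * freq * surface_density / (RHO_AIR * C_AIR)
    return 10.0 * math.log10(1.0 + z**2)

def mass_air_mass_resonance(m1, m2, gap):
    """Mass-air-mass resonance (Hz) of two panes (kg/m^2) with an air gap (m)."""
    k = RHO_AIR * C_AIR**2 / gap            # stiffness of the air spring
    return math.sqrt(k * (1.0 / m1 + 1.0 / m2)) / (2.0 * math.pi)

# Illustrative panes: two ~6 kg/m^2 Plexiglas panes with a 10 mm air gap
f0 = mass_air_mass_resonance(6.0, 6.0, 0.01)   # a few hundred Hz
tl_1k = mass_law_tl(6.0, 1000.0)               # ~33 dB at 1 kHz
```

    The resonance landing in the low hundreds of Hz is exactly the mass-spring dip the abstract mentions; above it, each doubling of surface density buys roughly 6 dB of mass-law TL.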

  8. Strategy for a numerical Rock Mechanics Site Descriptive Model. Further development of the theoretical/numerical approach

    International Nuclear Information System (INIS)

    Olofsson, Isabelle; Fredriksson, Anders

    2005-05-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is conducting Preliminary Site Investigations at two different locations in Sweden in order to study the possibility of a Deep Repository for spent fuel. In the frame of these Site Investigations, Site Descriptive Models are produced. These products are the result of an interaction of several disciplines such as geology, hydrogeology and meteorology. The Rock Mechanics Site Descriptive Model constitutes one of these models. Before the start of the Site Investigations, a numerical method using Discrete Fracture Network (DFN) models and the 2D numerical software UDEC was developed. Numerical simulations were the tool chosen for applying the theoretical approach for characterising the mechanical rock mass properties. Some shortcomings were identified when developing the methodology. Their impacts on the modelling (in terms of time and quality assurance of results) were estimated to be so important that the improvement of the methodology with another numerical tool was investigated. The theoretical approach is still based on DFN models, but the numerical software used is 3DEC. The main assets of the programme compared to UDEC are an optimised algorithm for the generation of fractures in the model and for the assignment of mechanical fracture properties. Due to some numerical constraints, the test conditions were set up to simulate 2D plane strain tests. Numerical simulations were conducted on the same data set as used previously for the UDEC modelling in order to estimate and validate the results from the new methodology. A real 3D simulation was also conducted in order to assess the effect of the '2D' conditions in the 3DEC model. Based on the quality of the results, it was decided to update the theoretical model and introduce the new methodology based on DFN models and 3DEC simulations for the establishment of the Rock Mechanics Site Descriptive Model.
By separating the spatial variability into two parts, one

  9. Effects of pump recycling technique on stimulated Brillouin scattering threshold: a theoretical model.

    Science.gov (United States)

    Al-Asadi, H A; Al-Mansoori, M H; Ajiya, M; Hitam, S; Saripan, M I; Mahdi, M A

    2010-10-11

    We develop a theoretical model that can be used to predict the stimulated Brillouin scattering (SBS) threshold in optical fibers that arises through the effect of the Brillouin pump recycling technique. Simulation results obtained from our model are in close agreement with our experimental results. The developed model utilizes single-mode optical fiber of different lengths as the Brillouin gain medium. For a 5-km long single-mode fiber, the calculated threshold power for SBS is about 16 mW for the conventional technique. This value is reduced to about 8 mW when the residual Brillouin pump is recycled at the end of the fiber. The reduction of the SBS threshold is due to the longer interaction length between the Brillouin pump and the Stokes wave.
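    For orientation, the classic order-of-magnitude estimate of the SBS threshold (Smith's criterion with the fiber's nonlinear effective length) can be sketched as follows. The fiber parameters are typical single-mode-fiber values assumed for illustration, not the paper's measured ones:

```python
import math

def effective_length(alpha_db_per_km, length_m):
    """Nonlinear effective length L_eff = (1 - exp(-alpha*L)) / alpha."""
    alpha = alpha_db_per_km * math.log(10.0) / (10.0 * 1000.0)  # dB/km -> 1/m
    return (1.0 - math.exp(-alpha * length_m)) / alpha

def sbs_threshold_w(g_b, a_eff, l_eff, k=21.0):
    """Classic Smith estimate: P_th ~ k * A_eff / (g_B * L_eff)."""
    return k * a_eff / (g_b * l_eff)

# Illustrative values: 0.2 dB/km loss, g_B = 5e-11 m/W, A_eff = 80 um^2
l_eff = effective_length(0.2, 5000.0)          # 5 km fiber
p_th = sbs_threshold_w(5e-11, 80e-12, l_eff)   # order of a few mW
```

    With these assumed values the estimate lands at roughly 8 mW, i.e. the same order of magnitude as the thresholds quoted in the abstract; pump recycling effectively lengthens the pump-Stokes interaction and lowers this figure further.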

  10. Variable selection in near infrared spectroscopy for quantitative models of homologous analogs of cephalosporins

    Directory of Open Access Journals (Sweden)

    Yan-Chun Feng

    2014-07-01

    Full Text Available Two universal spectral ranges (4550–4100 cm-1 and 6190–5510 cm-1 for construction of quantitative models of homologous analogs of cephalosporins were proposed by evaluating the performance of five spectral ranges and their combinations, using three data sets of cephalosporins for injection, i.e., cefuroxime sodium, ceftriaxone sodium and cefoperazone sodium. Subsequently, the proposed ranges were validated by using eight calibration sets of other homologous analogs of cephalosporins for injection, namely cefmenoxime hydrochloride, ceftezole sodium, cefmetazole, cefoxitin sodium, cefotaxime sodium, cefradine, cephazolin sodium and ceftizoxime sodium. All the constructed quantitative models for the eight kinds of cephalosporins using these universal ranges could fulfill the requirements for quick quantification. After that, competitive adaptive reweighted sampling (CARS algorithm and infrared (IR–near infrared (NIR two-dimensional (2D correlation spectral analysis were used to determine the scientific basis of these two spectral ranges as the universal regions for the construction of quantitative models of cephalosporins. The CARS algorithm demonstrated that the ranges of 4550–4100 cm-1 and 6190–5510 cm-1 included some key wavenumbers which could be attributed to content changes of cephalosporins. The IR–NIR 2D spectral analysis showed that certain wavenumbers in these two regions have strong correlations to the structures of those cephalosporins that were easy to degrade.

  11. Refining the statistical model for quantitative immunostaining of surface-functionalized nanoparticles by AFM.

    Science.gov (United States)

    MacCuspie, Robert I; Gorka, Danielle E

    2013-10-01

    Recently, an atomic force microscopy (AFM)-based approach for quantifying the number of biological molecules conjugated to a nanoparticle surface at low number densities was reported. The number of target molecules conjugated to the analyte nanoparticle can be determined with single nanoparticle fidelity using antibody-mediated self-assembly to decorate the analyte nanoparticles with probe nanoparticles (i.e., quantitative immunostaining). This work refines the statistical models used to quantitatively interpret the observations when AFM is used to image the resulting structures. The refinements add terms to the previous statistical models to account for the physical sizes of the analyte nanoparticles, conjugated molecules, antibodies, and probe nanoparticles. Thus, a more physically realistic statistical computation can be implemented for a given sample of known qualitative composition, using the software scripts provided. Example AFM data sets, using horseradish peroxidase conjugated to gold nanoparticles, are presented to illustrate how to implement this method successfully.

  12. Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model

    Science.gov (United States)

    Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi

    2017-09-01

    Residues of harmful dyes such as Auramine O (AO) in herb and food products threaten people's health, so fast and sensitive detection techniques for the residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used for the quantitative determination of AO by combining it with an improved partial least-squares regression (PLSR) model in this paper. Absorbance of herbal samples with different concentrations was obtained by THz-TDS in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences of different concentrations, provided a clear criterion for the selection of the input interval, and improved the accuracy of the detection results. The experimental results indicated that the combination of THz spectroscopy and 2DCOS-PLSR is an excellent quantitative analysis method.
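    The synchronous 2D correlation map at the heart of 2DCOS can be computed directly from concentration-perturbed spectra; a minimal pure-Python sketch on toy data (three "bands", four concentrations; not the paper's THz data):

```python
def synchronous_2dcos(spectra):
    """Synchronous 2D correlation map of dynamic (mean-centered) spectra.

    spectra: m spectra of n values taken under a perturbation (here,
    concentration). Returns Phi[i][j] = sum_k y_k[i]*y_k[j] / (m - 1).
    """
    m, n = len(spectra), len(spectra[0])
    means = [sum(s[j] for s in spectra) / m for j in range(n)]
    dyn = [[s[j] - means[j] for j in range(n)] for s in spectra]
    return [[sum(dyn[k][i] * dyn[k][j] for k in range(m)) / (m - 1)
             for j in range(n)] for i in range(n)]

# Toy example: band 0 grows with concentration c, band 1 shrinks,
# band 2 is a constant background
spectra = [[0.1 * c, 0.5 - 0.05 * c, 1.0] for c in (1, 2, 3, 4)]
phi = synchronous_2dcos(spectra)
```

    Diagonal peaks flag concentration-sensitive wavenumbers (the selection criterion the abstract mentions), the negative cross-peak marks the anticorrelated pair, and the constant band contributes nothing.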

  13. Examining Asymmetrical Relationships of Organizational Learning Antecedents: A Theoretical Model

    Directory of Open Access Journals (Sweden)

    Ery Tri Djatmika

    2016-02-01

    Full Text Available The global era is characterized by demand for a highly competitive advantage in the market. Responding to the challenge of rapid environmental change, organizational learning is becoming a strategic way to empower people within the organization to create novelty as a valuable source of positioning. For research purposes, determining the influential antecedents that affect organizational learning is vital to understanding research-based solutions and their practical implications. Accordingly, identifying the variables to be examined through asymmetrical relationships is critical. Possible antecedent variables come from organizational and personal points of view, and it is also possible to include a moderating one. A proposed theoretical model of the asymmetrical effects of organizational learning and its antecedents is discussed in this article.

  14. Theoretical Analysis of Heat Stress Prefabricating the Crack in Precision Cropping

    Directory of Open Access Journals (Sweden)

    Lijun Zhang

    2013-07-01

    Full Text Available The mathematical model of a metal bar in the course of heat treatment is built by regarding the convective heat transfer of the metal bar as the heat conduction boundary condition. Through theoretical analysis and numerical simulation, the theoretical expression of the unsteady multidimensional temperature field for the axisymmetric model of the metal bar is obtained. The temperature field distribution at the equivalent tip of the bar's V-shaped notch is given by ANSYS software. The quantitative relationship between the temperature of key interior points of the bar and time is determined. Through polynomial curve fitting, the relation between the ultimate strength and the temperature is also given. Based on this, the influences of the width of the adiabatic boundary and the water velocity on the critical temperature gradient for initiating a heat crack at the tip of the V-shaped notch are analyzed. The experimental results in precision cropping show that the expression of the unsteady multidimensional temperature field is feasible for the rapid calculation of crack generation.
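    The kind of boundary-value problem described, conduction inside the bar with convective cooling at the surface, can be sketched in one dimension with an explicit finite-difference scheme. Geometry and material values below are illustrative assumptions, not the paper's:

```python
def cool_bar(n=21, length=0.1, alpha=1e-5, k=50.0, h=2000.0,
             t0=500.0, t_water=20.0, dt=0.05, steps=4000):
    """Explicit 1D finite-difference conduction in a bar with a convective
    (Robin) boundary at x = L and an insulated boundary at x = 0:
        dT/dt = alpha * d2T/dx2,   -k dT/dx|_L = h (T_L - T_water).
    Returns the temperature profile after `steps` time steps.
    """
    dx = length / (n - 1)
    t = [t0] * n
    for _ in range(steps):
        new = t[:]
        for i in range(1, n - 1):
            new[i] = t[i] + alpha * dt / dx**2 * (t[i-1] - 2*t[i] + t[i+1])
        new[0] = new[1]                                            # insulated end
        new[-1] = (k / dx * new[-2] + h * t_water) / (k / dx + h)  # convective end
        t = new
    return t

profile = cool_bar()
```

    The grid Fourier number alpha*dt/dx^2 = 0.02 keeps the explicit scheme stable; the cooled end drops fastest, reproducing the temperature gradient that drives crack initiation at the notch.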

  15. Chemical and morphological gradient scaffolds to mimic hierarchically complex tissues: From theoretical modeling to their fabrication.

    Science.gov (United States)

    Marrella, Alessandra; Aiello, Maurizio; Quarto, Rodolfo; Scaglione, Silvia

    2016-10-01

    Porous multiphase scaffolds have been proposed in different tissue engineering applications because of their potential to artificially recreate the heterogeneous structure of hierarchically complex tissues. Recently, graded scaffolds have also been realized, offering a continuum at the interface among different phases for an enhanced structural stability of the scaffold. However, their internal architecture is often obtained empirically and the architectural parameters are rarely predetermined. The aim of this work is to offer a theoretical model as a tool for the design and fabrication of functional and structurally complex graded scaffolds with predicted morphological and chemical features, to overcome the time-consuming trial-and-error experimental method. The developed mathematical model uses laws of motion, Stokes equations, and viscosity laws to describe the dependence between centrifugation speed and fiber/particle sedimentation velocity over time, which finally affects the fiber packing, and thus the total porosity of the 3D scaffolds. The efficacy of the theoretical model was tested by realizing engineered graded grafts for osteochondral tissue engineering applications. The procedure, based on a combined centrifugation and freeze-drying technique, was applied to both polycaprolactone (PCL) and collagen-type-I (COL) to test the versatility of the entire process. A functional gradient was combined with the morphological one by adding hydroxyapatite (HA) powders, to mimic the bone mineral phase. Results show that 3D bioactive morphologically and chemically graded grafts can be properly designed and realized in agreement with the theoretical model. Biotechnol. Bioeng. 2016;113:2286-2297. © 2016 Wiley Periodicals, Inc.
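    The centrifugation ingredient of such a model, the Stokes sedimentation velocity of a particle in a centrifugal field, can be sketched as follows. All material values are illustrative assumptions, not the parameters used for the PCL/COL/HA scaffolds:

```python
import math

def sedimentation_velocity(d, rho_p, rho_f, mu, omega, radius):
    """Stokes sedimentation velocity in a centrifugal field:
    v = (rho_p - rho_f) * omega^2 * R * d^2 / (18 * mu),
    with particle diameter d, fluid viscosity mu, angular speed omega
    and rotor radius R (gravity g is simply replaced by omega^2 * R).
    """
    return (rho_p - rho_f) * omega**2 * radius * d**2 / (18.0 * mu)

# Illustrative: 50 um HA particle in a viscous polymer solution,
# spun at 2000 rpm at a 5 cm rotor radius
omega = 2000.0 * 2.0 * math.pi / 60.0
v = sedimentation_velocity(50e-6, 3160.0, 1100.0, 0.5, omega, 0.05)

# Halving the spin speed quarters the velocity (quadratic dependence)
v_1000 = sedimentation_velocity(50e-6, 3160.0, 1100.0, 0.5,
                                1000.0 * 2.0 * math.pi / 60.0, 0.05)
```

    The quadratic dependence on centrifugation speed is what lets the speed profile program the packing gradient, and hence the porosity gradient, along the scaffold.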

  16. Toward a comprehensive, theoretical model of compassion fatigue: An integrative literature review.

    Science.gov (United States)

    Coetzee, Siedine K; Laschinger, Heather K S

    2018-03-01

    This study was an integrative literature review in relation to compassion fatigue models, appraising these models, and developing a comprehensive theoretical model of compassion fatigue. A systematic search on PubMed, EbscoHost (Academic Search Premier, E-Journals, Medline, PsycINFO, Health Source Nursing/Academic Edition, CINAHL, MasterFILE Premier and Health Source Consumer Edition), gray literature, and manual searches of included reference lists was conducted in 2016. The studies (n = 11) were analyzed, and the strengths and limitations of the compassion fatigue models identified. We further built on these models through the application of the conservation of resources theory and the social neuroscience of empathy. The compassion fatigue model shows that it is not empathy that puts nurses at risk of developing compassion fatigue, but rather a lack of resources, inadequate positive feedback, and the nurse's response to personal distress. By acting on these three aspects, the risk of developing compassion fatigue can be addressed, which could improve the retention of a compassionate and committed nurse workforce. © 2017 John Wiley & Sons Australia, Ltd.

  17. A theoretical model for flow boiling CHF from short concave heaters

    International Nuclear Information System (INIS)

    Galloway, J.E.; Mudawar, I.

    1995-01-01

    Experiments were performed to enable the development of a new theoretical model for the enhancement in CHF commonly observed with flow boiling on concave heaters as compared to straight heaters. High-speed video imaging and photomicrography were employed to capture the trigger mechanism for CHF on each type of heater. A wavy vapor layer was observed to engulf the heater surface in each case, permitting liquid access to the surface only in regions where depressions (troughs) in the liquid-vapor interface made contact with the surface. CHF in each case occurred when the pressure force exerted upon the wavy vapor-liquid interface in the contact region could no longer overcome the momentum of the vapor produced in these regions. Shorter interfacial wavelengths with greater curvature were measured on the curved heater than on the straight heater, promoting a greater pressure force on the wavy interface and a corresponding increase in CHF for the curved heater. A theoretical CHF model is developed from these observations, based upon a new theory for hydrodynamic instability along a curved interface. CHF data are predicted with good accuracy for both heaters. 23 refs., 9 figs

  18. Theoretical model of an optothermal microactuator directly driven by laser beams

    International Nuclear Information System (INIS)

    Han, Xu; Zhang, Haijun; Xu, Rui; Wang, Shuying; Qin, Chun

    2015-01-01

    This paper proposes a novel method of optothermal microactuation based on single and dual laser beams (spots). The theoretical model of the optothermal temperature distribution of an expansion arm is established and simulated, indicating that the maximum temperature of the arm irradiated by dual laser spots, at the same laser power level, is much lower than that irradiated by one single spot, and thus the risk of burning out and damaging the optothermal microactuator (OTMA) can be effectively avoided. To verify the presented method, a 750 μm long OTMA with a 100 μm wide expansion arm is designed and microfabricated, and single/dual laser beams with a wavelength of 650 nm are adopted to carry out experiments. The experimental results showed that the optothermal deflection of the OTMA under the irradiation of dual laser spots is larger than that under the irradiation of a single spot with the same power, which is in accordance with theoretical prediction. This method of optothermal microactuation may expand the practical applications of microactuators, which serve as critical units in micromechanical devices and micro-opto-electro-mechanical systems (MOEMS). (paper)

  19. Quantitative Study on Corrosion of Steel Strands Based on Self-Magnetic Flux Leakage

    Directory of Open Access Journals (Sweden)

    Runchuan Xia

    2018-05-01

    Full Text Available This paper proposes a new computing method to quantitatively and non-destructively determine the corrosion of steel strands by analyzing the self-magnetic flux leakage (SMFL) signals from them. The magnetic dipole model and three growth models (Logistic model, Exponential model, and Linear model) were proposed to theoretically analyze the characteristic value of the SMFL. Then, an experimental study of corrosion detection by a magnetic sensor was carried out. The setup of the magnetic scanning device and the signal collection method are also introduced. The results show that the Logistic growth model is verified as the optimal model for calculating the magnetic field, with good fitting effects. Combined with the analysis of the experimental data, the amplitudes of the calculated Bx(x,z) curves agree with the measured values in general. This method provides significant application prospects for the evaluation of the corrosion and the residual bearing capacity of steel strands.
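    A minimal sketch of fitting a logistic growth curve to characteristic-value data, here a brute-force least-squares grid search on synthetic data generated from a known curve. The parameterization k/(1 + a*exp(-r*t)) is one common form of the logistic model; the paper's exact form and data may differ:

```python
import math

def logistic(t, k, a, r):
    """Logistic growth curve: k / (1 + a * exp(-r * t))."""
    return k / (1.0 + a * math.exp(-r * t))

def fit_logistic(ts, ys, k_grid, a_grid, r_grid):
    """Brute-force least-squares fit over a parameter grid."""
    best, best_sse = None, float("inf")
    for k in k_grid:
        for a in a_grid:
            for r in r_grid:
                sse = sum((logistic(t, k, a, r) - y) ** 2
                          for t, y in zip(ts, ys))
                if sse < best_sse:
                    best, best_sse = (k, a, r), sse
    return best, best_sse

# Synthetic "characteristic value vs. corrosion time" data from a known curve
ts = [i * 0.5 for i in range(20)]
ys = [logistic(t, 10.0, 9.0, 0.8) for t in ts]
grid = [x * 0.5 for x in range(1, 41)]
(k, a, r), sse = fit_logistic(ts, ys, grid, grid,
                              [x * 0.1 for x in range(1, 21)])
```

    A grid search is used only to keep the sketch dependency-free; in practice a nonlinear least-squares routine would recover the same (k, a, r).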

  20. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.

    Science.gov (United States)

    Johnson, Shane D; Groff, Elizabeth R

    2014-07-01

    The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity, agent-based computational modeling, which may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs (not without its own issues) may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.
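    For a flavor of the approach, here is a toy routine-activity-style ABM: offenders random-walk on a grid and a "crime" is counted whenever one lands on an unguarded target cell. Everything here (grid size, counts, movement rule) is an invented illustration, not a model from the reviewed studies:

```python
import random

def run_abm(grid=20, offenders=10, targets=15, guarded=5, steps=500, seed=42):
    """Toy routine-activity ABM: crime occurs when a random-walking
    offender meets an unguarded target cell on a wrap-around grid."""
    rng = random.Random(seed)
    cells = [(x, y) for x in range(grid) for y in range(grid)]
    target_cells = set(rng.sample(cells, targets))
    guarded_cells = set(rng.sample(sorted(target_cells), guarded))
    pos = [rng.choice(cells) for _ in range(offenders)]
    crimes = 0
    for _ in range(steps):
        for i, (x, y) in enumerate(pos):
            dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            x, y = (x + dx) % grid, (y + dy) % grid
            pos[i] = (x, y)
            if (x, y) in target_cells and (x, y) not in guarded_cells:
                crimes += 1
    return crimes

crimes = run_abm()
crimes_guarded = run_abm(guarded=15)  # all targets guarded: no crime occurs
```

    Even this trivial model exposes the style of the method: micro-level behavioral rules are varied (here, guardianship coverage) and the macro-level crime pattern that emerges is compared against theory.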

  1. Software for energy modelling: a theoretical basis for improvements in the user interface

    Energy Technology Data Exchange (ETDEWEB)

    Siu, Y.L.

    1989-09-01

    A philosophical critique of the relationships between theory, knowledge and practice for a range of existing energy modelling styles is presented. In particular, Habermas's ideas are invoked regarding the three spheres of cognitive interest (i.e. technical, practical and emancipatory) and three levels of understanding of knowledge, the construction of an 'ideal speech situation', and the theory of communicative competence and action. These are adopted as a basis for revealing shortcomings of a representative selection of existing computer-based energy modelling styles, and as a springboard for constructing a new theoretical approach. (author).

  2. Theoretical modeling of steam condensation in the presence of a noncondensable gas in horizontal tubes

    International Nuclear Information System (INIS)

    Lee, Kwon-Yeong; Kim, Moo Hwan

    2008-01-01

    A theoretical model was developed to investigate steam condensation with a noncondensable gas in a horizontal tube. The heat transfer through the vapor/noncondensable gas mixture boundary layer consists of the sensible heat transfer and the latent heat transfer given up by the condensing vapor, and it must equal that from the condensate film to the tube wall. Therefore, the total heat transfer coefficient is given by the film, condensation and sensible heat transfer coefficients. The film heat transfer coefficients of the upper and lower portions of the tube were calculated separately from the Rosson and Meyers (1965) correlation. The heat and mass transfer analogy was used to analyze the steam/noncondensable gas mixture boundary layer. Here, the Nusselt and Sherwood numbers in the gas phase were modified to incorporate the effects of condensate film roughness, suction, and developing flow. The predictions of the theoretical model for the experimental heat transfer coefficients at the top and bottom of the tube were reasonable. The calculated heat transfer coefficients at the top of the tube were higher than those at the bottom, consistent with the experimental results. As the temperature potential at the top of the tube was lower than that at the bottom, the heat fluxes at the upper and lower portions of the tube were similar to each other. Generally speaking, the model predictions showed good agreement with the experimental data. The new empirical correlation proposed by Lee and Kim (2008) for the vertical tube was also applied to the condensation of the steam/noncondensable mixture in a horizontal tube. The Nusselt theory and the Chato correlation were used to calculate the heat transfer coefficients at the top and bottom of the horizontal tube, respectively. The predictions of the new empirical correlation were good and very similar to those of the theoretical model. (author)
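    The stated heat balance, gas-side (condensation plus sensible) heat transfer in series with the condensate film, implies a total coefficient of the familiar series-resistance form. A sketch with invented coefficient values (not the paper's data):

```python
def total_htc(h_film, h_cond, h_sens):
    """Series combination of film and gas-side resistances:
    the gas-side coefficient is the sum of the condensation and sensible
    parts, and 1/h_tot = 1/h_film + 1/(h_cond + h_sens)."""
    h_gas = h_cond + h_sens
    return 1.0 / (1.0 / h_film + 1.0 / h_gas)

# Illustrative values in W/m^2K: a thin film at the top of the tube
# (high h_film) vs. a thick stratified film at the bottom (low h_film),
# with the same gas-side coefficients
h_top = total_htc(h_film=8000.0, h_cond=2500.0, h_sens=150.0)
h_bottom = total_htc(h_film=3000.0, h_cond=2500.0, h_sens=150.0)
```

    With these assumptions the top-of-tube coefficient exceeds the bottom one, mirroring the trend reported in the abstract, and the total is always bounded by the smaller of the two resistive paths.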

  3. Spectral Quantitative Analysis Model with Combining Wavelength Selection and Topology Structure Optimization

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2016-01-01

    Full Text Available Spectroscopy is an efficient and widely used quantitative analysis method. In this paper, a spectral quantitative analysis model combining wavelength selection and topology structure optimization is proposed. In the proposed method, a backpropagation neural network is adopted for building the component prediction model, and the simultaneous optimization of the wavelength selection and the topology structure of the neural network is realized by nonlinear adaptive evolutionary programming (NAEP). The hybrid chromosome in the binary scheme of NAEP has three parts: the first part represents the topology structure of the neural network, the second part represents the selection of wavelengths in the spectral data, and the third part represents the mutation parameters of NAEP. Two real flue gas datasets are used in the experiments. To demonstrate the effectiveness of the method, six approaches are compared for building the component prediction model: partial least squares with the full spectrum, partial least squares combined with a genetic algorithm, the uninformative variable elimination method, a backpropagation neural network with the full spectrum, a backpropagation neural network combined with a genetic algorithm, and the proposed method. Experimental results verify that the proposed method predicts more accurately and robustly, making it a practical spectral analysis tool.

  4. Effective Drug Delivery in Diffuse Intrinsic Pontine Glioma : A Theoretical Model to Identify Potential Candidates

    NARCIS (Netherlands)

    El-Khouly, Fatma E; van Vuurden, Dannis G; Stroink, Thom; Hulleman, Esther; Kaspers, Gertjan J L; Hendrikse, N Harry; Veldhuijzen van Zanten, Sophie E M

    2017-01-01

    Despite decades of clinical trials for diffuse intrinsic pontine glioma (DIPG), patient survival does not exceed 10% at two years post-diagnosis. Lack of benefit from systemic chemotherapy may be attributed to an intact bloodbrain barrier (BBB). We aim to develop a theoretical model including

  5. Quantitative Finance

    Science.gov (United States)

    James, Jessica

    2017-01-01

    Quantitative finance is a field that has risen to prominence over the last few decades. It encompasses the complex models and calculations that value financial contracts, particularly those which reference events in the future, and apply probabilities to these events. While adding greatly to the flexibility of the market available to corporations and investors, it has also been blamed for worsening the impact of financial crises. But what exactly does quantitative finance encompass, and where did these ideas and models originate? We show that the mathematics behind finance and behind games of chance have tracked each other closely over the centuries and that many well-known physicists and mathematicians have contributed to the field.

  6. Special course on modern theoretical and experimental approaches to turbulent flow structure and its modelling

    Energy Technology Data Exchange (ETDEWEB)

    1987-08-01

    The large eddy concept in turbulent modeling and techniques for direct simulation are discussed. A review of turbulence modeling is presented along with physical and numerical aspects and applications. A closure model for turbulent flows is presented and routes to chaos by quasi-periodicity are discussed. Theoretical aspects of transition to turbulence by space/time intermittency are covered. The application to interpretation of experimental results of fractal dimensions and connection of spatial temporal chaos are reviewed. Simulation of hydrodynamic flow by using cellular automata is discussed.

  7. Qualitative and quantitative guidelines for the comparison of environmental model predictions

    International Nuclear Information System (INIS)

    Scott, M.

    1995-03-01

    The question of how to assess or compare predictions from a number of models is one of concern in the validation of models, in understanding the effects of different models and model parameterizations on model output, and ultimately in assessing model reliability. Comparison of model predictions with observed data is the basic tool of model validation while comparison of predictions amongst different models provides one measure of model credibility. The guidance provided here is intended to provide qualitative and quantitative approaches (including graphical and statistical techniques) to such comparisons for use within the BIOMOVS II project. It is hoped that others may find it useful. It contains little technical information on the actual methods but several references are provided for the interested reader. The guidelines are illustrated on data from the VAMP CB scenario. Unfortunately, these data do not permit all of the possible approaches to be demonstrated since predicted uncertainties were not provided. The questions considered are concerned with a) intercomparison of model predictions and b) comparison of model predictions with the observed data. A series of examples illustrating some of the different types of data structure and some possible analyses have been constructed. A bibliography of references on model validation is provided. It is important to note that the results of the various techniques discussed here, whether qualitative or quantitative, should not be considered in isolation. Overall model performance must also include an evaluation of model structure and formulation, i.e. conceptual model uncertainties, and results for performance measures must be interpreted in this context. Consider a number of models which are used to provide predictions of a number of quantities at a number of time points. In the case of the VAMP CB scenario, the results include predictions of total deposition of Cs-137 and time dependent concentrations in various

  8. Theoretical Aspects of Developing the Ideological Platform for Reforming the Economy of Regions of Ukraine

    Directory of Open Access Journals (Sweden)

    Bielikova Nadiya V.

    2016-01-01

    Full Text Available The article is concerned with developing the ideological platform for reforming the economy of the regions of Ukraine on the basis of theories, conceptions, approaches and methods for studying the regularities of economic development in countries and their regions. The article substantiates the theoretical basis and the methodological support for developing a mechanism of economic reform, as well as the expediency of using particular economic theories and quantitative and qualitative research methods in reforming the economy of the country and its regions. Requirements for the development of the ideological platform for reforming the economy of the country's regions have been defined. The article considers achievements of the philosophical disciplines that form the basis for developing the ideological platform of economic reform: philosophical anthropology (existentialism as a doctrine of human nature (essence)); social theory and the institutional theory developed on its basis; the philosophy of economics; and conceptions and ideas from the philosophy of history. The theoretical basis for studying economic reform in Ukraine and its regions has been substantiated, which consists of the following components: reforming the model of the country's economy as a whole; reforming the model of a particular region's economy; reforming the model of a region's economy as a component of the national economy; and reforming the model of the economy of the country and its regions.

  9. Combined action of ionizing radiation with another factor: common rules and theoretical approach

    International Nuclear Information System (INIS)

    Kim, Jin Kyu; Roh, Changhyun; Komarova, Ludmila N.; Petin, Vladislav G.

    2013-01-01

    Two or more factors can act on biological objects simultaneously, producing combined effects. This study focuses on a theoretical approach to the synergistic interaction arising from the combined action of radiation and another factor on cell inactivation. A mathematical model for the synergistic interaction of different environmental agents is suggested for the quantitative prediction of irreversibly damaged cells after combined exposures. The model takes into account the synergistic interaction of agents and is based on the supposition that the additional effective damages responsible for the synergy are irreversible and originate from an interaction of ineffective sublesions. Experimental results regarding the irreversible component of radiation damage in diploid yeast cells simultaneously exposed to heat combined with ionizing radiation or UV light are presented. A good agreement of the experimental results with the model predictions is demonstrated. The importance of the results obtained for the interpretation of the mechanism of synergistic interaction of various environmental factors is discussed. (author)

  10. Combined action of ionizing radiation with another factor: common rules and theoretical approach

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Kyu; Roh, Changhyun, E-mail: jkkim@kaeri.re.kr [Korea Atomic Energy Research Institute, Jeongeup (Korea, Republic of); Komarova, Ludmila N.; Petin, Vladislav G., E-mail: vgpetin@yahoo.com [Medical Radiological Research Center, Obninsk (Russian Federation)

    2013-07-01

    Two or more factors can act on biological objects simultaneously, producing combined effects. This study focuses on a theoretical approach to the synergistic interaction arising from the combined action of radiation and another factor on cell inactivation. A mathematical model for the synergistic interaction of different environmental agents is suggested for the quantitative prediction of irreversibly damaged cells after combined exposures. The model takes into account the synergistic interaction of agents and is based on the supposition that the additional effective damages responsible for the synergy are irreversible and originate from an interaction of ineffective sublesions. Experimental results regarding the irreversible component of radiation damage in diploid yeast cells simultaneously exposed to heat combined with ionizing radiation or UV light are presented. A good agreement of the experimental results with the model predictions is demonstrated. The importance of the results obtained for the interpretation of the mechanism of synergistic interaction of various environmental factors is discussed. (author)

  11. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  12. Quantitative theoretical analysis of lifetimes and decay rates relevant in laser cooling BaH

    Science.gov (United States)

    Moore, Keith; Lane, Ian C.

    2018-05-01

    Tiny radiative losses below the 0.1% level can prove ruinous to the effective laser cooling of a molecule. In this paper the laser cooling of a hydride is studied with rovibronic detail using ab initio quantum chemistry in order to document the decays to all possible electronic states (not just the vibrational branching within a single electronic transition) and to identify the most populated final quantum states. The effect of spin-orbit and associated couplings on the properties of the lowest excited states of BaH are analysed in detail. The lifetimes of the A²Π₁/₂, H²Δ₃/₂ and E²Π₁/₂ states are calculated (136 ns, 5.8 μs and 46 ns respectively) for the first time, while the theoretical value for B²Σ⁺₁/₂ is in good agreement with experiments. Using a simple rate model, the numbers of absorption-emission cycles possible for both one- and two-colour cooling on the competing electronic transitions are determined, and it is clearly demonstrated that the A²Π-X²Σ⁺ transition is superior to B²Σ⁺-X²Σ⁺, where multiple tiny decay channels degrade its efficiency. Further possible improvements to the cooling method are proposed.
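The photon-budget argument in this abstract can be sketched numerically: with independent decay channels, the expected number of absorption-emission cycles before the molecule is lost to an unaddressed (dark) state is roughly the reciprocal of the total loss branching. A minimal sketch, assuming hypothetical branching fractions, not the BaH values computed in the paper:

```python
def expected_cycles(loss_branching_fractions):
    """Expected absorption-emission cycles before decay to an unaddressed
    (dark) state, assuming independent decays: N = 1 / p_loss."""
    p_loss = sum(loss_branching_fractions)
    if p_loss <= 0:
        raise ValueError("need a positive total loss probability")
    return 1.0 / p_loss

# A single 0.1% leak alone caps the budget at roughly 1000 scattered photons;
# each additional tiny channel lowers it further.
print(expected_cycles([1e-3]))        # roughly 1000
print(expected_cycles([1e-3, 5e-4]))  # roughly 667
```

This makes the abstract's point concrete: losses "below the 0.1% level" still matter because the reciprocal of their sum sets the cycle count.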

  13. The Effect of Private Benefits of Control on Minority Shareholders: A Theoretical Model and Empirical Evidence from State Ownership

    Directory of Open Access Journals (Sweden)

    Kerry Liu

    2017-06-01

    Full Text Available Purpose: The purpose of this paper is to examine the effect of private benefits of control on minority shareholders. Design/methodology/approach: A theoretical model is established. The empirical analysis includes hand-collected data from a wide range of data sources. OLS and 2SLS regression analyses are applied with Huber-White standard errors. Findings: The theoretical model shows that, while private benefits are generally harmful to minority shareholders, the overall effect depends on the size of the large shareholder's ownership. The empirical evidence from government ownership is consistent with the theoretical analysis. Research limitations/implications: The empirical evidence is based on a small number of hand-collected data sets on government ownership. Further studies can be expanded to other types of ownership, such as family ownership and financial institutional ownership. Originality/value: This study is the first to theoretically analyse and empirically test the effect of private benefits of control. In general, this study significantly contributes to the understanding of the effects of large shareholders and corporate governance.

  14. Theoretical basis of the new particles

    International Nuclear Information System (INIS)

    Rujula, A.

    1977-01-01

    The four-quark standard gauge field theory of weak, electromagnetic and strong interactions is reviewed and placed into a historical perspective reaching back as early as 1961. Theoretical predictions of the model are compared to experimental observations available as of the Conference date, charm production in e⁺e⁻ annihilation being in the spotlight. Virtues and shortcomings of the standard model are discussed. The model is concluded to have been an incredibly successful predictive tool. Some theoretical developments around the standard model are also discussed in view of CP violation in SU(2)×U(1) gauge theories, the Higgs bosons and the superunification of the weak, strong and electromagnetic interactions

  15. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China*

    Science.gov (United States)

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-01-01

    Ecological compensation is becoming one of the key multidisciplinary issues in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative model for estimating ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile conflicting benefits among the economic, social, and ecological sectors. PMID:19353749

  16. Mathematical and theoretical neuroscience cell, network and data analysis

    CERN Document Server

    Nieus, Thierry

    2017-01-01

    This volume gathers contributions from theoretical, experimental and computational researchers working on various topics in theoretical, computational and mathematical neuroscience. The focus is on mathematical modelling, analytical and numerical topics, and statistical analysis in neuroscience, with applications. The following subjects are considered: mathematical modelling in neuroscience; analytical and numerical topics; statistical analysis in neuroscience; neural networks; and theoretical neuroscience. The book is addressed to researchers involved in mathematical models applied to neuroscience.

  17. A theoretical model for analysing gender bias in medicine.

    Science.gov (United States)

    Risberg, Gunilla; Johansson, Eva E; Hamberg, Katarina

    2009-08-03

    During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model for understanding how gender bias in medicine can occur and be analysed. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider, in biology and disease as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change through facts alone. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.

  18. Fixing the cracks in the crystal ball: A maturity model for quantitative risk assessment

    International Nuclear Information System (INIS)

    Rae, Andrew; Alexander, Rob; McDermid, John

    2014-01-01

    Quantitative risk assessment (QRA) is widely practiced in system safety, but there is insufficient evidence that QRA in general is fit for purpose. Defenders of QRA draw a distinction between poor or misused QRA and correct, appropriately used QRA, but this distinction is only useful if we have robust ways to identify the flaws in an individual QRA. In this paper we present a comprehensive maturity model for QRA which covers all the potential flaws discussed in the risk assessment literature and in a collection of risk assessment peer reviews. We provide initial validation of the completeness and realism of the model. Our risk assessment maturity model provides a way to prioritise both process development within an organisation and empirical research within the QRA community. - Highlights: • Quantitative risk assessment (QRA) is widely practiced, but there is insufficient evidence that it is fit for purpose. • A given QRA may be good, or it may not – we need systematic ways to distinguish this. • We have created a maturity model for QRA which covers all the potential flaws discussed in the risk assessment literature. • We have provided initial validation of the completeness and realism of the model. • The maturity model can also be used to prioritise QRA research discipline-wide

  19. Discussions on the non-equilibrium effects in the quantitative phase field model of binary alloys

    International Nuclear Information System (INIS)

    Zhi-Jun, Wang; Jin-Cheng, Wang; Gen-Cang, Yang

    2010-01-01

    All quantitative phase field models try to eliminate the artificial factors of solutal drag, interface diffusion and interface stretch in the diffuse interface. These artificial non-equilibrium effects, due to the introduction of a diffuse interface, are analysed based on the thermodynamic status across the diffuse interface in the quantitative phase field model of binary alloys. Results indicate that the non-equilibrium effects are related to a negative driving force in the local region on the solid side of the diffuse interface. The negative driving force results from the fact that the phase field model is derived from equilibrium conditions but is used to simulate the non-equilibrium solidification process. The interface-thickness dependence of the non-equilibrium effects and its restriction on large-scale simulation are also discussed. (cross-disciplinary physics and related areas of science and technology)

  20. Team Resilience as a Second-Order Emergent State: A Theoretical Model and Research Directions

    Directory of Open Access Journals (Sweden)

    Clint Bowers

    2017-08-01

    Full Text Available Resilience has been recognized as an important phenomenon for understanding how individuals overcome difficult situations. However, it is not only individuals who face difficulties; it is not uncommon for teams to experience adversity. When they do, they must be able to overcome these challenges without performance decrements. This manuscript presents a theoretical model that might be helpful in conceptualizing this important construct. Specifically, it describes team resilience as a second-order emergent state. We also include research propositions that follow from the model.

  1. Theoretical modeling of the plasma-assisted catalytic growth and field emission properties of graphene sheet

    International Nuclear Information System (INIS)

    Sharma, Suresh C.; Gupta, Neha

    2015-01-01

    A theoretical model for the catalyst-assisted growth of a graphene sheet in the presence of plasma has been developed. It is observed that the plasma parameters can strongly affect the growth and field emission properties of the graphene sheet. The model accounts for the charging rate of the graphene sheet; the number densities of electrons, ions, and neutral atoms; various elementary processes on the surface of the catalyst nanoparticle; surface diffusion and accretion of ions; and the formation of carbon clusters and large graphene islands. In our investigation, it is found that the thickness of the graphene sheet decreases with the plasma parameters (number density of hydrogen ions and RF power), and consequently the field emission of electrons from the graphene sheet surface increases. The time evolution of the height of the graphene sheet with ion density and the sticking coefficient of carbon species has also been examined. Some of our theoretical results are in agreement with experimental observations

  2. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  3. Quantitative model for the blood pressure‐lowering interaction of valsartan and amlodipine

    Science.gov (United States)

    Heo, Young‐A; Holford, Nick; Kim, Yukyung; Son, Mijeong

    2016-01-01

    Aims The objective of this study was to develop a population pharmacokinetic (PK) and pharmacodynamic (PD) model to quantitatively describe the antihypertensive effect of combined therapy with amlodipine and valsartan. Methods PK modelling was used with data collected from 48 healthy volunteers receiving a single dose of a combined formulation of 10 mg amlodipine and 160 mg valsartan. Systolic (SBP) and diastolic blood pressure (DBP) were recorded during combined administration. SBP and DBP data for each drug alone were gathered from the literature. PKPD models of each drug and for combined administration were built with NONMEM 7.3. Results A two-compartment model with zero-order absorption best described the PK data of both drugs. Amlodipine and valsartan monotherapy effects on SBP and DBP were best described by an Imax model with an effect compartment delay. Combined therapy was described using a proportional interaction term as follows: (D1 + D2) + ALPHA×(D1 × D2). D1 and D2 are the predicted drug effects of amlodipine and valsartan monotherapy, respectively. ALPHA is the interaction term for combined therapy. Quantitative estimates of ALPHA were −0.171 (95% CI: −0.218, −0.143) for SBP and −0.0312 (95% CI: −0.07739, −0.00283) for DBP. These infra-additive interaction terms for both SBP and DBP were consistent with literature results for combined administration of drugs in these classes. Conclusion PKPD models for SBP and DBP successfully described the time course of the antihypertensive effects of amlodipine and valsartan. An infra-additive interaction between amlodipine and valsartan when used in combined administration was confirmed and quantified. PMID:27504853
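The reported interaction structure can be illustrated directly. A minimal sketch, assuming hypothetical Imax-model parameters and fractional monotherapy effects; only the functional forms and the SBP interaction estimate ALPHA = -0.171 come from the abstract:

```python
def imax_effect(conc, imax, ic50):
    """Monotherapy effect from a standard Imax model (effect-compartment
    delay omitted for simplicity)."""
    return imax * conc / (ic50 + conc)

def combined_effect(d1, d2, alpha):
    """Combined blood-pressure-lowering effect with a proportional
    interaction term: (D1 + D2) + alpha * (D1 * D2).
    alpha < 0 corresponds to an infra-additive interaction."""
    return (d1 + d2) + alpha * d1 * d2

# Hypothetical fractional effects (parameter values are made up):
d_amlodipine = imax_effect(conc=10.0, imax=0.20, ic50=10.0)    # 0.10
d_valsartan  = imax_effect(conc=160.0, imax=0.15, ic50=160.0)  # 0.075
# Using the abstract's SBP interaction estimate ALPHA = -0.171:
print(combined_effect(d_amlodipine, d_valsartan, alpha=-0.171))
```

Because the interaction term is proportional to the product of the two effects, the infra-additive correction is small when either monotherapy effect is small, consistent with the modest ALPHA estimates reported.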

  4. Tannin structural elucidation and quantitative ³¹P NMR analysis. 1. Model compounds.

    Science.gov (United States)

    Melone, Federica; Saladino, Raffaele; Lange, Heiko; Crestini, Claudia

    2013-10-02

    Tannins and flavonoids are secondary metabolites of plants that display a wide array of biological activities. This peculiarity is related to the inhibition of extracellular enzymes that occurs through the complexation of peptides by tannins. Not only the nature of these interactions, but more fundamentally also the structure of these heterogeneous polyphenolic molecules are not completely clear. This first paper describes the development of a new analytical method for the structural characterization of tannins on the basis of tannin model compounds employing an in situ labeling of all labile H groups (aliphatic OH, phenolic OH, and carboxylic acids) with a phosphorus reagent. The ³¹P NMR analysis of ³¹P-labeled samples allowed the unprecedented quantitative and qualitative structural characterization of hydrolyzable tannins, proanthocyanidins, and catechin tannin model compounds, forming the foundations for the quantitative structural elucidation of a variety of actual tannin samples described in part 2 of this series.

  5. Theoretical modeling and experimental validation of a torsional piezoelectric vibration energy harvesting system

    Science.gov (United States)

    Qian, Feng; Zhou, Wanlu; Kaluvan, Suresh; Zhang, Haifeng; Zuo, Lei

    2018-04-01

    Vibration energy harvesting has been extensively studied in recent years to explore a continuous power source for sensor networks and low-power electronics. Torsional vibration widely exists in mechanical engineering; however, it has not yet been well exploited for energy harvesting. This paper presents a theoretical model and an experimental validation of a torsional vibration energy harvesting system comprised of a shaft and a shear-mode piezoelectric transducer. The piezoelectric transducer position on the surface of the shaft is parameterized by two variables that are optimized to obtain the maximum power output. The piezoelectric transducer can work in the d15 mode (pure shear mode), the coupled mode of d31 and d33, or the coupled mode of d33, d31 and d15, respectively, when attached at different angles. Approximate expressions for voltage and power are derived from the theoretical model, which gave predictions in good agreement with analytical solutions. Physical interpretations of the implicit relationship between the power output and the position parameters of the piezoelectric transducer are given based on the derived approximate expressions. The optimal position and angle of the piezoelectric transducer are determined, in which case the transducer works in the coupled mode of d15, d31 and d33.

  6. Within tree variation of lignin, extractives, and microfibril angle coupled with the theoretical and near infrared modeling of microfibril angle

    Science.gov (United States)

    Brian K. Via; Chi L. So; Leslie H. Groom; Todd F. Shupe; Michael Stine; Jan Wikaira

    2007-01-01

    A theoretical model was built predicting the relationship between microfibril angle and lignin content at the Ångström (Å) level. Both theoretical and statistical examination of experimental data support a square-root transformation of lignin content to predict microfibril angle. The experimental material used came from 10 longleaf pine (Pinus palustris)...
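The square-root transformation described above amounts to an ordinary least-squares fit of microfibril angle against sqrt(lignin). A minimal sketch on synthetic, made-up data; the paper's longleaf pine measurements are not reproduced here:

```python
import numpy as np

# Synthetic stand-in for paired lignin (%) and microfibril angle (degrees)
# measurements, generated from known coefficients so the fit is checkable.
lignin = np.array([24.0, 26.0, 28.0, 30.0, 32.0])
true_a, true_b = -30.0, 10.0
mfa = true_a + true_b * np.sqrt(lignin)   # noise-free for the illustration

# Fit MFA = a + b * sqrt(lignin) by ordinary least squares on sqrt(lignin).
b_hat, a_hat = np.polyfit(np.sqrt(lignin), mfa, deg=1)
print(round(a_hat, 6), round(b_hat, 6))   # recovers a ≈ -30.0, b ≈ 10.0
```

With real, noisy data the same call estimates the coefficients rather than recovering them exactly; the transformation itself is the point.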

  7. A unified theoretical framework for mapping models for the multi-state Hamiltonian.

    Science.gov (United States)

    Liu, Jian

    2016-11-28

    We propose a new unified theoretical framework to construct equivalent representations of the multi-state Hamiltonian operator and present several approaches for the mapping onto the Cartesian phase space. After mapping an F-dimensional Hamiltonian onto an F+1 dimensional space, creation and annihilation operators are defined such that the F+1 dimensional space is complete for any combined excitation. Commutation and anti-commutation relations are then naturally derived, which show that the underlying degrees of freedom are neither bosons nor fermions. This sets the scene for developing equivalent expressions of the Hamiltonian operator in quantum mechanics and their classical/semiclassical counterparts. Six mapping models are presented as examples. The framework also offers a novel way to derive mapping models such as the well-known Meyer-Miller model.

  8. Harnessing the theoretical foundations of the exponential and beta-Poisson dose-response models to quantify parameter uncertainty using Markov Chain Monte Carlo.

    Science.gov (United States)

    Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward

    2013-09-01

    Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
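The two dose-response forms named in this abstract can be written down compactly. A minimal sketch of the exponential model and the conventional approximation to the beta-Poisson model; parameter values are illustrative, not taken from the paper's case-study data set:

```python
import math

def p_exponential(dose, r):
    """Exponential dose-response model: P(inf) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def p_beta_poisson_approx(dose, alpha, beta):
    """Conventional approximation to the beta-Poisson model:
    P(inf) = 1 - (1 + dose/beta) ** (-alpha).
    The paper stresses that this approximation is only valid under certain
    criteria; the exact model involves Kummer confluent hypergeometric
    functions, which this sketch deliberately avoids."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Illustrative parameter values:
print(p_exponential(100.0, r=0.01))                        # ~0.632
print(p_beta_poisson_approx(100.0, alpha=0.2, beta=50.0))  # ~0.197
```

In a second-order risk characterization, these functions would be evaluated over posterior samples of (r) or (alpha, beta) rather than at point estimates, which is what the MCMC procedures in the paper provide.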

  9. Bidirectional interconversion of stem and non-stem cancer cell populations: A reassessment of theoretical models for tumor heterogeneity

    NARCIS (Netherlands)

    van Neerven, Sanne M.; Tieken, Mathijs; Vermeulen, Louis; Bijlsma, Maarten F.

    2016-01-01

    Resolving the origin of intratumor heterogeneity has proven to be one of the central challenges in cancer research during recent years. Two theoretical models explaining the emergence of intratumor heterogeneity have come to dominate cancer biology literature: the clonal evolution model and the

  10. Quantitative plasma spectroscopy at JET and Extrap-T1

    International Nuclear Information System (INIS)

    Zastrow, K.D.

    1993-01-01

    Studies in quantitative plasma spectroscopy are performed on the Joint European Torus (JET) in Culham, Great Britain, and on the Extrap-T1 reversed-field pinch (RFP) in Stockholm. The model concepts that form the basis of these studies are reviewed. At JET, spectra of He-like nickel are observed with a high-resolution X-ray crystal spectrometer. The experimental line intensity ratios of satellite lines to the resonance line are compared with theoretical data. The agreement is found to be good, with the exception of the excitation of dipole-forbidden lines. The spectrum is also used to derive the central ion temperature, central toroidal rotation and nickel concentration based upon a model for the radial emission. The results are compared with those from an independent diagnostic, charge-exchange recombination spectroscopy (CXRS). Theoretically predicted cross section effects on the CXRS data are verified. On Extrap-T1, vacuum ultraviolet (VUV) spectra and visible spectra are analysed. From these, thermodynamic quantities of the plasma are derived, such as electron temperature, impurity concentrations and particle fluxes. The oxygen ionization balance is measured and compared to calculations with a collisional-dielectronic model with metastable resolution, both in 0-dimensional time-dependent and transport model calculations. The performance of the RFP discharges is investigated in terms of radiative power loss and energy and particle confinement properties. The scaling of the energy confinement time with plasma current, pinch parameter and electron density is found to be dominated by the dynamo activity needed to sustain the RFP configuration. The scaling of the particle confinement time, on the other hand, is dominated by pressure-driven activity associated with the regulation of β

  11. Towards Quantitative Spatial Models of Seabed Sediment Composition.

    Directory of Open Access Journals (Sweden)

    David Stephens

    Full Text Available There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom's parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method.
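The additive log-ratio (alr) scale mentioned above maps a three-part composition such as (mud, sand, gravel) onto two unconstrained coordinates that a regression model can predict directly. A minimal sketch, assuming gravel as the reference part (the abstract does not state which part the authors used):

```python
import numpy as np

def alr(composition, eps=1e-9):
    """Additive log-ratio transform of a (mud, sand, gravel) composition,
    using gravel as the reference part: [log(mud/gravel), log(sand/gravel)].
    A tiny eps guards against zero fractions."""
    mud, sand, gravel = np.asarray(composition, dtype=float) + eps
    return np.array([np.log(mud / gravel), np.log(sand / gravel)])

def alr_inverse(y):
    """Map two log-ratio coordinates back to a composition summing to 1."""
    expanded = np.append(np.exp(y), 1.0)   # reference part maps to exp(0) = 1
    return expanded / expanded.sum()

x = np.array([0.2, 0.5, 0.3])              # mud, sand, gravel fractions
y = alr(x)
print(np.round(alr_inverse(y), 6))         # round-trips to ~[0.2, 0.5, 0.3]
```

Predicting the two alr coordinates (e.g. with random forests, as in the paper) and back-transforming guarantees the predicted fractions are positive and sum to one, which unconstrained per-fraction regression does not.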

  12. The dynamics of the nuclear disassembly in a field-theoretical model at finite entropies

    International Nuclear Information System (INIS)

    Knoll, J.; Strack, B.

    1984-10-01

    The expansion phase of a hot nuclear system as created in an energetic heavy-ion collision is calculated and discussed by a self-consistent field-theoretical model. Dynamical instabilities arising during the expansion from strong fluctuations of the one-body density are included explicitly. The first multiplicity distributions and mass spectra resulting from a series of numerical runs in a (2+1)-dimensional model world are presented. The dependence of the break-up dynamics on both the properties of the binding force and on possible correlations in the initially compressed hot state is discussed. (orig.)

  13. Does the U.S. exercise contagion on Italy? A theoretical model and empirical evidence

    Science.gov (United States)

    Cerqueti, Roy; Fenga, Livio; Ventura, Marco

    2018-06-01

    This paper deals with the theme of contagion in financial markets. To this end, we develop a model based on Mixed Poisson Processes to describe the abnormal returns of the financial markets of two considered countries. In so doing, the article defines the theoretical conditions to be satisfied in order to state that one of them - the so-called leader - exercises contagion on the others - the followers. Specifically, we employ an invariant probabilistic result stating that a suitable transformation of a Mixed Poisson Process is still a Mixed Poisson Process. The theoretical claim is validated by implementing an extensive simulation analysis grounded on empirical data. The countries considered are the U.S. (as the leader) and Italy (as the follower), and the period under scrutiny is long, ranging from 1970 to 2014.
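A Mixed Poisson Process draws a random intensity and then Poisson counts given that intensity, which is what makes abnormal-return counts overdispersed relative to a plain Poisson model. A minimal simulation sketch, with a Gamma mixing law chosen purely for illustration (the abstract does not fix the mixing distribution):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's multiplicative method; adequate for the small intensities here."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def mixed_poisson_sample(n, shape, scale, rng):
    """n counts from a Mixed Poisson model on unit intervals: the intensity
    Lambda is random (Gamma-distributed here, an illustrative choice), and
    each count is Poisson(Lambda)."""
    return [poisson(rng.gammavariate(shape, scale), rng) for _ in range(n)]

rng = random.Random(42)
counts = mixed_poisson_sample(1000, 2.0, 1.5, rng)  # mean intensity = 3.0
```

The sample variance of such counts exceeds the sample mean, unlike a homogeneous Poisson process, which is the kind of signature the simulation analysis can exploit.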

  14. Theoretical high energy physics

    International Nuclear Information System (INIS)

    Lee, T.D.

    1990-05-01

    This report discusses progress on theoretical high energy physics at Columbia University in New York City. Some of the topics covered are: Chern-Simons gauge field theories; dynamical fermion QCD calculations; lattice gauge theory; the standard model of weak and electromagnetic interactions; the Boson-fermion model of cuprate superconductors; S-channel theory of superconductivity; and the axial anomaly and its relation to spin in the parton model.

  15. THEORETICAL MODELING OF THE FEEDBACK STABILIZATION OF EXTERNAL MHD MODES IN TOROIDAL GEOMETRY

    International Nuclear Information System (INIS)

    CHANCE, M.S.; CHU, M.S.; OKABAYASHI, M.; TURNBULL, A.D.

    2001-02-01

    OAK-B135 A theoretical framework for understanding the feedback mechanism against external MHD modes has been formulated. Efficient computational tools--the GATO stability code coupled with a substantially modified VACUUM code--have been developed to effectively design viable feedback systems against these modes. The analysis assumed a thin resistive shell and a feedback coil structure accurately modeled in θ, with only a single harmonic variation in φ. Time constants and induced currents in the enclosing resistive shell are calculated. An optimized configuration based on an idealized model has been computed for the DIII-D device. Up to 90% of the effectiveness of an ideal wall can be achieved.

  16. Experimental Investigation and Theoretical Modeling of Nanosilica Activity in Concrete

    Directory of Open Access Journals (Sweden)

    Han-Seung Lee

    2014-01-01

    Full Text Available This paper presents experimental investigations and theoretical modeling of the hydration reaction of nanosilica blended concrete with different water-to-binder ratios and different nanosilica replacement ratios. The developments of chemically bound water contents, calcium hydroxide contents, and compressive strength of Portland cement control specimens and nanosilica blended specimens were measured at different ages: 1 day, 3 days, 7 days, 14 days, and 28 days. Due to the pozzolanic reaction of nanosilica, the contents of calcium hydroxide in nanosilica blended pastes are considerably lower than those in the control specimens. Compared with the control specimens, the extent of compressive strength enhancement in the nanosilica blended specimens is much higher at early ages. Additionally, a blended cement hydration model that considers both the hydration reaction of cement and the pozzolanic reaction of nanosilica is proposed. The properties of nanosilica blended concrete during hardening were evaluated using the degree of hydration of cement and the reaction degree of nanosilica. The calculated chemically bound water contents, calcium hydroxide contents, and compressive strength were generally consistent with the experimental results.
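The measured drop in calcium hydroxide (CH) can be mirrored by a simple mass balance in the spirit of the proposed blended hydration model: CH produced by cement hydration minus CH consumed by the pozzolanic reaction of nanosilica. The coefficients and degrees of reaction below are illustrative round numbers, not the values calibrated in the paper:

```python
def calcium_hydroxide_content(cement, nanosilica, alpha_c, alpha_ns,
                              ch_yield=0.25, ch_demand=1.85):
    """CH mass balance for a blended paste. ch_yield (g CH per g of reacted
    cement) and ch_demand (g CH per g of reacted silica) are illustrative
    assumptions, not the paper's calibrated coefficients."""
    produced = ch_yield * alpha_c * cement       # from cement hydration
    consumed = ch_demand * alpha_ns * nanosilica  # pozzolanic consumption
    return max(produced - consumed, 0.0)

# 100 g of binder with 3% nanosilica replacement; the degrees of reaction
# (alpha_c, alpha_ns) at 28 days are assumed values for illustration.
ch_blended = calcium_hydroxide_content(cement=97.0, nanosilica=3.0,
                                       alpha_c=0.75, alpha_ns=0.9)
ch_control = calcium_hydroxide_content(cement=100.0, nanosilica=0.0,
                                       alpha_c=0.75, alpha_ns=0.0)
```

Even this crude balance reproduces the qualitative finding: the blended paste carries less CH than the control at the same degree of cement hydration.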

  17. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    Full Text Available The SDN-based controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on the security mechanism, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. Based on an analysis of the controller threat model, we give formal models of the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN-based controllers: POX, OpenDaylight, Floodlight, and Ryu. The test, whose outcomes are similar to those of traditional qualitative analysis, demonstrates that our approach yields specific security values for different controllers and produces more accurate results.
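The paper's exact calculation model is not reproduced here, but the idea of a Threat/Effort ratio aggregated over a controller's attack surface can be sketched. The surface list, threat weights, and effort values below are hypothetical stand-ins for illustration:

```python
def threat_effort_score(surfaces):
    """Aggregate a hypothetical Threat/Effort ratio over attack surfaces.
    Each surface is a (threat_weight, attack_effort) pair; higher scores
    mean more threat exposure per unit of attacker effort. The weighting
    scheme is our illustrative stand-in, not the paper's calibrated model."""
    return sum(threat / effort for threat, effort in surfaces)

# Illustrative surfaces: northbound API, southbound (OpenFlow) interface,
# and an internal data store, for two hypothetical controller versions.
controller_a = [(8.0, 4.0), (9.0, 6.0), (5.0, 5.0)]
controller_b = [(8.0, 8.0), (9.0, 9.0), (5.0, 10.0)]
score_a = threat_effort_score(controller_a)
score_b = threat_effort_score(controller_b)
# A lower score is better from the defender's perspective, enabling the
# kind of cross-controller comparison described in the abstract.
```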

  18. Quantitative Hydraulic Models Of Early Land Plants Provide Insight Into Middle Paleozoic Terrestrial Paleoenvironmental Conditions

    Science.gov (United States)

    Wilson, J. P.; Fischer, W. W.

    2010-12-01

    Fossil plants provide useful proxies of Earth’s climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves, even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 MPa^-1 s^-1, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptations for transport safety, placing the plant’s vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon’s tracheids suggests that environmental conditions of reduced relative
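The conductivity figures quoted above can be related to anatomy through the Hagen-Poiseuille law, under which the lumen area-specific conductivity of an ideal conduit is k = r^2 / (8 mu). This is a textbook idealisation of the kind of model described (the full model also accounts for pit and wall resistances, omitted here); the radius below is an assumed value chosen to land near the quoted 0.015 m^2 MPa^-1 s^-1 threshold:

```python
def tracheid_conductivity(radius_m, viscosity_MPa_s=1.0e-9):
    """Lumen area-specific hydraulic conductivity of an ideal cylindrical
    conduit from the Hagen-Poiseuille law, k = r^2 / (8 mu). With the
    viscosity of water (~1.0e-3 Pa*s = 1.0e-9 MPa*s) given in MPa*s, the
    result is in m^2 MPa^-1 s^-1, matching the units in the abstract."""
    return radius_m ** 2 / (8.0 * viscosity_MPa_s)

# An assumed ~11 micrometre lumen radius (illustrative, not a measured
# Asteroxylon value) yields a conductivity near the 0.015 threshold.
k_asteroxylon = tracheid_conductivity(11e-6)
```

Because k scales with r^2, a conduit must be substantially wider to match a narrower but more efficient one, which is the comparison drawn against Paleozoic conifer tracheids.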

  19. New theoretical model for two-phase flow discharged from stratified two-phase region through small break

    International Nuclear Information System (INIS)

    Yonomoto, Taisuke; Tasaka, Kanji

    1988-01-01

    A theoretical and experimental study was conducted to understand two-phase flow discharged from a stratified two-phase region through a small break. This problem is important for the analysis of a small break loss-of-coolant accident (LOCA) in a light water reactor (LWR). The present theoretical results show that the break quality is a function of h/h_b, where h is the elevation difference between the bulk water level in the upstream region and the break, and the suffix b denotes the condition at entrainment initiation. This result is consistent with existing experimental results in the literature. An air-water experiment was also conducted with the break orientation as an experimental parameter to develop and assess the model. Comparisons between the model and the experimental results show that the present model can satisfactorily predict the flow rate and the quality at the break without using any adjusting constant when liquid entrainment occurs in a stratified two-phase region. When gas entrainment occurs, the experimental data are correlated well by using a single empirical constant. (author)

  20. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA.

    Science.gov (United States)

    Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Many of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain, and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method
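The wavelet-based special case mentioned above works by projecting each image into the wavelet domain, fitting the mixed model coefficient-by-coefficient, and then inverting the transform. A one-level 1-D Haar step (a simplified stand-in for the 2-D transforms actually applied to images) shows the invertible forward/backward pair on which that approach rests:

```python
import math

def haar_step(signal):
    """One level of the Haar discrete wavelet transform: normalised
    pairwise averages (approximation) and differences (detail).
    Model fitting then proceeds coefficient-by-coefficient in this domain.
    Assumes an even-length input."""
    s = math.sqrt(2.0)
    pairs = list(zip(signal[0::2], signal[1::2]))
    approx = [(a + b) / s for a, b in pairs]
    detail = [(a - b) / s for a, b in pairs]
    return approx, detail

def haar_step_inverse(approx, detail):
    """Exact inverse of haar_step, recovering the original signal."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

x = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 0.0]
a, d = haar_step(x)
x_back = haar_step_inverse(a, d)   # round-trips to the original signal
```

Shrinking or modeling the detail coefficients before inverting is what produces the adaptive smoothing of local features that the framework exploits.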