Gramatica, Paola; Papa, Ester; Marrocchi, Assunta; Minuti, Lucio; Taticchi, Aldo
2007-03-01
Various polycyclic aromatic hydrocarbons (PAHs), ubiquitous environmental pollutants, are recognized mutagens and carcinogens. A homogeneous set of mutagenicity data (TA98 and TA100, +S9) for 32 benzocyclopentaphenanthrenes/chrysenes was modeled by the quantitative structure-activity relationship (QSAR) classification methods k-nearest neighbor and classification and regression tree, using theoretical holistic molecular descriptors. A genetic algorithm selected the best subset of variables for modeling mutagenicity. The models were validated by leave-one-out and leave-50%-out approaches and performed well, with sensitivity and specificity in the 90-100% range. Mutagenicity assessment for these PAHs thus requires only a few theoretical descriptors of molecular structure.
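The classification-and-validation workflow named here (k-nearest neighbor with leave-one-out validation) can be sketched in a few lines of Python. The descriptor values and class labels below are invented for illustration; they are not the paper's PAH data, and the paper's genetic-algorithm variable selection is omitted.

```python
import math

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    order = sorted(range(len(train_X)), key=lambda i: math.dist(train_X[i], x))
    votes = [train_y[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

def leave_one_out_accuracy(X, y, k=3):
    """Leave-one-out validation: predict each sample from all the others."""
    hits = 0
    for i in range(len(X)):
        rest_X = X[:i] + X[i + 1:]
        rest_y = y[:i] + y[i + 1:]
        hits += knn_predict(rest_X, rest_y, X[i], k) == y[i]
    return hits / len(X)

# Toy "molecular descriptor" data: two well-separated classes
# (0 = non-mutagenic, 1 = mutagenic) -- purely illustrative values.
X = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25], [0.9, 0.8], [0.8, 0.9], [0.85, 0.75]]
y = [0, 0, 0, 1, 1, 1]
print(leave_one_out_accuracy(X, y, k=3))  # well-separated classes -> 1.0
```

The same loop generalizes to leave-50%-out validation by holding out half the samples per fold instead of one.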
Hardcastle, T. P.; Seabourne, C. R.; Kepaptsoglou, D. M.; Susi, T.; Nicholls, R. J.; Brydson, R. M. D.; Scott, A. J.; Ramasse, Q. M.
2017-06-01
Electron energy loss spectroscopy (EELS) is a powerful tool for understanding the chemical structure of materials down to the atomic level, but challenges remain in accurately and quantitatively modelling the response. We compare comprehensive theoretical density functional theory (DFT) calculations of 1s core-level EEL K-edge spectra of pure, B-doped and N-doped graphene, with and without a core-hole, to previously published atomic-resolution experimental electron microscopy data. In this specific system, the ground state approximation is found to perform consistently better than the frozen core-hole approximation. The impact of including or excluding a core-hole on the resultant theoretical band structures, densities of states, electron densities and EEL spectra was thoroughly examined and compared. It is concluded that the frozen core-hole approximation exaggerates the effects of the core-hole in graphene and should be discarded in favour of the ground state approximation. These results indicate an overriding need for theorists to embrace many-body effects in the pursuit of accuracy in theoretical spectroscopy, rather than relying on system-tailored approaches whose approximations are selected empirically.
Directory of Open Access Journals (Sweden)
G. J. Pelgrim
2016-01-01
Technological advances in magnetic resonance imaging (MRI) and computed tomography (CT), including higher spatial and temporal resolution, have made absolute myocardial perfusion quantification feasible, something previously achievable only with positron emission tomography (PET). This could facilitate integration of myocardial perfusion biomarkers into the current workup for coronary artery disease (CAD), as MRI and CT systems are more widely available than PET scanners. Cardiac PET scanning remains expensive and is restricted by the requirement of a nearby cyclotron. Clinical evidence is needed to demonstrate that MRI and CT match the accuracy of PET for myocardial perfusion quantification. However, lack of standardization of acquisition protocols and tracer kinetic model selection complicates comparison between different studies and modalities. The aim of this overview is to provide insight into the different tracer kinetic models for quantitative myocardial perfusion analysis and to address typical implementation issues in MRI and CT. We compare different models based on their theoretical derivations and present the respective consequences for MRI and CT acquisition parameters, highlighting the interplay between tracer kinetic modeling and acquisition settings.
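As a concrete illustration of tracer kinetic modeling, a minimal sketch of a one-compartment (Kety-type) model is shown below, integrated with a simple forward-Euler step. The arterial input curve, flow values and distribution volume are invented for illustration and do not correspond to any particular modality discussed above.

```python
def tissue_curve(ca, flow, vd, dt):
    """One-compartment (Kety) model: dCt/dt = F*Ca(t) - (F/Vd)*Ct(t).
    ca: sampled arterial input function, flow F in 1/s,
    vd: distribution volume (dimensionless), dt: time step in s."""
    ct, out = 0.0, []
    for a in ca:
        ct += dt * (flow * a - (flow / vd) * ct)  # forward-Euler update
        out.append(ct)
    return out

# Boxcar arterial input: contrast agent present for the first 10 s.
dt = 0.5
ca = [1.0 if t * dt < 10 else 0.0 for t in range(120)]
low = tissue_curve(ca, flow=0.01, vd=0.3, dt=dt)   # hypothetical low perfusion
high = tissue_curve(ca, flow=0.03, vd=0.3, dt=dt)  # hypothetical normal perfusion
print(max(high) > max(low))  # higher flow -> higher tissue peak: True
```

Fitting such a curve to measured tissue enhancement (rather than simulating it forward) is what yields an absolute perfusion estimate in practice.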
Valuation theoretic and model theoretic aspects of local uniformization
Kuhlmann, Franz-Viktor
2010-01-01
This paper gives a survey on a valuation theoretical approach to local uniformization in positive characteristic, the model theory of valued fields in positive characteristic, and their connection with the valuation theoretical phenomenon of defect.
Theoretical aspects of spatial-temporal modeling
Matsui, Tomoko
2015-01-01
This book provides a modern introductory tutorial on specialized theoretical aspects of spatial and temporal modeling. The areas covered involve a range of topics which reflect the diversity of this domain of research across a number of quantitative disciplines. For instance, the first chapter provides up-to-date coverage of particle association measures that underpin the theoretical properties of recently developed random set methods in space and time, known as the probability hypothesis density (PHD) filter framework. The second chapter gives an overview of recent advances in Monte Carlo methods for Bayesian filtering in high-dimensional spaces. In particular, the chapter explains how one may extend classical sequential Monte Carlo methods for filtering and static inference problems to high dimensions and big-data applications. The third chapter presents an overview of generalized families of processes that extend the class of Gaussian process models to heavy-tailed families known as alph...
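The sequential Monte Carlo (particle filtering) methods surveyed in the second chapter can be illustrated with a minimal bootstrap filter for a 1D Gaussian random-walk state-space model. All model parameters and the observation sequence below are invented for illustration.

```python
import math
import random

def bootstrap_filter(obs, n=500, proc_sd=1.0, obs_sd=1.0, seed=0):
    """Bootstrap SMC: propagate particles through the transition model,
    weight by the Gaussian observation likelihood, then resample.
    Returns the posterior mean estimate at each time step."""
    rng = random.Random(seed)
    parts = [0.0] * n
    means = []
    for y in obs:
        parts = [p + rng.gauss(0.0, proc_sd) for p in parts]           # propagate
        w = [math.exp(-0.5 * ((y - p) / obs_sd) ** 2) for p in parts]  # weight
        tot = sum(w)
        w = [wi / tot for wi in w]
        means.append(sum(p * wi for p, wi in zip(parts, w)))
        parts = rng.choices(parts, weights=w, k=n)                     # resample
    return means

track = bootstrap_filter([0.0, 1.0, 2.0, 3.0, 4.0])
print(abs(track[-1] - 4.0) < 1.0)  # estimate tracks the drifting state: True
```

The high-dimensional extensions discussed in the chapter address exactly where this naive scheme fails: with many state dimensions, the weights degenerate and essentially one particle survives resampling.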
Quantitative Theoretical and Conceptual Framework Use in Agricultural Education Research
Kitchel, Tracy; Ball, Anna L.
2014-01-01
The purpose of this philosophical paper was to articulate the disciplinary tenets for consideration when using theory in agricultural education quantitative research. The paper clarified terminology around the concept of theory in social sciences and introduced inaccuracies of theory use in agricultural education quantitative research. Finally,…
Compositional and Quantitative Model Checking
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand
2010-01-01
This paper gives a survey of a compositional model-checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based...
Theoretical Models for Orthogonal Cutting
DEFF Research Database (Denmark)
De Chiffre, Leonardo
This review of simple models for orthogonal cutting was extracted from: “L. De Chiffre: Metal Cutting Mechanics and Applications, D.Sc. Thesis, Technical University of Denmark, 1990.”
Quantitative reactive modeling and verification.
Henzinger, Thomas A
Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.
Information-Theoretic Perspectives on Geophysical Models
Nearing, Grey
2016-04-01
To test any hypothesis about any dynamic system, it is necessary to build a model that places that hypothesis into the context of everything else that we know about the system: initial and boundary conditions and interactions between various governing processes (Hempel and Oppenheim, 1948, Cartwright, 1983). No hypothesis can be tested in isolation, and no hypothesis can be tested without a model (for a geoscience-related discussion see Clark et al., 2011). Science is (currently) fundamentally reductionist in the sense that we seek some small set of governing principles that can explain all phenomena in the universe, and such laws are ontological in the sense that they describe the object under investigation (Davies, 1990 gives several competing perspectives on this claim). However, since we cannot build perfect models of complex systems, any model that does not also contain an epistemological component (i.e., a statement, like a probability distribution, that refers directly to the quality of the information from the model) is falsified immediately (in the sense of Popper, 2002) given only a small number of observations. Models necessarily contain both ontological and epistemological components, and what this means is that the purpose of any robust scientific method is to measure the amount and quality of information provided by models. I believe that any viable philosophy of science must be reducible to this statement. The first step toward a unified theory of scientific models (and therefore a complete philosophy of science) is a quantitative language that applies to both ontological and epistemological questions. Information theory is one such language: Cox's (1946) theorem (see Van Horn, 2003) tells us that probability theory is the (only) calculus that is consistent with Classical Logic (Jaynes, 2003; chapter 1), and information theory is simply the integration of convex transforms of probability ratios (integration reduces density functions to scalar
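In the discrete case, the information-theoretic quantities invoked here reduce to entropy and relative entropy (KL divergence). A minimal sketch, with hypothetical distributions standing in for "observations" and "model predictions":

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """Relative entropy D(p||q): the expected extra information (bits)
    incurred when q is used as a model for data distributed as p."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

observed = [0.7, 0.2, 0.1]  # hypothetical empirical frequencies
model = [0.5, 0.3, 0.2]     # a model's predictive distribution
print(round(entropy(observed), 3))   # irreducible uncertainty in the data
print(kl_divergence(model, model))   # a perfect model loses no information: 0.0
print(kl_divergence(observed, model) > 0)  # an imperfect one always does: True
```

D(p||q) is exactly the "measure of the amount and quality of information provided by models" in miniature: it is zero only when the model's distribution matches the observed one.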
Modeling business processes: theoretical and practical aspects
Directory of Open Access Journals (Sweden)
V.V. Dubinina
2015-06-01
The article examines the essence of process-oriented enterprise management. It analyzes the content and types of relevant information technologies, given the complexity and differentiation of existing methods and the specific language and terminology of enterprise business-process modeling. The theoretical aspects of business-process modeling are reviewed, and modern traditional modeling techniques are applied in practice to visualize the activity of retailers. The analysis of modeling methods found that the UFO-toolkit method, developed by Ukrainian scientists, is the most suitable for structural and object analysis of retailers' business processes, owing to its integrated systemological capabilities. A visual simulation model of the retailers' "sales as-is" business process was designed using a combination of UFO elements, with the aim of further formalizing and optimizing this business process.
A theoretical model of multielectrode DBR lasers
DEFF Research Database (Denmark)
Pan, Xing; Olesen, Henning; Tromborg, Bjarne
1988-01-01
A theoretical model for two- and three-section tunable distributed Bragg reflector (DBR) lasers is presented. The static tuning properties are studied in terms of threshold current, linewidth, oscillation frequency, and output power. Regions of continuous tuning for three-section DBR lasers...
Explaining religiosity: towards a unified theoretical model.
Stolz, Jörg
2009-06-01
The article presents a unified theoretical model, explaining differences in Christian and 'alternative' religiosity at individual and collective levels. The model reconstructs and integrates the most important theories explaining religiosity (deprivation, regulation, socialization, cultural production, and ethnicity) as complementary causal mechanisms in a rational-action based framework. It is maintained that the mechanisms of the various theories are not exclusive, but complementary, and that integration into the general model is both theoretically and empirically beneficial. The model is tested on representative data from Switzerland. Substantively, I find for the Swiss case that Christian religiosity can be best explained by a religious socialization mechanism. The most important mechanisms accounting for alternative religiosity involve deprivation, gender, and age.
Integrating Theoretical Models with Functional Neuroimaging.
Pratte, Michael S; Tong, Frank
2017-02-01
The development of mathematical models to characterize perceptual and cognitive processes dates back almost to the inception of the field of psychology. Since the 1990s, human functional neuroimaging has provided for rapid empirical and theoretical advances across a variety of domains in cognitive neuroscience. In more recent work, formal modeling and neuroimaging approaches are being successfully combined, often producing models with a level of specificity and rigor that would not have been possible by studying behavior alone. In this review, we highlight examples of recent studies that utilize this combined approach to provide novel insights into the mechanisms underlying human cognition. The studies described here span domains of perception, attention, memory, categorization, and cognitive control, employing a variety of analytic and model-inspired approaches. Across these diverse studies, a common theme is that individually tailored, creative solutions are often needed to establish compelling links between multi-parameter models and complex sets of neural data. We conclude that future developments in model-based cognitive neuroscience will have great potential to advance our theoretical understanding and ability to model both low-level and high-level cognitive processes.
Murtonen, Mari
2015-01-01
University research education in many disciplines is frequently confronted by problems with students' weak level of understanding of research concepts. A mind map technique was used to investigate how students understand central methodological concepts of empirical, theoretical, qualitative and quantitative. The main hypothesis was that some…
Tesla coil theoretical model and experimental verification
Voitkans, Janis; Voitkans, Arnis
2014-01-01
In this paper a theoretical model of Tesla coil operation is proposed. The Tesla coil is described as a long line with distributed parameters in a single-wire format, where the line voltage is measured against electrically neutral space. It is shown that an equivalent two-wire scheme can be found for the single-wire scheme, so that established long-line theory can be applied to a Tesla coil. Formulas for calculation of voltage in a Tesla coil by coordinate and calculation of resonance fre...
A Theoretical Model of Water and Trade
Dang, Q.; Konar, M.; Reimer, J.; Di Baldassarre, G.; Lin, X.; Zeng, R.
2015-12-01
Water is an essential factor of agricultural production. Agriculture, in turn, is globalized through the trade of food commodities. In this paper, we develop a theoretical model of a small open economy that explicitly incorporates water resources. The model emphasizes three tradeoffs involving water decision-making that are important yet not always considered within the existing literature. One tradeoff focuses on competition for water among different sectors when there is a shock to one of the sectors only, such as trade liberalization and consequent higher demand for the product. A second tradeoff concerns the possibility that there may or may not be substitutes for water, such as increased use of sophisticated irrigation technology as a means to increase crop output in the absence of higher water availability. A third tradeoff explores the possibility that the rest of the world can be a source of supply or demand for a country's water-using products. A number of propositions are proven. For example, while trade liberalization tends to increase water use, increased pressure on water supplies can be moderated by way of a tax that is derivable from observable economic phenomena. Another example is that increased riskiness of water availability tends to cause water users to use less water than would be the case under profit maximization. These theoretical model results generate hypotheses that can be tested empirically in future work.
A Game Theoretic Model of Thermonuclear Cyberwar
Energy Technology Data Exchange (ETDEWEB)
Soper, Braden C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-08-23
In this paper we propose a formal game theoretic model of thermonuclear cyberwar based on ideas found in [1] and [2]. Our intention is that such a game will act as a first step toward building more complete formal models of Cross-Domain Deterrence (CDD). We believe the proposed thermonuclear cyberwar game is an ideal place to start on such an endeavor because the game can be fashioned in a way that is closely related to the classical models of nuclear deterrence [4–6], but with obvious modifications that will help to elucidate the complexities introduced by a second domain. We start with the classical bimatrix nuclear deterrence game based on the game of chicken, but introduce uncertainty via a left-of-launch cyber capability that one or both players may possess.
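The classical bimatrix game of chicken that the model starts from can be written down and solved for pure-strategy Nash equilibria directly. The payoff numbers below are a conventional ordinal choice for chicken, not values taken from the paper, and the cyber capability layer is omitted.

```python
# Payoffs (row player, column player). Strategies: 0 = restrain, 1 = launch.
CHICKEN = {
    (0, 0): (3, 3),  # mutual restraint
    (0, 1): (1, 4),  # row backs down, column "wins"
    (1, 0): (4, 1),
    (1, 1): (0, 0),  # mutual catastrophe -- the worst outcome for both
}

def pure_nash(game, n=2):
    """Return strategy pairs where neither player gains by deviating unilaterally."""
    eq = []
    for r in range(n):
        for c in range(n):
            ur, uc = game[(r, c)]
            row_ok = all(game[(r2, c)][0] <= ur for r2 in range(n))
            col_ok = all(game[(r, c2)][1] <= uc for c2 in range(n))
            if row_ok and col_ok:
                eq.append((r, c))
    return eq

print(pure_nash(CHICKEN))  # [(0, 1), (1, 0)] -- the two asymmetric equilibria
```

Introducing a left-of-launch cyber capability, as the paper does, amounts to giving one player uncertainty about whether the other's "launch" action will actually succeed, which perturbs these payoffs and hence the equilibrium structure.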
Eudiometric theoretic approach to modelling
African Journals Online (AJOL)
eobe
2016-01-01
Jan 1, 2016 ... applied Cluster Analysis (CA), Discriminant Analysis (DA) and Principal Component Analysis (PCA) to evaluate the variations in the water .... build a machine that can replicate the perceived natural process (model building). The realization is a sample obtained from the polluted river. In the natural setting, ...
A theoretical model of water and trade
Dang, Qian; Konar, Megan; Reimer, Jeffrey J.; Di Baldassarre, Giuliano; Lin, Xiaowen; Zeng, Ruijie
2016-03-01
Water is an essential input for agricultural production. Agriculture, in turn, is globalized through the trade of agricultural commodities. In this paper, we develop a theoretical model that emphasizes four tradeoffs involving water-use decision-making that are important yet not always considered in a consistent framework. One tradeoff focuses on competition for water among different economic sectors. A second tradeoff examines the possibility that certain types of agricultural investments can offset water use. A third tradeoff explores the possibility that the rest of the world can be a source of supply or demand for a country's water-using commodities. The fourth tradeoff concerns how variability in water supplies influences farmer decision-making. We show conditions under which trade liberalization affects water use. Two policy scenarios to reduce water use are evaluated. First, we derive a target tax that reduces water use without offsetting the gains from trade liberalization, although important tradeoffs exist between economic performance and resource use. Second, we show how subsidization of water-saving technologies can allow producers to use less water without reducing agricultural production, making such subsidization an indirect means of influencing water use decision-making. Finally, we outline conditions under which riskiness of water availability affects water use. These theoretical model results generate hypotheses that can be tested empirically in future work.
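The interplay between trade liberalization and a water tax can be illustrated with a deliberately simple sketch: a producer whose output is a*sqrt(w) choosing water use w to maximize profit. The functional form and all numbers are assumptions for illustration, not the paper's model.

```python
def optimal_water(price, unit_water_cost, a=1.0):
    """Profit-maximizing water use for output a*sqrt(w):
        max_w  price * a * sqrt(w) - unit_water_cost * w
    The first-order condition price*a / (2*sqrt(w)) = unit_water_cost
    gives the closed form below."""
    return (price * a / (2.0 * unit_water_cost)) ** 2

base = optimal_water(price=1.0, unit_water_cost=1.0)
liberalized = optimal_water(price=1.2, unit_water_cost=1.0)   # export price rises
taxed = optimal_water(price=1.2, unit_water_cost=1.0 + 0.2)   # water tax added
print(liberalized > base and taxed < liberalized)  # True
```

This reproduces the qualitative result stated above: liberalization (a higher output price) raises water use, while a tax on water can pull use back down without touching the trade regime itself.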
10th Colloquium on Theoretical and Quantitative Geography 6-11th September 1997
Directory of Open Access Journals (Sweden)
1997-09-01
After Strasbourg, 1978, Cambridge, 1980, Augsburg, 1982, Veldhoven, 1985, Bardonecchia, 1987, Chantilly, 1989, Stockholm, 1991, Budapest, 1993, and Spa, 1995, the 10th Colloquium on Theoretical and Quantitative Geography was held in Rostock, Germany, from 6 to 11 September 1997. The local organizer was Otti Margraf, from Leipzig University. We can hardly convey an idea of the atmosphere which illuminated our pilgrimage to Von Thünen’s farm in Tellow, a central place for geographers! But you will...
First Principles Quantitative Modeling of Molecular Devices
Ning, Zhanyu
In this thesis, we report theoretical investigations of nonlinear and nonequilibrium quantum electronic transport properties of molecular transport junctions from atomistic first principles. The aim is to seek not only qualitative but also quantitative understanding of the corresponding experimental data. At present, the challenges to quantitative theoretical work in molecular electronics center on two questions: (i) what is the proper atomic model for the experimental devices? (ii) how can quantum transport properties be determined accurately without any phenomenological parameters? Our research is centered on these questions. We have systematically calculated atomic structures of the molecular transport junctions by performing total energy structural relaxation using density functional theory (DFT). Our quantum transport calculations were carried out by implementing DFT within the framework of Keldysh non-equilibrium Green's functions (NEGF). The calculated data are directly compared with the corresponding experimental measurements. Our general conclusion is that quantitative comparison with experimental data can be made if the device contacts are correctly determined. We calculated properties of nonequilibrium spin injection from Ni contacts to octane-thiolate films which form a molecular spintronic system. The first principles results allow us to establish a clear physical picture of how spins are injected from the Ni contacts through the Ni-molecule linkage to the molecule, why tunnel magnetoresistance is rapidly reduced by the applied bias in an asymmetric manner, and to what extent ab initio transport theory can make quantitative comparisons to the corresponding experimental data. We found that extremely careful sampling of the two-dimensional Brillouin zone of the Ni surface is crucial for accurate results in such a spintronic system. We investigated the role of contact formation and its resulting structures to quantum transport in several molecular
Quantitative sociodynamics stochastic methods and models of social interaction processes
Helbing, Dirk
1995-01-01
Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...
Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes
Helbing, Dirk
2010-01-01
This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...
Wilson, L Y; Famini, G R
1991-05-01
The application of computational techniques to medicinal chemistry is growing at a tremendous rate. Quantitative structure-activity relationships (QSAR), which relate biological and toxicological activities to structural features, have been widely employed to correlate structure with activity. A difficulty of this approach has been the nonuniformity of parameter sets and the inability to examine contributions across properties and data sets. Linear solvation energy relationships (LSER) developed by Kamlet and Taft circumvent many of these difficulties and successfully utilize a single set of parameters for a wide range of physical, chemical, and biological properties. We have replaced the LSER solvatochromic parameters with theoretically determined parameters to permit better a priori prediction of properties. A comparison of the two parameter sets for five biological activities is presented, showing the excellent fit of the theoretically determined parameters.
Graph theoretical model of a sensorimotor connectome in zebrafish.
Directory of Open Access Journals (Sweden)
Michael Stobb
Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear, but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same-size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons, analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.
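The two small-world diagnostics used in this comparison (local clustering and average shortest-path length) can be computed directly. A minimal sketch on a small ring lattice, an assumed toy graph rather than the zebrafish connectome:

```python
from collections import deque

def clustering(adj):
    """Mean local clustering coefficient of an undirected graph,
    given as a dict mapping node -> set of neighbors."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        # count edges among v's neighbors (each unordered pair once)
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all connected node pairs (BFS per node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

# Ring lattice: 10 nodes, each linked to its 2 nearest neighbors on each side.
n = 10
adj = {i: {(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n} for i in range(n)}
print(round(clustering(adj), 2), round(avg_path_length(adj), 2))  # 0.5 1.67
```

A small-world graph, like the connectome described above, keeps clustering near this lattice-like value while its average path length drops toward that of a random graph.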
Quantitative structure - mesothelioma potency model ...
Cancer potencies of mineral and synthetic elongated particle (EP) mixtures, including asbestos fibers, are influenced by changes in fiber dose composition, bioavailability, and biodurability in combination with relevant cytotoxic dose-response relationships. A unique and comprehensive rat intra-pleural (IP) dose characterization data set with a wide variety of EP size, shape, crystallographic, chemical, and bio-durability properties facilitated extensive statistical analyses of 50 rat IP exposure test results for evaluation of alternative dose pleural mesothelioma response models. Utilizing logistic regression, maximum likelihood evaluations of thousands of alternative dose metrics based on hundreds of individual EP dimensional variations within each test sample, four major findings emerged: (1) data for simulations of short-term EP dose changes in vivo (mild acid leaching) provide superior predictions of tumor incidence compared to non-acid leached data; (2) sum of the EP surface areas (ΣSA) from these mildly acid-leached samples provides the optimum holistic dose response model; (3) progressive removal of dose associated with very short and/or thin EPs significantly degrades resultant ΣEP or ΣSA dose-based predictive model fits, as judged by Akaike’s Information Criterion (AIC); and (4) alternative, biologically plausible model adjustments provide evidence for reduced potency of EPs with length/width (aspect) ratios 80 µm. Regar
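The logistic-regression/AIC model-selection procedure described here can be sketched with a toy example. The dose metrics and tumor outcomes below are invented, and a crude grid search stands in for proper maximum-likelihood optimization.

```python
import math

def loglik(b0, b1, doses, tumors):
    """Bernoulli log-likelihood of a two-parameter logistic dose-response model."""
    ll = 0.0
    for d, y in zip(doses, tumors):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * d)))
        ll += math.log(p if y else 1.0 - p)
    return ll

def fit_aic(doses, tumors):
    """Crude grid-search MLE; AIC = 2k - 2 ln L with k = 2 parameters.
    Lower AIC means the dose metric explains tumor incidence better."""
    grid = [x * 0.5 for x in range(-10, 11)]
    best = max(loglik(b0, b1, doses, tumors) for b0 in grid for b1 in grid)
    return 2 * 2 - 2 * best

tumors = [0, 0, 0, 1, 1, 1]
informative = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]  # e.g. a surface-area-like metric
scrambled = [5.0, 0.0, 3.0, 1.0, 4.0, 2.0]    # same values, no dose ordering
print(fit_aic(informative, tumors) < fit_aic(scrambled, tumors))  # True
```

This mirrors finding (3) above in miniature: degrading the dose metric (here by scrambling it) worsens the fit, and AIC registers the loss.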
Explaining clinical behaviors using multiple theoretical models
Directory of Open Access Journals (Sweden)
Eccles Martin P
2012-10-01
Abstract Background In the field of implementation research, there is an increased interest in the use of theory when designing implementation research studies involving behavior change. In 2003, we initiated a series of five studies to establish a scientific rationale for interventions to translate research findings into clinical practice by exploring the performance of a number of different, commonly used, overlapping behavioral theories and models. We reflect on the strengths and weaknesses of the methods, the performance of the theories, and consider where these methods sit alongside the range of methods for studying healthcare professional behavior change. Methods These were five studies of the theory-based cognitions and clinical behaviors (taking dental radiographs, performing dental restorations, placing fissure sealants, managing upper respiratory tract infections without prescribing antibiotics, and managing low back pain without ordering lumbar spine x-rays) of random samples of primary care dentists and physicians. Measures were derived for the explanatory theoretical constructs in the Theory of Planned Behavior (TPB), Social Cognitive Theory (SCT), and the Illness Representations specified by the Common Sense Self-Regulation Model (CSSRM). We constructed self-report measures of two constructs from Learning Theory (LT), a measure of Implementation Intentions (II), and the Precaution Adoption Process. We collected data on theory-based cognitions (explanatory measures) and two interim outcome measures (stated behavioral intention and simulated behavior) by postal questionnaire survey during the 12-month period to which objective measures of behavior (collected from routine administrative sources) were related. Planned analyses explored the predictive value of theories in explaining variance in intention, behavioral simulation and behavior. Results Response rates across the five surveys ranged from 21% to 48%; we achieved the target sample size for three of
Pezzotti, Giuseppe; Zhu, Wenliang; Boffelli, Marco; Adachi, Tetsuya; Ichioka, Hiroaki; Yamamoto, Toshiro; Marunaka, Yoshinori; Kanamura, Narisato
2015-05-01
Raman spectroscopy has been applied quantitatively to the analysis of local crystallographic orientation in both single-crystal hydroxyapatite and human teeth. Raman selection rules for all the vibrational modes of the hexagonal structure were expanded into explicit functions of Euler angles in space and six Raman tensor elements (RTE). A theoretical treatment has also been put forward according to the orientation distribution function (ODF) formalism, which allows one to resolve the statistical orientation patterns of the nm-sized hydroxyapatite crystallites comprised in the Raman microprobe. Closed-form solutions could be obtained for the Euler angles and their statistical distributions resolved with respect to the direction of the average texture axis. Polarized Raman spectra from single-crystalline hydroxyapatite and textured polycrystalline (tooth enamel) samples were compared, and a validation of the proposed Raman method could be obtained by confirming the agreement between RTE values obtained from different samples.
Quantitative Analysis of Hohlraum Energetics Modeling
Patel, Mehul V.; Mauche, Christopher W.; Jones, Ogden S.; Scott, Howard A.
2016-10-01
New 1D/2D hohlraum models have been developed to enable quantitative studies of ICF hohlraum energetics. The models employ sufficient numerical resolution (spatial and temporal discretization, radiation energy groups, laser rays, IMC photons) to satisfy a priori convergence criteria on the observables to be compared. For example, we aim for numerical errors of less than 5% in the predicted X-ray flux. Post shot simulations using the new models provide quantitative assessments of the accuracy of energetics modeling across a range of ICF platforms. The models have also been used to reexamine physics sensitivities in the modeling of the NLTE wall plasma. This work is guiding improvements in the underlying DCA atomic physics models and the radiation hydrodynamics code (HYDRA). Prepared by LLNL under Contract DE-AC52-07NA27344.
Computational Graph Theoretical Model of the Zebrafish Sensorimotor Pathway
Peterson, Joshua M.; Stobb, Michael; Mazzag, Bori; Gahtan, Ethan
2011-11-01
Mapping the detailed connectivity patterns of neural circuits is a central goal of neuroscience and has been the focus of extensive current research [4, 3]. The best quantitative approach to analyze the acquired data is still unclear, but graph theory has been used with success [3, 1]. We present a graph theoretical model with vertices and edges representing neurons and synaptic connections, respectively. Our system is the zebrafish posterior lateral line sensorimotor pathway. The goal of our analysis is to elucidate mechanisms of information processing in this neural pathway by comparing the mathematical properties of its graph to those of other, previously described graphs. We create a zebrafish model based on currently known anatomical data. The degree distributions and small-world measures of this model are compared to small-world, random and 3-compartment random graphs of the same size (with over 2500 nodes and 160,000 connections). We find that the zebrafish graph shows small-worldness similar to other neural networks and does not have a scale-free distribution of connections.
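The small-world comparison described above rests on two graph statistics: average clustering and characteristic path length. A minimal self-contained sketch (toy ring-lattice and rewired graphs with illustrative sizes, not the actual zebrafish connectivity data) computes both with plain breadth-first search:

```python
from collections import deque
import random

def clustering(adj):
    """Average clustering coefficient of an undirected graph (dict of neighbour sets)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over reachable pairs, via BFS from every node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbours."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, seed=1):
    """Watts-Strogatz-style rewiring: each edge is redirected with probability p."""
    rng = random.Random(seed)
    nodes = list(adj)
    for v in nodes:
        for w in sorted(adj[v]):
            if w > v and rng.random() < p:
                new = rng.choice(nodes)
                if new != v and new not in adj[v]:
                    adj[v].discard(w); adj[w].discard(v)
                    adj[v].add(new); adj[new].add(v)
    return adj

lattice = ring_lattice(100, 6)
sw = rewire(ring_lattice(100, 6), p=0.1)
# A small-world graph keeps most of the lattice's clustering
# while the rewired shortcuts sharply reduce path lengths.
print(clustering(sw), avg_path_length(sw))
```

On a ring lattice with k = 6 the clustering coefficient is exactly 3(k-2)/(4(k-1)) = 0.6, which gives a convenient sanity check for the implementation.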
Quantitative Modeling of Earth Surface Processes
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
Quantitative system validation in model driven design
DEFF Research Database (Denmark)
Hermanns, Hilger; Larsen, Kim Guldstrand; Raskin, Jean-Francois
2010-01-01
The European STREP project Quasimodo1 develops theory, techniques and tool components for handling quantitative constraints in model-driven development of real-time embedded systems, covering in particular real-time, hybrid and stochastic aspects. This tutorial highlights the advances made, focus...
POSITIVE LEADERSHIP MODELS: THEORETICAL FRAMEWORK AND RESEARCH
Directory of Open Access Journals (Sweden)
Javier Blanch, Francisco Gil
2016-09-01
Full Text Available The objective of this article is twofold; firstly, we establish the theoretical boundaries of positive leadership and the reasons for its emergence. It is related to the new paradigm of positive psychology that has recently been shaping the scope of organizational knowledge. This conceptual framework has triggered the development of the various forms of positive leadership (i.e. transformational, servant, spiritual, authentic, and positive). Although the construct does not seem univocally defined, these different types of leadership overlap and share a significant affinity. Secondly, we review the empirical evidence that shows the impact of positive leadership in organizations and we highlight the positive relationship between these forms of leadership and key positive organizational variables. Lastly, we analyse future research areas in order to further develop this concept.
Empathy and Child Neglect: A Theoretical Model
De Paul, Joaquin; Guibert, Maria
2008-01-01
Objective: To present an explanatory theory-based model of child neglect. This model does not address neglectful behaviors of parents with mental retardation, alcohol or drug abuse, or severe mental health problems. In this model parental behavior aimed to satisfy a child's need is considered a helping behavior and, as a consequence, child neglect…
Recent trends in social systems quantitative theories and quantitative models
Hošková-Mayerová, Šárka; Soitu, Daniela-Tatiana; Kacprzyk, Janusz
2017-01-01
The papers collected in this volume focus on new perspectives on individuals, society, and science, specifically in the field of socio-economic systems. The book is the result of a scientific collaboration among experts from “Alexandru Ioan Cuza” University of Iaşi (Romania), “G. d’Annunzio” University of Chieti-Pescara (Italy), "University of Defence" of Brno (Czech Republic), and "Pablo de Olavide" University of Sevilla (Spain). The heterogeneity of the contributions presented in this volume reflects the variety and complexity of social phenomena. The book is divided into four Sections as follows. The first Section deals with recent trends in social decisions. Specifically, it aims to understand which are the driving forces of social decisions. The second Section focuses on the social and public sphere. Indeed, it is oriented on recent developments in social systems and control. Trends in quantitative theories and models are described in Section 3, where many new formal, mathematical-statistical to...
K. Sridhar Moorthy's Theoretical Modelling in Marketing - A Review ...
African Journals Online (AJOL)
Modelling has become a visible tool in many disciplines including marketing, and several marketing models have been constructed. These models serve their pedagogical and practical purposes in some cases. However, among the marketing models so often cited is Moorthy's Theoretical Modelling in Marketing.
Dodd, Bucky J.
2013-01-01
Online course design is an emerging practice in higher education, yet few theoretical models currently exist to explain or predict how the diffusion of innovations occurs in this space. This study used a descriptive, quantitative survey research design to examine theoretical relationships between decision-making style and resistance to change…
Modeling Organizational Design - Applying A Formalism Model From Theoretical Physics
Directory of Open Access Journals (Sweden)
Robert Fabac
2008-06-01
Full Text Available Modern organizations are exposed to diverse external environment influences. Currently accepted concepts of organizational design take into account structure, its interaction with strategy, processes, people, etc. Organization design and planning aims to align these key organizational design variables. At the higher conceptual level, however, a completely satisfactory formulation for this alignment does not exist. We develop an approach originating from the application of concepts of theoretical physics to social systems. Under this approach, the allocation of organizational resources is analyzed in terms of social entropy, social free energy and social temperature. This allows us to formalize the dynamic relationship between organizational design variables. In this paper we relate this model to Galbraith's Star Model and we also suggest improvements in the procedure of the complex analytical method in organizational design.
Physically correct theoretical prism waveguide coupler model.
Liu, Tao; Samuels, Robert J
2004-07-01
We develop new generalized four-wave-model-based waveguide mode equations for both isotropic and anisotropic systems by taking into account the influence of the incident light. These new mode equations eliminate the inherent deficiency in the conventional waveguide model, in which the action of incident light was neglected. Further, a peak-value-search (PVS) numerical method is developed to solve the four-wave-model-based mode equations. The PVS method has significant advantages in that accurate refractive index and thickness can be obtained without prior knowledge of the thickness of the air gap.
Quantitative magnetospheric models derived from spacecraft magnetometer data
Mead, G. D.; Fairfield, D. H.
1973-01-01
Quantitative models of the external magnetospheric field were derived by making least-squares fits to magnetic field measurements from four IMP satellites. The data were fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the models contain the effects of seasonal north-south asymmetries. The expansions are divergence-free, but unlike the usual scalar potential expansions, the models contain a nonzero curl representing currents distributed within the magnetosphere. Characteristics of four models are presented, representing different degrees of magnetic disturbance as determined by the range of Kp values. The latitude at the earth separating open polar cap field lines from field lines closing on the dayside is about 5 deg lower than that determined by previous theoretically-derived models. At times of high Kp, additional high latitude field lines are drawn back into the tail.
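The least-squares power series fit described above can be illustrated with a minimal sketch. The data, coordinates, and degree here are invented for the example (synthetic one-dimensional samples, not the IMP magnetometer data); the point is the method: build the normal equations (AᵀA)c = Aᵀy and solve them.

```python
# Least-squares fit of a power series y = sum_k c_k x^k by solving the
# normal equations (A^T A) c = A^T y with Gaussian elimination.
def polyfit(xs, ys, degree):
    n = degree + 1
    # Normal-equation matrix M[i][j] = sum_x x^(i+j) and right-hand side.
    M = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c2 in range(col, n):
                M[r][c2] -= f * M[col][c2]
            b[r] -= f * b[col]
    # Back substitution.
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(M[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / M[i][i]
    return coeffs

# Synthetic data generated from y = 1 + 2x + 3x^2 is recovered by the fit.
xs = [i * 0.5 for i in range(-6, 7)]
ys = [1 + 2 * x + 3 * x * x for x in xs]
print(polyfit(xs, ys, 2))
```

For well-conditioned low-degree fits this direct normal-equation solve is adequate; production codes would typically use an orthogonal (QR) factorization instead.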
Theoretical Modelling of Intercultural Communication Process
Directory of Open Access Journals (Sweden)
Mariia Soter
2016-08-01
Full Text Available The definition of the concepts of “communication”, “intercultural communication”, “model of communication” are analyzed in the article. The basic components of the communication process are singled out. The model of intercultural communication is developed. Communicative, behavioral and complex skills for optimal organization of intercultural communication, establishment of productive contact with a foreign partner to achieve mutual understanding, searching for acceptable ways of organizing interaction and cooperation for both communicants are highlighted in the article. It is noted that intercultural communication through interaction between people affects the development of different cultures’ aspects.
SOME THEORETICAL MODELS EXPLAINING ADVERTISING EFFECTS
Directory of Open Access Journals (Sweden)
Vasilica Magdalena SOMEŞFĂLEAN
2014-06-01
Full Text Available Persuading clients is still the main focus of companies, which use a set of methods and techniques designed to influence their behavior in order to obtain better results (profits) over a longer period of time. In the late nineteenth and early twentieth century, the American E. St. Elmo Lewis, considered a pioneer in advertising and sales, developed the first such theory, the AIDA model, later used by marketers and advertisers to develop marketing communications strategies. Later studies have developed other models that are the main subject of this research, which explains how and why persuasive communication works, to understand why some approaches are effective and others are not.
Tesla Coil Theoretical Model and its Experimental Verification
National Research Council Canada - National Science Library
Janis Voitkans; Arnis Voitkans
2014-01-01
In this paper a theoretical model of Tesla coil operation is proposed. The Tesla coil is described as a long line with distributed parameters in a single-wire form, where the line voltage is measured across electrically neutral space...
Theoretical Tinnitus framework: A Neurofunctional Model
Directory of Open Access Journals (Sweden)
Iman Ghodratitoostani
2016-08-01
Full Text Available Subjective tinnitus is the conscious (attended awareness) perception of sound in the absence of an external source and can be classified as an auditory phantom perception. The current tinnitus development models depend on the role of external events congruently paired with the causal physical events that precipitate the phantom perception. We propose a novel Neurofunctional tinnitus model to indicate that the conscious perception of phantom sound is essential in activating the cognitive-emotional value. The cognitive-emotional value plays a crucial role in governing attention allocation as well as developing annoyance within tinnitus clinical distress. Structurally, the Neurofunctional tinnitus model includes the peripheral auditory system, the thalamus, the limbic system, brain stem, basal ganglia, striatum and the auditory and prefrontal cortices. Functionally, we assume the model includes the presence of continuous or intermittent abnormal signals at the peripheral auditory system or midbrain auditory paths. Depending on the availability of attentional resources, the signals may or may not be perceived. The cognitive valuation process strengthens the lateral-inhibition and noise canceling mechanisms in the midbrain, which leads to the cessation of sound perception and renders the signal evaluation irrelevant. However, the sourceless sound is eventually perceived and can be cognitively interpreted as suspicious or an indication of a disease in which the cortical top-down processes weaken the noise canceling effects. This results in an increase in cognitive and emotional negative reactions such as depression and anxiety. The negative or positive cognitive-emotional feedbacks within the top-down approach may have no relation to the previous experience of the patients. They can also be associated with aversive stimuli similar to abnormal neural activity in generating the phantom sound. Cognitive and emotional reactions depend on general
From theoretical model to practical use:
DEFF Research Database (Denmark)
Bjørk, Ida Torunn; Lomborg, Kirsten; Nielsen, Carsten Munch
2013-01-01
Aim. To present a case of knowledge translation in nursing education and practice and discuss mechanisms relevant to bringing knowledge into action. Background. The process of knowledge translation aspires to close the gap between theory and practice. Knowledge translation is a cyclic process involving both the creation and application of knowledge in several phases. The case presented in this paper is the translation of the Model of Practical Skill Performance into education and practice. Advantages and problems with the use of this model and its adaptation and tailoring to local contexts ... that is enhanced when appropriate support is given by leaders in the involved facilities. Conclusion. Knowledge translation is a time-consuming and collaborative endeavour. On the basis of our experience we advocate the implementation and use of a conceptual framework for the entire process of knowledge ...
Linden, van der A.; Oosting, S.J.; Ven, van de G.W.J.; Boer, de I.J.M.; Ittersum, van M.K.
2015-01-01
In crop science, widely used theoretical concepts of production ecology comprise a hierarchy of growth-defining, growth-limiting, and growth-reducing factors, which determine corresponding potential, limited, and actual production levels. These concepts give insight into theoretically achievable production, yield
Dynamics in Higher Education Politics: A Theoretical Model
Kauko, Jaakko
2013-01-01
This article presents a model for analysing dynamics in higher education politics (DHEP). Theoretically the model draws on the conceptual history of political contingency, agenda-setting theories and previous research on higher education dynamics. According to the model, socio-historical complexity can best be analysed along two dimensions: the…
Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.
Lee, Won Hee; Bullmore, Ed; Frangou, Sophia
2017-02-01
There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
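The simulation step described above can be sketched compactly. The following toy example (8 all-to-all coupled oscillators with invented frequencies, not the 66-region connectome-constrained model of the study) Euler-integrates the Kuramoto equations and computes the phase-synchrony order parameter:

```python
import math, cmath, random

def kuramoto(coupling, omega, K=1.0, dt=0.01, steps=2000, seed=0):
    """Euler-integrate d(theta_i)/dt = omega_i + K * sum_j A_ij sin(theta_j - theta_i)."""
    rng = random.Random(seed)
    n = len(omega)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        dtheta = [
            omega[i] + K * sum(coupling[i][j] * math.sin(theta[j] - theta[i])
                               for j in range(n))
            for i in range(n)
        ]
        theta = [t + dt * d for t, d in zip(theta, dtheta)]
    return theta

def order_parameter(theta):
    """|r| = 1 means full phase synchrony, values near 0 mean incoherence."""
    return abs(sum(cmath.exp(1j * t) for t in theta)) / len(theta)

# All-to-all coupling between 8 oscillators with similar natural frequencies;
# strong coupling (K well above the frequency spread) drives the system
# toward synchrony from a random initial condition.
n = 8
A = [[0 if i == j else 1 / n for j in range(n)] for i in range(n)]
omega = [1.0 + 0.01 * i for i in range(n)]
theta = kuramoto(A, omega, K=2.0)
print(order_parameter(theta))
```

In a connectome-constrained setting, `A` would be the empirical structural connectivity matrix, and graph measures would be computed from the correlations of the simulated signals rather than from `A` itself.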
College Students Solving Chemistry Problems: A Theoretical Model of Expertise
Taasoobshirazi, Gita; Glynn, Shawn M.
2009-01-01
A model of expertise in chemistry problem solving was tested on undergraduate science majors enrolled in a chemistry course. The model was based on Anderson's "Adaptive Control of Thought-Rational" (ACT-R) theory. The model shows how conceptualization, self-efficacy, and strategy interact and contribute to the successful solution of quantitative,…
Lerner, Eitan; Ploetz, Evelyn; Hohlbein, Johannes; Cordes, Thorben; Weiss, Shimon
2016-07-07
Single-molecule, protein-induced fluorescence enhancement (PIFE) serves as a molecular ruler at molecular distances inaccessible to other spectroscopic rulers such as Förster-type resonance energy transfer (FRET) or photoinduced electron transfer. In order to provide two simultaneous measurements of two distances on different molecular length scales for the analysis of macromolecular complexes, we and others recently combined measurements of PIFE and FRET (PIFE-FRET) on the single molecule level. PIFE relies on steric hindrance of the fluorophore Cy3, which is covalently attached to a biomolecule of interest, to rotate out of an excited-state trans isomer to the cis isomer through a 90° intermediate. In this work, we provide a theoretical framework that accounts for relevant photophysical and kinetic parameters of PIFE-FRET, show how this framework allows the extraction of the fold-decrease in isomerization mobility from experimental data, and show how these results provide information on changes in the accessible volume of Cy3. The utility of this model is then demonstrated for experimental results on PIFE-FRET measurement of different protein-DNA interactions. The proposed model and extracted parameters could serve as a benchmark to allow quantitative comparison of PIFE effects in different biological systems.
Modeling theoretical uncertainties in phenomenological analyses for particle physics
Energy Technology Data Exchange (ETDEWEB)
Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)
2017-04-15
The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)
Circumplex model of marital and family systems: VI. Theoretical update.
Olson, D H; Russell, C S; Sprenkle, D H
1983-03-01
This paper updates the theoretical work on the Circumplex Model and provides revised and new hypotheses. Similarities and contrasts to the Beavers Systems Model are made along with comments regarding Beavers and Voeller's critique. FACES II, a newly revised assessment tool, provides both "perceived" and "ideal" family assessment that is useful empirically and clinically.
A theoretical model for predicting neutron fluxes for cyclic Neutron ...
African Journals Online (AJOL)
A theoretical model has been developed for prediction of thermal neutron fluxes required for cyclic irradiations of a sample to obtain the same activity previously used for the detection of any radionuclide of interest. The model is suitable for radiotracer production or for long-lived neutron activation products where the ...
Expert judgement models in quantitative risk assessment
Energy Technology Data Exchange (ETDEWEB)
Rosqvist, T. [VTT Automation, Helsinki (Finland); Tuominen, R. [VTT Automation, Tampere (Finland)
1999-12-01
Expert judgement is a valuable source of information in risk management. Especially, risk-based decision making relies significantly on quantitative risk assessment, which requires numerical data describing the initiator event frequencies and conditional probabilities in the risk model. This data is seldom found in databases and has to be elicited from qualified experts. In this report, we discuss some modelling approaches to expert judgement in risk modelling. A classical and a Bayesian expert model is presented and applied to real case expert judgement data. The cornerstone in the models is the log-normal distribution, which is argued to be a satisfactory choice for modelling degree-of-belief type probability distributions with respect to the unknown parameters in a risk model. Expert judgements are qualified according to bias, dispersion, and dependency, which are treated differently in the classical and Bayesian approaches. The differences are pointed out and related to the application task. Differences in the results obtained from the different approaches, as applied to real case expert judgement data, are discussed. Also, the role of a degree-of-belief type probability in risk decision making is discussed.
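The log-normal choice discussed above can be illustrated with a minimal pooling sketch. The estimates and the pooling rule here are invented for the example (not taken from the report): expert frequency estimates are combined in log space, where a log-normal degree-of-belief distribution corresponds to a normal one, yielding a median (the geometric mean) and a multiplicative 90% error factor.

```python
import math

def pool_lognormal(estimates):
    """Pool positive expert estimates of a frequency under a log-normal model:
    treat the log-estimates as normal samples and fit mean and spread."""
    logs = [math.log(x) for x in estimates]
    mu = sum(logs) / len(logs)
    var = sum((l - mu) ** 2 for l in logs) / (len(logs) - 1)
    sigma = math.sqrt(var)
    return {
        "median": math.exp(mu),                  # geometric mean of the estimates
        "error_factor": math.exp(1.645 * sigma), # 90% interval: median/EF .. median*EF
    }

# Three hypothetical expert estimates of an initiator event frequency per year.
pooled = pool_lognormal([1e-4, 3e-4, 1e-3])
print(pooled)
```

A classical approach might weight experts by calibration before pooling, and a Bayesian approach would update a prior with the same log-space likelihood; the log-normal form serves both.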
Energy Technology Data Exchange (ETDEWEB)
Oprisan, Sorinel Adrian [Department of Psychology, University of New Orleans, New Orleans, LA (United States)]. E-mail: soprisan@uno.edu
2001-11-30
There has been increased theoretical and experimental research interest in autonomous mobile robots exhibiting cooperative behaviour. This paper provides consistent quantitative measures of the organizational degree of a two-dimensional environment. We proved, by way of numerical simulations, that the theoretically derived values of the feature are reliable measures of aggregation degree. The slope of the feature's dependence on memory radius leads to an optimization criterion for stochastic functional self-organization. We also described the intellectual heritages that have guided our research, as well as possible future developments. (author)
Fiberglass-reinforced glulam beams: mechanical properties and theoretical model
Directory of Open Access Journals (Sweden)
Juliano Fiorelli
2006-09-01
Full Text Available The glued-laminated lumber (glulam) technique is an efficient process for making rational use of wood. Fiber-Reinforced Polymers (FRPs) associated with glulam beams provide significant gains in terms of strength and stiffness, and also alter the mode of rupture of these structural elements. In this context, this paper presents a theoretical model for designing reinforced glulam beams. The model allows for the calculation of the bending moment and the hypothetical distribution of linear strains along the height of the beam, and considers that the wood has a linear elastic, fragile behavior in tension parallel to the fibers and a bilinear behavior in compression parallel to the fibers, initially elastic and subsequently inelastic, with a negative slope in the stress-strain diagram. The stiffness was calculated by the transformed section method. Twelve non-reinforced and fiberglass-reinforced glulam beams were evaluated experimentally to validate the proposed theoretical model. The results obtained indicate good congruence between the experimental and theoretical values.
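The transformed section method mentioned above can be sketched in a few lines. The geometry and moduli below are illustrative placeholders, not values from the tested beams: the fiberglass area is scaled by the modular ratio n = E_frp/E_wood so the composite section can be treated as a single material.

```python
# Transformed-section stiffness of a rectangular glulam beam with a thin
# FRP layer glued to the tension (bottom) face. The FRP width is scaled by
# the modular ratio n = E_frp / E_wood so the section behaves as one material.
def transformed_section(b, h, t_frp, E_wood, E_frp):
    n = E_frp / E_wood
    a_wood = b * h
    a_frp = n * b * t_frp                       # transformed FRP area
    y_wood = t_frp + h / 2                      # centroids measured from the bottom
    y_frp = t_frp / 2
    ybar = (a_wood * y_wood + a_frp * y_frp) / (a_wood + a_frp)
    # Parallel-axis theorem for each part about the neutral axis.
    i_wood = b * h ** 3 / 12 + a_wood * (y_wood - ybar) ** 2
    i_frp = n * b * t_frp ** 3 / 12 + a_frp * (y_frp - ybar) ** 2
    return ybar, i_wood + i_frp                 # neutral axis, transformed inertia

# Illustrative values (mm, MPa); bending stiffness EI uses the wood modulus.
ybar, i_t = transformed_section(b=100, h=300, t_frp=2, E_wood=10_000, E_frp=70_000)
EI = 10_000 * i_t
print(ybar, i_t)
```

Two qualitative checks follow from the mechanics: the neutral axis drops toward the reinforced face, and the transformed inertia exceeds that of the plain rectangular section (bh³/12).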
Global quantitative modeling of chromatin factor interactions.
Directory of Open Access Journals (Sweden)
Jian Zhou
2014-03-01
Full Text Available Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the "chromatin codes") remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled making various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles--we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions.
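A pairwise maximum entropy distribution of the kind described above can be written out exactly for a toy system. The three "factors" and their couplings below are hypothetical (the study's 73 marks and regularized structure learning are far beyond a sketch); the point is the model family: P(s) ∝ exp(Σᵢhᵢsᵢ + Σᵢ<ⱼJᵢⱼsᵢsⱼ) over binary presence/absence vectors, enumerable for small n.

```python
import math
from itertools import product

def maxent_pairwise(h, J):
    """Exact pairwise maximum-entropy distribution over binary factor vectors,
    computed by brute-force enumeration (feasible only for small n)."""
    n = len(h)
    def energy(s):
        e = sum(h[i] * s[i] for i in range(n))
        e += sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
        return e
    states = list(product((0, 1), repeat=n))
    weights = [math.exp(energy(s)) for s in states]
    Z = sum(weights)  # partition function normalizes the distribution
    return {s: w / Z for s, w in zip(states, weights)}

# Toy couplings: factors 0 and 1 attract (J > 0), factor 2 is repelled by factor 0.
h = [0.0, 0.0, 0.0]
J = [[0.0, 2.0, -2.0],
     [0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]
P = maxent_pairwise(h, J)
# A positive coupling makes co-occurrence exceed the product of the marginals.
p01 = sum(p for s, p in P.items() if s[0] == 1 and s[1] == 1)
p0 = sum(p for s, p in P.items() if s[0] == 1)
p1 = sum(p for s, p in P.items() if s[1] == 1)
print(p01, p0 * p1)
```

For realistic numbers of factors the partition function cannot be enumerated, which is why the study pairs maximum entropy with regularized structure learning rather than brute force.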
Theoretical models in the development of advertising for food products
DEFF Research Database (Denmark)
Bech-Larsen, Tino; Stacey, Julia
2005-01-01
Many advertising people believe that theoretical models hamper creativity and efficiency in the development of advertising messages. This need not be the case, if the theoretical models are sufficiently flexible and intelligible. On the contrary, a workable model that describes how the advertisement influences the target may serve as creative inspiration and as a common frame of reference for those involved in the development of advertisements. The means-end-chain model says that an advertisement is effective by connecting the product's attributes (means) and the target's personal values (ends). MAPP's experience with the model indicates that it is both flexible and intuitively accessible to practitioners. In order to study this, MAPP and the Danish Fruit Growers' Association asked an advertising agency to produce two campaign proposals with the purpose of getting young people to eat more...
Organizational Resilience: The Theoretical Model and Research Implication
Directory of Open Access Journals (Sweden)
Xiao Lei
2017-01-01
Organizations are all subject to a diverse, ever-changing and uncertain environment. Under these conditions, organizations should develop a capability to resist emergencies and recover from disruptions. Based on a review of the literature, this paper presents the main concepts of organizational resilience, constructs a preliminary theoretical model, and draws some implications for management.
Organizational Learning and Product Design Management: Towards a Theoretical Model.
Chiva-Gomez, Ricardo; Camison-Zornoza, Cesar; Lapiedra-Alcami, Rafael
2003-01-01
Case studies of four Spanish ceramics companies were used to construct a theoretical model of 14 factors essential to organizational learning. One set of factors is related to the conceptual-analytical phase of the product design process and the other to the creative-technical phase. All factors contributed to efficient product design management…
Workshop IV – Cosmology-theoretical models/alternative scenarios ...
Indian Academy of Sciences (India)
Workshop IV – Cosmology-theoretical models/alternative scenarios: A report. ASIT BANERJEE (Department of Physics, Jadavpur University, Calcutta 700 032, India) and REZA TAVAKOL (Astronomy Unit, School of Mathematical Sciences, Queen Mary and Westfield College, Mile End Road, London E1 4NS, UK).
Theoretical model analysis of molecular orientations in liquid protein ...
African Journals Online (AJOL)
In this study, some theoretical model functions have been used to explain the molecular behaviour of four different types of proteins; human haemoglobin, Insulin, egg-white lysozyme and β - globulin molecules in solution. The results of the computational fitting procedures showed that the dielectric dispersion of the protein ...
Healing from Childhood Sexual Abuse: A Theoretical Model
Draucker, Claire Burke; Martsolf, Donna S.; Roller, Cynthia; Knapik, Gregory; Ross, Ratchneewan; Stidham, Andrea Warner
2011-01-01
Childhood sexual abuse is a prevalent social and health care problem. The processes by which individuals heal from childhood sexual abuse are not clearly understood. The purpose of this study was to develop a theoretical model to describe how adults heal from childhood sexual abuse. Community recruitment for an ongoing broader project on sexual…
Theoretical modeling and experimental analyses of laminated wood composite poles
Cheng Piao; Todd F. Shupe; Vijaya Gopu; Chung Y. Hse
2005-01-01
Wood laminated composite poles consist of trapezoid-shaped wood strips bonded with synthetic resin. The thick-walled hollow poles had adequate strength and stiffness properties and were a promising substitute for solid wood poles. It was necessary to develop theoretical models to facilitate the manufacture and future installation and maintenance of this novel...
Esposito, Alessandro
2006-01-01
This PhD project aims at the development and evaluation of microscopy techniques for the quantitative detection of molecular interactions and cellular features. The primarily investigated techniques are Förster Resonance Energy Transfer imaging and Fluorescence Lifetime Imaging Microscopy. These techniques have the capability to quantitatively probe the biochemical environment of fluorophores. An automated microscope capable of unsupervised operation has been developed that enables the invest...
Theoretical Model of Engagement in the Context of Brand Communities
Directory of Open Access Journals (Sweden)
Flávia D’albergaria Freitas
2017-01-01
This essay proposes to refine the concept of consumer engagement in the context of brand communities. A comprehensive review of studies addressing the phenomenon of brand community was conducted. This paper follows the tradition of Marketing Research and Consumer Behavior, more specifically the perspective of cognitive psychology. The main theoretical foundation of the study is Social Identity Theory (SIT), also incorporating relevant contributions from the perspective of Consumer Culture Theory (CCT). This study thus contributes to the progress of research on the phenomenon of engagement in brand communities by proposing a theoretical model that relates engagement to its antecedent factors and reflective dimensions.
Karlsson, E. B.; Hartmann, O.; Chatzidimitriou-Dreismann, C. A.; Abdul-Redah, T.
2016-08-01
No consensus has been reached so far about the hydrogen anomaly problem in Compton scattering of neutrons, although strongly reduced H cross-sections were first reported almost 20 years ago. Over the years, this phenomenon has been observed in many different hydrogen-containing materials. Here, we use yttrium hydrides as test objects, YH2, YH3, YD2 and YD3, Y(H x D1-x )2 and Y(H x D1-x )3, for which we observe H anomalies increasing with transferred momentum q. We also observe reduced deuteron cross-sections in YD2 and YD3 and have followed those up to scattering angles of 140° corresponding to high momentum transfers. In addition to data taken using the standard Au-197 foils for neutron energy selection, the present work includes experiments with Rh-103 foils and comparisons were also made with data from different detector setups. The H and D anomalies are discussed in terms of the different models proposed for their interpretation. The ‘electron loss model’ (which assumes energy transfer to excited electrons) is contradicted by the present data, but it is shown here that exchange effects in scattering from two or more protons (or deuterons) in the presence of large zero-point vibrations, can explain quantitatively the reduction of the cross-sections as well as their q-dependence. Decoherence processes also play an essential role. In a scattering time representation, shake-up processes can be followed on the attosecond scale. The theory also shows that large anomalies can appear only when the neutron coherence lengths (determined by energy selection and detector geometry) are about the same size as the distance between the scatterers.
Private and public incentive to reduce seasonality: A theoretical model
Cellini, Roberto; Rizzo, Giuseppe
2012-01-01
In this article, the authors present a theoretical model to investigate the private and social incentives to reduce seasonality in a given market. They assume that consumers derive different utilities from the consumption of the same good in different seasons. The seasonal product differentiation is modelled along the lines of Gabszewicz and Thisse (Price Competition, Quality and Income Disparities, 1979) and Shaked and Sutton (Relaxing Price Competition through Product Differentiation, 1982)...
Topos-theoretic Model of the Deutsch multiverse
Guts, Alexander K.
2002-01-01
The Deutsch multiverse is collection of parallel universes. In this article a formal theory and a topos-theoretic model of the Deutsch multiverse are given. For this the Lawvere-Kock Synthetic Differential Geometry and topos models for smooth infinitesimal analysis are used. Physical properties of multi-variant and many-dimensional parallel universes are discussed. Quantum fluctuations of universe geometry are considered. Photon ghosts in parallel universes are found.
New Theoretical Model of Nerve Conduction in Unmyelinated Nerves
Directory of Open Access Journals (Sweden)
Tetsuya Akaishi
2017-10-01
Nerve conduction in unmyelinated fibers has long been described using the equivalent circuit model and cable theory. However, without the change in ionic concentration gradient across the membrane, there would be no generation or propagation of the action potential. Based on this concept, we employ a new conduction model focusing on the distribution of voltage-gated sodium ion channels and the Coulomb force between electrolytes. Under this new model, propagation of nerve conduction is suggested to begin well before the generation of the action potential at each channel. We theoretically showed that the propagation of the action potential from one sodium channel to the next, enabled by the increasing Coulomb force produced by inflowing sodium ions, would be inversely proportional to the density of sodium channels on the axon membrane. Because the longitudinal number of sodium channels is proportional to the square root of channel density, the conduction velocity of unmyelinated nerves is theoretically shown to be proportional to the square root of channel density. Furthermore, from the viewpoint of an equilibrium between channel importation and degradation, channel density is suggested to be proportional to axonal diameter. On this basis, conduction velocity in unmyelinated nerves is theoretically shown to be proportional to the square root of axonal diameter. This new model may also provide a more accurate and intelligible view of the phenomena in unmyelinated nerves, complementing the conventional electric circuit model and cable theory.
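The chain of proportionalities in this abstract reduces to one line of arithmetic; a minimal sketch follows, with all proportionality constants arbitrary:

```python
import math

# Scaling relations stated in the abstract:
#   channel density  rho ∝ axon diameter d
#   conduction speed v   ∝ sqrt(rho)   =>   v ∝ sqrt(d)
def conduction_velocity(diameter, k=1.0):
    rho = diameter               # density proportional to diameter (unit constant)
    return k * math.sqrt(rho)    # velocity proportional to sqrt(density)

# Quadrupling the axon diameter should double the conduction velocity.
print(conduction_velocity(4.0) / conduction_velocity(1.0))  # → 2.0
```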
Modeling Organizational Design - Applying A Formalism Model From Theoretical Physics
Robert Fabac; Josip Stepanić
2008-01-01
Modern organizations are exposed to diverse external environmental influences. Currently accepted concepts of organizational design take into account structure, its interaction with strategy, processes, people, etc. Organization design and planning aims to align these key organizational design variables. At a higher conceptual level, however, no completely satisfactory formulation of this alignment exists. We develop an approach originating from the application of concepts of theoretical ...
Yilmaz, Kaya
2013-01-01
There has been much discussion about quantitative and qualitative approaches to research in different disciplines. In the behavioural and social sciences, these two paradigms are compared to reveal their relative strengths and weaknesses. But the debate about both traditions has commonly taken place in academic books. It is hard to find an article…
Tan, Cheng Yong
2017-01-01
The present study reviewed quantitative empirical studies examining the relationship between cultural capital and student achievement. Results showed that researchers had conceptualized and measured cultural capital in different ways. It is argued that the more holistic understanding of the construct beyond highbrow cultural consumption must be…
Multiscale modeling of complex materials phenomenological, theoretical and computational aspects
Trovalusci, Patrizia
2014-01-01
The papers in this volume deal with materials science, theoretical mechanics and experimental and computational techniques at multiple scales, providing a sound base and a framework for many applications which are hitherto treated in a phenomenological sense. The basic principles are formulated of multiscale modeling strategies towards modern complex multiphase materials subjected to various types of mechanical, thermal loadings and environmental effects. The focus is on problems where mechanics is highly coupled with other concurrent physical phenomena. Attention is also focused on the historical origins of multiscale modeling and foundations of continuum mechanics currently adopted to model non-classical continua with substructure, for which internal length scales play a crucial role.
Continuum damage modeling through theoretical and experimental pressure limit formulas
Directory of Open Access Journals (Sweden)
Fatima Majid
2018-01-01
In this paper, we developed a mathematical model to represent the damage of thermoplastic pipes. On the one hand, we adapted the theories of burst pressure to fit the case of High Density Polyethylene (HDPE). Indeed, the theories for calculating the burst pressure are numerous, and were originally designed for steels and alloys. For polymer materials, we have found that these theories can be adapted using a coefficient related to the nature of the studied material. HDPE is characterized by two important pressure values, deduced from the ductile form of the evolution of internal pressures until burst. For this reason, we have designed an alpha coefficient that takes these two pressures into account and gives a good approximation of the experimental burst pressures through the theoretically corrected ones, using Faupel's pressure formula. We can then deduce the evolution of the theoretical damage using the calculated pressures. On the other hand, two other mathematical models were developed. The first gave rise to an adaptive model expressing pressure as a function of the life fraction, the characteristic pressures and the critical life fraction. The second is a continuum damage model incorporating the pressure equations as a function of the life fraction and based on the burst pressure's static damage model. These models are important tools for industry to assess the failure of thermoplastic pipes and to perform quick checks.
Toward quantitative modeling of silicon phononic thermocrystals
Energy Technology Data Exchange (ETDEWEB)
Lacatena, V. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France); IEMN UMR CNRS 8520, Institut d' Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d' Ascq (France); Haras, M.; Robillard, J.-F., E-mail: jean-francois.robillard@isen.iemn.univ-lille1.fr; Dubois, E. [IEMN UMR CNRS 8520, Institut d' Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d' Ascq (France); Monfray, S.; Skotnicki, T. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France)
2015-03-16
The wealth of technological patterning technologies of deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of 'thermocrystals' or 'nanophononic crystals' that introduce regular nano-scale inclusions using a pitch scale in between the thermal phonons mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced down to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known “electron crystal-phonon glass” dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution from bulk, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After having discussed the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.
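The Green-Kubo step mentioned above can be illustrated independently of any molecular dynamics engine: the lattice thermal conductivity follows from the time integral of the heat-flux autocorrelation function, kappa = V / (kB T^2) * Int <J(0)J(t)> dt. The flux trace, correlation time and volume below are synthetic stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-15                        # sampling step (s), illustrative
n, tau = 20000, 50e-15            # trace length, flux correlation time (s)
J = np.zeros(n)
for i in range(1, n):             # Ornstein-Uhlenbeck-like flux fluctuations
    J[i] = J[i - 1] * (1.0 - dt / tau) + rng.normal() * np.sqrt(dt)

def autocorr(x, nlags):
    """Biased sample autocorrelation <x(0)x(k*dt)> for k = 0..nlags-1."""
    x = x - x.mean()
    return np.array([np.mean(x[: len(x) - k] * x[k:]) for k in range(nlags)])

kB, T, V = 1.380649e-23, 300.0, 1e-24   # J/K, K, m^3 (illustrative volume)
C = autocorr(J, 500)                     # 500 lags ~ 10 correlation times
kappa = V / (kB * T**2) * C.sum() * dt   # rectangle-rule time integral
print(kappa > 0.0)
```

In a real calculation J would be the per-timestep heat-flux vector from the MD run, averaged over Cartesian components and independent trajectories; the structure of the estimator is unchanged.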
Application of a theoretical model to evaluate COPD disease management.
Lemmens, Karin M M; Nieboer, Anna P; Rutten-Van Mölken, Maureen P M H; van Schayck, Constant P; Asin, Javier D; Dirven, Jos A M; Huijsman, Robbert
2010-03-26
Disease management programmes are heterogeneous in nature and often lack a theoretical basis. An evaluation model has been developed in which theoretically driven inquiries link disease management interventions to outcomes. The aim of this study is to methodically evaluate the impact of a disease management programme for patients with chronic obstructive pulmonary disease (COPD) on process, intermediate and final outcomes of care in a general practice setting. A quasi-experimental study was performed with 12-months follow-up of 189 COPD patients in primary care in the Netherlands. The programme included patient education, protocolised assessment and treatment of COPD, structural follow-up and coordination by practice nurses at 3, 6 and 12 months. Data on intermediate outcomes (knowledge, psychosocial mediators, self-efficacy and behaviour) and final outcomes (dyspnoea, quality of life, measured by the CRQ and CCQ, and patient experiences) were obtained from questionnaires and electronic registries. Implementation of the programme was associated with significant improvements in dyspnoea; the model showed associations between significantly improved intermediate outcomes and improvements in quality of life and dyspnoea. The application of a theory-driven model enhances the design and evaluation of disease management programmes aimed at improving health outcomes. This study supports the notion that a theoretical approach strengthens the evaluation designs of complex interventions. Moreover, it provides prudent evidence that the implementation of COPD disease management programmes can positively influence outcomes of care.
Esposito, Alessandro
2006-05-01
This PhD project aims at the development and evaluation of microscopy techniques for the quantitative detection of molecular interactions and cellular features. The primarily investigated techniques are Förster Resonance Energy Transfer imaging and Fluorescence Lifetime Imaging Microscopy. These techniques have the capability to quantitatively probe the biochemical environment of fluorophores. An automated microscope capable of unsupervised operation has been developed that enables the investigation of molecular and cellular properties at high throughput levels and the analysis of cellular heterogeneity. State-of-the-art Förster Resonance Energy Transfer imaging, Fluorescence Lifetime Imaging Microscopy, Confocal Laser Scanning Microscopy and the newly developed tools have been combined with cellular and molecular biology techniques for the investigation of protein-protein interactions, oligomerization and post-translational modifications of α-Synuclein and Tau, two proteins involved in Parkinson’s and Alzheimer’s disease, respectively. The high inter-disciplinarity of this project required the merging of the expertise of both the Molecular Biophysics Group at the Debye Institute - Utrecht University and the Cell Biophysics Group at the European Neuroscience Institute - Göttingen University. This project was also conducted with the support and the collaboration of the Center for the Molecular Physiology of the Brain (Göttingen), particularly with the groups associated with the Molecular Quantitative Microscopy and Parkinson’s Disease and Aggregopathies areas. This work demonstrates that molecular and cellular quantitative microscopy can be used in combination with high-throughput screening as a powerful tool for the investigation of the molecular mechanisms of complex biological phenomena like those occurring in neurodegenerative diseases.
Hidden Markov Model for quantitative prediction of snowfall and ...
Indian Academy of Sciences (India)
A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall for two days in ...
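The record does not describe the model's parameters, but the core HMM computation, the forward pass that scores an observation sequence, is compact enough to sketch with hypothetical two-state "snow"/"no-snow" parameters:

```python
import numpy as np

# Hypothetical two-state HMM (states: snow, no-snow); all numbers illustrative.
A = np.array([[0.7, 0.3],       # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.8, 0.2],       # emission probabilities P(obs | state)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])       # initial state distribution

def forward(obs):
    """Forward algorithm: total probability of the observation sequence."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(round(forward([0, 0, 1]), 4))  # → 0.1077
```

A quantitative-snowfall model would add continuous emission densities and a decoding step (e.g. Viterbi), but the same recursion underlies both.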
Information Theoretic Tools for Parameter Fitting in Coarse Grained Models
Kalligiannaki, Evangelia
2015-01-07
We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to non equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.
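The fitting criterion described above can be written compactly. In notation of our own choosing (the abstract itself gives no formulas), the optimal coarse-grained parameter set minimizes the relative entropy rate between the atomistic path measure Q and the parametrized coarse-grained measure P^theta:

```latex
\theta^{*} = \arg\min_{\theta}\, \mathcal{H}(Q \,\|\, P^{\theta}),
\qquad
\mathcal{H}(Q \,\|\, P^{\theta})
  = \lim_{T \to \infty} \frac{1}{T}\,
    \mathbb{E}_{Q}\!\left[ \log \frac{dQ\big|_{[0,T]}}{dP^{\theta}\big|_{[0,T]}} \right].
```

This is consistent with the abstract's statement that the minimization problem generalizes force matching to non-equilibrium systems: for diffusion dynamics the Radon-Nikodym derivative is a functional of the force fields, so minimizing the rate matches forces on average.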
Application of a theoretical model to evaluate COPD disease management
2010-01-01
Background Disease management programmes are heterogeneous in nature and often lack a theoretical basis. An evaluation model has been developed in which theoretically driven inquiries link disease management interventions to outcomes. The aim of this study is to methodically evaluate the impact of a disease management programme for patients with chronic obstructive pulmonary disease (COPD) on process, intermediate and final outcomes of care in a general practice setting. Methods A quasi-experimental study was performed with 12-months follow-up of 189 COPD patients in primary care in the Netherlands. The programme included patient education, protocolised assessment and treatment of COPD, structural follow-up and coordination by practice nurses at 3, 6 and 12 months. Data on intermediate outcomes (knowledge, psychosocial mediators, self-efficacy and behaviour) and final outcomes (dyspnoea, quality of life, measured by the CRQ and CCQ, and patient experiences) were obtained from questionnaires and electronic registries. Results Implementation of the programme was associated with significant improvements in dyspnoea. Conclusions The application of a theory-driven model enhances the design and evaluation of disease management programmes aimed at improving health outcomes. This study supports the notion that a theoretical approach strengthens the evaluation designs of complex interventions. Moreover, it provides prudent evidence that the implementation of COPD disease management programmes can positively influence outcomes of care. PMID:20346135
Modeling conflict : research methods, quantitative modeling, and lessons learned.
Energy Technology Data Exchange (ETDEWEB)
Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.
2004-09-01
This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be for inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.
A Modified Theoretical Model of Intrinsic Hardness of Crystalline Solids
Dai, Fu-Zhi; Zhou, Yanchun
2016-01-01
Super-hard materials have been extensively investigated due to their practical importance in numerous industrial applications. To stimulate the design and exploration of new super-hard materials, microscopic models that elucidate the fundamental factors controlling hardness are desirable. The present work modifies the theoretical model of intrinsic hardness proposed by Gao. In the modification, we emphasize the critical role of appropriately decomposing a crystal into pseudo-binary crystals, which should be carried out based on the valence electron population of each bond. After modification, the model becomes self-consistent and predicts well the hardness values of many crystals, including crystals composed of complex chemical bonds. The modified model provides fundamental insights into the nature of hardness, which can facilitate the quest for intrinsically super-hard materials. PMID:27604165
Validation of theoretical models through measured pavement response
DEFF Research Database (Denmark)
Ullidtz, Per
1999-01-01
Most models for structural evaluation of pavements are of the analytical-empirical type. An analytical model, derived from solid mechanics, is used to calculate stresses or strains at critical positions, and these stresses or strains are then used with empirical relationships to predict pavement … mechanics was quite different from the measured stress, the peak theoretical value being only half of the measured value. On an instrumented pavement structure in the Danish Road Testing Machine, deflections were measured at the surface of the pavement under FWD loading. Different analytical models were then used to derive the elastic parameters of the pavement layers that would produce deflections matching the measured deflections. Stresses and strains were then calculated at the position of the gauges and compared to the measured values. It was found that all analytical models would predict the tensile...
Comparing theoretical models of our galaxy with observations
Directory of Open Access Journals (Sweden)
Johnston K.V.
2012-02-01
With the advent of large scale observational surveys to map out the stars in our galaxy, there is a need for an efficient tool to compare theoretical models of our galaxy with observations. To this end, we describe here the code Galaxia, which uses efficient and fast algorithms for creating a synthetic survey of the Milky Way, and discuss its uses. Given one or more observational constraints like the color-magnitude bounds, a survey size and geometry, Galaxia returns a catalog of stars in accordance with a given theoretical model of the Milky Way. Both analytic and N-body models can be sampled by Galaxia. For N-body models, we present a scheme that disperses the stars spawned by an N-body particle, in such a way that the phase space density of the spawned stars is consistent with that of the N-body particles. The code is ideally suited to generating synthetic data sets that mimic near future wide area surveys such as GAIA, LSST and HERMES. In future, we plan to release the code publicly at http://galaxia.sourceforge.net. As an application of the code, we study the prospect of identifying structures in the stellar halo with future surveys that will have velocity information about the stars.
Directory of Open Access Journals (Sweden)
Aurelio José Figueredo
2014-03-01
Contents Meta-Analysis is a procedure designed to quantitatively analyze the methodological characteristics of studies sampled in conventional meta-analyses, to assess the relationship between methodologies and outcomes. This article presents the rationale and procedures for conducting a Contents Meta-Analysis in conjunction with a conventional Effects Meta-Analysis. We provide an overview of the pertinent limitations of conventional meta-analysis from a methodological and meta-scientific standpoint. We then introduce novel terminology distinguishing different kinds of complementary meta-analyses that address many of the problems previously identified for conventional meta-analyses. We also direct readers to the second paper in this series (Figueredo, Black, & Scott, this issue), which demonstrates the utility of Contents Meta-Analysis with an empirical example and presents findings regarding the generalizability of the estimated effect sizes. DOI: 10.2458/azu_jmmss.v4i2.17935
Game-Theoretic Models of Information Overload in Social Networks
Borgs, Christian; Chayes, Jennifer; Karrer, Brian; Meeder, Brendan; Ravi, R.; Reagans, Ray; Sayedi, Amin
We study the effect of information overload on user engagement in an asymmetric social network like Twitter. We introduce simple game-theoretic models that capture rate competition between celebrities producing updates in such networks where users non-strategically choose a subset of celebrities to follow based on the utility derived from high quality updates as well as disutility derived from having to wade through too many updates. Our two variants model the two behaviors of users dropping some potential connections (followership model) or leaving the network altogether (engagement model). We show that under a simple formulation of celebrity rate competition, there is no pure strategy Nash equilibrium under the first model. We then identify special cases in both models when pure rate equilibria exist for the celebrities: For the followership model, we show existence of a pure rate equilibrium when there is a global ranking of the celebrities in terms of the quality of their updates to users. This result also generalizes to the case when there is a partial order consistent with all the linear orders of the celebrities based on their qualities to the users. Furthermore, these equilibria can be computed in polynomial time. For the engagement model, pure rate equilibria exist when all users are interested in the same number of celebrities, or when they are interested in at most two. Finally, we also give a finite though inefficient procedure to determine if pure equilibria exist in the general case of the followership model.
Quantitative consensus of bioaccumulation models for integrated testing strategies.
Fernández, Alberto; Lombardo, Anna; Rallo, Robert; Roncaglioni, Alessandra; Giralt, Francesc; Benfenati, Emilio
2012-09-15
A quantitative consensus model based on bioconcentration factor (BCF) predictions obtained from five quantitative structure-activity relationship models was developed for bioaccumulation assessment as an integrated testing approach for waiving. Three categories were considered: non-bioaccumulative, bioaccumulative and very bioaccumulative. Five in silico BCF models were selected and included into a quantitative consensus model by means of the continuous formulation of Bayes' theorem. The discrete likelihoods commonly used in the qualitative Bayesian model were substituted by probability density functions to reduce the loss of information that occurred when continuous BCF values were distributed across the three bioaccumulation categories. Results showed that the continuous Bayesian model yielded the best classification predictions compared not only to the discrete Bayesian model, but also to the individual BCF models. The proposed quantitative consensus model proved to be a suitable approach for integrated testing strategies for continuous endpoints of environmental interest. Copyright © 2012 Elsevier Ltd. All rights reserved.
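The "continuous formulation of Bayes' theorem" described above can be sketched by replacing discrete likelihoods with class-conditional densities over log10(BCF). The Gaussian class means and width below are illustrative assumptions, not the fitted densities of the paper.

```python
import math

# Three bioaccumulation categories with assumed mean log10(BCF) per class
# (nB: non-bioaccumulative, B: bioaccumulative, vB: very bioaccumulative).
CLASSES = {"nB": 2.0, "B": 3.5, "vB": 4.5}
SIGMA = 0.5                      # assumed common class width

def gauss(x, mu, sigma=SIGMA):
    """Gaussian probability density, used in place of discrete likelihoods."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def consensus(predictions, prior=None):
    """Sequentially update class posteriors with each model's log10(BCF)."""
    post = dict(prior or {c: 1 / 3 for c in CLASSES})
    for x in predictions:
        post = {c: post[c] * gauss(x, mu) for c, mu in CLASSES.items()}
        z = sum(post.values())
        post = {c: v / z for c, v in post.items()}   # renormalize
    return post

p = consensus([3.4, 3.6, 3.3, 3.8, 3.5])   # five hypothetical model predictions
print(max(p, key=p.get))  # → B
```

Because the densities are continuous, predictions near a category boundary contribute graded rather than all-or-nothing evidence, which is the stated advantage over the discrete Bayesian model.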
Quantitative modelling of the biomechanics of the avian syrinx
DEFF Research Database (Denmark)
Elemans, Coen P. H.; Larsen, Ole Næsbye; Hoffmann, Marc R.
2003-01-01
We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e. the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts...
Improving statistical reasoning theoretical models and practical implications
Sedlmeier, Peter
1999-01-01
This book focuses on how statistical reasoning works and on training programs that can exploit people's natural cognitive capabilities to improve their statistical reasoning. Training programs that take into account findings from evolutionary psychology and instructional theory are shown to have substantially larger effects, more stable over time, than previous training regimens. The theoretical implications are traced in a neural network model of human performance on statistical reasoning problems. This book appeals to judgment and decision making researchers and other cognitive scientists, as well as to teachers of statistics and probabilistic reasoning.
Tesla Coil Theoretical Model and its Experimental Verification
Voitkans Janis; Voitkans Arnis
2015-01-01
In this paper a theoretical model of Tesla coil operation is proposed. Tesla coil is described as a long line with distributed parameters in a single-wire form, where the line voltage is measured across electrically neutral space. By applying the principle of equivalence of single-wire and two-wire schemes an equivalent two-wire scheme can be found for a single-wire scheme and the already known long line theory can be applied to the Tesla coil. A new method of multiple reflections is developed to characterize a signal in a long line.
An Emerging Theoretical Model of Music Therapy Student Development.
Dvorak, Abbey L; Hernandez-Ruiz, Eugenia; Jang, Sekyung; Kim, Borin; Joseph, Megan; Wells, Kori E
2017-07-01
Music therapy students negotiate a complex relationship with music and its use in clinical work throughout their education and training. This distinct, pervasive, and evolving relationship suggests a developmental process unique to music therapy. The purpose of this grounded theory study was to create a theoretical model of music therapy students' developmental process, beginning with a study within one large Midwestern university. Participants (N = 15) were music therapy students who completed one 60-minute intensive interview, followed by a 20-minute member check meeting. Recorded interviews were transcribed, analyzed, and coded using open and axial coding. The theoretical model that emerged was a six-step sequential developmental progression that included the following themes: (a) Personal Connection, (b) Turning Point, (c) Adjusting Relationship with Music, (d) Growth and Development, (e) Evolution, and (f) Empowerment. The first three steps are linear; development continues in a cyclical process among the last three steps. As the cycle continues, music therapy students continue to grow and develop their skills, leading to increased empowerment, and more specifically, increased self-efficacy and competence. Further exploration of the model is needed to inform educators' and other key stakeholders' understanding of student needs and concerns as they progress through music therapy degree programs.
A Theoretical Model for the Prediction of Siphon Breaking Phenomenon
Energy Technology Data Exchange (ETDEWEB)
Bae, Youngmin; Kim, Young-In; Seo, Jae-Kwang; Kim, Keung Koo; Yoon, Juhyeon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
A siphon phenomenon or siphoning often refers to the movement of liquid from a higher elevation to a lower one through a tube in an inverted U shape (whose top is typically located above the liquid surface) under the action of gravity, and has been used in a variety of real-life applications such as a toilet bowl and a greedy cup. However, liquid drainage due to siphoning sometimes needs to be prevented. For example, a siphon breaker, which is designed to limit the siphon effect by allowing gas entrainment into a siphon line, is installed in order to maintain the pool water level above the reactor core when a loss of coolant accident (LOCA) occurs in an open-pool type research reactor. In this paper, a theoretical model to predict the siphon breaking phenomenon is developed. It is shown that the present model predicts well the fundamental features of the siphon breaking phenomenon and the undershooting height.
The theoretical model for the annular jet instability - Revisited
Lee, C. P.; Wang, T. G.
1989-01-01
The theoretical model of Lee and Wang (1986) for the instability of an annular jet, in which the jet's liquid layer is treated as a thin liquid sheet, is examined. It is suggested that the model should be altered so that when the envelope is closing its bottleneck during collapse, the new envelope experiences a sharp pressure pulse from its gaseous core, reversing the normal velocity of the sheet enough to maintain continuous constant gas flow. Using this improved version of the model, it is shown that if the liquid velocity is high enough and the gas velocity is greater than the liquid velocity, the bubble-formation frequency varies linearly with the difference between the two velocities, but not with their individual values.
Toward a theoretically based measurement model of the good life.
Cheung, C K
1997-06-01
A theoretically based conceptualization of the good life should differentiate 4 dimensions: the hedonist good life, the dialectical good life, the humanist good life, and the formalist good life. These 4 dimensions incorporate previous fragmentary measures, such as life satisfaction, depression, work alienation, and marital satisfaction, to produce an integrative view. In the present study, 276 Hong Kong Chinese husbands and wives responded to a survey of 13 indicators for these 4 good life dimensions. Confirmatory hierarchical factor analysis showed that these indicators identified the 4 dimensions of the good life, which in turn converged to identify a second-order factor of the overall good life. The model demonstrates discriminant validity in that the first-order factors had high loadings on the overall good life factor despite being linked by a social desirability factor. Analysis further showed that the second-order factor model applied equally well to husbands and wives. Thus, the conceptualization appears to be theoretically and empirically adequate in incorporating previous conceptualizations of the good life.
Tanabe, Minoru; Zama, Tatsuya; Shitomi, Hiroshi
2017-07-20
The spectral nonlinearity of an inverse-layer-type silicon (Si) photodiode (PD) in the visible region was investigated. As predicted by theoretical calculation, supralinearity and saturation of the Si PD, which are key factors of nonlinearity, were suppressed by applying a reverse voltage above 30 V. The experimentally observed nonlinear behavior as a function of reverse bias was compared to a theoretical model describing supralinearity, including the inner parameters of the Si PD, and the two were found to agree. This theoretical model enables us to quantitatively predict the supralinear behavior of the inverse-layer-type Si PD under various reverse bias conditions. Accurate experimental nonlinearity supported by theoretical predictions will contribute to high-accuracy optical measurement with the Si PD over a wide range of optical power levels and reverse-bias voltages.
A Theoretical Model for Meaning Construction through Constructivist Concept Learning
DEFF Research Database (Denmark)
Badie, Farshad
The central focus of this Ph.D. research is on ‘Logic and Cognition’ and, more specifically, this research covers the quintuple (Logic and Logical Philosophy, Philosophy of Education, Educational Psychology, Cognitive Science, Computer Science). The most significant contributions of this Ph.D. dissertation are conceptual, logical, terminological, and semantic analysis of Constructivist Concept Learning (specifically, in the context of humans’ interactions with their environment and with other agents). This dissertation is concerned with the specification of the conceptualisation of the phenomena of ‘learning’, ‘mentoring’, and ‘knowledge’ within learning and knowledge acquisition systems. Constructivism as an epistemology and as a model of knowing and, respectively, as a theoretical model of learning builds up the central framework of this research.
A theoretical model for calculation of formation force
Energy Technology Data Exchange (ETDEWEB)
Dai, D.L.; Gao, D.L. [China Univ. of Petroleum, Beijing (China); Pan, Q.F. [CNPC, Beijing (China). Greatwall Drilling Co.
2008-07-01
Estimation of the forces between the drill string and the formation is necessary for planning drilling trajectory in directional wells and for preventing deviation in straight holes. Many previously published models have assumed bit equilibrium under constant inclination and azimuth angles and do not consider the effect of bit anisotropy and bit tilt angle. This paper presented a theoretical model of formation force derived from a three-dimensional rock-bit interaction model and weight on bit. The paper presented the simplified models and their applications. It also discussed the effect of bit anisotropy and bit tilt angle on formation forces. The model was utilized to estimate the anisotropic drilling characteristics of the Kelasu formation, which has a high stratigraphic dip, in the Tarim Basin in western China. It was concluded that the proposed model could properly reflect the strata drifting characteristic of the formation being evaluated. The formation force in this area tended to increase hole angle. It was therefore necessary to run a dropping assembly in-hole to resist the formation force when drilling a straight hole. 14 refs., 1 tab., 5 figs., 1 appendix.
Ambrus-Lakatos, Lorand; Vilagi, Balazs; Vincze, Janos
2004-01-01
It is frequently claimed that the expected yield on emerging market bonds commands a premium. Here we investigate the sources of this phenomenon. A stochastic general equilibrium model of a small open economy is analyzed numerically to derive conditions for interest rate premia. The novelty of our approach is to attack the problem from the point of view of state-dependent policy mixes. The main lessons include: if positive premia were universal, then 1. nominal rigidity should be important, ...
Categorization and theoretical comparison of quantitative methods for assessing QT/RR hysteresis.
Gravel, Hugo; Curnier, Daniel; Dahdah, Nagib; Jacquemet, Vincent
2017-07-01
In the human electrocardiogram, there is a lag of adaptation of the QT interval to heart rate changes, usually termed QT/RR hysteresis (QT-hys). Subject-specific quantifiers of QT-hys have been proposed as potential biomarkers, but there is no consensus on the choice of the quantifier. A comprehensive literature search was conducted to identify original articles reporting quantifiers of repolarization hysteresis from the surface ECG in humans. Sixty articles fulfilled our inclusion criteria. Reported biomarkers were grouped under four categories. A simple mathematical model of QT/RR loop was used to illustrate differences between the methods. Category I quantifiers use direct measurement of QT time course of adaptation. They are limited to conditions where RR intervals are under strict control. Category IIa and IIb quantifiers compare QT responses during consecutive heart rate acceleration and deceleration. They are relevant when a QT/RR loop is observed, typically during exercise and recovery, but are not robust to protocol variations. Category III quantifiers evaluate the optimum RR memory in dynamic QT/RR relationship modeling. They estimate an intrinsic memory parameter independent from the nature of RR changes, but their reliability remains to be confirmed when multiple memory parameters are estimated. Promising approaches include the differentiation of short-term and long-term memory and adaptive estimation of memory parameters. Model-based approaches to QT-hys assessment appear to be the most versatile, as they allow separate quantification of QT/RR dependency and QT-hys, and can be applied to a wide range of experimental settings. © 2017 Wiley Periodicals, Inc.
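A Category III (model-based) quantifier can be illustrated with a minimal sketch: the RR history is summarized by an exponentially weighted average, and the memory parameter is chosen to optimize a linear QT versus effective-RR fit. The synthetic data and the single-parameter memory model below are illustrative assumptions, not the protocol of any study in the review:

```python
import numpy as np

def effective_rr(rr, alpha):
    """Exponentially weighted average of past RR intervals (memory alpha in (0, 1))."""
    eff = np.empty_like(rr, dtype=float)
    eff[0] = rr[0]
    for i in range(1, len(rr)):
        eff[i] = alpha * eff[i - 1] + (1 - alpha) * rr[i]
    return eff

def fit_memory(rr, qt, alphas=np.linspace(0.01, 0.99, 99)):
    """Scan the memory parameter; return the alpha minimizing the
    residual of a linear QT = a + b * RR_eff fit."""
    best = None
    for a in alphas:
        x = effective_rr(rr, a)
        A = np.column_stack([np.ones_like(x), x])
        coef, *_ = np.linalg.lstsq(A, qt, rcond=None)
        sse = np.sum((qt - A @ coef) ** 2)
        if best is None or sse < best[1]:
            best = (a, sse)
    return best[0]

# Synthetic demo: RR steps down (heart rate acceleration); QT adapts slowly.
rng = np.random.default_rng(0)
rr = np.concatenate([np.full(200, 1.0), np.full(200, 0.7)])
qt = 0.2 + 0.2 * effective_rr(rr, 0.95) + rng.normal(0, 0.0005, rr.size)
print(round(fit_memory(rr, qt), 2))
```

Because the memory parameter is estimated from the dynamics rather than from a fixed acceleration/deceleration loop, this style of quantifier is less tied to a particular exercise protocol.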
Tesla Coil Theoretical Model and its Experimental Verification
Directory of Open Access Journals (Sweden)
Voitkans Janis
2014-12-01
In this paper a theoretical model of Tesla coil operation is proposed. Tesla coil is described as a long line with distributed parameters in a single-wire form, where the line voltage is measured across electrically neutral space. By applying the principle of equivalence of single-wire and two-wire schemes an equivalent two-wire scheme can be found for a single-wire scheme and the already known long line theory can be applied to the Tesla coil. A new method of multiple reflections is developed to characterize a signal in a long line. Formulas for calculation of voltage in the Tesla coil by coordinate and for calculation of resonance frequencies are proposed. The theoretical calculations are verified experimentally. Resonance frequencies of the Tesla coil are measured and voltage standing wave characteristics are obtained for different output capacities in the single-wire mode. Wave resistance and phase coefficient of the Tesla coil are obtained. Experimental measurements show good compliance with the proposed theory. The formulas obtained in this paper are also usable for a regular two-wire long line with distributed parameters.
Modeling of rolling element bearing mechanics. Theoretical manual
Merchant, David H.; Greenhill, Lyn M.
1994-10-01
This report documents the theoretical basis for the Rolling Element Bearing Analysis System (REBANS) analysis code which determines the quasistatic response to external loads or displacement of three types of high-speed rolling element bearings: angular contact ball bearings; duplex angular contact ball bearings; and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It is comprised of two main programs: the Preprocessor for Bearing Analysis (PREBAN) which creates the input files for the main analysis program; and Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. A companion report addresses the input instructions for and features of the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.
Theoretical temperature model with experimental validation for CLIC Accelerating Structures
AUTHOR|(CDS)2126138; Vamvakas, Alex; Alme, Johan
Micron level stability of the Compact Linear Collider (CLIC) components is one of the main requirements to meet the luminosity goal for the future $48 \\,km$ long underground linear accelerator. The radio frequency (RF) power used for beam acceleration causes heat generation within the aligned structures, resulting in mechanical movements and structural deformations. A dedicated control of the air- and water-cooling system in the tunnel is therefore crucial to improve alignment accuracy. This thesis investigates the thermo-mechanical behavior of the CLIC Accelerating Structure (AS). In CLIC, the AS must be aligned to a precision of $10\\,\\mu m$. The thesis shows that a relatively simple theoretical model can be used within reasonable accuracy to predict the temperature response of an AS as a function of the applied RF power. During failure scenarios or maintenance interventions, the RF power is turned off, resulting in no heat dissipation and a decrease in the overall temperature of the components. The theoretica...
A survey of game-theoretic models of cooperative advertising
DEFF Research Database (Denmark)
Jørgensen, Steffen; Zaccour, G.
2014-01-01
The paper surveys the literature on cooperative advertising in marketing channels (supply chains) using game theoretic methods. During the last decade, in particular, this literature has expanded considerably and has studied static as well as dynamic settings. The survey is divided into two main parts. The first one deals with simple marketing channels having one supplier and one reseller only. The second one covers marketing channels of a more complex structure, having more than one supplier and/or reseller. In the first part we find that a number of results carry over from static to dynamic ... problems of cooperative advertising also shows some similarities. The second part shows that models incorporating horizontal interaction on either or both layers of the supply chain are much less numerous than those supposing its absence. Participation rates in co-op advertising programs depend on inter...
Bhaskar, Ankush; Ramesh, Durbha Sai; Vichare, Geeta; Koganti, Triven; Gurubaran, S.
2017-12-01
Identification and quantification of possible drivers of recent global temperature variability remains a challenging task. This important issue is addressed adopting a non-parametric information theory technique, the Transfer Entropy and its normalized variant. It distinctly quantifies actual information exchanged along with the directional flow of information between any two variables with no bearing on their common history or inputs, unlike correlation, mutual information etc. Measurements of greenhouse gases: CO2, CH4 and N2O; volcanic aerosols; solar activity: UV radiation, total solar irradiance (TSI) and cosmic ray flux (CR); El Niño Southern Oscillation (ENSO) and Global Mean Temperature Anomaly (GMTA) made during 1984-2005 are utilized to distinguish driving and responding signals of global temperature variability. Estimates of their relative contributions reveal that CO2 (~24%), CH4 (~19%) and volcanic aerosols (~23%) are the primary contributors to the observed variations in GMTA. While UV (~9%) and ENSO (~12%) act as secondary drivers of variations in the GMTA, the remaining play a marginal role in the observed recent global temperature variability. Interestingly, ENSO and GMTA mutually drive each other at varied time lags. This study assists future modelling efforts in climate science.
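A minimal plug-in estimator of transfer entropy on discretized series can sketch the idea behind this directional measure; the quantile binning, single-step history, and toy coupled series below are illustrative choices, not the estimator configuration used in the study:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=2):
    """Transfer entropy T(X -> Y) in bits for discretized series:
    information that past X adds about the next Y beyond what past Y
    already provides (a conditional mutual information)."""
    xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))   # (y_next, y_past, x_past)
    pairs_yy = Counter(zip(yd[1:], yd[:-1]))
    pairs_yx = Counter(zip(yd[:-1], xd[:-1]))
    singles = Counter(yd[:-1])
    n = len(yd) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        # p(y1|y0,x0) / p(y1|y0), weighted by the joint probability
        te += (c / n) * np.log2(
            (c / pairs_yx[(y0, x0)]) / (pairs_yy[(y1, y0)] / singles[y0])
        )
    return te

# Demo: y follows x with a one-step lag, so information flows x -> y.
rng = np.random.default_rng(1)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.1 * rng.normal(size=5000)
print(transfer_entropy(x, y) > transfer_entropy(y, x))  # → True
```

Unlike correlation, the measure is asymmetric, which is what allows driving and responding signals to be distinguished.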
Experimental and theoretical study of gust response for a wing store model with freeplay
Tang, Deman; Dowell, Earl H.
2006-08-01
An experimental delta wing/store model with freeplay in a periodic gust field has been designed and tested in the Duke wind tunnel. The wing structure is modeled theoretically using von Karman plate theory that accounts for geometric strain-displacement nonlinearities in the plate wing structure. A component modal analysis is used to derive the full structural equations of motion for the wing/store system. A 3-D time domain vortex lattice aerodynamic model including a reduced order model aerodynamic technique and a slender body aerodynamic theory for the store are also used to investigate the nonlinear aeroelastic system. The effects of the freeplay gap, the gust angle of attack and the initial conditions on the gust response are discussed. The quantitative correlations between the theory and experiment are reasonably good, but in the range of the dominant resonant frequency of this nonlinear system, i.e. at larger response amplitudes, the correlations are not good. The theoretical structural model needs to be improved to determine larger amplitude motions near the resonant frequency.
Logic Modeling in Quantitative Systems Pharmacology.
Traynard, Pauline; Tobalina, Luis; Eduati, Federica; Calzone, Laurence; Saez-Rodriguez, Julio
2017-08-01
Here we present logic modeling as an approach to understand deregulation of signal transduction in disease and to characterize a drug's mode of action. We discuss how to build a logic model from the literature and experimental data and how to analyze the resulting model to obtain insights of relevance for systems pharmacology. Our workflow uses the free tools OmniPath (network reconstruction from the literature), CellNOpt (model fit to experimental data), MaBoSS (model analysis), and Cytoscape (visualization). © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
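A toy example can illustrate what a logic (Boolean) model looks like and how a drug's mode of action can be probed by clamping a node; the three-node pathway below is hypothetical and does not come from OmniPath or CellNOpt:

```python
# Toy Boolean logic model of a signaling cascade (hypothetical nodes):
# a ligand activates a receptor, the receptor activates a kinase, and an
# inhibitor drug is modeled by forcing the kinase node OFF.
RULES = {
    "receptor": lambda s: s["ligand"],
    "kinase":   lambda s: s["receptor"] and not s["drug"],
    "output":   lambda s: s["kinase"],
}

def simulate(state, steps=5):
    """Synchronous Boolean updates until a fixed point (or step limit)."""
    for _ in range(steps):
        new = dict(state)
        for node, rule in RULES.items():
            new[node] = rule(state)
        if new == state:
            break
        state = new
    return state

untreated = simulate({"ligand": True, "drug": False,
                      "receptor": False, "kinase": False, "output": False})
treated = simulate({"ligand": True, "drug": True,
                    "receptor": False, "kinase": False, "output": False})
print(untreated["output"], treated["output"])  # → True False
```

Fitting such update rules to perturbation data (as CellNOpt does) turns this qualitative formalism into a testable model of deregulated signaling.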
A quantitative model for designing keyboard layout.
Shieh, K K; Lin, C C
1999-02-01
This study analyzed the quantitative relationship between keytapping times and ergonomic principles in typewriting skills. Keytapping times and key-operating characteristics of a female subject typing on the Qwerty and Dvorak keyboards for six weeks each were collected and analyzed. The results showed that characteristics of the typed material and the movements of hands and fingers were significantly related to keytapping times. The most significant factors affecting keytapping times were association frequency between letters, consecutive use of the same hand or finger, and the finger used. A regression equation for relating keytapping times to ergonomic principles was fitted to the data. Finally, a protocol for design of computerized keyboard layout based on the regression equation was proposed.
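The idea of regressing keytapping times on ergonomic factors can be sketched with ordinary least squares on simulated data; the predictors, effect sizes, and noise level below are hypothetical, not the study's fitted equation:

```python
import numpy as np

# Hypothetical predictors per keystroke: bigram (letter-pair) association
# frequency, same-hand repetition (0/1), same-finger repetition (0/1).
rng = np.random.default_rng(42)
n = 300
bigram_freq = rng.uniform(0, 1, n)
same_hand = rng.integers(0, 2, n)
same_finger = rng.integers(0, 2, n)
# Simulated tap times (ms): frequent bigrams are faster; same-finger use slower.
tap_ms = (180 - 40 * bigram_freq + 15 * same_hand + 60 * same_finger
          + rng.normal(0, 5, n))

# Fit tap time = b0 + b1*freq + b2*same_hand + b3*same_finger by least squares.
X = np.column_stack([np.ones(n), bigram_freq, same_hand, same_finger])
coef, *_ = np.linalg.lstsq(X, tap_ms, rcond=None)
print(np.round(coef, 1))  # roughly recovers the simulated effects
```

A layout-design protocol of the kind proposed would then search for key assignments that minimize the predicted total tap time under such a fitted equation.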
A Production Model for Construction: A Theoretical Framework
Directory of Open Access Journals (Sweden)
Ricardo Antunes
2015-03-01
The building construction industry faces challenges, such as increasing project complexity and scope requirements, but shorter deadlines. Additionally, economic uncertainty and rising business competition with a subsequent decrease in profit margins for the industry demand the development of new approaches to construction management. However, the building construction sector relies on practices based on intuition and experience, overlooking the dynamics of its production system. Furthermore, researchers maintain that the construction industry has no history of the application of mathematical approaches to model and manage production. Much work has been carried out on how manufacturing practices apply to construction projects, mostly lean principles. Nevertheless, there has been little research to understand the fundamental mechanisms of production in construction. This study develops an in-depth literature review to examine the existing knowledge about production models and their characteristics in order to establish a foundation for dynamic production systems management in construction. As a result, a theoretical framework is proposed, which will be instrumental in the future development of mathematical production models aimed at predicting the performance and behaviour of dynamic project-based systems in construction.
Directory of Open Access Journals (Sweden)
Laura Fanea
2012-01-01
Neurological disorders represent major causes of lost years of healthy life and mortality worldwide. Development of their quantitative interdisciplinary in vivo evaluation is required. Compartment modeling (CM) of brain data acquired in vivo using magnetic resonance imaging techniques with clinically available contrast agents can be performed to quantitatively assess brain perfusion. Transport of 1H spins in water molecules across physiological compartmental brain barriers in three different pools was mathematically modeled and theoretically evaluated in this paper, and the corresponding theoretical compartment modeling of dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) data was analyzed. The pools considered were blood, tissue, and cerebrospinal fluid (CSF). The blood and CSF data were mathematically modeled assuming continuous flow of the 1H spins in these pools. Tissue data was modeled using three CMs. Results in this paper show that transport across physiological brain barriers such as the blood to brain barrier, the extracellular space to the intracellular space barrier, or the blood to CSF barrier can be evaluated quantitatively. Statistical evaluations of this quantitative information may be performed to assess tissue perfusion, barriers' integrity, and CSF flow in vivo in the normal or disease-affected brain or to assess response to therapy.
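As a simplified illustration of compartment modeling of DCE-MRI data, a standard Tofts-type single-tissue-compartment model (simpler than the three-pool formulation discussed here) can be integrated numerically; the arterial input function and parameter values below are illustrative:

```python
import numpy as np

def tofts_tissue_curve(t, cp, ktrans, ve):
    """Standard Tofts model: dCt/dt = Ktrans*Cp(t) - (Ktrans/ve)*Ct(t),
    integrated with a simple forward-Euler scheme."""
    kep = ktrans / ve                      # efflux rate constant (1/min)
    ct = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        ct[i] = ct[i - 1] + dt * (ktrans * cp[i - 1] - kep * ct[i - 1])
    return ct

t = np.linspace(0, 5, 500)                 # minutes
cp = 2.0 * np.exp(-t)                      # toy arterial input function (mM)
ct = tofts_tissue_curve(t, cp, ktrans=0.25, ve=0.3)
print(ct.max() < cp.max())                 # tissue curve is damped vs plasma
```

Fitting Ktrans and ve voxel-wise to measured enhancement curves is what yields the quantitative perfusion and barrier-integrity estimates the abstract refers to.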
Schuwirth, Nele; Reichert, Peter
2013-02-01
For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes growth, death, and respiration of different taxa dependent on various environmental influence factors to estimate survival or extinction. Parameter and input uncertainty is propagated to model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and resulting model adaptations will further increase the predictive accuracy of the model.
The structure and dynamics of cities urban data analysis and theoretical modeling
Barthelemy, Marc
2016-01-01
With over half of the world's population now living in urban areas, the ability to model and understand the structure and dynamics of cities is becoming increasingly valuable. Combining new data with tools and concepts from statistical physics and urban economics, this book presents a modern and interdisciplinary perspective on cities and urban systems. Both empirical observations and theoretical approaches are critically reviewed, with particular emphasis placed on derivations of classical models and results, along with analysis of their limits and validity. Key aspects of cities are thoroughly analyzed, including mobility patterns, the impact of multimodality, the coupling between different transportation modes, the evolution of infrastructure networks, spatial and social organisation, and interactions between cities. Drawing upon knowledge and methods from areas of mathematics, physics, economics and geography, the resulting quantitative description of cities will be of interest to all those studying and r...
Hospital nurses' wellbeing at work: a theoretical model.
Utriainen, Kati; Ala-Mursula, Leena; Kyngäs, Helvi
2015-09-01
To develop a theoretical model of hospital nurses' wellbeing at work. In the literature, the concept of wellbeing at work is presented without an exact definition and without consistent content. A model was developed in a deductive manner and empirical data were collected from nurses (n = 233) working in a university hospital. Explorative factor analysis was used. The main concepts were: patients' experience of high-quality care; assistance and support among nurses; nurses' togetherness and cooperation; fluent practical organisation of work; challenging and meaningful work; freedom to express diverse feelings in the work community; well-conducted everyday nursing; status related to the work itself; fair and supportive leadership; opportunities for professional development; fluent communication with other professionals; and being together with other nurses in an informal way. Themes included: collegial relationships; enhancing high-quality patient care; supportive and fair leadership; challenging, meaningful and well organised work; and opportunities for professional development. Object-dependent wellbeing was supported. Managers should focus on strengthening the positive aspects of wellbeing at work, providing fluently organised work practices, fair and supportive leadership and togetherness, while allowing nurses to implement their own ideas and promote the experience of meaningfulness. © 2014 John Wiley & Sons Ltd.
Experimental Investigation and Theoretical Modeling of Nanosilica Activity in Concrete
Directory of Open Access Journals (Sweden)
Han-Seung Lee
2014-01-01
This paper presents experimental investigations and theoretical modeling of the hydration reaction of nanosilica blended concrete with different water-to-binder ratios and different nanosilica replacement ratios. The developments of chemically bound water contents, calcium hydroxide contents, and compressive strength of Portland cement control specimens and nanosilica blended specimens were measured at different ages: 1 day, 3 days, 7 days, 14 days, and 28 days. Due to the pozzolanic reaction of nanosilica, the contents of calcium hydroxide in nanosilica blended pastes are considerably lower than those in the control specimens. Compared with the control specimens, the extent of compressive strength enhancement in the nanosilica blended specimens is much higher at early ages. Additionally, a blended cement hydration model that considers both the hydration reaction of cement and the pozzolanic reaction of nanosilica is proposed. The properties of nanosilica blended concrete during hardening were evaluated using the degree of hydration of cement and the reaction degree of nanosilica. The calculated chemically bound water contents, calcium hydroxide contents, and compressive strength were generally consistent with the experimental results.
A theoretical model of unbalanced exchange flows through openings
Wise, Nicholas; Hunt, Gary
2017-11-01
Buoyancy-driven exchange flows through a single horizontal opening, for example through an opening at high level in a room containing warm air, are balanced, as there must be equal volume flux into and out of the opening. If a second, smaller, opening is introduced at low level in the room, air will enter through this opening. The volume flux out of the primary opening will therefore be larger than the volume flux in. This is an unbalanced exchange flow. A theoretical model to predict the volume flux of unbalanced buoyancy-driven exchange flows is developed. The model builds from a linear stability analysis for perturbations on a density interface, between buoyant and ambient fluid, advected out of the primary opening. Following this approach, we predict the criterion for the onset of bi-directional flow across circular openings as has been previously observed experimentally by others. The method developed is extended to non-circular geometries and comparisons are made between the volume fluxes predicted for circular and square openings. EPSRC.
A game theoretic model of drug launch in India.
Bhaduri, Saradindu; Ray, Amit Shovon
2006-01-01
There is a popular belief that drug launch is delayed in developing countries like India because of delayed transfer of technology due to a 'post-launch' imitation threat through weak intellectual property rights (IPR). In fact, this belief has been a major reason for the imposition of the Trade Related Intellectual Property Rights regime under the WTO. This argument overlooks the fact that in countries like India, with high reverse engineering capabilities, imitation can occur even before the formal technology transfer, and fails to recognize the first mover advantage in pharmaceutical markets. This paper argues that the first mover advantage is important and will vary across therapeutic areas, especially in developing countries with diverse levels of patient enlightenment and quality awareness. We construct a game theoretic model of incomplete information to examine the delay in drug launch in terms of costs and benefits of first move, assumed to be primarily a function of the therapeutic area of the new drug. Our model shows that drug launch will be delayed only for external (infective/communicable) diseases, while drugs for internal, non-communicable diseases (accounting for the overwhelming majority of new drug discovery) will be launched without delay.
Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.
Johnson, Shane D; Groff, Elizabeth R
2014-07-01
The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity, agent-based computational modeling, which may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs, though not without its own issues, may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.
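A minimal agent-based model in the spirit of the ecological (routine-activity-style) theories mentioned above can be sketched as follows; the grid world, movement rule, and all parameters are hypothetical:

```python
import random

random.seed(7)

class Offender:
    """Minimal agent: random walk on a toroidal grid; a crime occurs when
    the agent lands on an unguarded target cell (a routine-activity-style
    toy rule: motivated offender + suitable target + absent guardian)."""
    def __init__(self, size):
        self.size = size
        self.pos = (random.randrange(size), random.randrange(size))

    def step(self):
        x, y = self.pos
        dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        self.pos = ((x + dx) % self.size, (y + dy) % self.size)

def run(n_agents=20, size=10, steps=200, n_targets=15, n_guardians=5):
    targets = {(random.randrange(size), random.randrange(size))
               for _ in range(n_targets)}
    guardians = {(random.randrange(size), random.randrange(size))
                 for _ in range(n_guardians)}
    agents = [Offender(size) for _ in range(n_agents)]
    crimes = 0
    for _ in range(steps):
        for a in agents:
            a.step()
            if a.pos in targets and a.pos not in guardians:
                crimes += 1
    return crimes

print(run() > 0)
```

Varying guardianship or target density and comparing the emergent crime patterns against a theory's predictions is the kind of systematic testing the article advocates.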
Theoretical Modelling Methods for Thermal Management of Batteries
Directory of Open Access Journals (Sweden)
Bahman Shabani
2015-09-01
The main challenge associated with renewable energy generation is the intermittency of the renewable source of power. Because of this, back-up generation sources fuelled by fossil fuels are required. In stationary applications, whether it is a back-up diesel generator or a connection to the grid, these systems are yet to be truly emissions-free. One solution to the problem is the utilisation of electrochemical energy storage systems (ESS) to store the excess renewable energy and then reuse this energy when the renewable energy source is insufficient to meet the demand. The performance of an ESS, amongst other things, is affected by the design, the materials used and the operating temperature of the system. The operating temperature is critical, since operating an ESS at low ambient temperatures affects its capacity and charge acceptance, while operating it at high ambient temperatures affects its lifetime and poses safety risks. Safety risks are magnified in renewable energy storage applications given the scale of the ESS required to meet the energy demand. This necessity has propelled significant effort to model the thermal behaviour of ESS. Understanding and modelling the thermal behaviour of these systems is a crucial consideration before designing an efficient thermal management system that would operate safely and extend the lifetime of the ESS. This is vital in order to eliminate intermittency and add value to renewable sources of power. This paper concentrates on reviewing theoretical approaches used to simulate the operating temperatures of ESS and the subsequent endeavours to model thermal management systems for such systems. The intent of this review is to present some of the different methods of modelling the thermal behaviour of ESS, highlighting the advantages and disadvantages of each approach.
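The simplest class of model such reviews cover is the lumped (zero-dimensional) energy balance, m·c·dT/dt = Q_gen − h·A·(T − T_amb), which treats the whole battery as one thermal mass. The sketch below integrates it with explicit Euler steps; every parameter value is invented for illustration and corresponds to no particular battery.

```python
def battery_temperature(t_end=3600.0, dt=1.0, T_amb=25.0, Q_gen=15.0,
                        h=10.0, A=0.1, m=1.0, c=1000.0):
    """Explicit-Euler integration of the lumped thermal balance
    m*c*dT/dt = Q_gen - h*A*(T - T_amb).

    Q_gen: internal heat generation [W]; h: convection coefficient [W/m^2/K];
    A: surface area [m^2]; m: mass [kg]; c: specific heat [J/kg/K].
    All values are illustrative placeholders.
    """
    T = T_amb
    t = 0.0
    while t < t_end:
        T += dt * (Q_gen - h * A * (T - T_amb)) / (m * c)
        t += dt
    return T

# With these numbers the steady state is T_amb + Q_gen/(h*A) = 40 degC,
# approached with time constant m*c/(h*A) = 1000 s.
T_final = battery_temperature()
```

Real thermal management models add spatial resolution, temperature-dependent heat generation and active cooling terms, but they generalize this same balance.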
A theoretical model for analysing gender bias in medicine
Directory of Open Access Journals (Sweden)
Johansson Eva E
2009-08-01
During the last decades, research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and how it can be analysed. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider, in biology and disease as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are taken as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change through facts alone. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.
Quantitative Models and Analysis for Reactive Systems
DEFF Research Database (Denmark)
Thrane, Claus
…allowing verification procedures to quantify judgements on how suitable a model is for a given specification, hence mitigating the usual harsh distinction between satisfactory and non-satisfactory system designs. This information, among other things, allows us to evaluate the robustness of our framework by studying how small changes to our models affect the verification results. A key source of motivation for this work can be found in The Embedded Systems Design Challenge [HS06] posed by Thomas A. Henzinger and Joseph Sifakis. It contains a call for advances in the state-of-the-art of systems verification…
Quantitative models for sustainable supply chain management
DEFF Research Database (Denmark)
Brandenburg, M.; Govindan, Kannan; Sarkis, J.
2014-01-01
…and reverse logistics has been effectively reviewed in previously published research. This situation is in contrast to the understanding and review of mathematical models that focus on environmental or social factors in forward supply chains (SC), which has seen less investigation. To evaluate developments…
Empirical STORM-E Model. [I. Theoretical and Observational Basis
Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III
2013-01-01
Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 µm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 µm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 µm VER is fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 µm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.
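The linear impulse-response framework mentioned above amounts to a discrete convolution of a geomagnetic index with an impulse-response kernel. The sketch below uses a decaying exponential kernel; the time constant and gain are hypothetical placeholders, not STORM-E's fitted coefficients.

```python
import math

def storm_correction(ap_series, tau=6.0, gain=0.05):
    """Storm-time correction factor as a discrete linear convolution of a
    geomagnetic index (e.g. 3-hourly ap) with a decaying-exponential
    impulse response. tau and gain are illustrative, not fitted values."""
    kernel = [math.exp(-k / tau) for k in range(len(ap_series))]
    out = []
    for n in range(len(ap_series)):
        s = sum(ap_series[n - k] * kernel[k] for k in range(n + 1))
        out.append(1.0 + gain * s / tau)  # quiet time -> factor of exactly 1
    return out

quiet = storm_correction([0.0, 0.0, 0.0, 0.0])   # no forcing: all ones
storm = storm_correction([0.0, 100.0, 50.0, 0.0])  # impulse then decay
```

The correction rises with the forcing and decays afterwards with time constant tau, which is the qualitative behaviour an impulse-response storm model is designed to capture.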
Models in Educational Administration: Revisiting Willower's "Theoretically Oriented" Critique
Newton, Paul; Burgess, David; Burns, David P.
2010-01-01
Three decades ago, Willower (1975) argued that much of what we take to be theory in educational administration is in fact only theoretically oriented. If we accept Willower's assessment of the field as true, what implications does this statement hold for the academic study and practical application of the theoretically oriented aspects of our…
Theoretical Biology and Medical Modelling: ensuring continued growth and future leadership.
Nishiura, Hiroshi; Rietman, Edward A; Wu, Rongling
2013-07-11
Theoretical biology encompasses a broad range of biological disciplines ranging from mathematical biology and biomathematics to philosophy of biology. Adopting a broad definition of "biology", Theoretical Biology and Medical Modelling, an open access journal, considers original research studies that focus on theoretical ideas and models associated with developments in biology and medicine.
Martian weathering processes: Terrestrial analog and theoretical modeling studies
McAdam, Amy Catherine
2008-06-01
Understanding the role of water in the Martian near-surface, and its implications for possible habitable environments, is among the highest priorities of NASA's Mars Exploration Program. Characterization of alteration signatures in surface materials provides the best opportunity to assess the role of water on Mars. This dissertation investigates Martian alteration processes through analyses of Antarctic analogs and numerical modeling of mineral-fluid interactions. Analog work involved studying an Antarctic diabase, and associated soils, as Mars analogs to understand weathering processes in cold, dry environments. The soils are dominated by primary basaltic minerals, but also contain phyllosilicates, salts, iron oxides/oxyhydroxides, and zeolites. Soil clay minerals and zeolites, formed primarily during deuteric or hydrothermal alteration of the parent rock, were subsequently transferred to the soil by physical rock weathering. Authigenic soil iron oxides/oxyhydroxides and small amounts of poorly-ordered secondary silicates indicate some contributions from low-temperature aqueous weathering. Soil sulfates, which exhibit a sulfate-aerosol-derived mass-independent oxygen isotope signature, suggest contributions from acid aerosol-rock interactions. The complex alteration history of the Antarctic materials resulted in several similarities to Martian materials. The processes that affected the analogs, including deuteric/hydrothermal clay formation, may be important in producing Martian surface materials. Theoretical modeling focused on investigating the alteration of Martian rocks under acidic conditions and using modeling results to interpret Martian observations. Kinetic modeling of the dissolution of plagioclase-pyroxene mineral mixtures under acidic conditions suggested that surfaces with high plagioclase/pyroxene, such as several northern regions, could have experienced some preferential dissolution of pyroxenes at a pH less than approximately 3-4. Modeling of the…
Stepwise kinetic equilibrium models of quantitative polymerase chain reaction
Cobbs, Gary
2012-01-01
Background: Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most pote…
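The constant-efficiency exponential model criticized above is easy to state: fluorescence grows as F_n = F_0(1+E)^n, so a log-linear least-squares fit over the exponential phase recovers the efficiency E. A minimal sketch on synthetic data (this is the standard simple model, not the kinetic annealing-phase model the paper develops):

```python
import math

def fit_efficiency(fluorescence, cycles):
    """Estimate qPCR amplification efficiency E from the exponential phase:
    log(F_n) = log(F_0) + n*log(1+E), so the line-fit slope gives E."""
    ys = [math.log(f) for f in fluorescence]
    n = len(cycles)
    xbar = sum(cycles) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(cycles, ys))
             / sum((x - xbar) ** 2 for x in cycles))
    return math.exp(slope) - 1.0

# synthetic exponential-phase data generated with E = 0.9 (i.e. 1.9x per cycle)
data = [1.0 * (1.9 ** n) for n in range(5)]
E = fit_efficiency(data, list(range(5)))
```

On noiseless synthetic data the fit recovers E exactly; the paper's point is that real amplification curves deviate from this constant-E assumption, which kinetic models of the annealing phase can capture.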
Bourget, Romain; Chaumont, Loïc; Durel, Charles-Eric; Sapoukhina, Natalia
2015-05-01
Quantitative plant disease resistance is believed to be more durable than qualitative resistance, since it exerts less selective pressure on the pathogens. However, the process of progressive pathogen adaptation to quantitative resistance is poorly understood, which makes it difficult to predict its durability or to derive principles for its sustainable deployment. Here, we study the dynamics of pathogen adaptation in response to quantitative plant resistance affecting pathogen reproduction rate and colonizing capacity. We developed a stochastic model for the continuous evolution of a pathogen population within a quantitatively resistant host. We assumed that the pathogen can adapt to a host by progressively restoring its reproduction rate, its colonizing capacity, or both. Our model suggests that a combination of quantitative trait loci (QTLs) affecting distinct pathogen traits was more durable if the evolution of the repressed traits was antagonistic. Otherwise, quantitative resistance that depressed only pathogen reproduction was more durable. In order to decelerate the progressive pathogen adaptation, QTLs that decrease the pathogen's maximum capacity to colonize must be combined with QTLs that decrease the spore production per lesion or the infection efficiency, or that increase the latent period. Our theoretical framework can help breeders to develop principles for sustainable deployment of QTLs. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
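A toy stochastic sketch (not the authors' continuous evolution model) conveys why combining QTLs that repress distinct traits can buy durability: if each repressed trait is independently restored with a small per-generation probability, full adaptation requires every trait to be restored, which takes longer on average than restoring one. The probability value is arbitrary.

```python
import random

def time_to_adapt(n_traits, p_restore=0.01, rng=None):
    """Generations until ALL repressed pathogen traits are restored, with an
    independent per-generation restoration probability per trait (toy model)."""
    rng = rng or random.Random(0)
    restored = [False] * n_traits
    t = 0
    while not all(restored):
        t += 1
        restored = [r or (rng.random() < p_restore) for r in restored]
    return t

rng = random.Random(42)
# average adaptation time over 200 simulated epidemics, for 1 vs 2 repressed traits
one = sum(time_to_adapt(1, rng=rng) for _ in range(200)) / 200
two = sum(time_to_adapt(2, rng=rng) for _ in range(200)) / 200
```

In this caricature the expected times are roughly 100 and 150 generations respectively; the paper's richer model shows the ordering can even reverse depending on whether restoring one trait helps or hinders restoring the other.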
Network-theoretic approach to model vortex interactions
Nair, Aditya; Taira, Kunihiko
2014-11-01
We present a network-theoretic approach to describe a system of point vortices in two-dimensional flow. By considering the point vortices as nodes, a complete graph is constructed with edges connecting each vortex to every other vortex. The interactions between the vortices are captured by the graph edge weights. We employ sparsification techniques on these graph representations based on spectral theory to construct sparsified models of the overall vortical interactions. The edge weights are redistributed through spectral sparsification of the graph such that the sum of the interactions associated with each vortex is kept constant. In addition, sparse configurations maintain spectral properties similar to those of the original setup. Through the reduction in the number of interactions, key vortex interactions can be highlighted. Identification of vortex structures based on graph sparsification is demonstrated with an example of clusters of point vortices. We also evaluate the computational performance of sparsification for large collections of point vortices. Work supported by US Army Research Office (W911NF-14-1-0386) and US Air Force Office of Scientific Research (YIP: FA9550-13-1-0183).
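Building the complete interaction graph is straightforward; the sketch below uses |Γ_i Γ_j|/(2π r_ij) as an assumed edge weight, a plausible choice inspired by the point-vortex induced-velocity law (the paper's exact weighting may differ).

```python
import math

def vortex_graph(positions, strengths):
    """Complete interaction graph of point vortices: nodes are vortices and
    the weight of edge (i, j) is taken here as |Gamma_i*Gamma_j|/(2*pi*r_ij),
    an assumed interaction magnitude (illustrative convention)."""
    n = len(positions)
    weights = {}
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(positions[i], positions[j])
            weights[(i, j)] = abs(strengths[i] * strengths[j]) / (2 * math.pi * r)
    return weights

# three vortices with circulations 1, -1 and 2 -> complete graph with 3 edges
w = vortex_graph([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], [1.0, -1.0, 2.0])
```

Spectral sparsification would then prune and reweight these edges while preserving each node's total interaction strength and the graph's spectral properties, which is what lets the sparse model retain the dominant dynamics.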
Thermophotonic heat pump—a theoretical model and numerical simulations
Oksanen, Jani; Tulkki, Jukka
2010-05-01
We have recently proposed a solid state heat pump based on photon mediated heat transfer between two large-area light emitting diodes coupled by the electromagnetic field and enclosed in a semiconductor structure with a nearly homogeneous refractive index. Ideally the thermophotonic heat pump (THP) allows heat transfer at Carnot efficiency, but in reality there are several factors that limit the efficiency. The efficient operation of the THP is based on the following construction factors and operational characteristics: (1) broad area semiconductor diodes to enable operation at optimal carrier density and high efficiency, (2) recycling of the energy of the emitted photons, (3) elimination of photon extraction losses by integrating the emitting and the absorbing diodes within a single semiconductor structure, and (4) elimination of reverse thermal conduction by a nanometer scale vacuum layer between the diodes. In this paper we develop a theoretical model for the THP and study the fundamental physical limitations and potential of the concept. The results show that even when the most important losses of the THPs are accounted for, the THP has the potential to outperform thermoelectric coolers, especially for heat transfer across large temperature differences, and possibly even to compete with conventional small-scale compressor-based heat pumps.
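The Carnot limit invoked above as the ideal case is simple to state: for cooling, the coefficient of performance is bounded by T_cold/(T_hot − T_cold) with temperatures in kelvin. A quick sketch (the temperatures are illustrative, not from the paper):

```python
def carnot_cop(t_cold_c, t_hot_c):
    """Carnot coefficient of performance for cooling, Tc/(Th - Tc),
    with inputs in degrees Celsius converted to kelvin."""
    tc, th = t_cold_c + 273.15, t_hot_c + 273.15
    return tc / (th - tc)

# pumping heat from 5 degC up to 35 degC: COP bound of about 9.3
cop = carnot_cop(5.0, 35.0)
```

The bound falls quickly as the temperature lift grows, which is why the paper's comparison against thermoelectric coolers focuses on large temperature differences, where real devices fall far short of this ideal.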
A Game-Theoretic Model of Marketing Skin Whiteners.
Mendoza, Roger Lee
2015-01-01
Empirical studies consistently find that people in less developed countries tend to regard light or "white" skin, particularly among women, as more desirable or superior. This is a study about the marketing of skin whiteners in these countries, where over 80 percent of users are typically women. It proceeds from the following premises: a) Purely market or policy-oriented approaches toward the risks and harms of skin whitening are cost-inefficient; b) Psychosocial and informational factors breed uninformed and risky consumer choices that favor toxic skin whiteners; and c) Proliferation of toxic whiteners in a competitive buyer's market raises critical supplier accountability issues. Is intentional tort a rational outcome of uncooperative game equilibria? Can voluntary cooperation nonetheless evolve between buyers and sellers of skin whiteners? These twin questions are key to addressing the central paradox in this study: A robust and expanding buyer's market, where cheap whitening products abound at a high risk to personal and societal health and safety. Game-theoretic modeling of two-player and n-player strategic interactions is proposed in this study for both its explanatory and predictive value. Therein also lie its practical contributions to the economic literature on skin whitening.
Directory of Open Access Journals (Sweden)
Kahlil Baker
2017-09-01
Academic research on smallholders’ forestland-use decisions is regularly addressed in different streams of literature using different theoretical constructs that are independently incomplete. In this article, we propose a theoretical construct for modelling smallholders’ forestland-use decisions intended to serve in the guidance and operationalization of future models for quantitative analysis. Our construct is inspired by the sub-disciplines of forestry and agricultural economics with a crosscutting theme of how transaction costs drive separability between consumption and production decisions. Our results help explain why exogenous variables proposed in the existing literature are insufficient at explaining smallholders’ forestland-use decisions, and provide theoretical context for endogenizing characteristics of the household, farm and landscape. Smallholders’ forestland-use decisions are best understood in an agricultural context of competing uses for household assets and interdependent consumption and production decisions. Forest production strategies range from natural regeneration to intensive management of the forest resource to jointly produce market and non-market values. Due to transaction costs, decision prices are best represented by shadow prices as opposed to market prices. Shadow prices are shaped by endogenous smallholder-specific preferences for leisure, non-market values, time, risk, and uncertainty. Our proposed construct is intended to provide a theoretical basis to assist modellers in the selection of variables for quantitative analysis.
Construction and Evaluation of the Theoretical Model of Citrus Cooperative Organization
Institute of Scientific and Technical Information of China (English)
2010-01-01
Based on a general overview of cooperative economic organizations in the citrus industry at home and abroad, a theoretical model of the modernization, industrialization and marketization of the Citrus Cooperative Organization is established. After selecting indices such as the scale of production, the scale of management, the rate of encouraged farmers, and market competitiveness, a quantitative evaluation index system for modernization, industrialization and marketization is established. The development of the Citrus Cooperative Organization is then divided into three stages: primary, intermediate and senior. After evaluating the modernization, industrialization and marketization of the citrus industry in the United States, Spain and South Africa, it is pointed out that the Citrus Cooperative Organization in China is at present at the primary stage. Finally, the policy direction for the development of the Citrus Cooperative Organization in China is pointed out.
Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation
Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina
2017-01-01
…translate visual stimuli into short-latency saccades, possibly moving the stimuli into visual detection regions. The retina-SC-extrastriate circuit is related to restitutive effects: visual stimuli can directly elicit visual detection with no need for eye movements. Model predictions and assumptions are critically discussed in view of existing behavioral and neurophysiological data, forecasting that other oculomotor compensatory mechanisms, beyond short-latency saccades, are likely involved, and stimulating future experimental and theoretical investigations. PMID:29326578
Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation
Directory of Open Access Journals (Sweden)
Elisa Magosso
2017-12-01
…circuit can translate visual stimuli into short-latency saccades, possibly moving the stimuli into visual detection regions. The retina-SC-extrastriate circuit is related to restitutive effects: visual stimuli can directly elicit visual detection with no need for eye movements. Model predictions and assumptions are critically discussed in view of existing behavioral and neurophysiological data, forecasting that other oculomotor compensatory mechanisms, beyond short-latency saccades, are likely involved, and stimulating future experimental and theoretical investigations.
Generalized PSF modeling for optimized quantitation in PET imaging
Ashrafinia, Saeed; Mohy-ud-Din, Hassan; Karakatsanis, Nicolas A.; Jha, Abhinav K.; Casey, Michael E.; Kadrmas, Dan J.; Rahmim, Arman
2017-06-01
Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF…
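A one-dimensional cartoon of the underlying resolution-degradation problem: convolving a tumour uptake profile with a Gaussian PSF lowers the peak, and the contrast recovery coefficient (CRC) quantifies the loss. The profile width, σ, and uptake values below are arbitrary; real PET PSFs and the OS-EM reconstruction are far richer than this sketch.

```python
import math

def gaussian_kernel(sigma, radius):
    """Discrete, normalized 1-D Gaussian kernel."""
    ks = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(ks)
    return [k / s for k in ks]

def blur(signal, kernel):
    """Direct 1-D convolution, truncating the kernel at the signal edges."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, wgt in enumerate(kernel):
            j = i + k - r
            if 0 <= j < len(signal):
                acc += wgt * signal[j]
        out.append(acc)
    return out

# toy 1-D "tumour" with uptake 4 on a background of 1
profile = [1.0] * 20 + [4.0] * 5 + [1.0] * 20
blurred = blur(profile, gaussian_kernel(sigma=2.0, radius=6))
crc = (max(blurred) - 1.0) / (4.0 - 1.0)  # contrast recovery coefficient
```

The blurred peak falls below the true uptake (CRC < 1), and the smaller the tumour relative to the PSF width the worse the loss, which is why PSF-aware reconstruction matters most for small lesions.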
Generalized PSF modeling for optimized quantitation in PET imaging.
Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman
2017-06-21
Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF…
Refining the quantitative pathway of the Pathways to Mathematics model.
Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda
2015-03-01
In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task. Copyright © 2014 Elsevier Inc. All rights reserved.
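The step of combining the three quantitative measures by principal components analysis can be sketched in plain Python: standardize the columns and project onto the first principal component, found here by power iteration on the covariance matrix. The toy data matrix is illustrative, not the study's.

```python
import math

def first_pc_scores(data):
    """Scores on the first principal component of column-standardized data
    (rows = children, columns = measures such as subitizing, counting and
    magnitude comparison), via power iteration on the covariance matrix."""
    n, p = len(data), len(data[0])
    cols = list(zip(*data))
    means = [sum(c) / n for c in cols]
    sds = [math.sqrt(sum((x - m) ** 2 for x in c) / (n - 1))
           for c, m in zip(cols, means)]
    z = [[(row[j] - means[j]) / sds[j] for j in range(p)] for row in data]
    cov = [[sum(z[i][a] * z[i][b] for i in range(n)) / (n - 1)
            for b in range(p)] for a in range(p)]
    v = [1.0] * p  # power iteration for the dominant eigenvector
    for _ in range(200):
        w = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return [sum(zi[j] * v[j] for j in range(p)) for zi in z]

# perfectly correlated toy measures: the first PC simply orders the children
data = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
scores = first_pc_scores(data)
```

The resulting composite score is what would then enter the regression alongside the linguistic and working memory pathway measures.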
THEORETICAL MODELLING OF THERMAL CONDUCTIVITY OF DEEP EUTECTIC SOLVENT BASED NANOFLUID
Directory of Open Access Journals (Sweden)
OSAMA M.
2017-03-01
In this research, the thermal conductivities of graphene oxide nano-particles (GO) dispersed in deep eutectic solvents (DESs) composed of ethylene glycol (EG) as a hydrogen bond donor (HBD) and methyl tri-phenyl phosphonium bromide (MTPB) as a salt, at weight fractions of 0.01%, 0.02% and 0.05%, were studied and quantitatively analysed. The molar ratios of DES (HBD:salt) used in this study are 3:1 and 5:1. The thermal conductivity data of the nano-fluid samples were measured at temperatures of 25-70 °C and the results were compared with theoretical models. The Rashmi and Kumar models showed conflicting prediction performance: while Rashmi's model can predict thermal conductivity with errors as low as 0.1%, the error of Kumar's model varied from 3% to 55%. Thus, in this work, a simple empirical modification to Kumar's model is presented which improves prediction accuracy compared to that of Rashmi's model.
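Since the specific forms of the Rashmi and Kumar correlations are not reproduced here, the sketch below uses the classical Maxwell effective-medium model, k_eff = k_f·(k_p + 2k_f + 2φ(k_p − k_f))/(k_p + 2k_f − φ(k_p − k_f)), the usual baseline against which nanofluid conductivity models are compared. The property values are illustrative placeholders, not the paper's measurements.

```python
def maxwell_k_eff(k_f, k_p, phi):
    """Classical Maxwell model for the effective thermal conductivity of a
    dilute suspension: k_f = base-fluid conductivity [W/m/K], k_p = particle
    conductivity [W/m/K], phi = particle volume fraction (dimensionless)."""
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

# illustrative values: a DES-like base fluid with highly conductive particles
k = maxwell_k_eff(k_f=0.25, k_p=5000.0, phi=0.0005)
```

Even a tiny loading of highly conductive particles nudges k_eff above the base fluid value; experimentally observed enhancements in graphene-based nanofluids often exceed this dilute-limit prediction, which motivates the empirical corrections discussed in the abstract.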
Leader Attributions and Leader Behavior. First Stage Testing of Theoretical Model
1981-08-01
Technical Report 522. Leader Attributions and Leader Behavior: First Stage Testing of Theoretical Model. Terence R. Mitchell, University of Washington.
Experimental and theoretical study of magnetohydrodynamic ship models.
Directory of Open Access Journals (Sweden)
David Cébron
Magnetohydrodynamic (MHD) ships represent a clear demonstration of the Lorentz force in fluids, which explains the number of student practicals and exercises described on the web. However, the related literature is rather specific and no complete comparison between theory and typical small-scale experiments is currently available. This work provides, in a self-consistent framework, a detailed presentation of the relevant theoretical equations for small MHD ships and experimental measurements for future benchmarks. Theoretical results from the literature are adapted to these simple battery/magnet-powered ships moving on salt water. Comparisons between theory and experiments are performed to validate each theoretical step, such as the Tafel and Kohlrausch laws, or the predicted ship speed. A successful agreement is obtained without any adjustable parameter. Finally, based on these results, an optimal design is deduced from the theory. This work therefore provides a solid theoretical and experimental ground for small-scale MHD ships, presenting in detail several approximations and how they affect the boat efficiency. Moreover, the theory is general enough to be adapted to other contexts, such as large-scale ships or industrial flow measurement techniques.
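One of the simplest theoretical steps for such a boat is a quasi-static speed estimate obtained by balancing the Lorentz thrust F = I·L·B against quadratic hull drag. The hull parameters below are invented for illustration; the paper's full model also includes the electrochemical (Tafel, Kohlrausch) effects this sketch ignores.

```python
def mhd_ship_speed(current, electrode_len, b_field,
                   rho=1025.0, cd=0.5, area=1e-3):
    """Steady speed where the Lorentz thrust F = I*L*B balances quadratic
    drag 0.5*rho*Cd*A*v^2. rho = seawater density [kg/m^3]; Cd and the
    wetted cross-section A are made-up hull parameters (toy estimate)."""
    thrust = current * electrode_len * b_field  # [N]
    return (2.0 * thrust / (rho * cd * area)) ** 0.5

# 1 A through a 5 cm electrode in a 0.3 T field: thrust of 15 mN,
# giving a terminal speed on the order of 0.2 m/s for this toy hull
v = mhd_ship_speed(current=1.0, electrode_len=0.05, b_field=0.3)
```

The square-root dependence on thrust explains why small MHD boats remain slow: quadrupling the drive current only doubles the speed, before electrochemical losses are even counted.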
Experimental and theoretical study of magnetohydrodynamic ship models.
Cébron, David; Viroulet, Sylvain; Vidal, Jérémie; Masson, Jean-Paul; Viroulet, Philippe
2017-01-01
Magnetohydrodynamic (MHD) ships represent a clear demonstration of the Lorentz force in fluids, which explains the number of student practicals and exercises described on the web. However, the related literature is rather specific and no complete comparison between theory and typical small-scale experiments is currently available. This work provides, in a self-consistent framework, a detailed presentation of the relevant theoretical equations for small MHD ships and experimental measurements for future benchmarks. Theoretical results from the literature are adapted to these simple battery/magnet-powered ships moving on salt water. Comparisons between theory and experiments are performed to validate each theoretical step, such as the Tafel and Kohlrausch laws, or the predicted ship speed. A successful agreement is obtained without any adjustable parameter. Finally, based on these results, an optimal design is deduced from the theory. This work therefore provides a solid theoretical and experimental ground for small-scale MHD ships, presenting in detail several approximations and how they affect the boat efficiency. Moreover, the theory is general enough to be adapted to other contexts, such as large-scale ships or industrial flow measurement techniques.
K. Sridhar Moorthy's Theoretical Modelling in Marketing - A Review
African Journals Online (AJOL)
Toshiba
…observable commissions are not needed since the effort can be observed or traced. Even though this is a theoretical exposition, it does play out in real life. Many marketing firms in Nigeria today place their salespeople on commission because what the salespersons do in the field cannot be observed, even with the use of …
Theoretical model of the single spin-echo relaxation time for spherical magnetic perturbers.
Kurz, Felix T; Kampf, Thomas; Heiland, Sabine; Bendszus, Martin; Schlemmer, Heinz-Peter; Ziener, Christian H
2014-05-01
Magnetically labeled cells and tissue iron deposits provide qualitative means to detect and monitor cardiovascular and cerebrovascular diseases with magnetic resonance imaging. However, to quantitatively examine the extent of pathological micromorphological changes, detailed knowledge about microstructural parameters and relaxation times is required. The complex geometrical arrangement of spherical magnetic perturbers in an external magnetic field is considered. They create a magnetic dipole field, whose corresponding spin-echo formation is investigated by analyzing the diffusion process in the dephasing volume. Quantitative predictions of the present analysis are compared with experimental data and empirical models. Single spin-echo relaxation times can be characterized by morphological parameters such as magnetic particle concentration and size, as well as the tissue diffusion coefficient and local magnetic susceptibility properties. As expected, no static dephasing plateau forms, in contrast to the gradient-echo relaxation time. Instead, the relaxation rate drops for large particle sizes and exhibits a prominent maximum at intermediate sizes. These findings agree well with experimental data and previous theoretical results. The results obtained for the single spin-echo relaxation time make it possible to accurately quantify pathological processes in neurodegenerative disease and the migration dynamics of magnetically labeled cells with the help of magnetic resonance imaging. Copyright © 2014 Wiley Periodicals, Inc.
Tachev, K D; Danov, K D; Kralchevsky, P A
2004-03-15
This study represents an attempt to achieve a better understanding of the stomatocyte-echinocyte transition in the shape of red blood cells. We determined experimentally the index of cell shape at various ionic strengths and osmolarities for native and trypsin-treated human erythrocytes. For every given composition of the outer phase, we calculated the ionic strength in the cells and the transmembrane electric potential using a known theoretical model. Next, we described theoretically the electric double layers formed on both sides of the cell membrane, and derived expressions for the tensions of the two membrane leaflets. Taking into account that the cell-shape index depends on the tension difference between the two leaflets, we fitted the experimental data with the constructed physicochemical model. The model, which agrees well with the experiment, indicates that the tension difference between the two leaflets is governed by the different adsorptions of counterions at the two membrane surfaces, rather than by the direct contribution of the electric double layers to the membrane tension. Thus, with the rise of the ionic strength, the counterion adsorption increases more strongly at the outer leaflet, whose stretching surface pressure becomes greater and whose area expands relative to that of the inner leaflet. Hence, there is no contradiction between the bilayer-couple hypothesis and the electric double layer theory, if the latter is upgraded to account for the effect of counterion adsorption on the membrane tension. The developed quantitative model can be applied to predict the shape index of cells upon a stomatocyte-discocyte-echinocyte transformation at varying compositions of the outer medium.
Theoretical model of intravascular paramagnetic tracers effect on tissue relaxation
DEFF Research Database (Denmark)
Kjølby, Birgitte Fuglsang; Østergaard, Leif; Kiselev, Valerij G
2006-01-01
with bulk blood. The enhancement of relaxation in tissue is due to the contrast in magnetic susceptibility between blood vessels and parenchyma induced by the presence of paramagnetic tracer. Beyond the perfusion measurements, the results can be applied to quantitation of functional MRI and to vessel size...
Determining Student Competency in Field Placements: An Emerging Theoretical Model
Directory of Open Access Journals (Sweden)
Twyla L. Salm
2016-06-01
Full Text Available This paper describes a qualitative case study that explores how twenty-three field advisors, representing three human service professions including education, nursing, and social work, experience the process of assessment with students who are struggling to meet minimum competencies in field placements. Five themes emerged from the analysis of qualitative interviews. The field advisors' primary concern was the level of professional competency achieved by practicum students. Related to competency were themes concerning the field advisor's role in being accountable and protecting the reputation of his/her profession as well as the reputation of the professional program affiliated with the practicum student's professional education. The final theme – teacher-student relationship – emerged from the data both as a stand-alone theme and as a global or umbrella theme. As an umbrella theme, teacher-student relationship permeated each of the other themes as the participants interpreted their experiences of the assessment process through the mentor relationships. A theoretical model was derived from these findings and a description of the model is presented.
Advances in heterogeneous ice nucleation research: Theoretical modeling and measurements
Beydoun, Hassan
In the atmosphere, cloud droplets can remain in a supercooled liquid phase at temperatures as low as -40 °C. Above this temperature, cloud droplets freeze via heterogeneous ice nucleation, whereby a rare and poorly understood subset of atmospheric particles catalyzes the ice phase transition. As the phase state of clouds is critical in determining their radiative properties and lifetime, deficiencies in our understanding of heterogeneous ice nucleation pose a large uncertainty on our efforts to predict human-induced global climate change. Experimental challenges in properly simulating particle-induced freezing processes under atmospherically relevant conditions have largely contributed to the absence of a well-established model and parameterizations that accurately predict heterogeneous ice nucleation. Moreover, the few reliable measurement techniques available yield results that resist interpretation within a single consistent theoretical or empirical framework, which adds layers of uncertainty when attempting to extrapolate useful information about ice nucleation for use in atmospheric cloud models. In this dissertation a new framework for describing heterogeneous ice nucleation is developed. Starting from classical nucleation theory, the surface of an ice nucleating particle is treated as a continuum of heterogeneous ice nucleating activity, and a particle-specific distribution g of this activity is derived. It is hypothesized that an individual particle species exhibits a critical surface area. Above this critical area the ice nucleating activity of a particle species can be described by a single distribution g, while below it g expresses itself externally, resulting in particle-to-particle variability in ice nucleating activity. The framework is supported by cold plate droplet freezing measurements for dust and biological particles in which the total surface area of particle material available is varied. Freezing spectra above a certain surface area
Rodríguez, J; Clemente, G; Sanjuán, N; Bon, J
2014-01-01
The drying kinetics of thyme was analyzed considering different conditions: air temperatures of between 40°C and 70°C, and an air velocity of 1 m/s. A theoretical diffusion model and eight different empirical models were fitted to the experimental data. From the theoretical model application, the effective diffusivity per unit area of the thyme was estimated (between 3.68 × 10⁻⁵ and 2.12 × 10⁻⁴ s⁻¹). The temperature dependence of the effective diffusivity was described by the Arrhenius relationship with an activation energy of 49.42 kJ/mol. For each empirical model, the dependence of its parameters on the drying temperature was also determined, yielding equations that allow the evolution of the moisture content to be estimated at any temperature in the established range. Furthermore, artificial neural networks were developed and compared with the theoretical and empirical models using the percentage of relative errors and the explained variance. The artificial neural networks were found to be more accurate predictors of moisture evolution, with VAR ≥ 99.3% and ER ≤ 8.7%.
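The reported Arrhenius dependence can be checked numerically: D_eff(T) = D0·exp(-Ea/(R·T)) with Ea = 49.42 kJ/mol from the abstract. The pre-exponential D0 below is a hypothetical value, calibrated so that D_eff at 40 °C matches the low end of the reported diffusivity range.

```python
import math

R = 8.314     # J/(mol K), gas constant
EA = 49.42e3  # J/mol, activation energy reported in the abstract

def d_eff(T_celsius, d0):
    """Arrhenius law D_eff = D0 * exp(-Ea / (R*T))."""
    T = T_celsius + 273.15
    return d0 * math.exp(-EA / (R * T))

# Hypothetical D0, calibrated so D_eff(40 °C) = 3.68e-5 s^-1 (reported low end).
d0 = 3.68e-5 / math.exp(-EA / (R * 313.15))
ratio = d_eff(70.0, d0) / d_eff(40.0, d0)
print(f"D_eff(70 C)/D_eff(40 C) = {ratio:.2f}")
```

The predicted ratio of about 5.3 over the 40-70 °C range is broadly consistent with the spread of the reported values (2.12 × 10⁻⁴ / 3.68 × 10⁻⁵ ≈ 5.8), a useful sanity check on the fitted activation energy.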
Dziedziewicz, Dorota; Karwowski, Maciej
2015-01-01
This paper presents a new theoretical model of creative imagination and its applications in early education. The model sees creative imagination as composed of three inter-related components: vividness of images, their originality, and the level of transformation of imageries. We explore the theoretical and practical consequences of this new…
Zhumasheva, Anara; Zhumabaeva, Zaida; Sakenov, Janat; Vedilina, Yelena; Zhaxylykova, Nuriya; Sekenova, Balkumis
2016-01-01
The current study focuses on the research topic of creating a theoretical model of development of information competence among students enrolled in elective courses. In order to examine specific features of the theoretical model of development of information competence among students enrolled in elective courses, we performed an analysis of…
Semileptonic (Λb → Λc e ν) decay in a field theoretic quark model
Indian Academy of Sciences (India)
in the framework of a nonrelativistic field theoretic quark model where four component quark field operators along ... view of the earlier success of the present field-theoretic quark model [2,3], the semileptonic process Λb .... theless, lepton center of mass (CM) frame of reference is worth considering in this regard for the sake ...
Models and methods for quantitative analysis of surface-enhanced Raman spectra.
Li, Shuo; Nyagilo, James O; Dave, Digant P; Gao, Jean
2014-03-01
The quantitative analysis of surface-enhanced Raman spectra using scattering nanoparticles has shown potential and promising applications in in vivo molecular imaging. Diverse approaches have been used for the quantitative analysis of Raman spectra information, which can be categorized as direct classical least squares models, full-spectrum multivariate calibration models, selected multivariate calibration models, and latent variable regression (LVR) models. However, the working principles of these methods in the Raman spectra application remain poorly understood, and a clear picture of the overall performance of each model is missing. Based on the characteristics of Raman spectra, in this paper we first provide the theoretical foundation of the aforementioned commonly used models and show why the LVR models are more suitable for quantitative analysis of Raman spectra. Then, we demonstrate the fundamental connections and differences between different LVR methods, such as principal component regression, reduced-rank regression, partial least squares regression (PLSR), canonical correlation regression, and robust canonical analysis, by comparing their objective functions and constraints. We further show that PLSR is essentially a blend of multivariate calibration and feature extraction that relates concentrations of nanotags to spectrum intensity. These features (a.k.a. latent variables) serve two purposes: the best representation of the predictor matrix and correlation with the response matrix. These illustrations give a new understanding of traditional PLSR and explain why PLSR exceeds other methods in the quantitative analysis of Raman spectra. In the end, all the methods are tested on Raman spectra datasets with different evaluation criteria to evaluate their performance.
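The simplest family the abstract names, direct classical least squares (CLS), can be sketched in a few lines: a measured spectrum is modeled as a concentration-weighted sum of known reference spectra, y = S·c, and c is recovered from the normal equations. The reference spectra and concentrations below are synthetic toy data, not from the paper.

```python
# Minimal CLS sketch (synthetic data): solve (S^T S) c = S^T y for two components.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

# Two synthetic reference spectra over 4 "wavenumbers" (columns = components).
S = [[1.0, 0.0],
     [0.8, 0.2],
     [0.3, 0.7],
     [0.0, 1.0]]
c_true = [2.0, 0.5]
y = [sum(S[i][j] * c_true[j] for j in range(2)) for i in range(4)]

St = transpose(S)
StS = matmul(St, S)
Sty = [sum(St[k][i] * y[i] for i in range(4)) for k in range(2)]
c_est = solve2(StS, Sty)
print(c_est)  # recovers [2.0, 0.5] on noise-free data
```

CLS requires every contributing reference spectrum to be known in advance; the LVR methods the paper favors (e.g. PLSR) instead learn latent spectral features from calibration data, which is why they cope better with unmodeled background.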
Theoretical and practical aspects of modelling activated sludge processes
Meijer, S.C.F.
2004-01-01
This thesis describes the full-scale validation and calibration of an integrated metabolic activated sludge model for biological phosphorus removal. Chapters 1 and 2 describe the metabolic model, chapters 3 to 6 test it, and chapters 7 and 8 put it into practice.
Guidelines for a graph-theoretic implementation of structural equation modeling
Grace, James B.; Schoolmaster, Donald R.; Guntenspergen, Glenn R.; Little, Amanda M.; Mitchell, Brian R.; Miller, Kathryn M.; Schweiger, E. William
2012-01-01
Structural equation modeling (SEM) is increasingly being chosen by researchers as a framework for gaining scientific insights from the quantitative analyses of data. New ideas and methods emerging from the study of causality, influences from the field of graphical modeling, and advances in statistics are expanding the rigor, capability, and even purpose of SEM. Guidelines for implementing the expanded capabilities of SEM are currently lacking. In this paper we describe new developments in SEM that we believe constitute a third-generation of the methodology. Most characteristic of this new approach is the generalization of the structural equation model as a causal graph. In this generalization, analyses are based on graph theoretic principles rather than analyses of matrices. Also, new devices such as metamodels and causal diagrams, as well as an increased emphasis on queries and probabilistic reasoning, are now included. Estimation under a graph theory framework permits the use of Bayesian or likelihood methods. The guidelines presented start from a declaration of the goals of the analysis. We then discuss how theory frames the modeling process, requirements for causal interpretation, model specification choices, selection of estimation method, model evaluation options, and use of queries, both to summarize retrospective results and for prospective analyses. The illustrative example presented involves monitoring data from wetlands on Mount Desert Island, home of Acadia National Park. Our presentation walks through the decision process involved in developing and evaluating models, as well as drawing inferences from the resulting prediction equations. In addition to evaluating hypotheses about the connections between human activities and biotic responses, we illustrate how the structural equation (SE) model can be queried to understand how interventions might take advantage of an environmental threshold to limit Typha invasions. The guidelines presented provide for
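The graph-based view of SEM can be illustrated with a toy linear chain x → m → y (not the authors' wetlands model): data are simulated from structural equations, and each direct path coefficient is recovered by regressing a node on its parent in the graph. The coefficients 0.8 and 1.5 are arbitrary.

```python
import random

random.seed(0)

def regress(xs, ys):
    """OLS slope of y on x through the origin (variables are zero-mean here)."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return sxy / sxx

# Simulate the causal graph x -> m -> y from its structural equations.
n = 20000
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.8 * xi + random.gauss(0, 0.5) for xi in x]
y = [1.5 * mi + random.gauss(0, 0.5) for mi in m]

b_xm = regress(x, m)  # estimate of the x -> m path (true value 0.8)
b_my = regress(m, y)  # estimate of the m -> y path (true value 1.5)
print(f"x->m = {b_xm:.2f}, m->y = {b_my:.2f}")
```

Estimation node-by-node over parent sets is exactly what a graph-theoretic formulation licenses; querying the fitted graph (e.g. the implied total effect of x on y, here the product of the two paths) then replaces matrix-level analysis.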
Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.
Moray, Neville; Groeger, John; Stanton, Neville
2017-02-01
This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement, and provide quantitative guidance for the design of safer railway systems' speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.
Deferred Action: Theoretical model of process architecture design for emergent business processes
Patel, N.V.
2007-01-01
E-business modelling and e-business systems development assume fixed company resources, structures, and business processes. Empirical and theoretical evidence suggests that company resources and structures are emergent rather than fixed. Planning business activity in emergent contexts requires flexible e-business models based on better management theories and models. This paper builds and proposes a theoretical model of e-business systems capable of catering for emergent factors that affect bu...
[Nursing practice based on theoretical models: a qualitative study of nurses' perception].
Amaducci, Giovanna; Iemmi, Marina; Prandi, Marzia; Saffioti, Angelina; Carpanoni, Marika; Mecugni, Daniela
2013-01-01
Many faculty argue that theory and theorizing are closely related to clinical practice, that disciplinary knowledge grows most relevantly from the specific care context in which it takes place, and, moreover, that knowledge does not proceed only by the application of general principles of the grand theories to specific cases. Every nurse, in fact, has a mental model, of which he or she may or may not be aware, that motivates and substantiates every action and career choice. The study describes what the nursing theoretical model is, along with the mental model and the tacit knowledge underlying it. It identifies the explicit theoretical model of the professional group representing the nursing participants, and aspects of continuity with the theoretical model proposed by the degree course in Nursing. Methods: Four focus groups were held, attended by a total of 22 nurses representing almost every unit of the Reggio Emilia Hospital. We argue that the theoretical nursing model of each professional group is the result of tacit knowledge, which helps to define the personal mental model, and of the explicit theoretical model, whose underlying theoretical content is learned, applied consciously, and fed back to and from nursing practice. Reasoning on the use of theory in practice has allowed us to give visibility to an explicit theoretical nursing model authentically oriented to the needs of the person, in all its complexity, in specific contexts.
Proposal of a theoretical model for the practical nurse
Directory of Open Access Journals (Sweden)
Dolores Abril Sabater
2010-01-01
Full Text Available AIM: To determine which model of nursing is proposed by care professionals and the reasons for their choice. METHOD: Cross-sectional, descriptive study design. The main variable: nursing models and theories. Secondary variables collected: age, gender, years of work experience, nursing model of basic training, and related courses. We used a self-developed, anonymous questionnaire administered between April and May 2006 (non-random sample). RESULTS: Of the 546 nurses invited, 205 answered, a 38% response rate. Virginia Henderson's was the most frequently selected model (33%); however, 42% left the question blank, and 12% indicated that they wanted to work under the guidance of a model. Reasons for selecting a specific model included: knowledge of the model from their training, its standardization in other centres, the characteristics of the model itself, and identification with its philosophy. Reasons for not choosing a model were lack of knowledge, lack of time, and doubts about its usefulness. CONCLUSIONS: The model most often chosen for daily work was Virginia Henderson's, and knowledge of a model is the main reason for its selection. Professionals who choose not to use a model in their practice point to a lack of resources, in addition to a lack of knowledge on this topic. To advance the nursing profession it is necessary for nurses to reflect broadly on the abstract concepts of the theory in our context.
Phase-Field Formulation for Quantitative Modeling of Alloy Solidification
Energy Technology Data Exchange (ETDEWEB)
Karma, Alain
2001-09-10
A phase-field formulation is introduced to simulate quantitatively microstructural pattern formation in alloys. The thin-interface limit of this formulation yields a much less stringent restriction on the choice of interface thickness than previous formulations and permits one to eliminate nonequilibrium effects at the interface. Dendrite growth simulations with vanishing solid diffusivity show that both the interface evolution and the solute profile in the solid are accurately modeled by this approach.
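The flavour of a phase-field computation can be shown with a generic 1-D Allen-Cahn-type relaxation, which is a much simpler cousin of Karma's quantitative alloy formulation (no solute field, no thin-interface asymptotics): an order parameter phi evolves by dphi/dt = W²·∇²phi + phi − phi³, smoothing a sharp step into a diffuse interface between the phases phi = ±1. Grid and parameters are illustrative.

```python
# Generic 1-D phase-field (Allen-Cahn) relaxation sketch with explicit Euler
# time stepping and periodic boundaries. Not the paper's alloy model.

def relax(phi, w2=1.0, dx=0.5, dt=0.05, steps=2000):
    n = len(phi)
    for _ in range(steps):
        lap = [(phi[(i + 1) % n] - 2 * phi[i] + phi[(i - 1) % n]) / dx**2
               for i in range(n)]
        # dphi/dt = W^2 * laplacian + phi - phi^3 (double-well reaction term)
        phi = [p + dt * (w2 * l + p - p**3) for p, l in zip(phi, lap)]
    return phi

# Step initial condition: phase -1 on the left half, +1 on the right.
n = 64
phi = [-1.0 if i < n // 2 else 1.0 for i in range(n)]
phi = relax(phi)
print(min(phi), max(phi))  # bulk values stay near -1 and +1; the step smooths
```

The interface thickness here scales with W; the paper's thin-interface limit is precisely about choosing that numerical thickness much larger than physical while keeping the predicted dendrite dynamics quantitatively correct.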
Theoretical models for NO decomposition in Cu-exchanged zeolites
Tsekov, R
2015-01-01
A unified description of the catalytic effect of Cu-exchanged zeolites is proposed for the decomposition of NO. A general expression for the rate constant of NO decomposition is obtained by assuming that the rate-determining step consists of the transfer of a single atom associated with breaking of the N-O bond. The analysis is performed on the basis of the generalized Langevin equation and takes into account both the potential interactions in the system and the memory effects due to the zeolite vibrations. Two different mechanisms, corresponding to monomolecular and bimolecular NO decomposition, are discussed. The catalytic effect in the monomolecular mechanism is related to both the Cu+ ions and zeolite O-vacancies, while in the case of the bimolecular mechanism the zeolite contributes through dissipation only. The comparison of the theoretically calculated rate constants with experimental results reveals additional information about the geometric and energetic characteristics of the active centers and con...
Theoretical investigation of some thermal effects in turbulence modeling
Energy Technology Data Exchange (ETDEWEB)
Mathelin, Lionel [LIMSI-CNRS, Orsay (France); Bataille, Francoise [PROMES-CNRS, Perpignan (France); Ye, Zhou [Lawrence Livermore National Lab., Livermore, CA (United States)
2008-11-15
Fluid compressibility effects arising from thermal rather than dynamical aspects are theoretically investigated in the framework of turbulent flows. The Mach number is considered low, so that it does not induce significant compressibility effects; those occurring here are due to a very high thermal gradient within the flowfield. With the use of the Two-Scale Direct Interaction Approximation approach, essential turbulent correlations are derived in a one-point, one-time framework. In the low velocity gradient limit, they are shown to depend directly on the temperature gradient, assumed large. The impact of thermal effects on the transport equations of the turbulent kinetic energy and dissipation rate is also investigated, together with the transport equations for both the density and the internal energy variance.
A graph theoretical perspective of a drug abuse epidemic model
Nyabadza, F.; Mukwembi, S.; Rodrigues, B. G.
2011-05-01
A drug use epidemic can be represented by a finite number of states and transition rules that govern the dynamics of drug use in each discrete time step. This paper investigates the spread of drug use in a community where some users are in treatment and others are not in treatment, citing South Africa as an example. In our analysis, we consider the neighbourhood prevalence of each individual, i.e., the proportion of the individual’s drug user contacts who are not in treatment amongst all of his or her contacts. We introduce parameters α∗, β∗ and γ∗, depending on the neighbourhood prevalence, which govern the spread of drug use. We examine how changes in α∗, β∗ and γ∗ affect the system dynamics. Simulations presented support the theoretical results.
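The discrete-time, neighbourhood-prevalence setup can be sketched as a small network simulation. The transition probabilities below (stand-ins for the roles of α∗, β∗ and γ∗) and the ring network are hypothetical illustration values, not the paper's parameters: susceptibles start using with probability proportional to the fraction of their contacts who are untreated users, users enter treatment, and treated individuals may relapse, again prevalence-dependent.

```python
import random

random.seed(1)

def neighbourhood_prevalence(state, neighbours, i):
    """Fraction of i's contacts who are drug users not in treatment ('U')."""
    nbrs = neighbours[i]
    return sum(state[j] == 'U' for j in nbrs) / len(nbrs)

def step(state, neighbours, alpha=0.6, beta=0.1, gamma=0.05):
    new = state[:]
    for i, s in enumerate(state):
        p = neighbourhood_prevalence(state, neighbours, i)
        if s == 'S' and random.random() < alpha * p:
            new[i] = 'U'     # initiation, driven by untreated-user contacts
        elif s == 'U' and random.random() < beta:
            new[i] = 'T'     # entry into treatment
        elif s == 'T' and random.random() < gamma * p:
            new[i] = 'U'     # relapse, also prevalence-dependent
    return new

# A ring of 20 individuals, each linked to two neighbours; two initial users.
n = 20
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
state = ['U' if i < 2 else 'S' for i in range(n)]
for _ in range(30):
    state = step(state, neighbours)
print(state.count('U'), state.count('T'), state.count('S'))
```

Varying the three rates in such a simulation is the computational counterpart of the paper's analysis of how changes in α∗, β∗ and γ∗ affect the system dynamics.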
Asah, Stanley Tanyi
2008-12-01
The social-ecological system (SES) approach to natural resource management holds enormous promise towards achieving sustainability. Despite this promise, social-ecological interactions are complex and elusive; they require simplification to guide effective application of the SES approach. The complex, adaptive and place-specific nature of human-environment interactions impedes determination of state and trends in SES parameters of interest to managers and policy makers. Based on a rigorously developed systemic theoretical model, this paper integrates field observations, interviews, surveys, and latent variable modeling to illustrate the development of simplified and easily interpretable indicators of the state of, and trends in, relevant SES processes. Social-agricultural interactions in the Logone floodplain, in the Lake Chad basin, served as case study. This approach is found to generate simplified determinants of the state of SESs, easily communicable across the array of stakeholders common in human-environment interactions. The approach proves to be useful for monitoring SESs, guiding interventions, and assessing the effectiveness of interventions. It incorporates real time responses to biophysical change in understanding coarse scale processes within which finer scales are embedded. This paper emphasizes the importance of merging quantitative and qualitative methods for effective monitoring and assessment of SESs.
How trees allocate carbon for optimal growth: insight from a game-theoretic model.
Fu, Liyong; Sun, Lidan; Han, Hao; Jiang, Libo; Zhu, Sheng; Ye, Meixia; Tang, Shouzheng; Huang, Minren; Wu, Rongling
2017-02-01
How trees allocate photosynthetic products to primary height growth and secondary radial growth reflects their capacity to best use environmental resources. Despite substantial efforts to explore tree height-diameter relationship empirically and through theoretical modeling, our understanding of the biological mechanisms that govern this phenomenon is still limited. By thinking of stem woody biomass production as an ecological system of apical and lateral growth components, we implement game theory to model and discern how these two components cooperate symbiotically with each other or compete for resources to determine the size of a tree stem. This resulting allometry game theory is further embedded within a genetic mapping and association paradigm, allowing the genetic loci mediating the carbon allocation of stemwood growth to be characterized and mapped throughout the genome. Allometry game theory was validated by analyzing a mapping data of stem height and diameter growth over perennial seasons in a poplar tree. Several key quantitative trait loci were found to interpret the process and pattern of stemwood growth through regulating the ecological interactions of stem apical and lateral growth. The application of allometry game theory enables the prediction of the situations in which the cooperation, competition or altruism is an optimal decision of a tree to fully use the environmental resources it owns. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Hou, Chen; Amunugama, Kaushalya
2015-07-01
The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) cold and exercise stresses, and (3) manipulations of antioxidants. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
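The allometric scaling laws such a model builds on can be made concrete with the textbook Kleiber-type exponents (illustrative here, not the paper's fitted values): whole-body metabolic rate B ∝ M^(3/4), so mass-specific rate falls as M^(-1/4), which is one reason small animals "burn" faster and tend to live shorter lives.

```python
# Kleiber-type allometric scaling sketch; b0 is an illustrative constant
# (it cancels in the ratio below).

def metabolic_rate(mass_kg, b0=3.4):
    """Whole-body metabolic rate B = b0 * M^(3/4)."""
    return b0 * mass_kg ** 0.75

# Mass-specific rate of a 1000 kg animal relative to a 10 g one: (M1/M2)^(-1/4).
ratio = (metabolic_rate(1000.0) / 1000.0) / (metabolic_rate(0.01) / 0.01)
print(f"mass-specific rate, 1000 kg vs 10 g: {ratio:.4f}")
```

A factor of roughly 1/18 in mass-specific metabolism across five orders of magnitude in body mass is the kind of baseline that the paper's energy-tradeoff terms then modulate.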
The Power of a Good Idea: Quantitative Modeling of the Spread of Ideas from Epidemiological Models
Energy Technology Data Exchange (ETDEWEB)
Bettencourt, L. M. A. (LANL); Cintron-Arias, A. (Cornell University); Kaiser, D. I. (MIT); Castillo-Chavez, C. (Arizona State University)
2005-05-05
The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence, this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to estimate quantitatively population average parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. We estimate the effectiveness of adoption of the idea in the three communities and find values for parameters reflecting both intentional social organization and long lifetimes for the idea. These features are probably general characteristics of the spread of ideas, but not of common epidemics.
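The simplest of the paradigmatic epidemic models adapted in such work is the SIR system, reinterpreted for idea diffusion: "susceptible" scientists S, active "spreaders" I, and R who have stopped spreading. The parameter values below are illustrative, not the paper's fitted Feynman-diagram estimates.

```python
# SIR model integrated with explicit Euler steps (illustrative parameters).
# dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I

def sir_step(s, i, r, beta, gamma, dt):
    n = s + i + r
    ds = -beta * s * i / n
    dr = gamma * i
    return s + ds * dt, i + (-ds - dr) * dt, r + dr * dt

s, i, r = 999.0, 1.0, 0.0   # one initial "adopter" in a community of 1000
peak = 0.0
for _ in range(5000):       # integrate to t = 500 with dt = 0.1
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1, dt=0.1)
    peak = max(peak, i)
print(f"final adopters R = {r:.0f}, peak concurrent spreaders = {peak:.0f}")
```

With basic reproduction number R0 = beta/gamma = 3, roughly 94% of the community eventually adopts and about 30% are spreading at the peak; fitting beta and gamma to historical adoption curves is exactly the quantification step the paper performs.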
A Review on Quantitative Models for Sustainable Food Logistics Management
Directory of Open Access Journals (Sweden)
M. Soysal
2012-12-01
Full Text Available Over the last two decades, food logistics systems have seen the transition from a focus on traditional supply chain management to food supply chain management and, successively, to sustainable food supply chain management. The main aim of this study is to identify the key logistical aims in these three phases and analyse currently available quantitative models to point out modelling challenges in sustainable food logistics management (SFLM). A literature review of quantitative studies is conducted, and qualitative studies are also consulted to understand the key logistical aims more clearly and to identify relevant system scope issues. Results show that research on SFLM has been progressively developing according to the needs of the food industry. However, the intrinsic characteristics of food products and processes have not yet been handled properly in the identified studies. The majority of the works reviewed have not addressed sustainability problems, apart from a few recent studies. Therefore, the study concludes that new and advanced quantitative models are needed that take specific SFLM requirements from practice into consideration to support business decisions and capture food supply chain dynamics.
A transformative model for undergraduate quantitative biology education.
Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B
2010-01-01
The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. Initiating these academic changes required identifying barriers and implementing solutions.
Determinants of Business Success – Theoretical Model and Empirical Verification
Directory of Open Access Journals (Sweden)
Kozielski Robert
2016-12-01
Market knowledge, market orientation, learning competencies, and business performance were the key issues of the research project conducted in the 2006 study. The main findings identified significant relationships between the independent variables (market knowledge, market orientation, learning competencies) and the dependent variable (business success). A partial correlation analysis indicated that business success relies primarily on organisational learning competencies. Organisational learning competencies, to a large extent (almost 60%), may be explained by the level of corporate market knowledge and market orientation. The aim of the paper is to evaluate to what extent the relationships between the variables are still valid. The research was based on primary and secondary data sources, with the major part of the research carried out in the form of quantitative studies. The results of the 2014 study are consistent with the previous (2006) results.
Theoretical and numerical analysis of a heat pump model utilizing Dufour effect
Hoshina, Minoru; Okuda, Koji
2017-03-01
A heat pump model utilizing the Dufour effect is proposed and studied by numerical and theoretical analysis. Numerically, we perform molecular dynamics (MD) simulations of this system and measure the cooling power and the coefficient of performance (COP) as figures of merit. Theoretically, we calculate the cooling power and the COP from the phenomenological equations describing this system using linear irreversible thermodynamics, and compare the theoretical results with the MD results.
Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models
Directory of Open Access Journals (Sweden)
Tomasz Kajdanowicz
2016-09-01
Over the years, several theoretical graph generation models have been proposed. Among the most prominent are the Erdős–Rényi random graph model, the Watts–Strogatz small-world model, the Barabási–Albert preferential attachment model, the Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph; in other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model, this means comparing the empirical graph to a single realization of the model, where the realization is generated using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification that compares the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign an empirical graph to the most similar theoretical model using a simple unsupervised learning method.
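The core quantity behind the proposed comparison, the Shannon entropy of a centrality measure distribution, is simple to compute. The sketch below covers degree centrality only and is illustrative rather than the authors' full pipeline:

```python
import math
from collections import Counter

def degree_entropy(degrees):
    """Shannon entropy (bits) of a degree distribution, one of the
    centrality measures used for graph comparison."""
    counts = Counter(degrees)
    n = len(degrees)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A regular graph, where every vertex has the same degree, has zero entropy, while heterogeneous degree sequences (e.g. from preferential attachment) score higher.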
QuantUM: Quantitative Safety Analysis of UML Models
Directory of Open Access Journals (Sweden)
Florian Leitner-Fischer
2011-07-01
When developing a safety-critical system, it is essential to obtain an assessment of different design alternatives; in particular, an early safety assessment of the architectural design of a system is desirable. In spite of the plethora of available formal quantitative analysis methods, it is still difficult for software and system architects to integrate these techniques into their everyday work. This is mainly due to the lack of methods that can be applied directly to architecture-level models, for instance given as UML diagrams. It is also necessary that the description methods used do not require a profound knowledge of formal methods. Our approach bridges this gap and improves the integration of quantitative safety analysis methods into the development process. All inputs of the analysis are specified at the level of a UML model. This model is then automatically translated into the analysis model, and the results of the analysis are in turn represented at the level of the UML model. Thus both the analysis model and the formal methods used during the analysis are hidden from the user. We illustrate the usefulness of our approach using an industrial-strength case study.
Adaptive supervision: a theoretical model for social workers.
Latting, J E
1986-01-01
Two models of leadership styles are prominent in the management field: Blake and Mouton's Managerial Grid and Hersey and Blanchard's Situational Leadership Model. Much of the research on supervisory styles in social work has been based on the former. A recent public debate between the two sets of theorists suggests that both have strengths and limitations. Accordingly, an adaptive model of social work supervision that combines elements of both theories is proposed.
A Theoretical Study of Subsurface Drainage Model Simulation of ...
African Journals Online (AJOL)
A three-dimensional variable-density groundwater flow model, the SEAWAT model, was used to assess the influence of subsurface drain spacing, evapotranspiration and irrigation water quality on salt concentration at the base of the root zone, leaching and drainage in salt affected irrigated land. The study was carried out ...
A theoretical Markov chain model for evaluating correctional ...
African Journals Online (AJOL)
A model is developed for comparing the effect of different correctional practices on people with criminal tendencies. The statistics for this comparison is the number of people whose criminal tendencies are completely destroyed at the end of their confinement. The model so developed is applied to a simulated data on ...
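A minimal absorbing Markov chain of the kind described can be sketched in a few lines. The three states and transition probabilities below are hypothetical illustrations, not values from the paper:

```python
# Hypothetical 3-state correctional Markov chain:
# state 0 = criminal tendencies, state 1 = in confinement,
# state 2 = tendencies completely destroyed (absorbing).
P = [
    [0.6, 0.4, 0.0],   # criminal -> stays criminal or is confined
    [0.3, 0.5, 0.2],   # confined -> relapses, stays, or is reformed
    [0.0, 0.0, 1.0],   # reformed state is absorbing
]

def state_distribution(start, steps):
    """Probability distribution over states after `steps` transitions."""
    dist = [0.0] * len(P)
    dist[start] = 1.0
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist
```

Iterating long enough drives essentially all probability mass into the absorbing "reformed" state; the statistic of interest is how quickly that happens under different correctional practices, i.e. different transition matrices.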
A theoretical design for learning model addressing the networked society
DEFF Research Database (Denmark)
Levinsen, Karin; Nielsen, Janni; Sørensen, Birgitte Holm
2010-01-01
of their collaboration and students are required to document their choices, deselections, decisions and arguments and reflect on their learning process during the role-play. In the final paragraph we discuss our experimental work with the Design for Learning Model. We argue that our model gives birth to scaffolding...
The theoretical foundations for size spectrum models of fish communities
DEFF Research Database (Denmark)
Andersen, Ken Haste; Jacobsen, Nis Sand; Farnsworth, K.D.
2016-01-01
. We demonstrate the differences between the models through examples of their response to fishing and their dynamic behavior. We review implementations of size spectrum models and describe important variations concerning the functional response, whether growth is food-dependent or fixed...
Theoretical Models of Tutor Talk: How Practical Are They?
Henning, Teresa B.
Writing center theory in general seems to favor two models: a collaborative model of the tutorial, where tutor and tutee work together to create shared knowledge and a shared text, and an expressionist model, which requires that the tutor do less talking and more listening. Writing center empirical research, however, suggests that the key…
A new theoretical model of the quasistatic single-fiber pullout problem: Analysis of stress field
DEFF Research Database (Denmark)
Qing, Hai
2013-01-01
A new theoretical model is developed in order to predict the stress transfer during the quasistatic single-fibre pullout process. The theoretical approach retains all relevant stress and strain components, and satisfies exactly the interfacial continuity conditions and all the stress boundary con...
A quantitative magnetospheric model derived from spacecraft magnetometer data
Mead, G. D.; Fairfield, D. H.
1975-01-01
The model is derived by making least squares fits to magnetic field measurements from four Imp satellites. It includes four sets of coefficients, representing different degrees of magnetic disturbance as determined by the range of Kp values. The data are fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the effects of seasonal north-south asymmetries are contained. The expansion is divergence-free, but unlike the usual scalar potential expansion, the model contains a nonzero curl representing currents distributed within the magnetosphere. The latitude at the earth separating open polar cap field lines from field lines closing on the day side is about 5 deg lower than that determined by previous theoretically derived models. At times of high Kp, additional high-latitude field lines extend back into the tail. Near solstice, the separation latitude can be as low as 75 deg in the winter hemisphere. The average northward component of the external field is much smaller than that predicted by theoretical models; this finding indicates the important effects of distributed currents in the magnetosphere.
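The fitting machinery here is ordinary least squares against a chosen basis. The sketch below uses a drastically simplified one-variable basis (1, x, x^2) rather than the Mead-Fairfield power series in solar magnetic coordinates and tilt angle, purely to illustrate the normal equations approach:

```python
# Illustrative least-squares fit of measurements to a small power
# series (basis here is just 1, x, x^2, a stand-in for the full
# expansion in solar magnetic coordinates and tilt angle).
def polyfit2(xs, ys):
    # Build normal equations (X^T X) a = X^T y for basis [1, x, x^2].
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    a = [0.0] * 3
    for i in (2, 1, 0):   # back substitution
        a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, 3))) / A[i][i]
    return a  # coefficients [a0, a1, a2]
```

Fitting data generated from y = 1 + 2x^2 recovers the coefficients to machine precision, which is the sanity check one would run before fitting noisy magnetometer data.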
How fast is fisheries-induced evolution? Quantitative analysis of modelling and empirical studies
Audzijonyte, Asta; Kuparinen, Anna; Fulton, Elizabeth A
2013-01-01
A number of theoretical models, experimental studies and time-series studies of wild fish have explored the presence and magnitude of fisheries-induced evolution (FIE). While most studies agree that FIE is likely to be happening in many fished stocks, there are disagreements about its rates and implications for stock viability. To address these disagreements in a quantitative manner, we conducted a meta-analysis of FIE rates reported in theoretical and empirical studies. We discovered that rates of phenotypic change observed in wild fish are about four times higher than the evolutionary rates reported in modelling studies, but the correlation between the rate of change and instantaneous fishing mortality (F) was very similar in the two types of studies. Mixed-model analyses showed that in the modelling studies, traits associated with reproductive investment and growth evolved more slowly than traits related to maturation, whereas in empirical observations age-at-maturation was changing faster than other life-history traits. We also found that, despite different assumptions and modelling approaches, the rates of evolution for a given F value reported in 10 of 13 modelling studies were not significantly different. PMID:23789026
A Game-Theoretic Model for Distributed Programming by Contract
DEFF Research Database (Denmark)
Henriksen, Anders Starcke; Hvitved, Tom; Filinski, Andrzej
2009-01-01
We present an extension of the programming-by-contract (PBC) paradigm to a concurrent and distributed environment. Classical PBC is characterized by absolute conformance of code to its specification, assigning blame in case of failures, and a hierarchical, cooperative decomposition model – none of which extend naturally to a distributed environment with multiple administrative peers. We therefore propose a more nuanced contract model based on quantifiable performance of implementations; assuming responsibility for success; and a fundamentally adversarial model of system integration, where each...
Quantitative modeling of transcription factor binding specificities using DNA shape.
Zhou, Tianyin; Shen, Ning; Yang, Lin; Abe, Namiko; Horton, John; Mann, Richard S; Bussemaker, Harmen J; Gordân, Raluca; Rohs, Remo
2015-04-14
DNA binding specificities of transcription factors (TFs) are a key component of gene regulatory processes. Underlying mechanisms that explain the highly specific binding of TFs to their genomic target sites are poorly understood. A better understanding of TF-DNA binding requires the ability to quantitatively model TF binding to accessible DNA as its basic step, before additional in vivo components can be considered. Traditionally, these models were built based on nucleotide sequence. Here, we integrated 3D DNA shape information derived with a high-throughput approach into the modeling of TF binding specificities. Using support vector regression, we trained quantitative models of TF binding specificity based on protein binding microarray (PBM) data for 68 mammalian TFs. The evaluation of our models included cross-validation on specific PBM array designs, testing across different PBM array designs, and using PBM-trained models to predict relative binding affinities derived from in vitro selection combined with deep sequencing (SELEX-seq). Our results showed that shape-augmented models compared favorably to sequence-based models. Although both k-mer and DNA shape features can encode interdependencies between nucleotide positions of the binding site, using DNA shape features reduced the dimensionality of the feature space. In addition, analyzing the feature weights of DNA shape-augmented models uncovered TF family-specific structural readout mechanisms that were not revealed by the DNA sequence. As such, this work combines knowledge from structural biology and genomics, and suggests a new path toward understanding TF binding and genome function.
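Of the two featurizations contrasted above, the sequence-based one is easy to sketch: a count vector over all possible k-mers is a common minimal form. This is illustrative only; the paper's models combine such features with high-throughput DNA shape parameters and support vector regression.

```python
from itertools import product

def kmer_features(seq, k=2):
    """Count vector over all 4**k possible k-mers, a minimal
    sequence-based featurization for TF-binding models."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return [counts[km] for km in kmers]
```

A sequence of length L contributes L - k + 1 counts, so for "ACGT" with k = 2 the three dimers AC, CG and GT each appear once in the 16-dimensional vector. The dimensionality reduction the authors report comes from replacing such k-mer features with a handful of shape parameters per position.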
Disease, illness and health: theoretical models of the disablement process.
Minaire, P.
1992-01-01
Handicap is the result of a process of disablement whose origin is a pathological condition (disease). According to some definitions of health (e.g., a state of complete physical, mental and social well-being), the classical biomedical concept is too restrictive to cover all the consequences of disease. New models have been proposed: the impairment-disability-handicap model presented by WHO, the situational handicap model, and the quality-of-life model. A unifying schema of the disablement process includes these concepts and provides a useful way of analysing the consequences of disease. Factors that modify the disablement process can be identified by their respective impacts, and provide operational guidelines for public health interventions. PMID:1386290
Theoretic models for recommendation and implementation of assistive technology
Directory of Open Access Journals (Sweden)
Ana Cristina de Jesus Alves
2016-07-01
Introduction: Recent international research has sought to understand the factors affecting the successful use of assistive technology (AT) devices, through studies on the systematization of assessments, the abandonment of devices, and theoretical models that consider how those devices are implemented. In Brazil, research has focused on developing new technologies, and there are still not sufficient studies on the successful use of devices and on ways of implementing assistive technology. Objective: To identify conceptual models used for the recommendation and implementation of assistive technology devices. Method: Literature review. The survey was conducted in six databases: CINAHL, ERIC, GALE, LILACS, MEDLINE and PsycINFO. The critical analysis described by Grant and Booth was used. Results: There is no record of a Brazilian survey and, among the 29 selected articles, 17 conceptual models used in the area of AT were found; of these, 14 were specific to AT. The results showed that new conceptual models of AT are under development, and that the conceptual model "Matching Person and Technology" (MPT) was the most frequently mentioned. Conclusion: Practices in the AT area in the international context show a correlation with conceptual models; we therefore hope this study can contribute to the dissemination of these precepts at the national level.
Theoretical models for duct acoustic propagation and radiation
Eversman, Walter
1991-01-01
The development of computational methods in acoustics has led to the introduction of analysis and design procedures which model the turbofan inlet as a coupled system, simultaneously modeling propagation and radiation in the presence of realistic internal and external flows. Such models are generally large, require substantial computer speed and capacity, and can be expected to be used in the final design stages, with the simpler models being used in the early design iterations. Emphasis is given to practical modeling methods that have been applied to the acoustical design problem in turbofan engines. The mathematical model is established and the simplest case of propagation in a duct with hard walls is solved to introduce concepts and terminologies. An extensive overview is given of methods for the calculation of attenuation in uniform ducts with uniform flow and with shear flow. Subsequent sections deal with numerical techniques which provide an integrated representation of duct propagation and near- and far-field radiation for realistic geometries and flight conditions.
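The hard-walled introductory case mentioned above has a closed-form result worth recalling: in a rectangular duct, mode (m, n) propagates only above its cut-off frequency f_c = (c/2) sqrt((m/Lx)^2 + (n/Ly)^2). A sketch, with illustrative dimensions and sound speed:

```python
import math

def cutoff_frequency(m, n, Lx, Ly, c=343.0):
    """Cut-off frequency (Hz) of mode (m, n) in a hard-walled
    rectangular duct of cross-section Lx x Ly (m), sound speed c (m/s).
    Below f_c the mode is evanescent and does not propagate."""
    return (c / 2.0) * math.sqrt((m / Lx) ** 2 + (n / Ly) ** 2)
```

The (0, 0) plane wave has zero cut-off and always propagates; higher-order modes switch on one by one as frequency rises, which is why attenuation calculations in the cited methods are organized mode by mode.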
Design theoretic analysis of three system modeling frameworks.
Energy Technology Data Exchange (ETDEWEB)
McDonald, Michael James
2007-05-01
This paper analyzes three simulation architectures from the context of modeling scalability to address System of System (SoS) and Complex System problems. The paper first provides an overview of the SoS problem domain and reviews past work in analyzing model and general system complexity issues. It then identifies and explores the issues of vertical and horizontal integration as well as coupling and hierarchical decomposition as the system characteristics and metrics against which the tools are evaluated. In addition, it applies Nam Suh's Axiomatic Design theory as a construct for understanding coupling and its relationship to system feasibility. Next it describes the application of MATLAB, Swarm, and Umbra (three modeling and simulation approaches) to modeling swarms of Unmanned Flying Vehicle (UAV) agents in relation to the chosen characteristics and metrics. Finally, it draws general conclusions for analyzing model architectures that go beyond those analyzed. In particular, it identifies decomposition along phenomena of interaction and modular system composition as enabling features for modeling large heterogeneous complex systems.
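Suh's coupling classification invoked here has a crisp operational form: with functional requirements as rows and design parameters as columns, a diagonal design matrix is uncoupled, a triangular one is decoupled, and anything else is coupled. A minimal sketch (function name and boolean-matrix encoding are my own):

```python
def classify_design(matrix):
    """Classify a square Axiomatic Design matrix (FRs x DPs) as
    'uncoupled' (diagonal), 'decoupled' (triangular) or 'coupled',
    following Suh's independence axiom."""
    n = len(matrix)
    off = [(i, j) for i in range(n) for j in range(n)
           if i != j and matrix[i][j]]
    if not off:
        return "uncoupled"
    # All nonzero off-diagonal entries on one side of the diagonal
    # means the FRs can be satisfied in sequence.
    if all(i > j for i, j in off) or all(i < j for i, j in off):
        return "decoupled"
    return "coupled"
```

Uncoupled and decoupled designs satisfy the independence axiom (possibly with an ordering constraint); coupled designs are the ones whose feasibility the paper's analysis calls into question.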
Energy Technology Data Exchange (ETDEWEB)
Doinikov, Alexander A., E-mail: doinikov@bsu.by; Bouakaz, Ayache [Inserm U930, Université François Rabelais, Tours 37044 (France); Sheeran, Paul S.; Dayton, Paul A. [Joint Department of Biomedical Engineering, The University of North Carolina and North Carolina State University, Chapel Hill, North Carolina 27599 (United States)
2014-10-15
Purpose: Perfluorocarbon (PFC) microdroplets, called phase-change contrast agents (PCCAs), are a promising tool in ultrasound imaging and therapy. Interest in PCCAs is motivated by the fact that they can be triggered to transition from the liquid state to the gas state by an externally applied acoustic pulse. This property opens up new approaches to applications in ultrasound medicine. Insight into the physics of vaporization of PFC droplets is vital for effective use of PCCAs and for anticipating bioeffects. PCCAs composed of volatile PFCs (with low boiling point) exhibit complex dynamic behavior: after vaporization by a short acoustic pulse, a PFC droplet turns into a vapor bubble which undergoes overexpansion and damped radial oscillation until settling to a final diameter. This behavior has not been well described theoretically so far. The purpose of our study is to develop an improved theoretical model that describes the vaporization dynamics of volatile PFC droplets and to validate this model by comparison with in vitro experimental data. Methods: The derivation of the model is based on applying the mathematical methods of fluid dynamics and thermodynamics to the process of the acoustic vaporization of PFC droplets. The approach used corrects shortcomings of existing models. The validation of the model is carried out by comparing simulated results with in vitro experimental data acquired by ultrahigh-speed video microscopy for octafluoropropane (OFP) and decafluorobutane (DFB) microdroplets of different sizes. Results: The developed theory allows one to simulate the growth of a vapor bubble inside a PFC droplet until the liquid PFC is completely converted into vapor, and the subsequent overexpansion and damped oscillations of the vapor bubble, including the influence of an externally applied acoustic pulse. To evaluate quantitatively the difference between simulated and experimental results, the L2-norm errors were calculated for all cases where the
Wang, Chunkao; Da, Yang
2014-01-01
The traditional quantitative genetics model was used as the unifying approach to derive six existing and new definitions of genomic additive and dominance relationships. The theoretical differences of these definitions were in the assumptions of equal SNP effects (equivalent to across-SNP standardization), equal SNP variances (equivalent to within-SNP standardization), and expected or sample SNP additive and dominance variances. The six definitions of genomic additive and dominance relationships on average were consistent with the pedigree relationships, but had individual genomic specificity and large variations not observed from pedigree relationships. These large variations may allow finding least related genomes even within the same family for minimizing genomic relatedness among breeding individuals. The six definitions of genomic relationships generally had similar numerical results in genomic best linear unbiased predictions of additive effects (GBLUP) and similar genomic REML (GREML) estimates of additive heritability. Predicted SNP dominance effects and GREML estimates of dominance heritability were similar within definitions assuming equal SNP effects or within definitions assuming equal SNP variance, but had differences between these two groups of definitions. We proposed a new measure of genomic inbreeding coefficient based on parental genomic co-ancestry coefficient and genomic additive correlation as a genomic approach for predicting offspring inbreeding level. This genomic inbreeding coefficient had the highest correlation with pedigree inbreeding coefficient among the four methods evaluated for calculating genomic inbreeding coefficient in a Holstein sample and a swine sample. PMID:25517971
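One of the definitions discussed, the "equal SNP effects" (across-SNP standardization) form, corresponds to the widely used construction G = ZZ'/(2 Σ p_j(1 - p_j)), where Z centers 0/1/2 genotype codes by twice the allele frequency; I take this mapping as an assumption here, and the toy genotypes below are not the Holstein or swine data.

```python
def grm(genotypes):
    """Additive genomic relationship matrix under across-SNP
    standardization: G = Z Z' / (2 * sum_j p_j * (1 - p_j)),
    with genotypes coded 0/1/2 (copies of the reference allele)."""
    n, m = len(genotypes), len(genotypes[0])
    # Allele frequencies estimated from the sample itself.
    p = [sum(row[j] for row in genotypes) / (2 * n) for j in range(m)]
    denom = 2 * sum(pj * (1 - pj) for pj in p)
    # Center each genotype by its expectation 2 * p_j.
    Z = [[row[j] - 2 * p[j] for j in range(m)] for row in genotypes]
    return [[sum(Z[i][k] * Z[j][k] for k in range(m)) / denom
             for j in range(n)] for i in range(n)]
```

Diagonal elements above 1 reflect homozygosity excess (genomic inbreeding), and off-diagonal elements are the genomic co-ancestry-type relationships whose sample-to-sample variation the abstract contrasts with the smoother pedigree expectations.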
Toward a Theoretical Model of Employee Turnover: A Human Resource Development Perspective
Peterson, Shari L.
2004-01-01
This article sets forth the Organizational Model of Employee Persistence, influenced by traditional turnover models and a student attrition model. The model was developed to clarify the impact of organizational practices on employee turnover from a human resource development (HRD) perspective and provide a theoretical foundation for research on…
Model United Nations and Deep Learning: Theoretical and Professional Learning
Engel, Susan; Pallas, Josh; Lambert, Sarah
2017-01-01
This article demonstrates that the purposeful subject design, incorporating a Model United Nations (MUN), facilitated deep learning and professional skills attainment in the field of International Relations. Deep learning was promoted in subject design by linking learning objectives to Anderson and Krathwohl's (2001) four levels of knowledge or…
Theoretical Modeling of Mechanical Behavior and Release Properties of Microcapsules
Sagis, L.M.C.
2015-01-01
Microcapsules in food often have a shell with a complex microstructure; the mechanical and structural properties of these shells affect the response of the capsules to deforming forces and the release kinetics of encapsulated components. In this chapter we will discuss a number of models which are
Use of Theoretical Controls in Underwater Acoustic Model Evaluation.
1982-01-04
[Scanned report; abstract not recoverable. Legible fragments include a section heading, "5.2.2 Assessment of Ray Theory", and two references: White, D., Normal Mode Evaluation of FASOR Shallow Water Areas, unpublished; Mitchell, S. K., and J. J. Lemmon, A Ray Theory Model of Acoustic Interaction.]
A theoretical and empirical model for soil conservation using ...
African Journals Online (AJOL)
This paper illuminates the practice of indigenous soil conservation among Mamasani farmers in Fars province, Iran. Bos's decision-making model was used as the conceptual framework for the study, within a qualitative research paradigm. The qualitative techniques used were: Mind Mapping, RRA ...
A theoretical Markov chain model for evaluating correctional ...
African Journals Online (AJOL)
In this paper a stochastic method is applied in the study of the long time effect of confinement in a correctional institution on the behaviour of a person with criminal tendencies. The approach used is Markov chain, which uses past history to predict the state of a system in the future. A model is developed for comparing the ...
An Alternative Theoretical Model for Economic Reforms in Africa ...
African Journals Online (AJOL)
This paper offers an alternative model for economic reforms in Africa. It proposes that Africa can still get on the pathway of sustained economic growth if economic reforms can focus on a key variable, namely, the price of non-tradables. Prices of non-tradables are generally less in Africa than in advanced economies, and the ...
Photoabsorption spectrum of helium trimer cation--theoretical modeling.
Kalus, René; Karlický, František; Lepetit, Bruno; Paidarová, Ivana; Gadea, Florent Xavier
2013-11-28
The photoabsorption spectrum of He3+ is calculated for two semiempirical models of intracluster interactions and compared with available experimental data reported in the middle UV range [H. Haberland and B. von Issendorff, J. Chem. Phys. 102, 8773 (1995)]. Nuclear delocalization effects are investigated via several approaches comprising quantum samplings using either exact or approximate (harmonic) nuclear wavefunctions, as well as classical samplings based on the Monte Carlo methodology. Good agreement with the experiment is achieved for the model by Knowles et al. [Mol. Phys. 85, 243 (1995); Mol. Phys. 87, 827 (1996)], whereas the model by Calvo et al. [J. Chem. Phys. 135, 124308 (2011)] exhibits non-negligible deviations from the experiment. Predictions of the far UV absorption spectrum of He3+, for which no experimental data are presently available, are reported for both models and compared to each other as well as to the photoabsorption spectrum of He2+. A simple semiempirical point-charge approximation for calculating transition probabilities is shown to perform well for He3+.
Photoabsorption spectrum of helium trimer cation—Theoretical modeling
Energy Technology Data Exchange (ETDEWEB)
Kalus, René [Centre of Excellence IT4Innovations and Department of Applied Mathematics, VSB-Technical University of Ostrava, 17. listopadu 15, 708 33 Ostrava (Czech Republic); Karlický, František [Regional Centre of Advanced Technologies and Materials and Department of Physical Chemistry, Faculty of Science, Palacký University, Tř. 17. listopadu 12, 771 46 Olomouc (Czech Republic); Lepetit, Bruno [Laboratoire Collisions Agrégats Réactivité, IRSAMC and UMR5589 du CNRS, Université de Toulouse, UPS, 118 route de Narbonne, 31062 Toulouse Cedex (France); Paidarová, Ivana [J. Heyrovský Institute of Physical Chemistry, ASCR, v.v.i., Dolejškova 3, 182 23 Praha (Czech Republic); Gadea, Florent Xavier [Laboratoire de Chimie et de Physique Quantiques, IRSAMC and UMR5626 du CNRS, Université de Toulouse, UPS, 118 route de Narbonne, 31062 Toulouse Cedex (France)
2013-11-28
Theoretical model of chirality-induced helical self-propulsion
Yamamoto, Takaki; Sano, Masaki
2018-01-01
We recently reported the experimental realization of a chiral artificial microswimmer exhibiting helical self-propulsion [T. Yamamoto and M. Sano, Soft Matter 13, 3328 (2017), 10.1039/C7SM00337D]. In the experiment, cholesteric liquid crystal (CLC) droplets dispersed in surfactant solutions swam spontaneously, driven by the Marangoni flow, in helical paths whose handedness is determined by the chirality of the component molecules of CLC. To study the mechanism of the emergence of the helical self-propelled motion, we propose a phenomenological model of the self-propelled helical motion of the CLC droplets. Our model is constructed by symmetry argument in chiral systems, and it describes the dynamics of CLC droplets with coupled time-evolution equations in terms of a velocity, an angular velocity, and a tensor variable representing the symmetry of the helical director field of the droplet. We found that helical motions as well as other chiral motions appear in our model. By investigating bifurcation behaviors between each chiral motion, we found that the chiral coupling terms between the velocity and the angular velocity, the structural anisotropy of the CLC droplet, and the nonlinearity of model equations play a crucial role in the emergence of the helical motion of the CLC droplet.
Gramatica, Paola; Papa, Ester; Luini, Mara; Monti, Elena; Gariboldi, Marzia B; Ravera, Mauro; Gabano, Elisabetta; Gaviglio, Luca; Osella, Domenico
2010-09-01
Several Pt(IV) complexes of the general formula [Pt(L)2(L')2(L'')2] [axial ligands L are Cl-, RCOO-, or OH-; equatorial ligands L' are two am(m)ine or one diamine; and equatorial ligands L'' are Cl- or glycolato] were rationally designed and synthesized in an attempt to develop a predictive quantitative structure-activity relationship (QSAR) model. Numerous theoretical molecular descriptors were used alongside physicochemical data (i.e., reduction peak potential, Ep, and partition coefficient, log Po/w) to obtain a validated QSAR between in vitro cytotoxicity (half maximal inhibitory concentrations, IC50, on A2780 ovarian and HCT116 colon carcinoma cell lines) and some features of Pt(IV) complexes. In the resulting best models, a lipophilic descriptor (log Po/w or the number of secondary sp3 carbon atoms) plus an electronic descriptor (Ep, the number of oxygen atoms, or the topological polar surface area expressed as the N,O polar contribution) is necessary for modeling, supporting the general finding that the biological behavior of Pt(IV) complexes can be rationalized on the basis of their cellular uptake, the Pt(IV) → Pt(II) reduction, and the structure of the corresponding Pt(II) metabolites. Novel compounds were synthesized on the basis of their predicted cytotoxicity in the preliminary QSAR model, and were experimentally tested. A final QSAR model, based solely on theoretical molecular descriptors to ensure its general applicability, is proposed.
Grietens, H; Hellinckx, W
Parental awareness refers to parents' perceiving and making sense of their children's responses and behaviours. This study examined a theoretical model of the determinants of disturbed parental awareness, with a central place given to Belsky's buffer hypothesis. Maternal characteristics were
Accelerator simulation and theoretical modelling of radiation effects (SMoRE)
2018-01-01
This publication summarizes the findings and conclusions of the IAEA coordinated research project (CRP) on accelerator simulation and theoretical modelling of radiation effects, aimed at supporting Member States in the development of advanced radiation-resistant structural materials for implementation in innovative nuclear systems. This aim can be achieved through enhancement of both experimental neutron-emulation capabilities of ion accelerators and improvement of the predictive efficiency of theoretical models and computer codes. This dual approach is challenging but necessary, because outputs of accelerator simulation experiments need adequate theoretical interpretation, and theoretical models and codes need high dose experimental data for their verification. Both ion irradiation investigations and computer modelling have been the specific subjects of the CRP, and the results of these studies are presented in this publication which also includes state-of-the-art reviews of four major aspects of the project...
Gonzálvez, Alicia G; González Ureña, Ángel
2012-10-01
A laser spectroscopic technique is described that combines transmission and resonance-enhanced Raman inelastic scattering with low laser power. A theoretical model for the Raman signal dependence on the sample thickness is also presented. Essentially, the model considers the sample to be homogeneous and describes the underlying physics using only three parameters: the Raman cross-section, the laser-radiation attenuation cross-section, and the Raman signal attenuation cross-section. The model was applied successfully to describe the sample-size dependence of the Raman signal in both β-carotene standards and carrot roots. The present technique could be useful for direct, fast, and nondestructive investigations in food quality control and in analytical or physiological studies of animal and human tissues.
Emotional development in infancy: theoretical models and nursing implications.
Kearney, J A
1997-01-01
This article reviews theories of emotional development in infancy that are relevant to clinical practice. Complementary theories, such as Emde's (1989) psychoanalytically based developmental model of early socioemotional reorganizations and Stern's (1985a) model of the developing self, are discussed with the goal of presenting an integrated view of key socioemotional processes and their developmental foundations during infancy. Concepts such as "emotional availability" and "affect attunement" are examined as vehicles for early socioemotional organization and adaptation. Relevant literature from nursing, child psychiatry, and developmental psychology is reviewed. Implications are drawn from clinical assessment of high-risk caregiver-infant relationships and early causal pathways for later socioemotional dysfunction. Nurses should focus on the importance of assessing interpersonal variables and their ongoing contribution to internalizing patterns of dysfunctional behavior in children.
Finding conservatism in theoretical noise models for wind power projects
Energy Technology Data Exchange (ETDEWEB)
Drew, Teresa; Wierzba, Paul [RWDI Air, Inc (Canada)], email: teresa.drew@rwdi.com, email: paul.wierzba@rwdi.com
2011-07-01
Wind power projects have been scrutinized for their noise impact by public opinion, developers, and regulating bodies. Computer modelling is used to predict such noise impact, and is often run with conservative parameters, taken from the ISO 9613 standard, relative to the noise impact of industrial wind turbines. This paper examines the influence of the various modelling software parameters on wind power project noise impact results. This analysis leads to identification of parameters that can lend a measure of conservatism to future predictions by focusing primarily on software configuration and acoustic attenuation options, crucial points in determining realistic noise impact. Changes in the parameters of a hypothetical wind turbine influence the final noise predictions, and sensitivity to these changes is examined, to establish initial realistic assumptions with regard to localization and height of wind turbines.
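The dominant attenuation terms in such models can be sketched directly. The geometric-divergence term below follows the standard ISO 9613-2 form (20·log10(d) + 11 dB for spherical spreading from a 1 m reference); the source sound power level and the atmospheric absorption coefficient are illustrative assumptions, not values from the paper:

```python
import math

def sound_pressure_level(lw_db, distance_m, alpha_db_per_km=2.0):
    """Downwind sound pressure level from a point source, keeping only the
    geometric-divergence and atmospheric-absorption terms of ISO 9613-2."""
    a_div = 20.0 * math.log10(distance_m) + 11.0   # spherical spreading, 1 m reference
    a_atm = alpha_db_per_km * distance_m / 1000.0  # assumed absorption coefficient
    return lw_db - a_div - a_atm

# Hypothetical 105 dB(A) turbine heard at a 500 m receptor:
spl = sound_pressure_level(105.0, 500.0)
```

The terms omitted here (ground effect, barriers, meteorological corrections) are exactly the software parameters whose conservative settings the paper examines.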
Generic Theoretical Models to Predict Division Patterns of Cleaving Embryos
Pierre, Anaëlle; Sallé, Jérémy; Wühr, Martin; Minc, Nicolas
2016-01-01
Life for all animals starts with a precise 3D choreography of reductive divisions of the fertilized egg, known as cleavage patterns. These patterns exhibit conserved geometrical features and striking interspecies invariance within certain animal classes. To identify the generic rules that may govern these morphogenetic events, we developed a 3D-modeling framework that iteratively infers blastomere division positions and orientations, and consequent multicellular arrang...
Hybrid empirical--theoretical approach to modeling uranium adsorption
Energy Technology Data Exchange (ETDEWEB)
Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W
2004-05-01
An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K_f parameter is correlated to sediment surface area (r^2 = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
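The fitting step described above can be sketched in a few lines: the Freundlich isotherm S = K_f·C^n is linear in log-log space. The data below are synthetic, generated from assumed parameters (K_f = 60, n = 0.9), not the INEEL measurements:

```python
import numpy as np

# Synthetic batch-isotherm data generated from assumed parameters Kf = 60,
# n = 0.9 (illustration only; not the measured interbed values).
C = np.array([0.1, 0.5, 1.0, 5.0, 10.0])   # equilibrium solution concentration
S = 60.0 * C ** 0.9                        # sorbed concentration, S = Kf * C**n

# log10(S) = log10(Kf) + n * log10(C): a straight-line fit recovers both.
n, log_kf = np.polyfit(np.log10(C), np.log10(S), 1)
kf = 10.0 ** log_kf
```

Because n was statistically identical across the 14 samples, only K_f (here tied to surface area) needs to vary spatially in the transport model.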
A Theoretical Model for the Associative Nature of Conference Participation.
Directory of Open Access Journals (Sweden)
Jelena Smiljanić
Participation in conferences is an important part of every scientific career. Conferences provide an opportunity for a fast dissemination of latest results, discussion and exchange of ideas, and broadening of scientists' collaboration network. The decision to participate in a conference depends on several factors like the location, cost, popularity of keynote speakers, and the scientist's association with the community. Here we discuss and formulate the problem of discovering how a scientist's previous participation affects her/his future participations in the same conference series. We develop a stochastic model to examine scientists' participation patterns in conferences and compare our model with data from six conferences across various scientific fields and communities. Our model shows that the probability for a scientist to participate in a given conference series strongly depends on the balance between the number of participations and non-participations during his/her early connections with the community. An active participation in a conference series strengthens the scientist's association with that particular conference community and thus increases the probability of future participations.
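A toy urn-style simulation illustrates the qualitative mechanism, a participation probability that grows with the balance of past participations. The functional form and the pseudo-count parameters a and b are our illustrative assumptions, not the authors' model equations:

```python
import random

def simulate_scientist(editions=20, a=1.0, b=1.0, rng=random):
    """Urn-style toy model: the probability of attending edition t grows with
    the number of earlier participations (a, b are pseudo-count priors)."""
    attended = 0
    history = []
    for t in range(editions):
        p = (a + attended) / (a + b + t)
        went = rng.random() < p
        attended += int(went)
        history.append(went)
    return history

rng = random.Random(42)
runs = [simulate_scientist(rng=rng) for _ in range(2000)]
```

Scientists who happen to attend the early editions end up attending far more editions on average, the rich-get-richer association with the community that the paper quantifies.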
Model for Quantitative Evaluation of Enzyme Replacement Treatment
Directory of Open Access Journals (Sweden)
Radeva B.
2009-12-01
Gaucher disease is the most frequent lysosomal disorder. Its enzyme replacement therapy (ERT) is an advance of modern biotechnology that has been used successfully in recent years. Evaluating the optimal dose for each patient is important for both health and economic reasons: enzyme replacement is the most expensive treatment, and it must be administered continuously and without interruption. ERT with Cerezyme (Genzyme) was formally introduced in Bulgaria in 2001, but it was later interrupted for 1-2 months, and patients' doses were not optimal. The aim of our work is to find a mathematical model for quantitative evaluation of ERT in Gaucher disease. The model was implemented in the "Statistika 6" software using individual data from 5-year-old children with Gaucher disease treated with Cerezyme. The model output enables quantitative evaluation of individual trends in the development of each child's disease and their correlations. On the basis of these results, we can recommend suitable changes in ERT.
Uncertainty Model For Quantitative Precipitation Estimation Using Weather Radars
Directory of Open Access Journals (Sweden)
Ernesto Gómez Vargas
2016-06-01
This paper introduces an uncertainty model for quantitative precipitation estimation using weather radars. The model considers various key aspects associated with radar calibration, attenuation, and the tradeoff between accuracy and radar coverage. An S-band radar case study is presented to illustrate particular fractional-uncertainty calculations obtained to adjust typical radar-calibration elements such as the antenna, transmitter, receiver, and other general elements included in the radar equation. The approach is based on the "Guide to the Expression of Uncertainty in Measurement", and the results show that the fractional uncertainty calculated by the model was 40% for reflectivity and 30% for precipitation using the Marshall-Palmer Z-R relationship.
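The Marshall-Palmer Z-R relationship mentioned above is commonly written Z = a·R^b with a = 200 and b = 1.6 (Z in mm^6/m^3, R in mm/h); inverting it converts a radar reflectivity measurement into a rain-rate estimate. A minimal sketch:

```python
def rain_rate_mm_per_h(dbz, a=200.0, b=1.6):
    """Invert the Marshall-Palmer relationship Z = a * R**b.

    dbz is radar reflectivity in dBZ; Z = 10**(dbz/10) is the linear
    reflectivity factor in mm^6/m^3."""
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)

# 40 dBZ corresponds to roughly 11.5 mm/h of rain under this relationship.
```

Because R depends on Z through the exponent 1/b, a fractional uncertainty in reflectivity propagates into a smaller fractional uncertainty in rain rate, consistent with the 40%/30% split reported above.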
Quantitative Methods in Supply Chain Management Models and Algorithms
Christou, Ioannis T
2012-01-01
Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...
Quantitative risk assessment modeling for nonhomogeneous urban road tunnels.
Meng, Qiang; Qu, Xiaobo; Wang, Xinchang; Yuanita, Vivi; Wong, Siew Chee
2011-03-01
Urban road tunnels provide an increasingly cost-effective engineering solution, especially in compact cities like Singapore. For some urban road tunnels, tunnel characteristics such as tunnel configurations, geometries, provisions of tunnel electrical and mechanical systems, traffic volumes, etc. may vary from one section to another. These urban road tunnels that have characterized nonuniform parameters are referred to as nonhomogeneous urban road tunnels. In this study, a novel quantitative risk assessment (QRA) model is proposed for nonhomogeneous urban road tunnels because the existing QRA models for road tunnels are inapplicable to assess the risks in these road tunnels. This model uses a tunnel segmentation principle whereby a nonhomogeneous urban road tunnel is divided into various homogenous sections. Individual risk for road tunnel sections as well as the integrated risk indices for the entire road tunnel is defined. The article then proceeds to develop a new QRA model for each of the homogeneous sections. Compared to the existing QRA models for road tunnels, this section-based model incorporates one additional top event (toxic gases due to traffic congestion) and employs the Poisson regression method to estimate the vehicle accident frequencies of tunnel sections. This article further illustrates an aggregated QRA model for nonhomogeneous urban tunnels by integrating the section-based QRA models. Finally, a case study in Singapore is carried out. © 2010 Society for Risk Analysis.
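The Poisson-regression step can be sketched with a tiny gradient-ascent fit of log-linear accident rates per section. The covariate (say, a normalized traffic volume) and the coefficients are synthetic; real work would use a statistics package:

```python
import numpy as np

def fit_poisson(X, y, iters=20000, lr=0.1):
    """Gradient ascent on the Poisson log-likelihood; the score is X.T @ (y - mu),
    where mu = exp(X @ beta) is the expected accident count per section."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        beta += lr * X.T @ (y - mu) / len(y)
    return beta

x = np.linspace(0.0, 1.0, 50)        # e.g. normalized traffic volume per section
X = np.column_stack([np.ones_like(x), x])
y = np.exp(0.2 + 0.5 * x)            # noiseless section counts at the true rates
beta = fit_poisson(X, y)
```

With noiseless counts the fit recovers the generating coefficients exactly; with real accident data the same score equations give the maximum-likelihood rates that feed each section's event tree.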
Quantitative modeling of a gene's expression from its intergenic sequence.
Samee, Md Abul Hassan; Sinha, Saurabh
2014-03-01
Modeling a gene's expression from its intergenic locus and trans-regulatory context is a fundamental goal in computational biology. Owing to the distributed nature of cis-regulatory information and the poorly understood mechanisms that integrate such information, gene locus modeling is a more challenging task than modeling individual enhancers. Here we report the first quantitative model of a gene's expression pattern as a function of its locus. We model the expression readout of a locus in two tiers: 1) combinatorial regulation by transcription factors bound to each enhancer is predicted by a thermodynamics-based model and 2) independent contributions from multiple enhancers are linearly combined to fit the gene expression pattern. The model does not require any prior knowledge about enhancers contributing toward a gene's expression. We demonstrate that the model captures the complex multi-domain expression patterns of anterior-posterior patterning genes in the early Drosophila embryo. Altogether, we model the expression patterns of 27 genes; these include several gap genes, pair-rule genes, and anterior, posterior, trunk, and terminal genes. We find that the model-selected enhancers for each gene overlap strongly with its experimentally characterized enhancers. Our findings also suggest the presence of sequence-segments in the locus that would contribute ectopic expression patterns and hence were "shut down" by the model. We applied our model to identify the transcription factors responsible for forming the stripe boundaries of the studied genes. The resulting network of regulatory interactions exhibits a high level of agreement with known regulatory influences on the target genes. Finally, we analyzed whether and why our assumption of enhancer independence was necessary for the genes we studied. We found a deterioration of expression when binding sites in one enhancer were allowed to influence the readout of another enhancer. Thus, interference between enhancer
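The second tier, the linear combination of independent enhancer contributions, is an ordinary least-squares problem. The per-enhancer readouts below are synthetic Gaussian stripes standing in for the tier-1 thermodynamic predictions:

```python
import numpy as np

bins = np.linspace(0.0, 1.0, 100)            # positions along the A-P axis
e1 = np.exp(-((bins - 0.3) / 0.05) ** 2)     # synthetic anterior-enhancer readout
e2 = np.exp(-((bins - 0.7) / 0.05) ** 2)     # synthetic posterior-enhancer readout
E = np.column_stack([e1, e2])
target = 0.8 * e1 + 0.4 * e2                 # observed two-domain expression pattern

# Independent enhancer contributions combined linearly, fit by least squares:
weights, *_ = np.linalg.lstsq(E, target, rcond=None)
```

An enhancer "shut down" by the model corresponds to a fitted weight near zero, which is how sequence segments driving ectopic patterns are excluded.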
GSTARS computer models and their applications, part I: theoretical development
Yang, C.T.; Simoes, F.J.M.
2008-01-01
GSTARS is a series of computer models developed by the U.S. Bureau of Reclamation for alluvial river and reservoir sedimentation studies while the authors were employed by that agency. The first version of GSTARS was released in 1986 using Fortran IV for mainframe computers. GSTARS 2.0 was released in 1998 for personal computer application with most of the code in the original GSTARS revised, improved, and expanded using Fortran IV/77. GSTARS 2.1 is an improved and revised GSTARS 2.0 with graphical user interface. The unique features of all GSTARS models are the conjunctive use of the stream tube concept and of the minimum stream power theory. The application of minimum stream power theory allows the determination of optimum channel geometry with variable channel width and cross-sectional shape. The use of the stream tube concept enables the simulation of river hydraulics using one-dimensional numerical solutions to obtain a semi-two-dimensional presentation of the hydraulic conditions along and across an alluvial channel. According to the stream tube concept, no water or sediment particles can cross the walls of stream tubes, which is valid for many natural rivers. At and near sharp bends, however, sediment particles may cross the boundaries of stream tubes. GSTARS3, based on FORTRAN 90/95, addresses this phenomenon and further expands the capabilities of GSTARS 2.1 for cohesive and non-cohesive sediment transport in rivers and reservoirs. This paper presents the concepts, methods, and techniques used to develop the GSTARS series of computer models, especially GSTARS3. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.
Chanda, P; Zhang, A; Ramanathan, M
2011-10-01
To develop a model synthesis method for parsimoniously modeling gene-environmental interactions (GEI) associated with clinical outcomes and phenotypes. The AMBROSIA model synthesis approach utilizes the k-way interaction information (KWII), an information-theoretic metric capable of identifying variable combinations associated with GEI. For model synthesis, AMBROSIA considers relevance of combinations to the phenotype, it precludes entry of combinations with redundant information, and penalizes for unjustifiable complexity; each step is KWII based. The performance and power of AMBROSIA were evaluated with simulations and Genetic Association Workshop 15 (GAW15) data sets of rheumatoid arthritis (RA). AMBROSIA identified parsimonious models in data sets containing multiple interactions with linkage disequilibrium present. For the GAW15 data set containing 9187 single-nucleotide polymorphisms, the parsimonious AMBROSIA model identified nine RA-associated combinations with power >90%. AMBROSIA was compared with multifactor dimensionality reduction across several diverse models and had satisfactory power. Software source code is available from http://www.cse.buffalo.edu/DBGROUP/bioinformatics/resources.html. AMBROSIA is a promising method for GEI model synthesis.
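The KWII metric can be written as an alternating sum of joint entropies over all non-empty variable subsets; the sketch below implements one common sign convention for discrete data (in AMBROSIA itself the phenotype is included among the variables). For the XOR pattern, a purely synergistic interaction, the three-way KWII comes out to one bit:

```python
from collections import Counter
from itertools import combinations
import math

def entropy(cols):
    """Joint Shannon entropy (bits) of discrete variables given as equal-length columns."""
    counts = Counter(zip(*cols))
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def kwii(cols):
    """K-way interaction information: KWII(S) = -sum over subsets T of S of
    (-1)**(|S|-|T|) * H(T), with H(empty set) = 0."""
    k = len(cols)
    total = 0.0
    for r in range(1, k + 1):
        for subset in combinations(range(k), r):
            total += (-1) ** (k - r) * entropy([cols[i] for i in subset])
    return -total

x = [0, 0, 1, 1]
y = [0, 1, 0, 1]
z = [0, 1, 1, 0]   # z = x XOR y: information carried only by the 3-way combination
```

For two variables the formula reduces to mutual information, so the independent x and y above give zero, while adding z exposes the synergistic interaction.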
DEFF Research Database (Denmark)
Andersen, Karsten Brandt; Levinsen, Simon; Svendsen, Winnie Edith
2009-01-01
In this article we present a generalized theoretical model for the continuous separation of particles using the pinched flow fractionation method. So far, theoretical models have not been able to predict the separation of particles without the use of correction factors. In this article we present a model which is capable of predicting the separation from first principles. Furthermore, we comment on the importance of incorporating the finite height of the microfluidic channels in models describing the system behavior. We compare our model with the experiment obtained by Seki et...
Fluorescence detection by intensity change based sensors: a theoretical model.
Galbán, Javier; Delgado-Camón, Arantzazu; Cebolla, Vicente L; de Marcos, Susana; Polo, Víctor; Mateos, Elena
2012-01-01
According to Fluorescence Detection by Intensity Changes (FDIC), the fluorescence intensity of many fluorophores depends on the non-covalent (specific and/or non-specific) interactions these fluorophores establish with the solvent and, more interestingly, with other surrounding molecules. This latter effect is the basis of FDIC for analytical purposes. In this paper, a preliminary study of FDIC applications using a fluorophore supported in a solid medium (sensor film) is presented. First, a mathematical model relating the analyte concentration to the fluorescence of the immobilized fluorophore is derived. The model includes the different mechanisms explaining this relationship: modification of the index of refraction or dielectric constant, alteration of the scattering coefficient, and increase of the sensor-film volume. Then, the first experimental results are presented, using different fluorophores and solid supports. The best results were obtained using polyacrylamide (PAA) polymers and coralyne as the fluorophore. This sensor film is applied to the determination of albumin and polyethylene glycol (PEG), and the results are compared with those obtained using coralyne in solution. Albumin quenches the coralyne fluorescence in both cases (solution and film), while PEG quenches coralyne fluorescence in films but increases it in solution. These results suggest that the dominant fluorescence-change mechanism in sensor films is the increase in film volume, which differs from the mechanisms observed in solution.
Directory of Open Access Journals (Sweden)
Jeannette Bischkopf
2008-07-01
Udo KELLE adds to the debate on integrating qualitative and quantitative methods by going beyond epistemological and pragmatic arguments and placing a stronger emphasis on the area of research and the actual research questions addressed empirically in social science. The fact that social scientific research mainly involves middle-range theories poses specific methodological problems for each of the two research traditions and makes their combination inevitable. The integrative methodological program the author develops can be understood as a new guiding program for evaluating the strengths and weaknesses of qualitative as well as quantitative methods and for deciding in which area of research and for which research questions each method is best suited. The book provides a wealth of arguments for a theoretical and methodological integration of the two research traditions, but these arguments could have been made more accessible by including more examples from actual research projects. URN: urn:nbn:de:0114-fqs080386
Theoretical modelling of hot gas ingestion through turbine rim seals
Directory of Open Access Journals (Sweden)
J. Michael Owen
2012-12-01
The nozzle guide vanes create three-dimensional (3D) variations in the distribution of pressure in the mainstream annulus, and the turbine blades create unsteady effects. Computational fluid dynamics (CFD) is both time-consuming and expensive for these 3D unsteady flows, and engine designers tend to use correlations or simple models to predict ingress. This paper describes the application of simple ‘orifice models’, the analytical solutions of which can be used to calculate the sealing effectiveness of turbine rim seals. The solutions agree well with available data for externally-induced ingress, where the effects of rotation are negligible; for rotationally-induced ingress, where the effects of the external flow are small; and for combined ingress, where the effects of both external flow and rotation are significant.
Theoretical model of the helium zone plate microscope
Salvador Palau, Adrià; Bracco, Gianangelo; Holst, Bodil
2017-01-01
Neutral helium microscopy is a new technique currently under development. Its advantages are the low energy, charge neutrality, and inertness of the helium atoms, a potential large depth of field, and the fact that at thermal energies the helium atoms do not penetrate into any solid material. This opens the possibility, among others, for the creation of an instrument that can measure surface topology on the nanoscale, even on surfaces with high aspect ratios. One of the most promising designs for helium microscopy is the zone plate microscope. It consists of a supersonic expansion helium beam collimated by an aperture (skimmer) focused by a Fresnel zone plate onto a sample. The resolution is determined by the focal spot size, which depends on the size of the skimmer, the optics of the system, and the velocity spread of the beam through the chromatic aberrations of the zone plate. An important factor for the optics of the zone plate is the width of the outermost zone, corresponding to the smallest opening in the zone plate. The width of the outermost zone is fabrication limited to around 10 nm with present-day state-of-the-art technology. Due to the high ionization potential of neutral helium atoms, it is difficult to build efficient helium detectors. Therefore, it is crucial to optimize the microscope design to maximize the intensity for a given resolution and width of the outermost zone. Here we present an optimization model for the helium zone plate microscope. Assuming constant resolution and width of the outermost zone, we are able to reduce the problem to a two-variable problem (zone plate radius and object distance) and we show that for a given beam temperature and pressure, there is always a single intensity maximum. We compare our model with the highest-resolution zone plate focusing images published and show that the intensity can be increased seven times. Reducing the width of the outermost zone to 10 nm leads to an increase in intensity of more than 8000
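The optics described above can be made concrete with two standard formulas: the de Broglie wavelength λ = h/(mv), with the ideal terminal speed v = sqrt(5·kB·T0/m) of a monatomic supersonic expansion, and the first-order zone plate focal length f = d·Δr/λ. The plate diameter chosen below is a hypothetical value, not one from the paper:

```python
import math

H = 6.62607015e-34       # Planck constant, J*s
KB = 1.380649e-23        # Boltzmann constant, J/K
M_HE = 6.6464731e-27     # helium-4 mass, kg

def de_broglie_wavelength(t0_kelvin):
    """lambda = h / (m * v), with the ideal terminal speed v = sqrt(5*kB*T0/m)
    of a monatomic supersonic expansion from source temperature T0."""
    v = math.sqrt(5.0 * KB * t0_kelvin / M_HE)
    return H / (M_HE * v)

def focal_length(diameter, outermost_zone_width, wavelength):
    """First-order focal length of a Fresnel zone plate: f = d * dr / lambda."""
    return diameter * outermost_zone_width / wavelength

lam = de_broglie_wavelength(300.0)       # approx 0.056 nm for a 300 K source
f = focal_length(0.2e-3, 10e-9, lam)     # hypothetical d = 0.2 mm plate, dr = 10 nm
```

The sub-angstrom wavelength is what makes nanoscale resolution possible in principle; the short focal length at Δr = 10 nm illustrates why the plate radius and object distance become the two free variables of the optimization.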
Further Theoretical Investigations of the Trion Model of Cortical Organization
McGrann, John Vincent
The trion model represents a mathematical realization of Mountcastle's columnar organization principle of cortex. A trion represents a localized group of neurons with three levels of firing which are interconnected to form trion networks. The striking results of Fisher's ANNNI spin model were incorporated by using connections that are localized, highly structured in space and time, and competing by having a balance between excitation and inhibition. These networks have large repertoires of spatial-temporal patterns, MPs, that can be readily learned using a Hebb learning rule with only small changes in the connections. As the synaptic noise, or temperature T, is varied a series of phase transitions at precise values T(n) were found giving new repertoires of MPs, and the average time for any initial firing configuration to project onto an MP shows a quite sharp change at each T(n). Near a phase transition, in a Monte Carlo simulation, the temporal evolution wanders back and forth between sets of MPs in contrast to the more structured sequential evolutions far from a T(n). Thresholds and learning are shown to be enhanced near the transition points. The temporal dependence of the Hebb learning rule was investigated and shown to be predominantly inhibitory for correlations that are out of phase and excitatory for correlations that are in phase. This type of learning proceeds by a selectional principle which can proceed much more rapidly than instructional learning and with comparatively small changes in connectivity. A novel treatment for epilepsy is proposed and experimental support on the elimination of an epileptic focus by patterned electrical stimulation would have fundamental scientific importance in addition to the enormous clinical relevance. The information processing by these cortical networks is shown to be performing various symmetry operations as the network evolves into sequences of the various spatial-temporal patterns. Rotation, time-reversal, parity
Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.
Cobbs, Gary
2012-08-16
Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions that reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of the kinetic models are the same for solutions that are identical except possibly for their initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of
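The shared-rate-constant fitting strategy described above can be sketched numerically. The recursion below is a hypothetical saturating-efficiency stand-in, not the paper's analytic annealing-equilibrium formulae; it only illustrates fitting several curves simultaneously with common kinetics and a per-curve initial target concentration.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_qpcr(x0, k, n_cycles=40):
    """Toy qPCR recursion: efficiency falls as product accumulates.
    Illustrative stand-in only; x0 is the initial target amount and
    k a hypothetical saturation constant."""
    x, out = x0, []
    for _ in range(n_cycles):
        eff = 1.0 / (1.0 + k * x)   # amplification efficiency this cycle
        x *= (1.0 + eff)
        out.append(x)
    return np.array(out)

def residuals(log_params, curves):
    # Shared kinetics k for all curves; one initial concentration per curve.
    k, x0s = np.exp(log_params[0]), np.exp(log_params[1:])
    return np.concatenate([np.log(simulate_qpcr(x0, k)) - np.log(y)
                           for x0, y in zip(x0s, curves)])

rng = np.random.default_rng(0)
true_k, true_x0 = 1e5, [1e-6, 1e-5]
curves = [simulate_qpcr(x0, true_k) * rng.lognormal(0.0, 0.01, 40)
          for x0 in true_x0]

# One shared k, one x0 per curve, all fitted at once in log space.
fit = least_squares(residuals, x0=np.log([1e4, 1e-7, 1e-6]), args=(curves,))
k_hat = np.exp(fit.x[0])   # recovered shared rate constant
```

Fitting in log space keeps all parameters positive and handles the orders-of-magnitude spread between rate constants and initial concentrations.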
A new journal - "Theoretical Biology and Medical Modelling".
Wheatley, Denys N
2005-06-12
Biology has a conceptual basis that allows one to build models and theorize across many life sciences, including medicine and medically-related disciplines. A dearth of good venues for publication has been perceived during a period when bioinformatics, systems analysis and biomathematics are burgeoning. Steps have been taken to provide the sort of journal, with a quick turnaround time for manuscripts, which is online and freely accessible to all readers, whatever their persuasion or discipline. We have now been running for some time a journal which had many good papers presented pre-launch, and a steady stream of papers thereafter. The value of this journal as a new venue has already been vindicated. Within a short space of time, we have founded a state-of-the-art electronic journal, freely accessible to all, in a much sought-after interdisciplinary field that will be of benefit to the thinking life scientist, which must include medically qualified doctors as well as scientists who prefer to build their new hypotheses on basic principles and sound concepts underpinning biology. At the same time, these principles are not sacrosanct and require critical analysis. The journal http://www.tbiomed.com promises to deliver many exciting ideas in the future.
Description of group-theoretical model of developed turbulence
Energy Technology Data Exchange (ETDEWEB)
Saveliev, V L [Institute of Ionosphere, Almaty 050020 (Kazakhstan); Gorokhovski, M A [Laboratoire de Mecanique des Fluides et Acoustique, Ecole Centrale de Lyon, 36, Avenue Guy de Collongues, F69134 Ecully-Cedex (France)], E-mail: saveliev@topmail.kz, E-mail: mikhael.gorokhovski@ec-lyon.fr
2008-12-15
We propose to associate the phenomenon of stationary turbulence with the special self-similar solutions of the Euler equations. These solutions represent the linear superposition of eigenfields of spatial symmetry subgroup generators and imply their dependence on time through the parameter of the symmetry transformation only. From this model, it follows that for developed turbulent process, changing the scale of averaging (filtering) of the velocity field is equivalent to composition of scaling, translation and rotation transformations. We call this property a renormalization-group invariance of filtered turbulent fields. The renormalization group invariance provides an opportunity to transform the averaged Navier-Stokes equation over a small scale (inner threshold of the turbulence) to larger scales by simple scaling. From the methodological point of view, it is significant to note that the turbulent viscosity term appeared not as a result of averaging of the nonlinear term in the Navier-Stokes equation, but from the molecular viscosity term with the help of renormalization group transformation.
THEORETICAL FLOW MODEL THROUGH A CENTRIFUGAL PUMP USED FOR WATER SUPPLY IN AGRICULTURE IRRIGATION
Directory of Open Access Journals (Sweden)
SCHEAUA Fanel Dorel
2017-05-01
motion of the rotor. A theoretical model for calculating the flow of the working fluid through the interior of a centrifugal pump model is presented in this paper as well as the numerical analysis on the virtual model performed with the ANSYS CFX software in order to highlight the flow parameters and flow path-lines that are formed during centrifugal pump operation.
Theoretical models for fluid thermodynamics based on the quasi-Gaussian entropy theory
Amadei, Andrea
1998-01-01
Summary The theoretical modeling of fluid thermodynamics is one of the most challenging fields in physical chemistry. In fact, fluid behavior, except at very low densities, is still extremely difficult to model from a statistical mechanical point of view, as for any realistic model
Quantitative identification of technological discontinuities using simulation modeling
Park, Hyunseok
2016-01-01
The aim of this paper is to develop and test metrics to quantitatively identify technological discontinuities in a knowledge network. We developed five metrics based on innovation theories and tested the metrics on a simulation model-based knowledge network with a hypothetically designed discontinuity. The designed discontinuity is modeled as a node which combines two different knowledge streams and whose knowledge is dominantly persistent in the knowledge network. The performance of the proposed metrics was evaluated by how well each metric can distinguish the designed discontinuity from other nodes in the knowledge network. The simulation results show that the metric combining persistence time with the number of converging main paths (Metric 5) provides the best performance in identifying the designed discontinuity: the designed discontinuity was identified as one of the top 3 patents with 96-99% probability, which is, depending on the size of the domain, 12-34% better than the performance of the second-best metric. Beyond the simulation ...
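The idea of scoring a node by how many knowledge streams it merges and how persistently later work builds on it can be sketched on a toy citation network. The network, node names, and the product-of-factors scoring below are hypothetical illustrations, not the paper's exact Metric 5.

```python
# Toy citation network: each key cites the listed nodes. "D" merges the
# a-stream and the b-stream and is then persistently cited.
cites = {
    "a1": ["a0"], "a2": ["a1"],
    "b1": ["b0"], "b2": ["b1"],
    "D":  ["a2", "b2"],                       # combines two streams
    "c1": ["D"], "c2": ["c1"], "c3": ["c2"], "c4": ["c3"],
    "x1": ["a2"],                             # ordinary single-stream node
}
nodes = set(cites) | {v for refs in cites.values() for v in refs}

def reach(v, memo={}):
    """All nodes transitively cited by v."""
    if v not in memo:
        memo[v] = set(cites.get(v, []))
        for u in cites.get(v, []):
            memo[v] |= reach(u)
    return memo[v]

roots = {v for v in nodes if v not in cites}  # origin papers (cite nothing)

def score(v):
    convergence = len(reach(v) & roots)              # distinct streams merged
    persistence = sum(v in reach(n) for n in nodes)  # later work built on v
    return convergence * persistence

best = max(nodes, key=score)   # identifies the designed discontinuity "D"
```

Node "D" outscores every ordinary node because only it both draws on two roots and anchors a long chain of descendants.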
Fusing Quantitative Requirements Analysis with Model-based Systems Engineering
Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven
2006-01-01
A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.
Quantitative modeling of the ionospheric response to geomagnetic activity
Directory of Open Access Journals (Sweden)
T. J. Fuller-Rowell
Full Text Available A physical model of the coupled thermosphere and ionosphere has been used to determine the accuracy of model predictions of the ionospheric response to geomagnetic activity, and assess our understanding of the physical processes. The physical model is driven by empirical descriptions of the high-latitude electric field and auroral precipitation, as measures of the strength of the magnetospheric sources of energy and momentum to the upper atmosphere. Both sources are keyed to the time-dependent TIROS/NOAA auroral power index. The output of the model is the departure of the ionospheric F region from the normal climatological mean. A 50-day interval towards the end of 1997 has been simulated with the model for two cases. The first simulation uses only the electric fields and auroral forcing from the empirical models, and the second has an additional source of random electric field variability. In both cases, output from the physical model is compared with F-region data from ionosonde stations. Quantitative model/data comparisons have been performed to move beyond the conventional "visual" scientific assessment, in order to determine the value of the predictions for operational use. For this study, the ionosphere at two ionosonde stations has been studied in depth, one each from the northern and southern mid-latitudes. The model clearly captures the seasonal dependence in the ionospheric response to geomagnetic activity at mid-latitude, reproducing the tendency for decreased ion density in the summer hemisphere and increased densities in winter. In contrast to the "visual" success of the model, the detailed quantitative comparisons, which are necessary for space weather applications, are less impressive. The accuracy, or value, of the model has been quantified by evaluating the daily standard deviation, the root-mean-square error, and the correlation coefficient between the data and model predictions. The modeled quiet-time variability, or standard
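The skill measures used in the quantitative comparison (standard deviation, root-mean-square error, and correlation coefficient between data and model) can be computed as below. The series shown are hypothetical ionospheric departures, not the ionosonde data of the study.

```python
import numpy as np

def skill_scores(obs, model):
    """RMSE, Pearson correlation, and standard-deviation ratio of a
    modelled series against observations (e.g. daily F-region departures
    from the climatological mean)."""
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    corr = np.corrcoef(obs, model)[0, 1]
    std_ratio = np.std(model) / np.std(obs)   # modeled vs observed variability
    return rmse, corr, std_ratio

# Hypothetical 10-day series of ion-density departures (percent):
obs   = [-5.0, -12.0, 3.0, 8.0, -2.0, -20.0, -15.0, 4.0, 10.0, 1.0]
model = [-4.0, -10.0, 1.0, 6.0, -3.0, -16.0, -12.0, 2.0, 7.0, 0.0]
rmse, corr, std_ratio = skill_scores(obs, model)
```

A high correlation with a standard-deviation ratio below one is the typical signature the abstract describes: the model captures the seasonal pattern "visually" while underpredicting the amplitude of day-to-day variability.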
A beginner's guide to writing the nursing conceptual model-based theoretical rationale.
Gigliotti, Eileen; Manister, Nancy N
2012-10-01
Writing the theoretical rationale for a study can be a daunting prospect for novice researchers. Nursing's conceptual models provide excellent frameworks for placement of study variables, but moving from the very abstract concepts of the nursing model to the less abstract concepts of the study variables is difficult. Similar to the five-paragraph essay used by writing teachers to assist beginning writers to construct a logical thesis, the authors of this column present guidelines that beginners can follow to construct their theoretical rationale. This guide can be used with any nursing conceptual model but Neuman's model was chosen here as the exemplar.
Automated quantitative gait analysis in animal models of movement disorders
Directory of Open Access Journals (Sweden)
Vandeputte Caroline
2010-08-01
Full Text Available Abstract Background Accurate and reproducible behavioral tests in animal models are of major importance in the development and evaluation of new therapies for central nervous system disease. In this study we investigated for the first time gait parameters of rat models for Parkinson's disease (PD), Huntington's disease (HD) and stroke using the Catwalk method, a novel automated gait analysis test. Static and dynamic gait parameters were measured in all animal models, and these data were compared to readouts of established behavioral tests, such as the cylinder test in the PD and stroke rats and the rotarod test for the HD group. Results Hemiparkinsonian rats were generated by unilateral injection of the neurotoxin 6-hydroxydopamine in the striatum or in the medial forebrain bundle. For Huntington's disease, a transgenic rat model expressing a truncated huntingtin fragment with multiple CAG repeats was used. Thirdly, a stroke model was generated by a photothrombotically induced infarct in the right sensorimotor cortex. We found that multiple gait parameters were significantly altered in all three disease models compared to their respective controls. Behavioural deficits could be efficiently measured using the cylinder test in the PD and stroke animals, and in the case of the PD model, the deficits in gait essentially confirmed results obtained by the cylinder test. However, in the HD model and the stroke model the Catwalk analysis proved more sensitive than the rotarod test and also added new and more detailed information on specific gait parameters. Conclusion The automated quantitative gait analysis test may be a useful tool to study both motor impairment and recovery associated with various neurological motor disorders.
Quantitative Modeling of Human-Environment Interactions in Preindustrial Time
Sommer, Philipp S.; Kaplan, Jed O.
2017-04-01
Quantifying human-environment interactions and anthropogenic influences on the environment prior to the Industrial Revolution is essential for understanding the current state of the earth system. This is particularly true for the terrestrial biosphere, but marine ecosystems and even climate were likely modified by human activities centuries to millennia ago. Direct observations are however very sparse in space and time, especially as one considers prehistory. Numerical models are therefore essential to produce a continuous picture of human-environment interactions in the past. Agent-based approaches, while widely applied to quantifying human influence on the environment in localized studies, are unsuitable for global spatial domains and Holocene timescales because of computational demands and large parameter uncertainty. Here we outline a new paradigm for the quantitative modeling of human-environment interactions in preindustrial time that is adapted to the global Holocene. Rather than attempting to simulate agency directly, the model is informed by a suite of characteristics describing those things about society that cannot be predicted on the basis of environment, e.g., diet, presence of agriculture, or range of animals exploited. These categorical data are combined with the properties of the physical environment in a coupled human-environment model. The model is, at its core, a dynamic global vegetation model with a module for simulating crop growth that is adapted for preindustrial agriculture. This allows us to simulate yield and calories for feeding both humans and their domesticated animals. We couple this basic caloric availability with a simple demographic model to calculate potential population, and, constrained by labor requirements and land limitations, we create scenarios of land use and land cover on a moderate-resolution grid. We further implement a feedback loop where anthropogenic activities lead to changes in the properties of the physical
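The yield-to-calories-to-population-to-land feedback loop described above can be caricatured in a few lines. All numbers below are hypothetical orders of magnitude chosen for illustration, not values from the model.

```python
def equilibrium_population(max_cropland_km2,
                           yield_kcal_per_km2=3.0e8,    # hypothetical preindustrial yield
                           need_kcal_per_yr=8.0e5,      # hypothetical per-capita demand
                           land_per_person_km2=0.005):  # hypothetical tillage limit
    """Iterate the yield -> calories -> potential population -> cropland
    loop until it settles. Population grows while labor can bring more
    land under cultivation, then saturates at the land limit."""
    pop = 100.0                                          # small founding group
    for _ in range(200):
        cropland = min(pop * land_per_person_km2, max_cropland_km2)  # land constraint
        pop = cropland * yield_kcal_per_km2 / need_kcal_per_yr       # caloric cap
    return pop
```

With these placeholder numbers each person's cultivable plot feeds more than one person, so the population expands until the land limit binds; the equilibrium is then `max_cropland * yield / need`.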
Scholz, Stefan; Graf von der Schulenburg, Johann-Matthias; Greiner, Wolfgang
2015-11-17
Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for the physicians' decision on office allocation, covering demand-side factors and a consumption time function. To test the propositions following the theoretical model, generalized linear models were estimated to explain differences in 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. Evidence in favor of the first three propositions of the theoretical model could be found. Specialists show a stronger association to higher populated districts than GPs. Although indicators for regional preferences are significantly correlated with physician density, their coefficients are not as high as population density. If regional disparities should be addressed by political actions, the focus should be to counteract those parameters representing physicians' preferences in over- and undersupplied regions.
Simple control-theoretic models of human steering activity in visually guided vehicle control
Hess, Ronald A.
1991-01-01
A simple control theoretic model of human steering or control activity in the lateral-directional control of vehicles such as automobiles and rotorcraft is discussed. The term 'control theoretic' is used to emphasize the fact that the model is derived from a consideration of well-known control system design principles as opposed to psychological theories regarding egomotion, etc. The model is employed to emphasize the 'closed-loop' nature of tasks involving the visually guided control of vehicles upon, or in close proximity to, the earth and to hypothesize how changes in vehicle dynamics can significantly alter the nature of the visual cues which a human might use in such tasks.
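The closed-loop character of visually guided steering can be sketched with a toy simulation. This is an illustrative feedback law on assumed dynamics, not the model developed in the paper: the "driver" feeds back lateral position and rate cues, and the vehicle is idealized as a double integrator from steering command to lateral position.

```python
# Hypothetical gains and an idealized vehicle; for illustration only.
dt, K, Kd = 0.02, 4.0, 3.0       # step size and driver feedback gains
y, v = 1.0, 0.0                  # 1 m initial lane offset, zero lateral rate
for _ in range(1000):            # 20 s of simulated driving
    steer = -K * y - Kd * v      # visual-cue feedback: position and rate
    v += steer * dt              # lateral acceleration from steering
    y += v * dt                  # lateral position update
# After 20 s the offset has decayed essentially to zero: the loop is stable.
```

Changing the assumed vehicle dynamics changes which cues (position, rate, or higher derivatives) the feedback law must use to remain stable, which echoes the paper's point that vehicle dynamics alter the visual cues a human needs.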
A THEORETICAL MODEL OF SUPPORTING OPEN SOURCE FRONT END INNOVATION THROUGH IDEA MANAGEMENT
DEFF Research Database (Denmark)
Aagaard, Annabeth
2013-01-01
to overcome these various challenges companies are looking for new models to support FEI. This theoretical paper explores in what way idea management may be applied as a tool in facilitation of front end innovation and how this facilitation may be captured in a conceptual model. First, I show through...... a literature study, how idea management and front end innovation are related and how they may support each other. Secondly, I present a theoretical model of how idea management may be applied in support of the open source front end of new product innovations. Thirdly, I present different venues of further...
Quantitative genetic models of sexual selection by male choice.
Nakahashi, Wataru
2008-09-01
There are many examples of male mate choice for female traits that tend to be associated with high fertility. I develop quantitative genetic models of a female trait and a male preference to show when such a male preference can evolve. I find that a disagreement between the fertility maximum and the viability maximum of the female trait is necessary for directional male preference (preference for extreme female trait values) to evolve. Moreover, when there is a shortage of available male partners or variance in male nongenetic quality, strong male preference can evolve. Furthermore, I also show that males evolve to exhibit a stronger preference for females that are more feminine (less resemblance to males) than the average female when there is a sexual dimorphism caused by fertility selection which acts only on females.
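The interplay the abstract describes, a female trait pulled between its viability optimum, its fertility optimum, and a directional male preference, can be illustrated with a simple deterministic recursion. This is a generic quantitative-genetic caricature, not the paper's model; all parameter values are hypothetical.

```python
def trait_change(z, z_v=0.0, z_f=1.0, s_v=0.1, s_f=0.1, a=0.2, G=0.5):
    """One-generation change in mean female trait z: stabilizing viability
    selection toward z_v, fertility selection toward z_f, directional male
    preference of strength a, additive genetic variance G (all hypothetical)."""
    beta = s_v * (z_v - z) + s_f * (z_f - z) + a   # net selection gradient
    return G * beta                                 # response to selection

z = 0.0
for _ in range(200):
    z += trait_change(z)
# z settles at (s_v*z_v + s_f*z_f + a) / (s_v + s_f) = 1.5: with a > 0 the
# male preference pushes the trait past the viability/fertility compromise
# of 0.5, exaggerating the disagreement between the two optima.
```

Setting `a = 0` recovers the compromise between the two optima, which is one way to see why a disagreement between the fertility and viability maxima matters for the evolution of directional preference.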
Energy Technology Data Exchange (ETDEWEB)
Lind, M. [Oersted - DTU, Kgs. Lyngby (Denmark)
2005-10-01
Multilevel Flow Modeling (MFM) has proven to be an effective modeling tool for reasoning about plant failure and control strategies and is currently exploited for operator support in diagnosis and on-line alarm analysis. Previous MFM research was focussed on representing goals and functions of process plants which generate, transform and distribute mass and energy. However, only limited consideration has been given to the problems of modeling the control systems. Control functions are indispensable for operating any industrial plant, but modeling of control system functions has proven to be a more challenging problem than modeling functions of energy and mass processes. The problems were discussed by Lind, and tentative solutions have been proposed but not investigated in depth until recently, partly due to the lack of an appropriate theoretical foundation. The purposes of the present report are to show that such a theoretical foundation for modeling goals and functions of control systems can be built from the concepts and theories of action developed by Von Wright, and to show how this foundation can be used to extend MFM with concepts for modeling control systems. The theoretical foundation has been presented in detail elsewhere by the present author, without the particular focus on modeling control actions and MFM adopted here. (au)
Cross-Cultural Teamwork in End User Computing: A Theoretical Model.
Bento, Regina F.
1995-01-01
Presents a theoretical model explaining how cultural influences may affect the open, dynamic system of a cross-cultural, end-user computing team. Discusses the relationship between cross-cultural factors and various parts of the model such as: input variables, the system itself, outputs, and implications for the management of such teams. (JKP)
Direct Simulation Monte Carlo for Atmospheric Entry. 1. Theoretical Basis and Physical Models
2009-09-01
example the Cercignani-Lampis-Lord (CLL) model [52]. Such models tend to have a stronger theoretical basis, like using a reciprocity relation, and...by J.H. de Leeuw, Academic Press, New York, 1966, Vol. 2, p. 505. [52] Lord, R.G., "Some Extensions to the Cercignani-Lampis Gas Scattering Kernel
Quantitative Modelling of Trace Elements in Hard Coal.
Smoliński, Adam; Howaniec, Natalia
2016-01-01
The significance of coal in the world economy has remained unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three models constructed. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools for coal quality assessment.
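The core of the Partial Least Squares approach can be sketched with the classical NIPALS algorithm for a single response. This plain version omits the robust down-weighting of outliers the study required; the synthetic "coal" data are hypothetical and noise-free so that the coefficients are recovered exactly.

```python
import numpy as np

def pls1_coefficients(X, y, n_comp):
    """PLS1 regression coefficients via NIPALS. X (n x p) and y (n,)
    must be mean-centered. With n_comp equal to the predictor rank and
    noise-free y, this reproduces the least-squares solution."""
    Xr, yr = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)        # weight vector
        t = Xr @ w                    # latent score
        tt = t @ t
        p = Xr.T @ t / tt             # X loading
        c = (yr @ t) / tt             # y loading
        Xr = Xr - np.outer(t, p)      # deflate X
        yr = yr - c * t               # deflate y
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

# Hypothetical coal data: 4 physico-chemical predictors, one trace element.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4))
beta_true = np.array([0.8, -0.3, 0.5, 0.1])
y = X @ beta_true                     # noise-free for a clean check
Xc, yc = X - X.mean(0), y - y.mean()
beta_hat = pls1_coefficients(Xc, yc, n_comp=4)
```

In practice one would choose `n_comp` by cross-validation (the root mean square error of cross-validation quoted in the abstract) rather than using the full rank.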
Quantitative model for the generic 3D shape of ICMEs at 1 AU
Démoulin, P.; Janvier, M.; Masías-Meza, J. J.; Dasso, S.
2016-10-01
Context. Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs), while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, that is, along a 1D cut. The data therefore only give a partial view of the 3D structures of ICMEs. Aims: By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. Methods: In a first approach we theoretically obtained the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compared their compatibility with observed distributions. In a second approach we used the shock normal and the flux rope axis orientations together with the impact parameter to provide statistical information across the spacecraft trajectory. Results: The study of different 3D shock models shows that the observations are compatible with a shock that is symmetric around the Sun-apex line as well as with an asymmetry up to an aspect ratio of around 3. Moreover, flat or dipped shock surfaces near their apex can only be rare cases. Next, the sheath thickness and the ICME velocity have no global trend along the ICME front. Finally, regrouping all these new results and those of our previous articles, we provide a quantitative ICME generic 3D shape, including the global shape of the shock, the sheath, and the flux rope. Conclusions: The obtained quantitative generic ICME shape will have implications for several aims. For example, it constrains the output of typical ICME numerical simulations. It is also a base for studying the transport of high-energy solar and cosmic particles during an ICME propagation as well as for modeling and forecasting space weather conditions near Earth.
Grünkorn, Juliane; Belzen, Annette Upmeier zu; Krüger, Dirk
2014-07-01
Research in the field of students' understandings of models and their use in science describes different frameworks concerning these understandings. Currently, there is no conjoint framework that combines these structures and so far, no investigation has focused on whether it reflects students' understandings sufficiently (empirical evaluation). Therefore, the purpose of this article is to present the results of an empirical evaluation of a conjoint theoretical framework. The theoretical framework integrates relevant research findings and comprises five aspects which are subdivided into three levels each: nature of models, multiple models, purpose of models, testing, and changing models. The study was conducted with a sample of 1,177 seventh to tenth graders (aged 11-19 years) using open-ended items. The data were analysed by identifying students' understandings of models (nature of models and multiple models) and their use in science (purpose of models, testing, and changing models), and comparing as well as assigning them to the content of the theoretical framework. A comprehensive category system of students' understandings was thus developed. Regarding the empirical evaluation, the students' understandings of the nature and the purpose of models were sufficiently described by the theoretical framework. Concerning the understandings of multiple, testing, and changing models, additional initial understandings (only one model possible, no testing of models, and no change of models) need to be considered. This conjoint and now empirically tested framework for students' understandings can provide a common basis for future science education research. Furthermore, evidence-based indications can be provided for teachers and their instructional practice.
Melanoma screening: Informing public health policy with quantitative modelling.
Directory of Open Access Journals (Sweden)
Stephen Gilmore
Full Text Available Australia and New Zealand share the highest incidence rates of melanoma worldwide. Despite the substantial increase in public and physician awareness of melanoma in Australia over the last 30 years, a result of the publicly funded mass media campaigns that began in the early 1980s, mortality has steadily increased during this period. This increased mortality has led investigators to question the relative merits of primary versus secondary prevention; that is, sensible sun exposure practices versus early detection. Increased melanoma vigilance on the part of the public and among physicians has resulted in large increases in public health expenditure, primarily from screening costs and increased rates of office surgery. Has this attempt at secondary prevention been effective? Unfortunately, epidemiologic studies addressing the causal relationship between the level of secondary prevention and mortality are prohibitively difficult to implement; it is currently unknown whether increased melanoma surveillance reduces mortality, and if so, whether such an approach is cost-effective. Here I address the issue of secondary prevention of melanoma with respect to incidence and mortality (and cost per life saved) by developing a Markov model of melanoma epidemiology based on Australian incidence and mortality data. The advantages of developing a methodology that can determine constraint-based surveillance outcomes are twofold: first, it can address the issue of effectiveness; and second, it can quantify the trade-off between cost and utilisation of medical resources on one hand, and reduced morbidity and lives saved on the other. With respect to melanoma, implementing the model facilitates the quantitative determination of the relative effectiveness and trade-offs associated with different levels of secondary and tertiary prevention, both retrospectively and prospectively. For example, I show that the surveillance enhancement that began in
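The structure of a Markov model of screening can be sketched as below. States, transition probabilities, and the detection-rate effect of surveillance are all hypothetical placeholders, not the Australian-calibrated parameters of the study.

```python
import numpy as np

# Yearly five-state chain (hypothetical rates): 0 healthy, 1 undetected
# early melanoma, 2 undetected late melanoma, 3 detected/treated, 4 dead.
def melanoma_deaths(extra_detection, years=50):
    d = 0.3 + extra_detection             # early-detection probability per year
    T = np.array([
        [0.999, 0.001,   0.0, 0.0, 0.0],
        [0.0,   0.8 - d, 0.2, d,   0.0],  # undetected early: progress or detect
        [0.0,   0.0,     0.3, 0.4, 0.3],  # undetected late: high mortality
        [0.0,   0.0,     0.0, 1.0, 0.0],  # treated (absorbing)
        [0.0,   0.0,     0.0, 0.0, 1.0],  # dead (absorbing)
    ])
    p = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
    for _ in range(years):
        p = p @ T                         # propagate the cohort one year
    return p[4]                           # cumulative melanoma mortality

baseline = melanoma_deaths(0.0)           # no screening programme
screened = melanoma_deaths(0.3)           # enhanced surveillance
lives_saved_per_100k = (baseline - screened) * 1e5
```

Dividing an assumed programme cost by `lives_saved_per_100k` gives a cost-per-life-saved figure, which is the kind of constraint-based trade-off the abstract argues the model can quantify.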
Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S; Breen, Lauren J; Witt, Regina R; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin
2016-01-01
Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework, the Health Services Workplace Environmental Resilience Model (HSWERM), is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously published theoretical model of individual resilience, which identified the key constructs of psychological resilience as self-efficacy, coping and mindfulness, but did not examine environmental factors in the workplace that promote nurses' resilience. This unified theoretical framework was developed through a literature synthesis drawing on data from international studies and literature reviews on the nursing workforce in hospitals. The most frequent workplace environmental factors were identified, extracted and clustered in alignment with the key constructs of psychological resilience. Six major organizational concepts emerged that related to a positive resilience-building workplace and formed the foundation of the theoretical model. Three concepts related to nursing staff support (professional, practice, personal) and three related to nursing staff development (professional, practice, personal) within the workplace environment. The unified theoretical model incorporates these concepts within the workplace context, linking to the nurse and then impacting on personal resilience and workplace outcomes; its use has the potential to increase staff retention and the quality of patient care.
Expectancy-Violation and Information-Theoretic Models of Melodic Complexity
Directory of Open Access Journals (Sweden)
Tuomas Eerola
2016-07-01
The present study assesses two types of models of melodic complexity: one based on expectancy violations and the other on an information-theoretic account of redundancy in music. Seven different datasets spanning artificial sequences, folk and pop songs were used to refine and assess the models. The refinement eliminated unnecessary components from both types of models. The final analysis pitted three variants of the two model types against each other; together they could explain 46-74% of the variance in the ratings across the datasets. The most parsimonious models were identified with an information-theoretic criterion, which suggested that the simplified expectancy-violation models were the most efficient for these sets of data. However, the differences between all optimized models were subtle in terms of both performance and simplicity.
Hill, T. W.
1994-01-01
The unexpected patterns of high-latitude auroral luminosity and ionospheric convection that are observed when the interplanetary magnetic field (IMF) has a northward orientation have inspired a variety of theoretical interpretations. The existing models, all referring to steady-state conditions, can be classified according to the topology of the polar magnetic field lines and of the polar-cap convection streamlines. The classes of model include: (1) a closed magnetosphere model, (2) a conventional open model with a distorted, but topologically unchanged, polar-cap boundary, (3) a conventional open model with distorted, but topologically unchanged, polar-cap convection cells, (4) a modified open model with 'lobe convection cells' contained wholly on open magnetic-field lines, and (5) a modified open model with a bifurcated polar cap. The third and fourth types require significant regions of sunward flow on open polar-cap field lines, a concept that presents serious theoretical difficulties. The other three types appear equally viable from a theoretical point of view, and the comparison against observations is an ongoing enterprise. Outstanding theoretical questions include (a) how do observed structures in the polar ionosphere map along magnetic field lines into the magnetosphere?, (b) what is the mechanism that drives the observed sunward convection at highest latitudes on the day side?, and (c) what role does time dependence play in the observed phenomena?
Dostanić, J; Lončarević, D; Zlatar, M; Vlahović, F; Jovanović, D M
2016-10-05
A series of arylazo pyridone dyes was synthesized by changing the type of substituent group in the diazo moiety, ranging from strongly electron-donating to strongly electron-withdrawing groups. The structural and electronic properties of the investigated dyes were calculated at the M062X/6-31+G(d,p) level of theory. The observed good linear correlations between atomic charges and Hammett σp constants provided a basis to discuss the transmission of electronic substituent effects through the dye framework. The reactivity of the synthesized dyes was tested through their decolorization efficiency in a TiO2 photocatalytic system (Degussa P-25). Quantitative structure-activity relationship analysis revealed a strong correlation between the reactivity of the investigated dyes and the Hammett substituent constants: the reaction was facilitated by electron-withdrawing groups and retarded by electron-donating ones. Quantum mechanical calculations were used to describe the mechanism of the photocatalytic oxidation reactions of the investigated dyes and to interpret their reactivities within the framework of density functional theory (DFT). According to DFT-based reactivity descriptors, i.e. Fukui functions and local softness, the active site moves from the azo nitrogen atom linked to the benzene ring to the pyridone carbon atom linked to the azo bond, going from dyes with electron-donating groups to dyes with electron-withdrawing groups. Copyright © 2016 Elsevier B.V. All rights reserved.
Quantitative Modeling of the Alternative Pathway of the Complement System.
Zewde, Nehemiah; Gorham, Ronald D; Dorado, Angel; Morikis, Dimitrios
2016-01-01
The complement system is an integral part of innate immunity that detects and eliminates invading pathogens through a cascade of reactions. The destructive effects of complement activation on host cells are inhibited through versatile regulators that are present in plasma and bound to membranes. Impairment in the capacity of these regulators to function properly results in autoimmune diseases. To better understand the delicate balance between complement activation and regulation, we have developed a comprehensive quantitative model of the alternative pathway. Our model incorporates a system of ordinary differential equations that describes the dynamics of the four steps of the alternative pathway under physiological conditions: (i) initiation (fluid phase), (ii) amplification (surfaces), (iii) termination (pathogen), and (iv) regulation (host cell and fluid phase). We have examined complement activation and regulation on different surfaces, using the cellular dimensions of a characteristic bacterium (E. coli) and host cell (human erythrocyte). In addition, we have incorporated neutrophil-secreted properdin into the model, highlighting the cross talk of neutrophils with the alternative pathway in coordinating innate immunity. Our study yields a series of time-dependent response data for all alternative pathway proteins, fragments, and complexes. We demonstrate the robustness of the alternative pathway on the surface of pathogens, where complement components were able to saturate the entire region in about 54 minutes while occupying less than one percent of the host cell surface over the same time period. Our model reveals that tight regulation of complement starts in the fluid phase, where propagation of the alternative pathway was inhibited through the dismantlement of fluid-phase convertases. Our model also depicts the intricate role that properdin released from neutrophils plays in initiating and propagating the alternative pathway during bacterial infection.
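A minimal sketch of the modelling style the abstract describes: mass-action ordinary differential equations integrated over time, reduced here to a toy two-species activation/regulation loop (a C3-like precursor activated into a C3b-like fragment that a regulator inactivates). The species names and rate constants are illustrative assumptions, not the paper's fitted parameters.

```python
# Toy activation/regulation loop integrated with forward Euler.
# Rate constants and time scale are illustrative assumptions only.
def simulate(c3=1.0, c3b=0.0, k_act=0.05, k_reg=0.2, dt=0.01, t_end=100.0):
    steps = int(t_end / dt)
    for _ in range(steps):
        prod = k_act * c3      # convertase-like activation of the precursor
        decay = k_reg * c3b    # regulator-mediated inactivation of the fragment
        c3 += -prod * dt       # forward-Euler update of the precursor pool
        c3b += (prod - decay) * dt
    return c3, c3b

c3_final, c3b_final = simulate()  # precursor is nearly exhausted by t_end
```

The full model in the paper couples many such equations, one per protein, fragment, and complex, and changes the rate terms by compartment (fluid phase, pathogen surface, host cell surface).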
Cassani, Stefano; Kovarich, Simona; Papa, Ester; Roy, Partha Pratim; van der Wal, Leon; Gramatica, Paola
2013-08-15
Due to their chemical properties, synthetic triazoles and benzo-triazoles ((B)TAZs) are mainly distributed to the water compartments of the environment, and because of their wide use their potential effects on aquatic organisms are a cause of concern. Non-testing approaches such as those based on quantitative structure-activity relationships (QSARs) are valuable tools to maximize the information contained in existing experimental data and to predict missing information while minimizing animal testing. In the present study, externally validated QSAR models for the prediction of acute (B)TAZ toxicity in Daphnia magna and Oncorhynchus mykiss have been developed according to the principles for the validation of QSARs and their acceptability for regulatory purposes proposed by the Organisation for Economic Co-operation and Development (OECD). These models are based on theoretical molecular descriptors, and are statistically robust, externally predictive and characterized by a verifiable structural applicability domain. They have been applied to predict acute toxicity for over 300 (B)TAZs without experimental data, many of which are on the pre-registration list of the REACH regulation. Additionally, a model based on quantitative activity-activity relationships (QAAR) has been developed, which allows for interspecies extrapolation from daphnids to fish. The importance of QSAR/QAAR, especially when dealing with specific chemical classes like (B)TAZs, for the screening and prioritization of pollutants under REACH has been highlighted. Copyright © 2013 Elsevier B.V. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Dostanić, J., E-mail: jasmina@nanosys.ihtm.bg.ac.rs [University of Belgrade, Institute of Chemistry, Technology and Metallurgy, Department of Catalysis and Chemical Engineering, Njegoševa 12, 11000 Belgrade (Serbia); Lončarević, D. [University of Belgrade, Institute of Chemistry, Technology and Metallurgy, Department of Catalysis and Chemical Engineering, Njegoševa 12, 11000 Belgrade (Serbia); Zlatar, M. [University of Belgrade, Institute of Chemistry, Technology and Metallurgy, Department of Chemistry, Njegoševa 12, 11000 Belgrade (Serbia); Vlahović, F. [University of Belgrade, Innovation center of the Faculty of Chemistry, 11000 Belgrade (Serbia); Jovanović, D.M. [University of Belgrade, Institute of Chemistry, Technology and Metallurgy, Department of Catalysis and Chemical Engineering, Njegoševa 12, 11000 Belgrade (Serbia)
2016-10-05
Highlights: • Electronic effects of para-substituted arylazo pyridone dyes. • Linear relationship between Hammett σ{sub p} constants and dye photoreactivity. • The photocatalytic reactions are facilitated by electron-acceptors and retarded by electron-donors. • Fukui functions analyze the reactivity of concurrent sites within a molecule. • Hydroxyl radical attack proceeds from two reaction sites, depending on the substituent type. - Abstract: A series of arylazo pyridone dyes was synthesized by changing the type of substituent group in the diazo moiety, ranging from strongly electron-donating to strongly electron-withdrawing groups. The structural and electronic properties of the investigated dyes were calculated at the M062X/6-31+G(d,p) level of theory. The observed good linear correlations between atomic charges and Hammett σ{sub p} constants provided a basis to discuss the transmission of electronic substituent effects through the dye framework. The reactivity of the synthesized dyes was tested through their decolorization efficiency in a TiO{sub 2} photocatalytic system (Degussa P-25). Quantitative structure-activity relationship analysis revealed a strong correlation between the reactivity of the investigated dyes and the Hammett substituent constants: the reaction was facilitated by electron-withdrawing groups and retarded by electron-donating ones. Quantum mechanical calculations were used to describe the mechanism of the photocatalytic oxidation reactions of the investigated dyes and to interpret their reactivities within the framework of density functional theory (DFT). According to DFT-based reactivity descriptors, i.e. Fukui functions and local softness, the active site moves from the azo nitrogen atom linked to the benzene ring to the pyridone carbon atom linked to the azo bond, going from dyes with electron-donating groups to dyes with electron-withdrawing groups.
User profile modeling for building recommendation systems: a theoretical study and state of the art
Directory of Open Access Journals (Sweden)
BARTH, F. J.
2010-12-01
The goal of this tutorial is to describe and synthesize the concepts and techniques used in the design of recommendation systems that deal with user profiles. The development of such recommendation systems requires solving two subproblems: (i) the creation and maintenance of user profiles, and (ii) the appropriate use of those profiles. This work is a theoretical tutorial on the subject, useful for people interested in the theoretical foundations of user profile modeling and recommendation systems. The text presents illustrative diagrams that summarize the main components used in the modeling of user profiles.
Multifocality and recurrence risk: a quantitative model of field cancerization.
Foo, Jasmine; Leder, Kevin; Ryser, Marc D
2014-08-21
Primary tumors often emerge within genetically altered fields of premalignant cells that appear histologically normal but have a high chance of progression to malignancy. Clinical observations have suggested that these premalignant fields pose high risks for emergence of recurrent tumors if left behind after surgical removal of the primary tumor. In this work, we develop a spatio-temporal stochastic model of epithelial carcinogenesis, combining cellular dynamics with a general framework for multi-stage genetic progression to cancer. Using the model, we investigate how various properties of the premalignant fields depend on microscopic cellular properties of the tissue. In particular, we provide analytic results for the size-distribution of the histologically undetectable premalignant fields at the time of diagnosis, and investigate how the extent and the geometry of these fields depend upon key groups of parameters associated with the tissue and genetic pathways. We also derive analytical results for the relative risks of local vs. distant secondary tumors for different parameter regimes, a critical aspect for the optimal choice of post-operative therapy in carcinoma patients. This study contributes to a growing literature seeking to obtain a quantitative understanding of the spatial dynamics in cancer initiation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Towards a quantitative model of the post-synaptic proteome.
Sorokina, Oksana; Sorokin, Anatoly; Armstrong, J Douglas
2011-10-01
The postsynaptic compartment of the excitatory glutamatergic synapse contains hundreds of distinct polypeptides with a wide range of functions (signalling, trafficking, cell-adhesion, etc.). Structural dynamics in the post-synaptic density (PSD) are believed to underpin cognitive processes. Although functionally and morphologically diverse, PSD proteins are generally enriched with specific domains, which precisely define the mode of clustering essential for signal processing. We applied a stochastic calculus of domain binding provided by a rule-based modelling approach to formalise the highly combinatorial signalling pathway in the PSD and perform the numerical analysis of the relative distribution of protein complexes and their sizes. We specified the combinatorics of protein interactions in the PSD by rules, taking into account protein domain structure, specific domain affinity and relative protein availability. With this model we interrogated the critical conditions for the protein aggregation into large complexes and distribution of both size and composition. The presented approach extends existing qualitative protein-protein interaction maps by considering the quantitative information for stoichiometry and binding properties for the elements of the network. This results in a more realistic view of the postsynaptic proteome at the molecular level.
A note on the derivation of theoretical autocovariances for ARMA models
McKenzie, Edward
1984-01-01
Derivation of the theoretical autocovariances of an ARMA model is important for a number of purposes associated with the estimation and testing of the model. One common algorithm, due to McLeod (1975), involves solving a system of linear equations. By deriving the determinant of the matrix of coefficients in these equations, we can ascertain the behaviour of the algorithm with respect to the stationarity of the ARMA model. Supported by the Naval Postgraduate School Foundation...
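The autocovariances the note discusses can be written down directly for the ARMA(1,1) special case; the sketch below uses the standard closed-form solution of the linear equations (the general ARMA(p,q) case requires solving McLeod's full system). Note how the stationarity condition |phi| < 1 enters through the 1 - phi^2 denominator, which is the behaviour the determinant analysis addresses.

```python
def arma11_autocov(phi, theta, sigma2=1.0, max_lag=5):
    """Theoretical autocovariances of a stationary ARMA(1,1) process
    X_t = phi*X_{t-1} + e_t + theta*e_{t-1}, with Var(e_t) = sigma2.
    Closed-form special case of solving McLeod's linear system."""
    assert abs(phi) < 1.0, "stationarity requires |phi| < 1"
    gamma0 = sigma2 * (1 + theta**2 + 2 * phi * theta) / (1 - phi**2)
    gamma1 = sigma2 * (1 + phi * theta) * (phi + theta) / (1 - phi**2)
    gammas = [gamma0, gamma1]
    for _ in range(2, max_lag + 1):
        gammas.append(phi * gammas[-1])  # gamma_k = phi * gamma_{k-1} for k >= 2
    return gammas

g = arma11_autocov(0.5, 0.3)  # autocovariances at lags 0..5
```

With theta = 0 the formulas reduce to the familiar AR(1) result gamma_k = sigma2 * phi^k / (1 - phi^2), a quick consistency check.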
Hauser, H.; Melikhov, Yevgen; Jiles, David
2007-01-01
Two recent theoretical hysteresis models (the Jiles-Atherton model and the energetic model) are examined with respect to their capability to describe the dependence of the magnetization on magnetic field, microstructure, and anisotropy. It is shown that the classical Rayleigh law for the behavior of magnetization at low fields and the Stoner-Wohlfarth theory of domain magnetization rotation in noninteracting magnetic single-domain particles can be considered as limiting cases of a more general theore...
Quantitative phase-field modeling for boiling phenomena.
Badillo, Arnoldo
2012-10-01
A phase-field model is developed for quantitative simulation of bubble growth in the diffusion-controlled regime. The model accounts for phase change and surface tension effects at the liquid-vapor interface of pure substances with large property contrast. The derivation of the model follows a two-fluid approach, where the diffuse interface is assumed to have an internal microstructure defined by a sharp interface. Although the phases within the diffuse interface are considered to have their own velocities and pressures, an averaging procedure at the atomic scale allows all the constitutive equations to be expressed in terms of mixture quantities. From the averaging procedure and asymptotic analysis of the model, nonconventional terms appear in the energy and phase-field equations to compensate for the variation of the properties across the diffuse interface. Without these new terms, no convergence towards the sharp-interface model can be attained. The asymptotic analysis also revealed a very small thermal capillary length for real fluids, such as water, which makes it impossible for conventional phase-field models to capture bubble growth in the millimeter size range. For instance, important phenomena such as bubble growth and detachment from a hot surface could not be simulated due to the large number of grid points required to resolve all the scales. Since the shape of the liquid-vapor interface is primarily controlled by the effects of an isotropic surface energy (surface tension), a solution involving the elimination of the curvature from the phase-field equation is devised. The elimination of the curvature from the phase-field equation changes the length scale dominating the phase change from the thermal capillary length to the thickness of the thermal boundary layer, which is several orders of magnitude larger. A detailed analysis of the phase-field equation revealed that a split of this equation into two independent parts is possible for system sizes
DEFF Research Database (Denmark)
Poulsen, Lars; Jazdzyk, M; Communal, J.-E.
2007-01-01
process is modeled by a Monte Carlo approach including homo and hetero transfer steps with multi-acceptor distribution. In this dense system, the classical Förster point-dipole approach for energy transfer breaks down, and the hopping rates are therefore calculated on the basis of a quantum...
Theoretical-empirical model of the steam-water cycle of the power unit
Directory of Open Access Journals (Sweden)
Grzegorz Szapajko
2010-06-01
The diagnostics of energy conversion systems' operation is realised by collecting, processing, evaluating and analysing the measurement signals. The result of the analysis is the determination of the process state, which requires the use of thermal process models. Construction of an analytical model with auxiliary empirical functions built in brings satisfying results. The paper presents a theoretical-empirical model of the steam-water cycle. The worked-out mathematical simulation model contains partial models of the turbine, the regenerative heat exchangers and the condenser. Statistical verification of the model is presented.
Quantitative property-structural relation modeling on polymeric dielectric materials
Wu, Ke
Nowadays, polymeric materials have attracted more and more attention in dielectric applications, but searching for a material with desired properties is still largely based on trial and error. To facilitate the development of new polymeric materials, heuristic models built using Quantitative Structure-Property Relationship (QSPR) techniques can provide reliable "working solutions". In this thesis, the application of QSPR to polymeric materials is studied from two angles: descriptors and algorithms. A novel set of descriptors, called infinite chain descriptors (ICD), is developed to encode the chemical features of pure polymers. ICD is designed to eliminate the uncertainty of polymer conformations and the inconsistency of molecular representations of polymers. Models for the dielectric constant, band gap, dielectric loss tangent and glass transition temperature of organic polymers are built with high prediction accuracy. Two new algorithms, the physics-enlightened learning method (PELM) and multi-mechanism detection, are designed to deal with two typical challenges in material QSPR. PELM is a meta-algorithm that uses classic physical theory as guidance to construct the candidate learning function. It shows better out-of-domain prediction accuracy than a classic machine learning algorithm (the support vector machine). Multi-mechanism detection is built on a cluster-weighted mixing model similar to a Gaussian mixture model: the idea is to separate the data into subsets, each of which can be modeled by a much simpler model. The case study on glass transition temperature shows that this method can provide better overall prediction accuracy even though less data is available for each subset model. In addition, the techniques developed in this work are also applied to polymer nanocomposites (PNC). PNC are new materials with outstanding dielectric properties. As a key factor in determining the dispersion state of nanoparticles in the polymer matrix
A quantitative model of cellular elasticity based on tensegrity.
Stamenović, D; Coughlin, M F
2000-02-01
A tensegrity structure composed of six struts interconnected with 24 elastic cables is used as a quantitative model of the steady-state elastic response of cells, with the struts and cables representing microtubules and actin filaments, respectively. The model is stretched uniaxially and the Young's modulus (E0) is obtained from the initial slope of the stress versus strain curve of an equivalent continuum. It is found that E0 is directly proportional to the pre-existing tension in the cables (or compression in the struts) and inversely proportional to the square of the cable (or strut) length. This relationship is used to predict the upper and lower bounds of E0 of cells, assuming that the cable tension equals the yield force of actin (approximately 400 pN) for the upper bound, and that the strut compression equals the critical buckling force of microtubules for the lower bound. The cable (or strut) length is determined from the assumption that model dimensions match the diameter of probes used in standard mechanical tests on cells. Predicted values are compared to reported data for the Young's modulus of various cells. If the probe diameter is greater than or equal to 3 microns, these data are closer to the lower bound than to the upper bound. This, in turn, suggests that microtubules of the CSK carry initial compression that exceeds their critical buckling force (order of 10^0-10^1 pN) but is much smaller than the yield force of actin. If the probe diameter is less than or equal to 2 microns, experimental data fall outside the region defined by the upper and lower bounds.
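The scaling relation reported in the abstract (E0 proportional to the prestress force and inversely proportional to the squared element length) can be sketched numerically. The prefactor, probe-matched length, and microtubule flexural rigidity below are illustrative assumptions, not the paper's values.

```python
import math

# E0 = c * F / L^2; the dimensionless prefactor c depends on model geometry
# and is set to 1 here as an arbitrary placeholder (not the paper's value).
def youngs_modulus(force_N, length_m, c=1.0):
    return c * force_N / length_m**2

L = 3e-6  # element length matched to a 3-micron probe diameter (assumption)

# Upper bound: cable tension at the actin yield force (~400 pN, per abstract).
upper = youngs_modulus(400e-12, L)

# Lower bound: strut compression at the Euler buckling force of a microtubule,
# F_cr = pi^2 * EI / L^2, with flexural rigidity EI ~ 2e-23 N m^2
# (a literature-range assumption, not from this paper).
f_buckle = math.pi**2 * 2e-23 / L**2
lower = youngs_modulus(f_buckle, L)
```

Even with placeholder numbers, the two bounds differ by roughly an order of magnitude, which is the window the abstract compares against measured cell moduli.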
Pei, Jian; Xie, Tao-Rong; Yan, Zhe; Chen, Shu-De; Qiao, Deng Jiang
2011-06-01
Recently, biological effects induced by weak electromagnetic fields have become a public concern. Our previous study found co-effects of temperature and electromagnetic fields on insulin conformation. Therefore, in the present study, Raman spectroscopy was employed to investigate the secondary structure changes of the insulin molecule induced by pulsed electric field (PEF) exposure at various temperatures. Changes in the alpha-helix content of insulin were obtained. Then, a protein helix-random coil transition model was used to study the experimental results quantitatively. The theoretical model could account for the effect of PEF on the alpha-helix content of insulin at different temperatures. The protein secondary structure transitions from helix to random coil when evoked by PEF exposure and changes in the thermodynamic environment, which could explain the decline in the alpha-helix content of insulin caused by PEF exposure combined with rising temperature. The results offer an experimental basis and a theoretical reference for further study of the mechanism of nonthermal effects of weak electromagnetic fields on the secondary structure of biological molecules.
Apparel shopping behaviour – Part 1: Towards the development of a conceptual theoretical model
Directory of Open Access Journals (Sweden)
R Du Preez
2003-10-01
Apparel shopping behaviour in a multicultural society is a complex phenomenon. The objective of this paper is to analyse various theoretical models from two disciplines, namely Consumer Behaviour and Clothing, and to develop a new conceptual theoretical model focusing on the variables that influence apparel shopping behaviour in a multicultural consumer society. Variables are grouped as market-dominated, consumer-dominated, and/or market-consumer interaction variables. Retailers, marketers, educators, researchers and students could benefit from the proposed model, and recommendations are made in this regard. Part 2 reports on an empirical study based on the proposed conceptual theoretical model and discusses market segments and profiles.
Quantitative modelling and analysis of a Chinese smart grid: a stochastic model checking case study
DEFF Research Database (Denmark)
Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming
2014-01-01
that require novel methods and applications. One of the important issues in this context is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese smart grid implementation as a case study and address the verification problem for performance and energy consumption. We employ a stochastic model checking approach and present our modelling and analysis study using the PRISM model checker.
Quantitative Model for Supply Chain Visibility: Process Capability Perspective
Directory of Open Access Journals (Sweden)
Youngsu Lee
2016-01-01
Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives for achieving operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods for increasing supply chain visibility are still ambiguous. Building on the extant research on supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model that uses the Z score from Six Sigma methodology to evaluate and improve the level of supply chain visibility.
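A hedged sketch of how a process-capability Z score could quantify visibility in the spirit of this abstract; the metric, specification limit, and data below are invented for illustration and are not the paper's exact model.

```python
from statistics import NormalDist

def visibility_z(mean, stdev, lsl):
    """Capability-style Z score for a visibility metric with a lower
    specification limit, e.g. a minimum acceptable share of orders whose
    status is traceable. Illustrative reading of the Six Sigma approach."""
    z = (mean - lsl) / stdev
    conforming = NormalDist().cdf(z)  # share of periods expected to meet the spec
    return z, conforming

# Hypothetical data: 92% of orders traceable on average, sd 3%, spec limit 85%.
z, ok = visibility_z(mean=0.92, stdev=0.03, lsl=0.85)
```

Raising the mean or tightening the variation of the visibility metric raises Z, giving a single number on which improvement efforts can be compared.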
Models of the Bilingual Lexicon and Their Theoretical Implications for CLIL
Heine, Lena
2014-01-01
Although many advances have been made in recent years concerning the theoretical dimensions of content and language integrated learning (CLIL), research still needs to produce integrative models that adequately map the interrelation between content and language learning in CLIL contexts. This article will suggest that…
Mumcu, Hayal Yavuz
2016-01-01
The purpose of this theoretical study is to explore the relationships between the concepts of using mathematics in the daily life, mathematical applications, mathematical modelling, and mathematical literacy. As these concepts are generally taken as independent concepts in the related literature, they are confused with each other and it becomes…
Aquino, Katherine C.
2016-01-01
Disability is often viewed as an obstacle to postsecondary inclusion, but not a characteristic of student diversity. Additionally, current theoretical frameworks isolate disability from other student diversity characteristics. In response, a new conceptual framework, the Disability-Diversity (Dis)Connect Model (DDDM), was created to address…
Theoretical investigations of the new Cokriging method for variable-fidelity surrogate modeling
DEFF Research Database (Denmark)
Zimmermann, Ralf; Bertram, Anna
2017-01-01
Cokriging is a variable-fidelity surrogate modeling technique which emulates a target process based on the spatial correlation of sampled data of different levels of fidelity. In this work, we address two theoretical questions associated with the so-called new Cokriging method for variable fidelity...
A Game-Theoretic Model of Grounding for Referential Communication Tasks
Thompson, William
2009-01-01
Conversational grounding theory proposes that language use is a form of rational joint action, by which dialog participants systematically and collaboratively add to their common ground of shared knowledge and beliefs. Following recent work applying "game theory" to pragmatics, this thesis develops a game-theoretic model of grounding that…
Piper, Llewellyn E
2006-01-01
This article proposes a theoretical model that leaders can use to address organizational human conflict and disruptive behavior in health care organizations. Leadership is needed to improve interpersonal relationships within the workforce. A workforce with a culture of internal conflict will be unable to achieve its full potential to deliver quality patient care.
Scheepers, P.L.H.; Felling, A.; Peters, J.
1990-01-01
To explain ethnocentrism in the Netherlands, a classic model derived from the theoretical notions of prominent members of the Frankfurt School is updated and tested with data from a national sample of Dutch respondents (N = 1799). It appears that authoritarianism is a far more important predictor of
Scientific-theoretical principles of the functioning of insurance and funded pension system models
Directory of Open Access Journals (Sweden)
L. Khachumova
2015-01-01
Full Text Available Retracted article. The main value of a pension system anywhere in the world is to improve the living standards of citizens in old age and to replace income by smoothing consumption. The article reveals the scientific and theoretical apparatus and the operation of insurance and funded pension system models at the country level.
Validation of a Theoretical Model of Diagnostic Classroom Assessment: A Mixed Methods Study
Koh, Nancy
2012-01-01
The purpose of the study was to validate a theoretical model of diagnostic, formative classroom assessment called "Proximal Assessment for Learner Diagnosis" (PALD). To achieve this purpose, the study employed a two-stage, mixed-methods design, drawing on multiple data sources from 11 elementary-level mathematics teachers who…
Testing process predictions of models of risky choice: a quantitative model comparison approach.
Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard
2013-01-01
This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called "similarity." In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies.
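The priority heuristic's three lexicographic steps for two-outcome gain gambles can be sketched as follows. The gamble encoding, the 1/10-of-the-maximum-gain aspiration level and the 0.1 probability aspiration follow the published description by Brandstätter et al. (2006), but the function itself is an illustrative reconstruction, not the authors' code.

```python
def priority_heuristic(a, b):
    """Choose between two gain gambles encoded as
    (min_outcome, p_min, max_outcome, p_max)."""
    aspiration = 0.1 * max(a[2], b[2])  # 1/10 of the largest gain
    # Step 1: compare minimum gains.
    if abs(a[0] - b[0]) >= aspiration:
        return a if a[0] > b[0] else b
    # Step 2: compare probabilities of the minimum gains (aspiration: 0.1).
    if abs(a[1] - b[1]) >= 0.1:
        return a if a[1] < b[1] else b
    # Step 3: choose the gamble with the higher maximum gain.
    return a if a[2] >= b[2] else b
```

For example, facing "4000 with p = .8, else 0" versus "3000 for sure", the heuristic stops at step 1 and picks the sure gamble, because the difference in minimum gains (3000) exceeds one tenth of the largest gain (400).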
Papa, Ester; Villa, Fulvio; Gramatica, Paola
2005-01-01
The use of Quantitative Structure-Activity Relationships in assessing the potential negative effects of chemicals plays an important role in ecotoxicology. (LC50)(96h) in Pimephales promelas (Duluth database) is widely modeled as an aquatic toxicity end-point. The aim of this study was to compare different molecular descriptors in the development of new statistically validated QSAR models to predict the aquatic toxicity of chemicals classified according to their MOA, as well as in a single general model. The applied multiple linear regression approach (ordinary least squares) is based on a variety of theoretical molecular descriptors (1D, 2D, and 3D, from the DRAGON package, and some calculated logP). The best combination of modeling descriptors was selected by the Genetic Algorithm-Variable Subset Selection procedure. The robustness and the predictive performance of the proposed models were verified using both internal (cross-validation by LOO, bootstrap, Y-scrambling) and external statistical validations (by splitting the original data set into training and validation sets by Kohonen-artificial neural networks (K-ANN)). The model applicability domain (AD) was checked by the leverage approach to verify prediction reliability.
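The leverage approach to the applicability domain mentioned above can be sketched with NumPy. The warning threshold h* = 3p'/n is the commonly used cut-off; the exact settings of the study are not given here, so both the threshold and the sample matrix below are assumptions for illustration.

```python
import numpy as np

def leverages(X):
    """Hat-matrix diagonal h_i = x_i (X'X)^-1 x_i' for a descriptor
    matrix X (n samples x p columns, including an intercept column)."""
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    return np.diag(H)

def in_applicability_domain(X, warning=None):
    """Flag samples whose leverage stays below the cut-off h* = 3p'/n."""
    n, p = X.shape
    hstar = 3.0 * p / n if warning is None else warning
    return leverages(X) <= hstar
```

A structural outlier (a compound far from the descriptor space of the training set) gets a leverage above h* and its prediction is treated as an unreliable extrapolation.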
Monroe, Scott M.; Mineka, Susan
2008-01-01
Our commentary was intended to stimulate discussion about what we perceive to be shortcomings of the mnemonic model and its research base, in the hope of shedding some light on key questions for understanding posttraumatic stress disorder (PTSD). In our view, Berntsen, Rubin, and Bohni have responded only to what they perceive to be shortcomings…
δ-Cut Decision-Theoretic Rough Set Approach: Model and Attribute Reductions
Directory of Open Access Journals (Sweden)
Hengrong Ju
2014-01-01
Full Text Available Decision-theoretic rough set is a quite useful rough set model that introduces decision costs into the probabilistic approximations of the target. However, Yao's decision-theoretic rough set is based on the classical indiscernibility relation, which may be too strict in many applications. To solve this problem, a δ-cut decision-theoretic rough set is proposed, based on the δ-cut quantitative indiscernibility relation. Furthermore, with respect to the criteria of decision monotonicity and cost decrease, two different algorithms are designed to compute reducts. The comparison between these two algorithms shows the following: (1) with respect to the original data set, the reducts based on the decision-monotonicity criterion generate more rules supported by the lower approximation region and fewer rules supported by the boundary region, so the uncertainty that comes from the boundary region can be decreased; (2) with respect to the reducts based on the decision-monotonicity criterion, the reducts based on the cost-minimum criterion obtain the lowest decision costs and the largest approximation qualities. This study suggests potential application areas and new research trends concerning rough set theory.
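A minimal sketch of the underlying idea, assuming a single numeric attribute, a δ-cut neighborhood defined by |v(x) − v(y)| ≤ δ, and fixed thresholds (α, β) on the conditional probability; the paper's actual relation and its cost-derived thresholds may differ.

```python
def delta_neighborhood(values, i, delta):
    """Objects δ-indiscernible from object i on a numeric attribute."""
    return {j for j, v in enumerate(values) if abs(values[i] - v) <= delta}

def dtrs_approximations(values, target, delta, alpha, beta):
    """Probabilistic lower/upper approximations of a target concept:
    lower = {x : Pr(target | N(x)) >= alpha},
    upper = {x : Pr(target | N(x)) >  beta}."""
    lower, upper = set(), set()
    for i in range(len(values)):
        nbr = delta_neighborhood(values, i, delta)
        p = len(nbr & target) / len(nbr)  # nbr always contains i
        if p >= alpha:
            lower.add(i)
        if p > beta:
            upper.add(i)
    return lower, upper
```

The boundary region is `upper - lower`; the decision-monotonicity criterion in the abstract aims to shrink exactly that set.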
Agha Mohammad Ali Kermani, Mehrdad; Fatemi Ardestani, Seyed Farshad; Aliahmadi, Alireza; Barzinpour, Farnaz
2017-01-01
Influence maximization deals with the identification of the most influential nodes in a social network given an influence model. In this paper, a game-theoretic framework is developed that models a competitive influence maximization problem. A novel competitive influence model is additionally proposed that incorporates user heterogeneity, message content, and network structure. The proposed game-theoretic model is solved for its Nash equilibrium on a real-world dataset. It is shown that none of the well-known strategies is stable and that at least one player has an incentive to deviate from the proposed strategy. Moreover, a player who deviates from the Nash equilibrium strategy receives a reduced payoff. Contrary to previous works, our results demonstrate that graph topology, as well as the nodes' sociability and initial tendency measures, has an effect on the determination of the influential nodes in the network.
Theoretical modelling of semiconductor surfaces microscopic studies of electrons and photons
Srivastava, G P
1999-01-01
The state-of-the-art theoretical studies of ground state properties, electronic states and atomic vibrations for bulk semiconductors and their surfaces by the application of the pseudopotential method are discussed. Studies of bulk and surface phonon modes have been extended by the application of the phenomenological bond charge model. The coverage of the material, especially of the rapidly growing and technologically important topics of surface reconstruction and chemisorption, is up-to-date and beyond what is currently available in book form. Although theoretical in nature, the book provides
INTRODUCTION: Theoretical Models as Mass Media Practice: Perspectives from the West
DEFF Research Database (Denmark)
Thomsen, Line
2007-01-01
What is journalism? How does it exist and why? How does journalism define itself, and in what ways can we make use of looking theoretically at its practice? These were the central themes of our workshop, Theoretical Models as Mass Media Practice, held at the 'Minding the Gap' conference at Reuters Institute in May 2007, from which this collection of papers has been selected. As with the other workshops during the conference, the majority of our panellists were themselves once media practitioners. It is my opinion that this background and inside knowledge of the field in itself can provide...
Theoretic model of myocardial revascularization by far infrared laser and experimental validation
Luo, Le; Chen, Xing; Zhang, Ting; Zong, Ren-He; Deng, Shan-Xi
2009-03-01
A theoretical model of myocardial revascularization by a far-infrared laser has been established, and a quantitative relationship between the aperture of the laser channel and the laser parameters has been derived from thermodynamics and the laws governing the interaction of far-infrared laser light with myocardium. Experiments on carbon dioxide laser revascularization of porcine myocardium were performed for different laser powers and irradiation times. The relative errors between the experimental results and the theoretical computations range from 13% to 22%. The causes of these errors have been studied in detail.
The LIFE Model: A Meta-Theoretical Conceptual Map for Applied Positive Psychology
Lomas, Tim; Hefferon, Kate; Ivtzan, Itai
2014-01-01
Since its emergence in 1998, positive psychology has flourished. Among its successes is the burgeoning field of applied positive psychology (APP), involving interventions to promote wellbeing. However, the remit of APP is currently unclear. As such, we offer a meta-theoretical conceptual map delineating the terrain that APP might conceivably cover, namely, the Layered Integrated Framework Example model. The model is based on Wilber’s (J Conscious Stud 4(1):71–92, 1997) Integral Framework, whi...
Family Leadership: Constructing and Testing a Theoretical Model of Family Well-Being
Galbraith, Kevin A.
2000-01-01
Leadership in organizational contexts has received considerable attention through the years. Although much is known about what constitutes effective leadership in an organizational setting, little is known about leadership as it pertains to the family. To address this limitation, a theoretical model of family leadership was developed. This model draws on transformational leadership and proposes five areas in which leadership could be carried out to lead and strengthen the family unit. These f...
Decision support models for solid waste management: Review and game-theoretic approaches
Energy Technology Data Exchange (ETDEWEB)
Karmperis, Athanasios C., E-mail: athkarmp@mail.ntua.gr [Sector of Industrial Management and Operational Research, School of Mechanical Engineering, National Technical University of Athens, Iroon Polytechniou 9, 15780 Athens (Greece); Army Corps of Engineers, Hellenic Army General Staff, Ministry of Defence (Greece); Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios [Sector of Industrial Management and Operational Research, School of Mechanical Engineering, National Technical University of Athens, Iroon Polytechniou 9, 15780 Athens (Greece)
2013-05-15
Highlights: ► The mainly used decision support frameworks for solid waste management are reviewed. ► The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. ► The game-theoretic approach in a solid waste management context is presented. ► The waste management bargaining game is introduced as a specific decision support framework. ► Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in solid waste management. Most models are developed within three decision support frameworks: life-cycle assessment, cost–benefit analysis and multi-criteria decision-making. These frameworks are reviewed, their strengths, weaknesses and critical issues are analyzed, and their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used to model and analyze decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable only when it considers not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework within which future models can be developed.
Establishment and validation for the theoretical model of the vehicle airbag
Zhang, Junyuan; Jin, Yang; Xie, Lizhe; Chen, Chao
2015-05-01
The current design and optimization of the occupant restraint system (ORS) are based on numerous physical tests and mathematical simulations. Although effective and accurate, these two methods are too time-consuming and complex for the concept design phase of the ORS, so a fast and direct design and optimization method is needed at that stage. Since the airbag system is a crucial part of the ORS, this paper establishes a theoretical model of the vehicle airbag in order to clarify the interaction between occupants and airbags, and a fast design and optimization method for airbags in the concept design phase is then built on the proposed theoretical model. First, a theoretical expression of the simplified mechanical relationship between the airbag's design parameters and the occupant response is developed based on classical mechanics; the momentum theorem and the ideal gas state equation are then adopted to describe that relationship. Using MATLAB, an iterative algorithm with discrete variables is applied to solve the proposed theoretical model for random inputs within a certain scope. Validations with MADYMO software confirm the validity and accuracy of this theoretical model for two principal design parameters, the inflated gas mass and the vent diameter, within a regular range. This research contributes to a deeper comprehension of the relation between occupants and airbags and to a fast design and optimization method for the airbag's principal parameters in the concept design phase, and it provides the range of the airbag's initial design parameters for the subsequent CAE simulations and physical tests.
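The two physical ingredients named above, the ideal gas state equation and the momentum theorem, can be sketched numerically. The specific gas constant and all numbers are illustrative assumptions, not parameters from the paper's model.

```python
R_GAS = 287.0  # J/(kg*K): specific gas constant, assuming an air-like inflator gas

def bag_pressure(gas_mass_kg, bag_volume_m3, temperature_k):
    """Ideal gas state equation p = m*R*T/V (absolute pressure, Pa)."""
    return gas_mass_kg * R_GAS * temperature_k / bag_volume_m3

def mean_restraint_force(occupant_mass_kg, delta_v_ms, ride_down_time_s):
    """Momentum theorem F_avg * dt = m * dv, solved for the average force
    the airbag must supply to arrest the occupant."""
    return occupant_mass_kg * delta_v_ms / ride_down_time_s
```

For instance, 0.1 kg of hot gas in a 60 L bag yields a pressure on the order of a few bar, and stopping a 75 kg occupant from 15 m/s in 0.1 s requires an average force of about 11 kN; the paper's iterative solution couples such relations with the vent outflow over time.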
Directory of Open Access Journals (Sweden)
Maurice H. ter Beek
2015-04-01
Full Text Available We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLan with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLan) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLan semantics based on discrete-time Markov chains. The Maude implementation of PFLan is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average cost of products.
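The essence of statistical model checking over a discrete-time Markov chain — estimating a quantitative property from many simulated runs rather than by exhaustive state-space exploration — can be sketched as follows. The three-state chain and its transition probabilities are invented for illustration and are unrelated to the PFLan case study.

```python
import random

# Hypothetical 3-state DTMC: 0 = ok, 1 = degraded, 2 = malfunction (absorbing).
P = [[0.90, 0.08, 0.02],
     [0.50, 0.30, 0.20],
     [0.00, 0.00, 1.00]]

def reaches_malfunction(steps, rng):
    """Simulate one run of at most `steps` transitions from state 0."""
    state = 0
    for _ in range(steps):
        state = rng.choices(range(3), weights=P[state])[0]
        if state == 2:
            return True
    return False

def estimate_reach_prob(steps, runs, seed=42):
    """Monte Carlo estimate of P(reach malfunction within `steps` steps)."""
    rng = random.Random(seed)
    return sum(reaches_malfunction(steps, rng) for _ in range(runs)) / runs
```

Tools like MultiVeStA automate exactly this loop, adding confidence-interval-based stopping criteria instead of a fixed number of runs.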
DEFF Research Database (Denmark)
ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto
2015-01-01
We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLAN) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLAN semantics based on discrete-time Markov chains. The Maude implementation of PFLAN is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average cost of products.
Energy Technology Data Exchange (ETDEWEB)
Ahlroth, S.
2001-01-01
This licentiate thesis tries to bridge the gap between theoretical and practical studies in the field of environmental accounting. In the paper, I develop an optimal control theory model for adjusting NDP for the effects of SO{sub 2} and NO{sub x} emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulfur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner.
Qin, Ning; Wen, John Z.; Ren, Carolyn L.
2017-04-01
This is the second part of a two-part study on a partially miscible liquid-liquid flow (carbon dioxide and deionized water) that is highly pressurized and confined in a microfluidic T-junction. In the first part of this study, we reported experimental observations of the development of flow regimes under various flow conditions and the quantitative characteristics of the drop flow, including the drop length, the after-generation drop speed, and the development of the periodic spacing between an emerging drop and the newly produced one. Here in part II we provide theoretical justifications for our quantitative studies of the drop flow by considering (1) CO2 hydration at the interface with water, (2) the diffusion-controlled dissolution of CO2 molecules in water, and (3) the diffusion distance of the dissolved CO2 molecules. Our analyses show that (1) the CO2 hydration at the interface is overall negligible, (2) a saturation scenario of the dissolved CO2 molecules in the vicinity of the interface will not be reached within the contact time between the two fluids, and (3) molecular diffusion does play a role in transferring the dissolved molecules, but the diffusion distance is very limited compared with the channel geometry. In addition, mathematical models for the drop length and the drop spacing are developed based on the observations in part I, and their predictions are compared to our experimental results.
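The scaling argument behind point (3), that the diffusion penetration depth is small compared with the channel, can be sketched with the standard estimate L ≈ √(Dt). The diffusivity value and contact time below are typical literature figures, not the paper's numbers.

```python
import math

D_CO2_WATER = 1.9e-9  # m^2/s: typical diffusivity of CO2 in water near 25 C (assumed)

def diffusion_length(diffusivity, contact_time):
    """Characteristic 1-D diffusion penetration depth, L ~ sqrt(D * t)."""
    return math.sqrt(diffusivity * contact_time)
```

For a contact time of 10 ms the penetration depth is only a few micrometres, which is why dissolved CO2 stays confined to a thin layer near the interface relative to a channel tens or hundreds of micrometres wide.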
Directory of Open Access Journals (Sweden)
Stefan Melanie I
2010-06-01
Full Text Available Background: Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description: BioModels Database (http://www.ebi.ac.uk/biomodels/) is aimed at addressing exactly these needs. It is a freely accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large-scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions: BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to
Theoretical modelling of photoactive molecular systems: insights using the Density Functional Theory
Energy Technology Data Exchange (ETDEWEB)
Ciofini, I.; Adamo, C. [Ecole Nationale Superieure de Chimie de Paris, Lab. d' Electrochimie et Chimie Analytique, CNRS UMR 7575, 75 - Paris (France); Laine, Ph.P. [Universite Rene-Descartes, Lab. de Chimie et Biochimie Pharmacologiques et Toxicologiques, CNRS UMR 8601, 75 - Paris (France); Bedioui, F. [Ecole Nationale Superieure de Chimie de Paris, Lab. de Pharmacologie Chimique et Genetique, CNRS FRE 2463 and INSERM U 640, 75 - Paris (France); Daul, C.A. [Fribourg Univ., Dept. de Chimie (Switzerland)
2006-02-15
An account is given of the performance of a modern and efficient approach to Density Functional Theory (DFT) for predicting the photophysical behavior of a series of Ru(II) and Os(II) complexes. The time-dependent DFT method was used to interpret their electronic spectra. Two different types of compounds have been analyzed: (1) a complex undergoing light-induced isomerization of one of its coordination bonds; (2) inorganic dyads expected to undergo intramolecular photoinduced electron transfer to form a charge-separated (CS) state. Besides the noticeable quantitative agreement between computed and experimental absorption spectra, our results allow us to clarify, from first principles, both the nature of the excited states and the photochemical behavior of these complex systems, thus underlining the predictive character of the theoretical approach. (authors)
Measuring and Managing Value Co-Creation Process: Overview of Existing Theoretical Models
Directory of Open Access Journals (Sweden)
Monika Skaržauskaitė
2013-08-01
Full Text Available Purpose — to provide a holistic view of the concept of value co-creation and of existing models for measuring and managing it, by conducting a theoretical analysis of scientific literature sources aimed at integrating various approaches. The most important and relevant results of the literature study are presented, with a focus on the changed roles of organizations and consumers. The article aims to contribute theoretically to the research stream on measuring the co-creation of value, in order to gain knowledge for improving organizational performance and enabling new and innovative means of value creation. Design/methodology/approach. The nature of this research is exploratory: a theoretical analysis and synthesis of scientific literature sources aimed at integrating various approaches was performed. This approach was chosen due to the absence of established theory on models of co-creation, their possible uses in organizations and a systematic overview of tools measuring, or suggesting how to measure, co-creation. Findings. While the principles of managing and measuring co-creation with regard to consumer motivation and involvement are widely researched, little attempt has been made to identify critical factors and create models dealing with the organizational capabilities and managerial implications of value co-creation. A systematic analysis of the literature revealed a gap not only in empirical research concerning the organization's role in the co-creation process, but at the theoretical and conceptual levels, too. Research limitations/implications. The limitations of this work as a literature review lie in its nature: the complete reliance on previously published research papers and the availability of these studies. For a deeper understanding of co-creation management and for developing models that can be used in real-life organizations, broader theoretical, as well as empirical, research is necessary. Practical implications. Analysis of the
Developing a theoretical maintenance model for disordered eating in Type 1 diabetes.
Treasure, J; Kan, C; Stephenson, L; Warren, E; Smith, E; Heller, S; Ismail, K
2015-12-01
According to the literature, eating disorders are an increasing problem for more than a quarter of people with Type 1 diabetes and they are associated with accentuated diabetic complications. The clinical outcomes in this group when given standard eating disorder treatments are disappointing. The Medical Research Council guidelines for developing complex interventions suggest that the first step is to develop a theoretical model. To review existing literature to build a theoretical maintenance model for disordered eating in people with Type 1 diabetes. The literature in diabetes relating to models of eating disorder (Fairburn's transdiagnostic model and the dual pathway model) and food addiction was examined and assimilated. The elements common to all eating disorder models include weight/shape concern and problems with mood regulation. The predisposing traits of perfectionism, low self-esteem and low body esteem and the interpersonal difficulties from the transdiagnostic model are also relevant to diabetes. The differences include the use of insulin mismanagement to compensate for breaking eating rules and the consequential wide variations in plasma glucose that may predispose to 'food addiction'. Eating disorder symptoms elicit emotionally driven reactions and behaviours from others close to the individual affected and these are accentuated in the context of diabetes. The next stage is to test the assumptions within the maintenance model with experimental medicine studies to facilitate the development of new technologies aimed at increasing inhibitory processes and moderating environmental triggers. © 2015 The Authors. Diabetic Medicine © 2015 Diabetes UK.
Schuberth, Bernhard S. A.
2017-04-01
One of the major challenges in studies of Earth's deep mantle is to bridge the gap between geophysical hypotheses and observations. The biggest dataset available for investigating the nature of mantle flow is the body of recorded seismic waveforms. On the other hand, numerical models of mantle convection can nowadays be simulated routinely for Earth-like parameters, and modern thermodynamic mineralogical models allow us to translate the predicted temperature field into seismic structures. The great benefit of the mineralogical models is that they provide the full non-linear relation between temperature and seismic velocities and thus ensure a consistent conversion in terms of magnitudes. This opens the possibility of quantitative assessments of the theoretical predictions. The often-adopted comparison between geodynamic and seismic models is unsuitable in this respect owing to the effects of damping, limited resolving power and non-uniqueness inherent to tomographic inversions. The most relevant issue, however, is related to wavefield effects that reduce the magnitude of seismic signals (e.g., traveltimes of waves), a phenomenon called wavefront healing. Over the past couple of years, we have developed an approach that takes the next step towards a quantitative assessment of geodynamic models and that enables us to test the underlying geophysical hypotheses directly against seismic observations. It is based solely on forward modelling and warrants a physically correct treatment of the seismic wave equation without theoretical approximations. Fully synthetic 3-D seismic wavefields are computed using a spectral element method for 3-D seismic structures derived from mantle flow models. This way, synthetic seismograms are generated independent of any seismic observations. Furthermore, through the wavefield simulations, it is possible to relate the magnitude of lateral temperature variations in the dynamic flow simulations directly to body-wave traveltime residuals. The
Energy transfer in photosynthesis: experimental insights and quantitative models
van Grondelle, R.; Novoderezhkin, V.
2006-01-01
We overview experimental and theoretical studies of energy transfer in the photosynthetic light-harvesting complexes LH1, LH2, and LHCII performed during the past decade since the discovery of high-resolution structure of these complexes. Experimental findings obtained with various spectroscopic
Comparison of selected theoretical models of bubble formation and experimental results
Directory of Open Access Journals (Sweden)
Rząsa Mariusz R.
2014-06-01
Full Text Available Designers of all types of equipment applied in oxygenation and aeration need to understand the mechanism behind gas bubble formation. This paper presents a measurement method used to determine the parameters of bubbles forming at a jet attachment, from which the bubbles are displaced upward. The measuring system is based on an optical tomograph with five projections. Images from the tomograph capture the shapes of the forming bubbles and allow their volumes and formation rate to be determined. Additionally, this paper presents selected theoretical models known from the literature. The measurement results have been compared with the predictions of these simple theoretical models. The paper also contains a study of the potential to apply the presented method to the determination of bubble structures and the observation of intermediate states.
Theoretical model for the mechanical behavior of prestressed beams under torsion
Directory of Open Access Journals (Sweden)
Sérgio M.R. Lopes
2014-12-01
Full Text Available In this article, a global theoretical model previously developed and validated by the authors for reinforced concrete beams under torsion is reviewed and corrected in order to predict the global behavior of beams under torsion with uniform longitudinal prestress. These corrections are based on the introduction of prestress factors and on the modification of the equilibrium equations in order to incorporate the contribution of the prestressing reinforcement. The theoretical results obtained with the new model are compared with some available results of prestressed concrete (PC beams under torsion found in the literature. The results obtained in this study validate the proposed computing procedure to predict the overall behavior of PC beams under torsion.
A Theoretical Bayesian Game Model for the Vendor-Retailer Relation
Directory of Open Access Journals (Sweden)
Emil CRIŞAN
2012-06-01
Full Text Available We consider a balanced supply chain with two equal partners, a vendor and a retailer (also called a newsboy-type product supply chain). The actions of each partner are driven by profit. Given that external influences specific to the supply chain level affect costs and, correspondingly, profit, we use a game-theoretic model of the situation that considers costs and demand. At the theoretical level, symmetric and asymmetric information patterns are considered for this situation. At every level of the supply chain there are situations in which external factors (such as inflation or raw-material rates) influence the position of each partner, even if information is well shared within the chain. The model we propose considers both the external factors and asymmetric information within a supply chain.
Boonlert Watjatrakul
2011-01-01
Mobile marketing through mobile messaging service has shown highly impressive growth, as it enables e-business firms to communicate with their customers effectively. Educational institutions have therefore started using this service to enhance communication with their students. Previous studies, however, provide a limited understanding of applying mobile messaging service in education. This study proposes a theoretical model to understand the drivers of students' intentions to use the universit...
A Transformative Model for Undergraduate Quantitative Biology Education
Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.
2010-01-01
The "BIO2010" report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3)…
Theoretical modeling and experimental analysis of solar still integrated with evacuated tubes
Panchal, Hitesh; Awasthi, Anuradha
2017-06-01
In the present research work, theoretical modeling of a single-slope, single-basin solar still integrated with evacuated tubes has been performed based on energy balance equations. Major variables such as water temperature, inner glass cover temperature and distillate output have been computed from the theoretical model. The experimental setup was made from locally available materials and installed at Gujarat Power Engineering and Research Institute, Mehsana, Gujarat, India (23.5880°N, 72.3693°E) with 0.04 m water depth over a six-month interval. From the series of experiments, a considerable increase in the average distillate output of the solar still was found when integrated with evacuated tubes, not only during the daytime but also at night. In all experimental cases, the coefficient of correlation (r) and the root mean square percentage deviation (e) between theoretical modeling and experiment showed good agreement, with 0.97 < r < 0.98 and 10.22% < e < 38.4%, respectively.
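The agreement measures quoted above (coefficient of correlation r and root-mean-square percentage deviation e) can be computed directly from paired theoretical and experimental series. A small sketch with placeholder data, not the study's measurements:

```python
import math

# Agreement metrics from the abstract: coefficient of correlation r and
# root-mean-square percentage deviation e. The paired series below are
# illustrative placeholders, not the study's measurements.
theoretical = [3.1, 3.4, 2.9, 3.8, 3.5]    # e.g. modelled distillate, L/day
experimental = [3.0, 3.3, 3.1, 3.6, 3.2]   # e.g. measured distillate, L/day

n = len(theoretical)
mt = sum(theoretical) / n
me = sum(experimental) / n
cov = sum((t - mt) * (x - me) for t, x in zip(theoretical, experimental))
var_t = sum((t - mt) ** 2 for t in theoretical)
var_e = sum((x - me) ** 2 for x in experimental)
r = cov / math.sqrt(var_t * var_e)          # Pearson correlation

# e: root mean square of the percentage deviation of model from experiment
e = math.sqrt(sum(((t - x) / x * 100.0) ** 2
                  for t, x in zip(theoretical, experimental)) / n)
print(round(r, 3), round(e, 2))
```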
The interrogation decision-making model: A general theoretical framework for confessions.
Yang, Yueran; Guyll, Max; Madon, Stephanie
2017-02-01
This article presents a new model of confessions referred to as the interrogation decision-making model. This model provides a theoretical umbrella with which to understand and analyze suspects' decisions to deny or confess guilt in the context of a custodial interrogation. The model draws upon expected utility theory to propose a mathematical account of the psychological mechanisms that not only underlie suspects' decisions to deny or confess guilt at any specific point during an interrogation, but also how confession decisions can change over time. Findings from the extant literature pertaining to confessions are considered to demonstrate how the model offers a comprehensive and integrative framework for organizing a range of effects within a limited set of model parameters. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Modeling child-based theoretical reading constructs with struggling adult readers.
Nanda, Alice O; Greenberg, Daphne; Morris, Robin
2010-01-01
This study examined whether the measurement constructs behind reading-related tests for struggling adult readers are similar to what is known about measurement constructs for children. The sample included 371 adults reading between the third- and fifth-grade levels, including 127 men and 153 English speakers of other languages. Using measures of skills and subskills, confirmatory factor analyses were conducted to test child-based theoretical measurement models of reading: an achievement model of reading skills, a core deficit model of reading subskills, and an integrated model containing achievement and deficit variables. Although the findings present the best measurement models, the contribution of this article is the description of the difficulties encountered when applying child-based assumptions to developing measurement models for struggling adult readers.
Quantitative Structure--Activity Relationship Modeling of Rat Acute Toxicity by Oral Exposure
Background: Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. Objective: In this study, a combinatorial QSAR approach has been employed for the creation of robust and predictive models of acute toxi...
Marasulov, Akhmat; Saipov, Amangeldi; ?rymbayeva, Kulimkhan; Zhiyentayeva, Begaim; Demeuov, Akhan; Konakbaeva, Ulzhamal; Bekbolatova, Akbota
2016-01-01
The aim of the study is to examine the methodological and theoretical bases for constructing a development mechanism of an integrated model for specialist training and teachers' conceptual-theoretical activity. Using methods of generalization of teaching experience, pedagogical modeling and forecasting, the authors determine the urgent problems…
Hansson, Lena; Hansson, Örjan; Juter, Kristina; Redfors, Andreas
2015-01-01
This article discusses the role of mathematics during physics lessons in upper-secondary school. Mathematics is an inherent part of theoretical models in physics and makes powerful predictions of natural phenomena possible. Ability to use both theoretical models and mathematics is central in physics. This paper takes as a starting point that the…
Energy Technology Data Exchange (ETDEWEB)
Mardinoglu, Adil [Telecommunications Software and Systems Group (TSSG), Waterford Institute of Technology, Waterford (Ireland); Cregg, P.J.; Murphy, Kieran [Materials Characterisation and Processing Group, SEAM Centre, Waterford Institute of Technology, Waterford (Ireland); Curtin, Maurice [Trinity Centre for Bioengineering, Trinity College, Dublin 2 (Ireland); Prina-Mello, Adriele, E-mail: prinamea@tcd.i [Trinity Centre for Bioengineering, Trinity College, Dublin 2 (Ireland); Centre for Research on Adaptive Nanostructures and Nanodevices (CRANN), Trinity College, Dublin 2 (Ireland)
2011-02-15
The magnetisable stent-assisted magnetic targeted drug delivery system in a physiologically stretched vessel is considered theoretically. The changes in the mechanical behaviour of the vessel are analysed under the influence of mechanical forces generated by blood pressure. In this 2D mathematical model, a ferromagnetic, coiled wire stent is implanted to aid collection of magnetic drug carrier particles in an elastic tube, which has similar mechanical properties to the blood vessel. A cyclic mechanical force is applied to the elastic tube to mimic the mechanical stress and strain of both the stent and vessel while in the body due to pulsatile blood circulation. The magnetic dipole-dipole and hydrodynamic interactions for multiple particles are included and agglomeration of particles is also modelled. The resulting collection efficiency of the mathematical model shows that the system performance can decrease by as much as 10% due to the effects of the pulsatile blood circulation. - Research highlights: Theoretical modelling of magnetic drug targeting in a physiologically stretched stent-vessel system. Cyclic mechanical force applied to mimic the mechanical stress and strain of both stent and vessel. The magnetic dipole-dipole and hydrodynamic interactions for multiple particles are modelled. Collection efficiency of the mathematical model is calculated for different physiological blood flows and magnetic field strengths.
Grace, J.B.; Bollen, K.A.
2008-01-01
Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically-based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables is often of interest. © Springer Science+Business Media, LLC 2007.
A theoretical model of co-worker responses to work reintegration processes.
Dunstan, Debra A; Maceachen, Ellen
2014-06-01
Emerging research has shown that co-workers have a significant influence on the return-to-work outcomes of partially fit ill or injured employees. By drawing on theoretical findings from the human resource and wider behavioral sciences literatures, our goal was to formulate a theoretical model of the influences on and outcomes of co-worker responses within work reintegration. From a search of 15 databases covering the social sciences, business and medicine, we identified articles containing models of the factors that influence co-workers' responses to disability accommodations, and of the nature and impact of co-workers' behaviors on employee outcomes. To meet our goal, we combined the identified models to form a comprehensive model of the relevant factors and relationships. Internal consistency and external validity were assessed. The combined model illustrates four key findings: (1) co-workers' behaviors towards an accommodated employee are influenced by attributes of that employee, the illness or injury, the co-worker themselves, and the work environment; (2) the influences-behavior relationship is mediated by perceptions of the fairness of the accommodation; (3) co-workers' behaviors affect all work reintegration outcomes; and (4) co-workers' behaviors can vary from support to antagonism and are moderated by the type of support required, the social intensity of the job, and the level of antagonism. Theoretical models from the wider literature are useful for understanding the impact of co-workers on the work reintegration process. To achieve optimal outcomes, co-workers need to perceive the arrangements as fair. Perceptions of fairness might be supported by co-workers' collaborative engagement in the planning, monitoring and review of work reintegration activities.
Software for energy modelling: a theoretical basis for improvements in the user interface
Energy Technology Data Exchange (ETDEWEB)
Siu, Y.L.
1989-09-01
A philosophical critique of the relationships between theory, knowledge and practice for a range of existing energy modelling styles is presented. In particular, Habermas's ideas are invoked regarding the three spheres of cognitive interest (i.e. technical, practical and emancipatory) and three levels of understanding of knowledge, the construction of an 'ideal speech situation', and the theory of communicative competence and action. These are adopted as a basis for revealing shortcomings of a representative selection of existing computer-based energy modelling styles, and as a springboard for constructing a new theoretical approach. (author).
Saunders, James A
2015-03-01
Fundamental Christianity and psychology are frequently viewed as incompatible pursuits. However, proponents of the integrationist movement posit that pastoral counselors can utilize principles from psychology if they adopt the premise that all truth is God's truth. Assuming this perspective, Cognitive-Existential Family Therapy (CEFT) - a theoretical integration model compatible with Christian fundamentalism - is proposed. The philosophical assumptions and models of personality, health, and abnormality are explored. Additionally, the article provides an overview of the therapeutic process. © The Author(s) 2015.
Dou, Kaili; Yu, Ping; Deng, Ning; Liu, Fang; Guan, YingPing; Li, Zhenye; Ji, Yumeng; Du, Ningkai; Lu, Xudong; Duan, Huilong
2017-12-06
Chronic disease patients often face multiple challenges from difficult comorbidities. Smartphone health technology can be used to help them manage their conditions only if they accept and use the technology. The aim of this study was to develop and test a theoretical model to predict and explain the factors influencing patients' acceptance of smartphone health technology for chronic disease management. Multiple theories and factors that may influence patients' acceptance of smartphone health technology have been reviewed. A hybrid theoretical model was built based on the technology acceptance model, dual-factor model, health belief model, and the factors identified from interviews that might influence patients' acceptance of smartphone health technology for chronic disease management. Data were collected from patient questionnaire surveys and computer log records about 157 hypertensive patients' actual use of a smartphone health app. The partial least square method was used to test the theoretical model. The model accounted for .412 of the variance in patients' intention to adopt the smartphone health technology. Intention to use accounted for .111 of the variance in actual use and had a significant weak relationship with the latter. Perceived ease of use was affected by patients' smartphone usage experience, relationship with doctor, and self-efficacy. Although without a significant effect on intention to use, perceived ease of use had a significant positive influence on perceived usefulness. Relationship with doctor and perceived health threat had significant positive effects on perceived usefulness, countering the negative influence of resistance to change. Perceived usefulness, perceived health threat, and resistance to change significantly predicted patients' intentions to use the technology. Age and gender had no significant influence on patients' acceptance of smartphone technology. The study also confirmed the positive relationship between intention to use
Emergence of structured interactions: from a theoretical model to pragmatic robotics.
Revel, A; Andry, P
2009-03-01
In this article, we present two neural architectures for the control of socially interacting robots. Beginning with a theoretical model of interaction inspired by developmental psychology, biology and physics, we present two sub-cases of the model that can be interpreted as "turn-taking" and "synchrony" at the behavioral level. These neural architectures are both detailed and tested in simulation. A robotic experiment is also presented for the "turn-taking" case. We then discuss the interest of such behaviors for the development of further social abilities in robots.
Team Resilience as a Second-Order Emergent State: A Theoretical Model and Research Directions
Bowers, Clint; Kreutzer, Christine; Cannon-Bowers, Janis; Lamb, Jerry
2017-01-01
Resilience has been recognized as an important phenomenon for understanding how individuals overcome difficult situations. However, it is not only individuals who face difficulties; it is not uncommon for teams to experience adversity. When they do, they must be able to overcome these challenges without performance decrements. This manuscript presents a theoretical model that might be helpful in conceptualizing this important construct. Specifically, it describes team resilience as a second-order emergent state. We also include research propositions that follow from the model. PMID:28861013
Theoretical Base of the PUCK-Model with Application to Foreign Exchange Markets
Takayasu, Misako; Watanabe, Kota; Mizuno, Takayuki; Takayasu, Hideki
We analyze statistical properties of a random walker in a randomly changing potential function called the PUCK model both theoretically and numerically. In this model the center of the potential function moves with the moving average of the random walker's trace, and the potential function is given by a quadratic function with its curvature slowly changing around zero. By tuning several parameters the basic statistical properties fit nicely with those of real financial market prices, such as power law price change distribution, very short decay of autocorrelation of price changes, long tails in autocorrelation of the square of price changes and abnormal diffusion in short time scale.
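The PUCK dynamics described above are straightforward to simulate numerically. The sketch below is an illustrative reading of the model, not the authors' calibrated version: the walker is attracted to (or repelled from) the moving average of its own last M positions, with a curvature b that wanders slowly around zero.

```python
import random

# Illustrative PUCK-type walk (a reading of the model, not the authors'
# calibrated version). The potential centre is the moving average of the
# walker's last M positions; the curvature b wanders slowly around zero.
random.seed(1)
M, steps = 10, 5000
b = 0.0
p = [0.0] * M                                  # initial price history
for _ in range(steps):
    b = 0.99 * b + 0.05 * random.gauss(0, 1)   # slowly varying curvature
    centre = sum(p[-M:]) / M                   # moving-average potential centre
    drift = -(b / (M - 1)) * (p[-1] - centre)  # force from quadratic potential
    p.append(p[-1] + drift + random.gauss(0, 1))
print(len(p))  # 5010 samples: M initial values plus 5000 steps
```

With b < 0 the potential is repulsive and the walk super-diffuses over short horizons; with b > 0 it mean-reverts, which is how the model reproduces abnormal diffusion at short time scales.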
Empirical, theoretical, and practical advantages of the HEXACO model of personality structure.
Ashton, Michael C; Lee, Kibeom
2007-05-01
The authors argue that a new six-dimensional framework for personality structure--the HEXACO model--constitutes a viable alternative to the well-known Big Five or five-factor model. The new model is consistent with the cross-culturally replicated finding of a common six-dimensional structure containing the factors Honesty-Humility (H), Emotionality (E), Extraversion (X), Agreeableness (A), Conscientiousness (C), and Openness to Experience (O). Also, the HEXACO model predicts several personality phenomena that are not explained within the B5/FFM, including the relations of personality factors with theoretical biologists' constructs of reciprocal and kin altruism and the patterns of sex differences in personality traits. In addition, the HEXACO model accommodates several personality variables that are poorly assimilated within the B5/FFM.
Decision support models for solid waste management: review and game-theoretic approaches.
Karmperis, Athanasios C; Aravossis, Konstantinos; Tatsiopoulos, Ilias P; Sotirchos, Anastasios
2013-05-01
This paper surveys decision support models that are commonly used in the solid waste management area. Most models are mainly developed within three decision support frameworks, which are the life-cycle assessment, the cost-benefit analysis and the multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed. Copyright © 2013 Elsevier Ltd. All rights reserved.
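As a toy illustration of the bargaining framing discussed above (not the paper's model), the symmetric Nash bargaining solution splits a joint saving S between two stakeholders by maximizing the product of their gains over a disagreement point:

```python
# Toy illustration of the bargaining framing (not the paper's model): two
# stakeholders split a joint saving S; the symmetric Nash bargaining solution
# maximises the product of gains over the disagreement payoffs d1, d2.
S, d1, d2 = 100.0, 10.0, 30.0   # illustrative figures

def nash_product(x):
    u1, u2 = x, S - x
    return (u1 - d1) * (u2 - d2) if u1 >= d1 and u2 >= d2 else -1.0

# Grid search over splits in steps of 0.01.
best = max((i / 100.0 for i in range(int(S * 100) + 1)), key=nash_product)
print(best, S - best)  # 40.0 60.0
```

The grid search recovers the closed-form answer: each party receives its disagreement payoff plus half the surplus, d1 + (S - d1 - d2)/2 = 40 here.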
Directory of Open Access Journals (Sweden)
S. W. H. Cowley
2008-09-01
Full Text Available The first simultaneous observations of fields and plasmas in Saturn's high-latitude magnetosphere and UV images of the conjugate auroral oval were obtained by the Cassini spacecraft and the Hubble Space Telescope (HST) in January 2007. These data have shown that the southern auroral oval near noon maps to the dayside cusp boundary between open and closed field lines, associated with a major layer of upward-directed field-aligned current (Bunce et al., 2008). The results thus support earlier theoretical discussion and quantitative modelling of magnetosphere-ionosphere coupling at Saturn (Cowley et al., 2004), which suggests the oval is produced by electron acceleration in the field-aligned current layer required by rotational flow shear between strongly sub-corotating flow on open field lines and near-corotating flow on closed field lines. Here we quantitatively compare these modelling results (the "CBO" model) with the Cassini-HST data set. The comparison shows good qualitative agreement between model and data, the principal difference being that the model currents are too small by factors of about five, as determined from the magnetic perturbations observed by Cassini. This is suggested to be principally indicative of a more highly conducting summer southern ionosphere than was assumed in the CBO model. A revised model is therefore proposed in which the height-integrated ionospheric Pedersen conductivity is increased by a factor of four from 1 to 4 mho, together with more minor adjustments to the co-latitude of the boundary, the flow shear across it, the width of the current layer, and the properties of the source electrons. It is shown that the revised model agrees well with the combined Cassini-HST data, requiring downward acceleration of outer magnetosphere electrons through a ~10 kV potential in the current layer at the open-closed field line boundary to produce an auroral oval of ~1° width with UV emission intensities of a few tens of kR.
Effective Permittivity of Biological Tissue: Comparison of Theoretical Model and Experiment
Directory of Open Access Journals (Sweden)
Li Gun
2017-01-01
Full Text Available Permittivity of biological tissue is a critical issue for studying the biological effects of electromagnetic fields. Many theories and experiments have been put forward to measure or explain the permittivity characteristics of biological tissue. In this paper, we investigate the permittivity parameter in biological tissues via theoretical and experimental analysis. Firstly, we analyze the permittivity characteristics of tissue by using theories of composite materials. Secondly, typical biological tissues, such as blood, fat, liver, and brain, are measured with an HP4275A Multi-Frequency LCR Meter from 10 kHz to 10 MHz. Thirdly, the experimental results are compared with the Bottcher-Bordewijk model, the Skipetrov equation, and the Maxwell-Garnett theory. From the theoretical perspective, blood and fat are regarded as constituents of liver and brain, because of the high permittivity of blood and the low permittivity of fat. The volume fraction of blood in liver and brain is analyzed theoretically, and the applicability and limitations of the models are also discussed. These results benefit further study of local biological effects of electromagnetic fields.
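Of the three models compared above, the Maxwell-Garnett mixing rule has a compact closed form for spherical inclusions. A sketch with illustrative permittivity values, not the measured tissue data:

```python
def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell-Garnett effective permittivity of spherical inclusions
    (permittivity eps_i, volume fraction f) in a host medium eps_m."""
    num = eps_i + 2.0 * eps_m + 2.0 * f * (eps_i - eps_m)
    den = eps_i + 2.0 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

# Illustrative values only, not the measured tissue data: high-permittivity
# "blood-like" inclusions in a lower-permittivity host.
print(maxwell_garnett(eps_m=50.0, eps_i=5000.0, f=0.1))
```

Sanity checks follow from the formula: f = 0 returns the host permittivity and f = 1 returns the inclusion permittivity, with the effective value interpolating monotonically in between.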
Theoretical Model of God: The Key to Correct Exploration of the Universe
Kalanov, Temur Z.
2007-04-01
The problem of the correct approach to exploration of the Universe cannot be solved if there is no solution of the problem of the existence of God (Creator, Ruler) in science. In this connection, a theoretical proof of the existence of God is proposed. The theoretical model of God -- as scientific proof of the existence of God -- is the consequence of a system of formulated axioms. The system of axioms contains, in particular, the following premises: (1) all objects formed (synthesized) by man are characterized by an essential property: namely, divisibility into aspects; (2) objects which can be mentally divided into aspects are objects formed (synthesized); (3) the system ``Universe'' is mentally divided into aspects; consequently, the Universe represents a formed (synthesized) system; (4) the theorem of the existence of God (i.e. Absolute, Creator, Ruler) follows from the principle of logical completeness of the system of concepts: if the formed (synthesized) system ``Universe'' exists, then God exists as the Absolute, the Creator, the Ruler of essence (i.e. information) and phenomenon (i.e. material objects). Thus, the principle of the existence of God -- the content of the theoretical model of God -- must be a starting point and basis of a correct gnosiology and of the science of the 21st century.
Sample correlations of infinite variance time series models: an empirical and theoretical study
Directory of Open Access Journals (Sweden)
Jason Cohen
1998-01-01
Full Text Available When the elements of a stationary ergodic time series have finite variance the sample correlation function converges (with probability 1 to the theoretical correlation function. What happens in the case where the variance is infinite? In certain cases, the sample correlation function converges in probability to a constant, but not always. If within a class of heavy tailed time series the sample correlation functions do not converge to a constant, then more care must be taken in making inferences and in model selection on the basis of sample autocorrelations. We experimented with simulating various heavy tailed stationary sequences in an attempt to understand what causes the sample correlation function to converge or not to converge to a constant. In two new cases, namely the sum of two independent moving averages and a random permutation scheme, we are able to provide theoretical explanations for a random limit of the sample autocorrelation function as the sample grows.
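The phenomenon described above can be reproduced numerically. The sketch below, an illustration rather than the paper's experiment, drives an MA(1) process with symmetric Pareto noise (tail index α = 1.5, hence infinite variance) and computes sample autocorrelations:

```python
import random

# Numerical illustration (not the paper's experiment): an MA(1) series driven
# by symmetric Pareto noise with tail index alpha = 1.5, hence infinite
# variance, still has a well-defined *sample* autocorrelation function.
random.seed(0)
n, alpha, theta = 20000, 1.5, 0.5
noise = [random.random() ** (-1.0 / alpha) * random.choice((-1, 1))
         for _ in range(n + 1)]
x = [noise[i] + theta * noise[i - 1] for i in range(1, n + 1)]  # MA(1)

def sample_acf(series, lag):
    m = sum(series) / len(series)
    num = sum((series[i] - m) * (series[i - lag] - m)
              for i in range(lag, len(series)))
    return num / sum((v - m) ** 2 for v in series)

print(round(sample_acf(x, 1), 2), round(sample_acf(x, 5), 2))
```

For an MA(1) with i.i.d. heavy-tailed noise the lag-1 sample autocorrelation settles near θ/(1+θ²) = 0.4 even though the theoretical correlation is undefined; the paper's new cases (sums of independent moving averages, random permutation schemes) are ones where such a constant limit can fail.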
The Impact of Refinancing on the Bank’s Credit Portfolio: Theoretical Aspects and Modelling
Directory of Open Access Journals (Sweden)
Koptiukh Olena G.
2017-05-01
Full Text Available The article for the first time proposes a theoretical-predictive mathematical model of the impact of refinancing on the credit portfolios of banks in Ukraine from 2006 to 2016. The study is based on the theoretical assumption that the banks' free reserves can serve as an indicator of the need to attract refinancing, both in the interbank credit market and from the central bank, followed by its impact on the credit portfolio. Based on the results obtained, conclusions have been drawn on the evaluation of the actions of the National Bank of Ukraine in using refinancing as a tool for stabilizing bank liquidity; a number of concrete proposals to improve this process have been elaborated. Using regression-correlation multi-factor and pair analysis, reliable estimates of the impact of refinancing on the credit portfolio of banks have been calculated, as well as predictive trends until 2018.
Droplet size in flow: Theoretical model and application to polymer blends
Fortelný, Ivan; JÅ¯za, Josef
2017-05-01
The paper is focused on prediction of the average droplet radius, R, in flowing polymer blends, where the droplet size is determined by dynamic equilibrium between droplet breakup and coalescence. Expressions for the droplet breakup frequency in systems with low and high contents of the dispersed phase are derived using available theoretical and experimental results for model blends. Dependences of the coalescence probability, Pc, on system parameters, following from recent theories, are considered, and an approximate equation for Pc in a system with low polydispersity in droplet size is proposed. Equations for R in systems with low and high contents of the dispersed phase are derived. Combination of these equations predicts a realistic dependence of R on the volume fraction of dispersed droplets, φ. The theoretical prediction of the ratio of R to the critical droplet radius at breakup agrees fairly well with experimental values for steadily mixed polymer blends.
Learned helplessness as an interacting variable with self-care agency: testing a theoretical model.
McDermott, M A
1993-01-01
This article describes the theoretical development and initial testing of a model outlining the interaction of the concepts of self-care agency and learned helplessness in healthy working adults. Orem's theory of self-care and the reformulated learned helplessness theory are discussed as the theoretical basis for the study. The self-care agency conditioning factors, age and gender, were examined for relationships to the main variables. In a descriptive, correlational design, the hypothesis, that learned helplessness was inversely related to self-care agency, was supported (r = -.57). Neither age nor gender was related to main variables in the population. Implications for nursing research, self-care theory clarification, and nursing practice are discussed.
Designing m-learning for junior registrars--activation of a theoretical model of clinical knowledge.
Kanstrup, Anne Marie; Boye, Niels; Nøhr, Christian
2007-01-01
The MINI-project aims at supporting junior registrars in the process of learning how to utilize their theoretical knowledge from Medical School in everyday clinical reasoning and practice. Due to the nature of the work - concurrent moving, learning and producing - we designed an m-learning application. This paper introduces the possibilities and challenges for the design of the m-learning application based on a) analytical findings on learning and mobility derived from the design case - an emergency medical ward, b) theoretical perspectives on medical knowledge, and c) presentation of the design of an m-learning application. The design process was based on user-driven innovation, and the paper discusses considerations on how to combine user-driven and generic models.
Energy Technology Data Exchange (ETDEWEB)
Mutarelli, Rita de Cassia; Lima, Ana Cecilia de Souza; Sabundjian, Gaiane, E-mail: rmutarelli@gmail.com, E-mail: aclima@ipen.br, E-mail: gdjian@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2015-07-01
Social responsibility has been one of the great discussions in institutional management, and it is an important variable in the strategy and performance of institutions. The Instituto de Pesquisas Energeticas e Nucleares (IPEN) has worked on the development of environmental and social issues, mainly for the benefit of the population. The theory that guides social responsibility practices is difficult to measure for several reasons. One reason for this difficulty is that social responsibility involves a variety of issues that are converted into rights, obligations and expectations of different audiences, both internal and external to the organization. In addition, the different understandings institutions have of social and environmental issues are another source of complexity. Based on the study context - the topic being researched, the chosen institute and the questions resulting from the research - the aim of this paper is to propose a theoretical model to describe and analyze the social responsibility of IPEN. The main contribution of this study is to develop a model that integrates the dimensions of social responsibility. These dimensions - also called constructs - are composed of indexes and indicators that were previously used in various contexts of empirical research, combined with a theoretical and conceptual review of social responsibility. The construction of the proposed theoretical model was based on research into various methodologies and various indicators for measuring social responsibility. This model was statistically tested, analyzed and adjusted, and the end result is a consistent model to measure the perceived value of the social responsibility of IPEN. This work could also be applied to other institutions. Moreover, it may be improved and become a tool that will serve as a thermometer to measure social and environmental issues and will support decision making in various management processes. (author)
Polidori, G; Marreiro, A; Pron, H; Lestriez, P; Boyer, F C; Quinart, H; Tourbah, A; Taïar, R
2016-11-01
This article establishes the basics of a theoretical model for the constitutive law that describes the skin temperature and thermolysis heat losses undergone by a subject during a session of whole-body cryotherapy (WBC). This study focuses on the few minutes during which the human body is subjected to a thermal shock. The relationship between skin temperature and thermolysis heat losses during this period is still unknown and has not yet been studied in the context of the whole human body. The analytical approach here is based on the hypothesis that the skin thermal shock during a WBC session can be thermally modelled by the sum of radiative and free-convective heat transfer functions. The validation of this scientific approach and the derivation of temporal evolution thermal laws, for both skin temperature and dissipated thermal power during the thermal shock, open many avenues for large-scale studies with the aim of proposing individualized cryotherapy protocols as well as protocols intended for target populations. Furthermore, this study shows quantitatively the substantial imbalance between human metabolism and thermolysis during WBC, the explanation of which remains an open question. Copyright © 2016 Elsevier Ltd. All rights reserved.
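The radiative-plus-free-convective decomposition described above can be sketched numerically. This is a minimal illustration, not the study's fitted model: the skin emissivity and free-convection coefficient below are assumed values chosen only to show the form of the constitutive law.

```python
# Sketch: total skin heat flux during cryotherapy exposure modelled as the
# sum of a radiative (Stefan-Boltzmann) and a free-convective term.
# EPS and H_C are illustrative assumptions, not values from the study.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
EPS = 0.98         # assumed skin emissivity
H_C = 5.0          # assumed free-convection coefficient, W m^-2 K^-1

def heat_flux(t_skin_c, t_air_c):
    """Total dissipated flux (W/m^2) = radiation + free convection."""
    ts, ta = t_skin_c + 273.15, t_air_c + 273.15
    q_rad = EPS * SIGMA * (ts**4 - ta**4)
    q_conv = H_C * (ts - ta)
    return q_rad + q_conv

# Example: skin at 20 degC exposed to -110 degC cryotherapy air.
q = heat_flux(20.0, -110.0)
```

With these assumed coefficients, the convective term dominates slightly over the radiative one at cryotherapy temperatures, which is the kind of balance the analytical model has to resolve.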
Lee, M. K.; Nisbet, J. S.
1975-01-01
Radio wave propagation predictions are described in which modern comprehensive theoretical ionospheric models are coupled with ray-tracing programs. In the computer code described, a network of electron density and collision frequency parameters along a band about the great circle path is calculated by specifying the transmitter and receiver geographic coordinates, time, the day number, and the 2800-MHz solar flux. The ray paths are calculated on specifying the frequency, mode, range of elevation angles, and range of azimuth angles from the great circle direction. The current program uses a combination of the Penn State MKI E and F region models and the Mitra-Rowe D and E region model. Application of the technique to the prediction of satellite to ground propagation and calculation of oblique incidence propagation paths and absorption are described. The implications of the study to the development of the next generation of ionospheric models are discussed.
Directory of Open Access Journals (Sweden)
F. T. Ademiluyi
2013-01-01
Full Text Available A mathematical model was developed for predicting the drying kinetics of spherical particles in a rotary dryer. Drying experiments were carried out by drying fermented ground cassava particles in a bench-scale rotary dryer at inlet air temperatures of 115–230°C, air velocities of 0.83–1.55 m/s, feed masses of 50–500 g, a drum drive speed of 8 rpm, and a feed drive speed of 100 rpm to validate the model. The data obtained from the experiments were used to calculate the experimental moisture ratio, which compared well with the theoretical moisture ratio calculated from the newly developed Abowei-Ademiluyi model. The comparisons and correlations of the results indicate that the established model performs reasonably well.
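The moisture-ratio comparison described above can be illustrated with a generic thin-layer drying law. This sketch uses the classical Lewis single-exponential model, not the Abowei-Ademiluyi model itself, and the drying constant is an illustrative assumption.

```python
# Sketch: experimental vs. theoretical moisture ratio in drying kinetics.
# The Lewis model and the rate constant k are illustrative stand-ins.
import math

def moisture_ratio(m_t, m_eq, m_0):
    """Experimental moisture ratio MR = (M - Me) / (M0 - Me)."""
    return (m_t - m_eq) / (m_0 - m_eq)

def lewis_mr(k, t):
    """Lewis thin-layer model MR = exp(-k t); k is a fitted drying constant."""
    return math.exp(-k * t)

# Illustrative check: with k = 0.05 min^-1, after 20 min MR = exp(-1).
mr = lewis_mr(0.05, 20)
```

In a validation exercise like the one in the abstract, the fitted theoretical curve would be compared against `moisture_ratio` values computed from measured masses at each sampling time.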
Theoretical Framework and Model Design for Beautiful Countryside Construction in China
Directory of Open Access Journals (Sweden)
ZHENG Xiang-qun
2015-04-01
Full Text Available In the context of China today, the process of beautiful countryside construction mainly imitates the patterns of 'urbanization' construction. However, this approach leads to the loss of countryside characteristics and the separation of agricultural culture. Therefore, it is urgent to carry out research on the theoretical framework and model design for beautiful countryside construction. In this paper, based on an analysis of the connotation of beautiful countryside construction, the basic theory of beautiful countryside construction was summarized in three aspects: the rural complex ecosystem model, the rural multi-functionality model and the sustainable development evaluation model. The basic idea of the beautiful countryside construction mode was studied. The design method of the beautiful countryside construction mode was proposed at three levels: planning, scheming and evaluating. The research results may offer scientific reference for improving the scientific and operational nature of beautiful countryside construction.
Directory of Open Access Journals (Sweden)
R. Du Preez
2003-10-01
Full Text Available This article is based on the conceptual theoretical model developed in Part 1 of this series of articles. The objective of this research is to identify female apparel consumer market segments on the basis of differentiating lifestyles, shopping orientation, cultural consciousness, store patronage and demographics. These profiles are discussed in full and the implications thereof for retailers, marketers and researchers are highlighted. A new conceptual model is proposed and recommendations are made for further research.
Caruso, Enrico; Gariboldi, Marzia; Sangion, Alessandro; Gramatica, Paola; Banfi, Stefano
2017-02-01
Here we report the synthesis of eleven new BODIPYs (14-24) characterized by the presence of an aromatic ring at the 8 (meso) position and of iodine atoms at the pyrrolic 2,6 positions. These molecules, together with twelve BODIPYs previously reported by us (1-12), represent a large panel of BODIPYs bearing different atoms or groups as substituents on the aromatic moiety. Two physico-chemical features ((1)O2 generation rate and lipophilicity), which can play a fundamental role in their performance as photosensitizers, have been studied. The in vitro photo-induced cell-killing efficacy of the 23 PSs was studied on the SKOV3 cell line by treating the cells for 24 h in the dark and then irradiating them for 2 h with a green LED device (fluence 25.2 J/cm(2)). The cell-killing efficacy was assessed with the MTT test and compared with that of the meso-unsubstituted compound (13). In order to understand the possible effect of the substituents, a predictive quantitative structure-activity relationship (QSAR) regression model, based on theoretical holistic molecular descriptors, was developed. The results clearly indicate that the presence of an aromatic ring is fundamental for an excellent photodynamic response, whereas the electronic effects and the position of the substituents on the aromatic ring do not influence the photodynamic efficacy. Copyright © 2017 Elsevier B.V. All rights reserved.
Theoretical Hill-Type Muscle and Stability: Numerical Model and Application
Directory of Open Access Journals (Sweden)
S. Schmitt
2013-01-01
Full Text Available The construction of artificial muscles is one of the most challenging developments in today’s biomedical science. The application of artificial muscles is focused both on the construction of orthotics and prosthetics for rehabilitation and prevention purposes and on building humanoid walking machines for robotics research. Research in biomechanics tries to explain the functioning and design of real biological muscles and therefore lays the foundation for the development of functional artificial muscles. Recently, the hyperbolic Hill-type force-velocity relation was derived from simple mechanical components. In this contribution, this theoretical biomechanical model is transferred to a numerical model and applied to present a proof of concept of a functional artificial muscle. Additionally, the validated theoretical model is used to determine force-velocity relations of different animal species based on literature data from biological experiments. Moreover, it is shown that an antagonistic muscle actuator can help in stabilising a single inverted pendulum model in favour of a control approach using a linear torque generator.
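The hyperbolic Hill-type force-velocity relation mentioned above has the classical form (F + a)(v + b) = (F0 + a) b for concentric contraction. A minimal sketch with normalised, assumed Hill constants (the values of a and b below are illustrative, not species-specific fits):

```python
def hill_force(v, f0=1.0, a=0.25, b=0.25):
    """Concentric force from the hyperbolic Hill relation
    (F + a)(v + b) = (f0 + a) * b, with v >= 0 (shortening velocity).
    f0: isometric force; a, b: Hill constants (normalised, assumed)."""
    return b * (f0 + a) / (v + b) - a

# At v = 0 the relation returns the isometric force f0; force falls
# monotonically and reaches zero at v_max = b * f0 / a.
```

Fitting a and b to measured force-velocity pairs is how the species-specific relations mentioned in the abstract would be obtained from literature data.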
Directory of Open Access Journals (Sweden)
Kimberly S. Young
2017-10-01
Full Text Available Although it is not yet officially recognized as a diagnosable clinical entity, Internet Gaming Disorder (IGD) has been included in section III for further study in the DSM-5 by the American Psychiatric Association (APA, 2013). This is important because there is increasing evidence that people of all ages, in particular teens and young adults, are facing very real and sometimes very severe consequences in daily life resulting from an addictive use of online games. This article summarizes general aspects of IGD including diagnostic criteria and arguments for the classification as an addictive disorder including evidence from neurobiological studies. Based on previous theoretical considerations and empirical findings, this paper examines the use of one recently proposed model, the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, for inspiring future research and for developing new treatment protocols for IGD. The I-PACE model is a theoretical framework that explains symptoms of Internet addiction by looking at interactions between predisposing factors, moderators, and mediators in combination with reduced executive functioning and diminished decision making. Finally, the paper discusses how current treatment protocols focusing on Cognitive-Behavioral Therapy for Internet addiction (CBT-IA) fit with the processes hypothesized in the I-PACE model.
Young, Kimberly S.; Brand, Matthias
2017-01-01
Although it is not yet officially recognized as a diagnosable clinical entity, Internet Gaming Disorder (IGD) has been included in section III for further study in the DSM-5 by the American Psychiatric Association (APA, 2013). This is important because there is increasing evidence that people of all ages, in particular teens and young adults, are facing very real and sometimes very severe consequences in daily life resulting from an addictive use of online games. This article summarizes general aspects of IGD including diagnostic criteria and arguments for the classification as an addictive disorder including evidence from neurobiological studies. Based on previous theoretical considerations and empirical findings, this paper examines the use of one recently proposed model, the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, for inspiring future research and for developing new treatment protocols for IGD. The I-PACE model is a theoretical framework that explains symptoms of Internet addiction by looking at interactions between predisposing factors, moderators, and mediators in combination with reduced executive functioning and diminished decision making. Finally, the paper discusses how current treatment protocols focusing on Cognitive-Behavioral Therapy for Internet addiction (CBT-IA) fit with the processes hypothesized in the I-PACE model. PMID:29104555
Modeling drug-melanin interaction with theoretical linear solvation energy relationships.
Lowrey, A H; Famini, G R; Loumbev, V; Wilson, L Y; Tosk, J M
1997-10-01
The affinity of drugs and other xenobiotic agents for melanin is a well-known phenomenon, often occurring with serious physiological consequences. For example, the interaction of anti-psychotic drugs with neuromelanin may play a pivotal role in the induction of extrapyramidal movement disorders associated with the chronic administration of phenothiazine and other neuroleptic agents. Little, however, is known about the complete nature of melanin-drug binding and the impact of these interactions on the physico-chemical properties of melanin. Data, such as binding affinities, can be analyzed using recently developed computational methods that combine mathematical models of chemical structure with statistical analysis. In particular, theoretical linear solvation energy relationships provide a convenient model for understanding and predicting biological, chemical, and physical properties. By using this modeling technique, drug-melanin binding of a set of 16 compounds has been analyzed with correlation analysis and a set of theoretical molecular parameters in order to better understand and characterize drug-melanin interactions. The resulting correlation equation supports a charge transfer model for drug-melanin complex formation and can also be used to estimate binding constants for related compounds.
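Computationally, a theoretical LSER correlation of the kind described above is an ordinary least-squares fit of a measured property (here, log binding constants) against computed molecular descriptors. A minimal self-contained sketch with two hypothetical descriptors; the data and coefficients in the accompanying test are synthetic illustrations, not the study's 16-compound set:

```python
# Sketch: fit y = b0 + b1*x1 + b2*x2 by ordinary least squares via the
# normal equations (pure Python, no external libraries).  x1 and x2 stand
# for any two computed molecular descriptors; the data are hypothetical.

def fit_two_descriptors(x1, x2, y):
    """Return [b0, b1, b2] minimising sum((y - b0 - b1*x1 - b2*x2)^2)."""
    n = len(y)
    # Build X^T X and X^T y for the design matrix [1, x1, x2].
    cols = [[1.0] * n, list(x1), list(x2)]
    a = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    b = [sum(c * yi for c, yi in zip(cols[i], y)) for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(a[r][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, 3):
            f = a[r][k] / a[k][k]
            for c in range(k, 3):
                a[r][c] -= f * a[k][c]
            b[r] -= f * b[k]
    # Back substitution on the upper-triangular system.
    coef = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):
        coef[k] = (b[k] - sum(a[k][c] * coef[c] for c in range(k + 1, 3))) / a[k][k]
    return coef
```

The signs and magnitudes of the fitted coefficients are what support mechanistic interpretations such as the charge-transfer model mentioned in the abstract.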
A P-value model for theoretical power analysis and its applications in multiple testing procedures
Directory of Open Access Journals (Sweden)
Fengqing Zhang
2016-10-01
Full Text Available Abstract Background Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods, the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, yet not so simple as to lose the information from the alternative hypothesis. The first step is to transform distributions of different test statistics (e.g., t, chi-square or F) to distributions of the corresponding p-values. We then use a step function to approximate each p-value distribution by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. Conclusions The proposed model is easy to implement and preserves the information from the alternative hypothesis.
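For intuition, the exact p-value distribution that such a step function approximates is available in closed form for a one-sided z-test. A minimal sketch (the effect size and significance level in the example are illustrative choices, not values from the study):

```python
# Sketch: exact p-value CDF under the alternative for a one-sided z-test.
# Power analysis is then just evaluating this CDF at the significance level.
from statistics import NormalDist

ND = NormalDist()

def pvalue_cdf(x, delta):
    """P(P <= x) = Phi(delta - z_{1-x}), where delta is the standardized
    effect size.  With delta = 0 this reduces to the uniform null CDF."""
    return ND.cdf(delta - ND.inv_cdf(1.0 - x))

def power(alpha, delta):
    """Theoretical power is the p-value CDF evaluated at alpha."""
    return pvalue_cdf(alpha, delta)

# delta = 2.8 at one-sided alpha = 0.025 gives the textbook ~80% power.
```

A step-function model in the spirit of the paper would replace `pvalue_cdf` with a piecewise-constant density matched to its mean and variance.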
He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian
2013-09-01
The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate the application of the model to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, then validated and illustrated by the pharmacokinetics of three ingredients in Buyanghuanwu decoction and three data-analytical methods for them, and by analysis of chromatographic fingerprints for various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract. The established model consists of the following parameters: (1) total quantum statistical moment similarity ST, the overlapped area of two normal distribution probability density curves in conversion of the two TQSM parameters; (2) total variability DT, a confidence limit of standard normal accumulation probability equal to the absolute difference between the two normal accumulation probabilities within the integration of their curve intersection; (3) total variable probability 1-ST, the standard normal distribution probability within the interval of DT; (4) total variable probability (1-beta)alpha; and (5) stable confident probability beta(1-alpha), the correct probability for making positive and negative conclusions under confidence coefficient alpha. With the model, the TQSMS similarities of the pharmacokinetics of three ingredients in Buyanghuanwu decoction and of three data-analytical methods for them were in the range 0.3852-0.9875, illuminating their different pharmacokinetic behaviours; and the TQSMS similarities (ST) of chromatographic fingerprints for various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract were in the range 0.6842-0.9992, which showed different constituents
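The first parameter above, a similarity defined as the overlapped area of two normal probability density curves, can be sketched by direct numerical integration. The grid settings below are arbitrary choices for illustration:

```python
# Sketch: overlapped area of two normal probability density curves,
# i.e. the integral of min(pdf1, pdf2) over the real line.
from statistics import NormalDist

def overlap_similarity(mu1, s1, mu2, s2, n=20001, width=6.0):
    """Riemann-sum approximation of the overlap coefficient of two normals.
    Equals 1 for identical distributions and tends to 0 as they separate."""
    lo = min(mu1 - width * s1, mu2 - width * s2)
    hi = max(mu1 + width * s1, mu2 + width * s2)
    h = (hi - lo) / (n - 1)
    f1, f2 = NormalDist(mu1, s1).pdf, NormalDist(mu2, s2).pdf
    return h * sum(min(f1(lo + i * h), f2(lo + i * h)) for i in range(n))

# Identical distributions overlap completely (similarity ~ 1).
s_same = overlap_similarity(0.0, 1.0, 0.0, 1.0)
```

Applied to two sets of TQSM parameters converted to normal curves, this overlap is the kind of quantity reported in the 0.38-0.99 similarity ranges above.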
Energy Technology Data Exchange (ETDEWEB)
Staub, Isabelle; Fredriksson, Anders; Outters, Nils [Golder Associates AB, Uppsala (Sweden)
2002-05-01
For the purpose of studying the possibilities of a Deep Repository for spent fuel, the Swedish Nuclear Fuel and Waste Management Company (SKB) is currently planning Site Investigations. Data collected from these Site Investigations are interpreted and analysed to achieve the full Site Description, which is built up of models from all the disciplines considered of importance for the Site Description. One of these models is the Rock Mechanical Descriptive Model, which would be developed for any site in hard crystalline rock, and is a combination and evaluation of the characterisation of the rock mass by means of empirical relationships and a theoretical approach based on numerical modelling. The present report describes the theoretical approach. The characterisation of the mechanical properties of the rock mass, viewed as a unit consisting of intact rock and fractures, is achieved by numerical simulations with the following input parameters: initial stresses, fracture geometry, and distributions of rock mechanical properties, such as deformation and strength parameters, for the intact rock and for the fractures. The numerical modelling was performed with the two-dimensional code UDEC, and the rock block models were generated from 2D trace sections extracted from the 3D Discrete Fracture Network (DFN) model. Assumptions and uncertainties related to the set-up of the model are considered. The numerical model was set up to simulate a plane-strain loading test. Different boundary conditions were applied to the model to simulate stress conditions (I) in the undisturbed rock mass and (II) in the proximity of a tunnel. In order to assess the reliability of the model, sensitivity analyses were conducted on some rock block models to define the dependency of mechanical properties on in situ stresses, the influence of boundary conditions, the rock material and joint constitutive models used to simulate the behaviour of intact rock and fractures, domain size and anisotropy. To
Theoretical dissolution model of poly-disperse drug particles in biorelevant media.
Okazaki, Arimichi; Mano, Takashi; Sugano, Kiyohiko
2008-05-01
The purpose of the present study was to construct the theoretical dissolution model of poly-disperse drug particles in biorelevant media containing bile salt/lecithin aggregates (micelles or vesicles). The effective diffusion coefficient in the biorelevant medium and the particle size distribution of drug particles were simultaneously factored into the Nernst-Brunner equation. The effective diffusion coefficient of a drug in the biorelevant medium was calculated to be smaller than that in the blank buffer, since the diffusion coefficient of a drug bound to the aggregates became similar to that of the aggregates. The particle size distribution of a drug powder was simulated as the sum of mono-disperse fractions covering the particle size range. To verify the modified equation, the dissolution profile of griseofulvin and danazol in a taurocholic acid/egg lecithin (4:1 mixture, taurocholic acid = 0-30 mM) system was investigated. It was clearly demonstrated that both modifications on the Nernst-Brunner equation improved the prediction accuracy. When the effect of the particle size distribution was neglected, the theoretical curve underestimated the observed value at the early phase of dissolution process. When the diffusion coefficient of a free drug was used instead of the effective diffusion coefficient, the theoretical curve overestimated the observed value. The results of the present study suggested that the effect of the particle size distribution and the effective diffusion coefficient should be taken into consideration.
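A minimal numerical sketch of the poly-disperse idea: the powder is split into mono-disperse size bins, and each bin shrinks at a Nernst-Brunner/Noyes-Whitney rate with a diffusion layer comparable to the particle radius. All parameter values below are illustrative assumptions, not those fitted to griseofulvin or danazol in the study.

```python
# Sketch: Euler integration of dr/dt = -D_eff*(Cs - C)/(rho*r) per size bin.
# All parameter values are illustrative, not the study's fitted values.
import math

def dissolve(radii_um, fractions, d_eff=2e-6, cs=10.0, rho=1.3e6,
             dose=100.0, volume=250.0, t_end=600.0, dt=0.1):
    """Fraction of dose dissolved at t_end for a poly-disperse powder.
    radii_um: initial radii per bin (um); fractions: mass fraction per bin;
    d_eff: effective diffusion coefficient (cm^2/s), lowered in biorelevant
    media because drug bound to bile-salt aggregates diffuses slowly;
    cs: solubility (ug/mL); rho: density (ug/cm^3); dose (ug), volume (mL).
    """
    radii = [r * 1e-4 for r in radii_um]                  # um -> cm
    counts = [dose * f / (rho * (4.0 / 3.0) * math.pi * r ** 3)
              for f, r in zip(fractions, radii)]          # particles per bin
    dissolved, t = 0.0, 0.0
    while t < t_end:
        c = dissolved / volume                            # bulk conc. (ug/mL)
        for i, r in enumerate(radii):
            if r > 0.0:
                radii[i] = max(0.0, r - d_eff * (cs - c) / (rho * r) * dt)
        undissolved = sum(nb * rho * (4.0 / 3.0) * math.pi * r ** 3
                          for nb, r in zip(counts, radii))
        dissolved = dose - undissolved
        t += dt
    return dissolved / dose
```

Collapsing the distribution into one mean-sized bin reproduces the abstract's observation that ignoring poly-dispersity underestimates early dissolution, since the fines that dissolve fastest are averaged away.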
A general mixture model for mapping quantitative trait loci by using molecular markers
Jansen, R.C.
1992-01-01
In a segregating population a quantitative trait may be considered to follow a mixture of (normal) distributions, the mixing proportions being based on Mendelian segregation rules. A general and flexible mixture model is proposed for mapping quantitative trait loci (QTLs) by using molecular markers.
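Such a Mendelian mixture likelihood can be sketched directly; the genotype means, common standard deviation, and backcross mixing proportions below are illustrative assumptions:

```python
# Sketch: normal-mixture log-likelihood with Mendelian mixing proportions
# (1/2 : 1/2 in a backcross, 1/4 : 1/2 : 1/4 in an F2).
import math
from statistics import NormalDist

def mixture_loglik(y, means, sigma, props):
    """Log-likelihood of trait values y under a mixture of normals with
    genotype-class means `means` and Mendelian proportions `props`."""
    comps = [NormalDist(mu, sigma) for mu in means]
    ll = 0.0
    for yi in y:
        ll += math.log(sum(p * c.pdf(yi) for p, c in zip(props, comps)))
    return ll

# Backcross example with assumed QTL genotype means 0 and 1, sd 0.5:
ll = mixture_loglik([0.1, 0.9, 0.4], [0.0, 1.0], 0.5, [0.5, 0.5])
```

QTL mapping then amounts to maximising such a likelihood along the genome, with marker genotypes refining the mixing proportions at each tested position.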
Clinicians and policy makers need the ability to predict quantitatively how childhood bodyweight will respond to obesity interventions. We developed and validated a mathematical model of childhood energy balance that accounts for healthy growth and development of obesity, and that makes quantitative...
Directory of Open Access Journals (Sweden)
Da-Ming Yeh
Full Text Available This study examined the feasibility of quantitatively evaluating multiple biokinetic models and established the validity of the different compartment models using an assembled water phantom. Most commercialized phantoms are made to survey the imaging system, since this is essential to increase diagnostic accuracy for quality assurance. In contrast, few customized phantoms are specifically made to represent multi-compartment biokinetic models, because the complicated calculations required to solve the biokinetic models and the time-consuming verification of the obtained solutions have greatly impeded progress over the past decade. Nevertheless, in this work, five biokinetic models were separately defined by five groups of simultaneous differential equations to obtain the time-dependent radioactive concentration changes inside the water phantom. The water phantom was assembled from seven acrylic boxes in four different sizes, and the boxes were linked by varying combinations of hoses to represent the multiple biokinetic models from the biomedical perspective. The boxes connected by hoses were then regarded as a closed water loop with only one infusion and one drain. 129.1±24.2 MBq of Tc-99m labeled methylene diphosphonate (MDP) solution was thoroughly infused into the water boxes before gamma scanning; the water was then replaced with de-ionized water to simulate the biological removal rate among the boxes. The water was driven by an automatic infusion pump at 6.7 c.c./min, while the biological half-lives of the four different-sized boxes (64, 144, 252, and 612 c.c.) were 4.8, 10.7, 18.8, and 45.5 min, respectively. The five models of derived time-dependent concentrations for the boxes were estimated either by a self-developed program run in MATLAB or by scanning via a gamma camera facility. Agreement or disagreement between the practical scanning and the theoretical prediction in the five models was thoroughly discussed. The
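One building block of such compartment calculations is combining physical decay with biological washout into an effective half-life. A minimal sketch using the standard Tc-99m physical half-life (about 6.01 h) and the smallest box's biological half-life quoted above; treating a box as a single well-mixed compartment is the simplifying assumption here:

```python
# Sketch: effective half-life and activity decay for one well-mixed box.
# 1/T_eff = 1/T_phys + 1/T_bio combines decay and washout rates.
import math

T_PHYS_MIN = 6.01 * 60.0        # Tc-99m physical half-life, minutes

def effective_half_life(t_bio_min):
    """Effective half-life (min) from physical decay plus washout."""
    return 1.0 / (1.0 / T_PHYS_MIN + 1.0 / t_bio_min)

def activity(a0, t_bio_min, t_min):
    """Activity remaining after t_min minutes in a single compartment."""
    lam = math.log(2.0) / effective_half_life(t_bio_min)
    return a0 * math.exp(-lam * t_min)

# For the smallest box (biological half-life 4.8 min) washout dominates,
# so the effective half-life is only slightly shorter than 4.8 min.
t_eff = effective_half_life(4.8)
```

The full five-model evaluation couples several such compartments through transfer terms, which is why simultaneous differential equations (solved in MATLAB in the study) are needed.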
Image-model coupling: a simple information theoretic perspective for image sequences
Directory of Open Access Journals (Sweden)
N. D. Smith
2009-03-01
Full Text Available Images are widely used to visualise physical processes. Models may be developed which attempt to replicate those processes and their effects. The technique of coupling model output to images, which is here called "image-model coupling", may be used to help understand the underlying physical processes, and better understand the limitations of the models. An information theoretic framework is presented for image-model coupling in the context of communication along a discrete channel. The physical process may be regarded as a transmitter of images and the model as part of a receiver which decodes or recognises those images. Image-model coupling may therefore be interpreted as image recognition. Of interest are physical processes which exhibit "memory". The response of such a system is not only dependent on the current values of driver variables, but also on the recent history of drivers and/or system description. Examples of such systems in geophysics include the ionosphere and Earth's climate. The discrete channel model is used to help derive expressions for matching images and model output, and help analyse the coupling.
Image-model coupling: a simple information theoretic perspective for image sequences
Smith, N. D.; Mitchell, C. N.; Budd, C. J.
2009-03-01
Images are widely used to visualise physical processes. Models may be developed which attempt to replicate those processes and their effects. The technique of coupling model output to images, which is here called "image-model coupling", may be used to help understand the underlying physical processes, and better understand the limitations of the models. An information theoretic framework is presented for image-model coupling in the context of communication along a discrete channel. The physical process may be regarded as a transmitter of images and the model as part of a receiver which decodes or recognises those images. Image-model coupling may therefore be interpreted as image recognition. Of interest are physical processes which exhibit "memory". The response of such a system is not only dependent on the current values of driver variables, but also on the recent history of drivers and/or system description. Examples of such systems in geophysics include the ionosphere and Earth's climate. The discrete channel model is used to help derive expressions for matching images and model output, and help analyse the coupling.
Ursino, Mauro; Cuppini, Cristiano; Magosso, Elisa
2017-03-01
Recent theoretical and experimental studies suggest that in multisensory conditions, the brain performs a near-optimal Bayesian estimate of external events, giving more weight to the more reliable stimuli. However, the neural mechanisms responsible for this behavior, and its progressive maturation in a multisensory environment, are still insufficiently understood. The aim of this letter is to analyze this problem with a neural network model of audiovisual integration, based on probabilistic population coding-the idea that a population of neurons can encode probability functions to perform Bayesian inference. The model consists of two chains of unisensory neurons (auditory and visual) topologically organized. They receive the corresponding input through a plastic receptive field and reciprocally exchange plastic cross-modal synapses, which encode the spatial co-occurrence of visual-auditory inputs. A third chain of multisensory neurons performs a simple sum of auditory and visual excitations. The work includes a theoretical part and a computer simulation study. We show how a simple rule for synapse learning (consisting of Hebbian reinforcement and a decay term) can be used during training to shrink the receptive fields and encode the unisensory likelihood functions. Hence, after training, each unisensory area realizes a maximum likelihood estimate of stimulus position (auditory or visual). In cross-modal conditions, the same learning rule can encode information on prior probability into the cross-modal synapses. Computer simulations confirm the theoretical results and show that the proposed network can realize a maximum likelihood estimate of auditory (or visual) positions in unimodal conditions and a Bayesian estimate, with moderate deviations from optimality, in cross-modal conditions. Furthermore, the model explains the ventriloquism illusion and, looking at the activity in the multimodal neurons, explains the automatic reweighting of auditory and visual inputs
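The reliability-weighted (inverse-variance) estimate that the trained network approximates can be written in a few lines; the variances in the example are illustrative, chosen only to mimic a ventriloquism-like case where vision is far more reliable than audition:

```python
# Sketch: maximum-likelihood fusion of two Gaussian cues, each weighted
# by its inverse variance (reliability).  Values are illustrative.

def fuse(mu_a, var_a, mu_v, var_v):
    """Return (mean, variance) of the reliability-weighted estimate."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return mu, var

# Auditory cue at 0 (variance 9) vs. visual cue at 10 (variance 1):
# the fused estimate is pulled close to the visual location.
mu, var = fuse(0.0, 9.0, 10.0, 1.0)
```

The fused variance is always smaller than that of the more reliable cue, which is the hallmark of near-optimal integration the model reproduces, and the visual capture of the perceived sound location is the ventriloquism effect.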
Ding, Ya
2018-01-01
In recent years, many areas of China have been facing increasing problems of soil erosion and land degradation. Conservation tillage, with both economic and ecological benefits, provides a good avenue for Chinese farmers to conserve land as well as secure food production. However, the adoption rate of conservation tillage systems is very low in China. In this paper, the author constructs a theoretical model to explain a farmer’s decision to adopt conservation tillage. The goal is to investigate potential reasons behind the low adoption rate and to explore alternative policy tools that can help improve farmers’ incentives to adopt conservation tillage in China.
The mathematical and theoretical biology institute--a model of mentorship through research.
Camacho, Erika T; Kribs-Zaleta, Christopher M; Wirkus, Stephen
2013-01-01
This article details the history, logistical operations, and design philosophy of the Mathematical and Theoretical Biology Institute (MTBI), a nationally recognized research program with an 18-year history of mentoring researchers at every level from high school through university faculty, increasing the number of researchers from historically underrepresented minorities, and motivating them to pursue research careers by allowing them to work on problems of interest to them and supporting them in this endeavor. This mosaic profile highlights how MTBI provides a replicable multi-level model for research mentorship.
The interaction of solar p-modes with a sunspot. II - Simple theoretical models
Abdelatif, Toufik E.; Thomas, John H.
1987-09-01
The interaction of solar p-modes with a sunspot magnetic flux tube is investigated theoretically by means of two simple models. An increase in horizontal wavelength between the nonmagnetic and magnetic regions, due to the different characteristic wave speeds in the two regions, explains the corresponding observed shift of power in the umbral k-omega power spectrum. The variation of the transmission coefficient with wavenumber along the p-mode diagnostic curves, due to resonant transmission, is responsible for the observed selective filtering of the p-modes by the sunspot.
Construction of a Dejourian theoretic model for the worker’s health evaluation
Directory of Open Access Journals (Sweden)
Marilise Katsurayama
2012-09-01
Full Text Available Objective: To construct a theoretical model intended for evaluative research on workers’ health within the Family Health Strategy, supported by Christophe Dejours’s theory. Methods: The theoretical model was built between September and December 2010 through the integration of Dejours’s theory and the facts of the current situation of health workers, represented by official documents, which allowed the instrumentalization of thought, making accessible the features related to workers’ health. Results: Dejourian theory was adopted as the basic element in the construction of this model because of its approach to the dynamics of the mental processes involved in the confrontation between the subject and his reality at work, focusing interest on the worker’s subjective experiences (the main sources of pleasure and suffering at work). Given the influence of work organization on workers’ health, it becomes essential to analyse the variables that influence the pleasure-suffering process among these actors, who are so important for the health practices that reorganize primary care. Conclusion: Christophe Dejours’s theory reveals great potential for the analysis of the psychological processes involved in the confrontation between the primary care worker and his reality, since it is vital to understand how issues related to work in the health area are disclosed and how workers react to difficult work situations.
The neural mediators of kindness-based meditation: a theoretical model
Mascaro, Jennifer S.; Darcher, Alana; Negi, Lobsang T.; Raison, Charles L.
2015-01-01
Although kindness-based contemplative practices are increasingly employed by clinicians and cognitive researchers to enhance prosocial emotions, social cognitive skills, and well-being, and as a tool to understand the basic workings of the social mind, we lack a coherent theoretical model with which to test the mechanisms by which kindness-based meditation may alter the brain and body. Here, we link contemplative accounts of compassion and loving-kindness practices with research from social cognitive neuroscience and social psychology to generate predictions about how diverse practices may alter brain structure and function and related aspects of social cognition. Contingent on the nuances of the practice, kindness-based meditation may enhance the neural systems related to faster and more basic perceptual or motor simulation processes, simulation of another’s affective body state, slower and higher-level perspective-taking, modulatory processes such as emotion regulation and self/other discrimination, and combinations thereof. This theoretical model will be discussed alongside best practices for testing such a model and potential implications and applications of future work. PMID:25729374
Directory of Open Access Journals (Sweden)
Alvaro Ruíz-Baltazar
2015-12-01
Full Text Available In this research, the adsorption capacity of Ag nanoparticles on natural zeolite from Oaxaca is presented. In order to describe the adsorption mechanism of silver nanoparticles on zeolite, experimental adsorption measurements for Ag ions and Ag nanoparticles were carried out. These experimental data, obtained by the atomic absorption spectrophotometry technique, were compared with theoretical models such as the Lagergren first-order, pseudo-second-order, Elovich, and intraparticle diffusion models. Correlation factors R2 of the order of 0.99 were observed. Analysis by transmission electron microscopy describes the distribution of the silver nanoparticles on the zeolite outer surface. Additionally, a chemical characterization of the material was carried out through a dilution process with lithium metaborate. An average value of 9.3 for the Si/Al ratio was observed. Factors such as the adsorption behavior of the silver ions and the Si/Al ratio of the zeolite are very important to support the theoretical models and establish the adsorption mechanism of Ag nanoparticles on natural zeolite.
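The kinetic models named in the abstract (Lagergren first-order, pseudo-second-order, Elovich, intraparticle diffusion) are standard closed-form fits. As a hedged illustration, the pseudo-second-order model can be fitted with nonlinear least squares; the uptake data and parameter values below are synthetic assumptions, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """Adsorbed amount q(t) = k2*qe^2*t / (1 + k2*qe*t)."""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

# Synthetic "measured" uptake (mg/g) at sampling times (min); qe=8.5, k2=0.05 assumed
t = np.array([1, 2, 5, 10, 20, 40, 60, 90, 120], dtype=float)
rng = np.random.default_rng(0)
q_obs = pseudo_second_order(t, 8.5, 0.05) + rng.normal(0.0, 0.05, t.size)

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, q_obs, p0=[5.0, 0.01])
resid = q_obs - pseudo_second_order(t, qe_fit, k2_fit)
r2 = 1.0 - resid.var() / q_obs.var()
print(f"qe={qe_fit:.2f} mg/g, k2={k2_fit:.4f}, R^2={r2:.3f}")
```

The same `curve_fit` call applies to the other three kinetic forms, which is how correlation factors like the reported R2 ~ 0.99 are compared across models.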
Directory of Open Access Journals (Sweden)
Milad Elyasi
2014-04-01
Full Text Available In the recent decade, studying economic order quantity (EOQ) models with imperfect quality has attracted many researchers. Only a few published papers discuss EOQ models with imperfect items in a supply chain. In this paper, a two-echelon decentralized supply chain consisting of a manufacturer and a supplier that both face a just-in-time (JIT) inventory problem is considered. We seek the optimal number of shipments and the quantity of each shipment that minimize both the manufacturer’s and the supplier’s cost functions. To the authors’ best knowledge, this is the first paper that deals with imperfect items in a decentralized supply chain. Accordingly, three different game-theoretical solution approaches, consisting of two non-cooperative games and a cooperative game, are proposed. Comparing the results of the three scenarios with those of the centralized model, conclusions are drawn about the best approach.
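The decentralized-versus-cooperative comparison described above can be caricatured in a few lines. The cost functions below are hypothetical stand-ins, not the paper's JIT cost model; only the structure of the comparison (one side choosing the shipment count selfishly versus joint cost minimization) is illustrated:

```python
def mfr_cost(n):   # manufacturer: ordering cost spread over n shipments + holding
    return 1200.0 / n + 15.0 * n

def sup_cost(n):   # supplier: per-shipment setup cost + residual holding
    return 40.0 * n + 600.0 / n

ns = range(1, 31)
n_dec = min(ns, key=mfr_cost)                             # manufacturer decides alone
n_coop = min(ns, key=lambda n: mfr_cost(n) + sup_cost(n))  # joint (cooperative) choice
total_dec = mfr_cost(n_dec) + sup_cost(n_dec)
total_coop = mfr_cost(n_coop) + sup_cost(n_coop)
print(n_dec, n_coop, round(total_dec, 1), round(total_coop, 1))
```

With these made-up costs the selfish and cooperative shipment counts differ, and the cooperative total is strictly lower, which is the qualitative point the paper's three game-theoretic scenarios formalize.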
Activity systems modeling as a theoretical lens for social exchange studies
Directory of Open Access Journals (Sweden)
Ernest Jones
2016-01-01
Full Text Available The social exchange perspective seeks to acknowledge, understand and predict the dynamics of social interactions. Empirical research involving social exchange constructs has grown highly technical, including confirmatory factor analysis to assess construct distinctiveness and structural equation modeling to assess construct causality. Each study strives to assess how underlying social exchange theoretic constructs interrelate. Yet despite this methodological depth and resultant explanatory and predictive power, a significant number of studies report findings that, once synthesized, suggest a persistent underlying threat to conceptual or construct validity brought about by a search for epistemological parsimony. Further, it is argued that a methodological approach that embraces inherent complexity, such as activity systems modeling, facilitates the search for simplified models while not ignoring contextual factors.
Brian K. Via; Chi L. So; Leslie H. Groom; Todd F. Shupe; Michael Stine; Jan Wikaira
2007-01-01
A theoretical model was built predicting the relationship between microfibril angle and lignin content at the Angstrom (A) level. Both theoretical and statistical examination of experimental data support a square root transformation of lignin to predict microfibril angle. The experimental material used came from 10 longleaf pine (Pinus palustris)...
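The reported square-root transformation can be illustrated with a generic regression sketch. The data below are synthetic, generated only to follow such a relationship; they are not the longleaf pine measurements:

```python
import numpy as np

rng = np.random.default_rng(5)
lignin = rng.uniform(20.0, 35.0, 60)                      # lignin content (%)
mfa = 5.0 + 6.0 * np.sqrt(lignin) + rng.normal(0, 1, 60)  # microfibril angle (deg)

def r2(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

r2_sqrt = r2(np.sqrt(lignin), mfa)
print(f"R^2 raw: {r2(lignin, mfa):.4f}, R^2 sqrt: {r2_sqrt:.4f}")
```

Over a narrow lignin range the two fits are close, which is why the paper leans on both theory and statistics, rather than fit quality alone, to justify the transformation.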
Guiffrida, Douglas A.
2005-01-01
The author presents a critical review of counselor education literature that has focused on student acquisition of theoretical orientations in order to identify the potential of these practices to facilitate critical self-reflection and theoretical fit among students. Two reflective, awareness-based pedagogical models--radical constructivism (E.…
Poroelastic behaviors of the osteon: A comparison of two theoretical osteon models
Wu, Xiao-Gang; Chen, Wei-Yi
2013-08-01
In this paper, two theoretical poroelastic osteon models are presented and their poroelastic behaviors compared: one is a hollow osteon model (Haversian fluid is neglected) and the other is an osteon model with Haversian fluid considered. Both have the same two types of impermeable exterior boundary conditions, one elastically restrained and the other displacement constrained, which can be used for analyzing other experiments performed on similarly shaped poroelastic specimens. The obtained analytical pressure and velocity solutions demonstrate the effects of the loading factors and the material parameters, which may provide a significant stimulus to the mechanotransduction of bone remodeling signals. Model comparisons indicate: (1) The Haversian fluid can enhance the whole osteonal fluid pressure and velocity fields. (2) In the hollow model, the key loading factor governing the poroelastic behavior of the osteon is the strain rate, while in the model with Haversian fluid considered, the strain rate governs only the velocity. (3) The pressure amplitude is proportional to the loading frequency in the hollow model, while in the model with Haversian fluid considered, the loading frequency has little effect on the pressure amplitude.
Game Theoretic Modeling of Water Resources Allocation Under Hydro-Climatic Uncertainty
Brown, C.; Lall, U.; Siegfried, T.
2005-12-01
Typical hydrologic and economic modeling approaches rely on assumptions of climate stationarity and economic conditions of ideal markets and rational decision-makers. In this study, we incorporate hydroclimatic variability with a game theoretic approach to simulate and evaluate common water allocation paradigms. Game theory may be particularly appropriate for modeling water allocation decisions. First, a game theoretic approach allows economic analysis in situations where price theory does not apply, which is typically the case in water resources, where markets are thin, players are few, and rules of exchange are highly constrained by legal or cultural traditions. Previous studies confirm that game theory is applicable to water resources decision problems, yet applications and modeling based on these principles are only rarely observed in the literature. Second, there are numerous existing theoretical and empirical studies of specific games and human behavior that may be applied in the development of predictive water allocation models. With this framework, one can evaluate alternative orderings and rules regarding the fraction of available water that one is allowed to appropriate. Specific attributes of the players involved in water resources management complicate the determination of solutions to game theory models. While an analytical approach will be useful for providing general insights, the variety of preference structures of individual players in a realistic water scenario will likely require a simulation approach. We propose a simulation approach incorporating the rationality, self-interest and equilibrium concepts of game theory with an agent-based modeling framework that allows the distinct properties of each player to be expressed and allows the performance of the system to manifest the integrative effect of these factors. Underlying this framework, we apply a realistic representation of spatio-temporal hydrologic variability and incorporate the impact of
Photon-tissue interaction model for quantitative assessment of biological tissues
Lee, Seung Yup; Lloyd, William R.; Wilson, Robert H.; Chandra, Malavika; McKenna, Barbara; Simeone, Diane; Scheiman, James; Mycek, Mary-Ann
2014-02-01
In this study, we describe a direct fit photon-tissue interaction model to quantitatively analyze reflectance spectra of biological tissue samples. The model rapidly extracts biologically-relevant parameters associated with tissue optical scattering and absorption. This model was employed to analyze reflectance spectra acquired from freshly excised human pancreatic pre-cancerous tissues (intraductal papillary mucinous neoplasm (IPMN), a common precursor lesion to pancreatic cancer). Compared to previously reported models, the direct fit model improved fit accuracy and speed. Thus, these results suggest that such models could serve as real-time, quantitative tools to characterize biological tissues assessed with reflectance spectroscopy.
Modeling Cancer Metastasis using Global, Quantitative and Integrative Network Biology
DEFF Research Database (Denmark)
Schoof, Erwin; Erler, Janine
, with genomic modifications giving rise to differential protein dynamics, ultimately resulting in disease. The exact molecular signaling networks underlying specific disease phenotypes remain elusive, as the definition thereof requires extensive analysis of not only the genomic and proteomic landscapes within...... of my PhD in an attempt to positively contribute to this fundamental challenge. The thesis is divided into four parts. In Chapter I, we introduce the complexity of cancer, and describe some underlying causes and ways to study the disease from different molecular perspectives. There is a nearly infinite...... understanding of molecular processes which are fundamental to tumorigenesis. In Article 1, we propose a novel framework for how cancer mutations can be studied by taking into account their effect at the protein network level. In Article 2, we demonstrate how global, quantitative data on phosphorylation dynamics...
2013-01-01
Background In the present study, we show the correlation of quantum chemical structural descriptors with the activation barriers of Diels-Alder ligations. A set of 72 non-catalysed Diels-Alder reactions was subjected to quantitative structure-activation barrier relationship (QSABR) modelling under the framework of theoretical quantum chemical descriptors calculated solely from the structures of the diene and dienophile reactants. Experimental activation barrier data were obtained from the literature. Descriptors were computed at the Hartree-Fock level with the 6-31G(d) basis set as implemented in the Gaussian 09 software. Results Variable selection and model development were carried out by stepwise multiple linear regression methodology. Predictive performance of the QSABR model was assessed by the training and test set concept and by calculating leave-one-out cross-validated Q2 and predictive R2 values. The QSABR model explains 86.5% of the variance in the activation energy barrier training data and predicts 80% of it. Alternatively, a neural network model based on back propagation of errors was developed to assess the nonlinearity of the sought correlations between theoretical descriptors and experimental reaction barriers. Conclusions A reasonable predictability for the activation barriers of the test set reactions was obtained, which enabled an exploration and interpretation of the significant variables responsible for the Diels-Alder interaction between dienes and dienophiles. Thus, studies in the direction of QSABR modelling, which provide efficient and fast prediction of activation barriers of Diels-Alder reactions, turn out to be a meaningful alternative to transition state theory based computation. PMID:24171724
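The modelling pipeline in the abstract, multiple linear regression on descriptors validated by a leave-one-out Q2, can be sketched generically. The descriptors and barriers below are synthetic, and scikit-learn stands in for whatever software the authors used:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(72, 3))   # 3 hypothetical quantum-chemical descriptors, 72 reactions
y = 20.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 1, 72)  # barrier (kcal/mol)

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)                                   # fitted R^2

# Leave-one-out cross-validated predictions -> Q^2
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1.0 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2={r2:.3f}, LOO Q^2={q2:.3f}")
```

A Q2 close to the fitted R^2, as here, is the signature of a model that is not merely memorizing its training reactions.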
Lu, Y.; Duursma, R.; Farrior, C.; Medlyn, B. E.
2016-12-01
Stomata control the exchange of soil water for atmospheric CO2, which is one of the most important resource trade-offs for plants. This trade-off has been studied extensively, but not in the context of competition. Based on the theory of evolutionarily stable strategies, we search for the uninvadable (ESS) response of stomatal conductance to soil water content under stochastic rainfall, with which the dominant plant population should never be invaded by any rare mutant in the competition for water, owing to its higher fitness. In this study, we define fitness as the difference between the long-term average photosynthetic carbon gain and the carbon cost of stomatal opening. This cost has traditionally been considered an unknown constant. Here we extend this framework by treating it as the energy required for refilling embolized xylem. With regard to the refilling process, we explore two questions: (1) to what extent can embolized xylem vessels be repaired via refilling; and (2) is this refilling immediate, or does it lag the formation of xylem embolism? We compare various assumptions in a total of five scenarios and find that the ESS exists only if the xylem damage can be repaired completely. Then, with this ESS, we estimate annual vegetation photosynthesis and water consumption and compare them with empirical results. In conclusion, this study provides a different insight from existing empirical and mechanistic models as well as from theoretical models based on optimization theory. In addition, as the model result is a simple quantitative relation between stomatal conductance and soil water content, it can easily be incorporated into other vegetation function models.
The Safety Culture Enactment Questionnaire (SCEQ): Theoretical model and empirical validation.
de Castro, Borja López; Gracia, Francisco J; Tomás, Inés; Peiró, José M
2017-06-01
This paper presents the Safety Culture Enactment Questionnaire (SCEQ), designed to assess the degree to which safety is an enacted value in the day-to-day running of nuclear power plants (NPPs). The SCEQ is based on a theoretical safety culture model that is manifested in three fundamental components of the functioning and operation of any organization: strategic decisions, human resources practices, and daily activities and behaviors. The extent to which the importance of safety is enacted in each of these three components provides information about the pervasiveness of the safety culture in the NPP. To validate the SCEQ and the model on which it is based, two separate studies were carried out with data collection in 2008 and 2014, respectively. In Study 1, the SCEQ was administered to the employees of two Spanish NPPs (N=533) belonging to the same company. Participants in Study 2 included 598 employees from the same NPPs, who completed the SCEQ and other questionnaires measuring different safety outcomes (safety climate, safety satisfaction, job satisfaction and risky behaviors). Study 1 comprised item formulation and examination of the factorial structure and reliability of the SCEQ. Study 2 tested internal consistency and provided evidence of factorial validity, validity based on relationships with other variables, and discriminant validity between the SCEQ and safety climate. Exploratory Factor Analysis (EFA) carried out in Study 1 revealed a three-factor solution corresponding to the three components of the theoretical model. Reliability analyses showed strong internal consistency for the three scales of the SCEQ, and each of the 21 items on the questionnaire contributed to the homogeneity of its theoretically developed scale. Confirmatory Factor Analysis (CFA) carried out in Study 2 supported the internal structure of the SCEQ; internal consistency of the scales was also supported. Furthermore, the three scales of the SCEQ showed the expected correlation
The custodially protected Randall-Sundrum model. Theoretical aspects and flavour phenomenology
Energy Technology Data Exchange (ETDEWEB)
Blanke, Monika
2009-07-24
Models with a warped extra dimension, so-called Randall-Sundrum models, provide an appealing solution to the gauge and flavour hierarchy problems of the Standard Model. After introducing the theoretical basics of such models, we concentrate on a specific model whose symmetry structure is extended to protect the T parameter and the Zb_L anti-b_L coupling from large corrections. We introduce the basic action and discuss in detail effects of electroweak symmetry breaking and the flavour structure of the model. Then we analyse meson-antimeson mixing and rare decays that are affected in an important manner by new tree level contributions from the Kaluza-Klein modes of the gauge bosons and from the Z boson. After deriving analytic expressions for the most important K and B physics observables, we perform a global numerical analysis of the new effects in the model in question. We confirm the recent findings that a stringent constraint on the model is placed by CP-violation in K^0 - anti-K^0 mixing. However, even for Kaluza-Klein particles in the reach of the LHC an agreement with all available data can be obtained without significant fine-tuning. We find possible large effects either in CP-violating observables in the B_s - anti-B_s system or in the rare K decays, but not simultaneously. In any case the deviations from the Standard Model predictions in the rare B decays are small and difficult to measure. The specific pattern of new flavour effects allows one to distinguish this model from other New Physics frameworks, which we demonstrate explicitly for models with Minimal Flavour Violation and for the Littlest Higgs model with T-parity. (orig.)
Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...
Theoretical Model of Coronary Blood Flow Regulation: Role of Myocardium Compressive Forces.
Xie, Xinzhou; Wang, Yuanyuan
2015-11-01
Auto-regulatory reserve of coronary blood flow is nonuniformly distributed across the ventricular wall. Myocardium compressive forces (MCF) are thought to play an important role in determining the transmural distribution of myocardial blood flow. Here, the impact of MCF on coronary flow regulation is analyzed using a theoretical model. Coronary microvessels at various depths in the ventricular wall are represented by parallel segments. Nine vessel regions are connected in series to represent one parallel segment, which includes four vasoactive regions regulated by wall tension, shear stress and metabolic demand. The nonuniform distribution of MCF is modeled, and its effects on coronary flow regulation are taken into consideration by using a modified tension model and a vessel collapse model. Flow regulation behaviors in both normal and obstructed coronary circulation are simulated. The model-predicted auto-regulatory curve is shifted to the high pressure region when the effect of MCF is included. Model-predicted flow distributions in obstructed coronary circulation show that severe stenosis in a coronary artery would first impede myocardial blood flow in the subendocardial layer. The model results indicate that MCF play an important role in coronary flow regulation and in determining the transmural distribution of myocardial blood flow. © 2015 John Wiley & Sons Ltd.
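The series/parallel structure described (series vessel regions forming one segment, segments in parallel across the wall) invites a toy hydraulic sketch. Everything below is illustrative: Poiseuille resistances, pressures, and the compressive-force effect reduced to a higher back pressure on the subendocardial branch, with no vasoactivity:

```python
import math

def poiseuille_R(radius_um, length_mm, mu=3e-3):
    """Hydraulic resistance 8*mu*L/(pi*r^4) of one vessel segment (SI units)."""
    r, L = radius_um * 1e-6, length_mm * 1e-3
    return 8.0 * mu * L / (math.pi * r ** 4)

# One transmural layer: a series chain (arteriole -> capillary bed -> venule)
R_l = poiseuille_R(60, 2.0) + poiseuille_R(30, 1.5) + poiseuille_R(80, 2.0)

P_in = 13300.0                      # perfusion pressure (Pa, ~100 mmHg)
P_epi, P_endo = 2000.0, 4000.0      # back pressures; endo raised by compression

def layer_flows(R_sten):
    """Flows in two parallel layers fed through a series (stenosis) resistance."""
    # Node pressure from conservation of flow at the branch point
    P_n = (P_in / R_sten + (P_epi + P_endo) / R_l) / (1.0 / R_sten + 2.0 / R_l)
    return (P_n - P_epi) / R_l, (P_n - P_endo) / R_l

q_epi0, q_endo0 = layer_flows(0.01 * R_l)   # near-patent epicardial artery
q_epi1, q_endo1 = layer_flows(3.0 * R_l)    # severe stenosis
print(f"endo/epi flow ratio: {q_endo0/q_epi0:.2f} -> {q_endo1/q_epi1:.2f}")
```

Even this crude network reproduces the qualitative conclusion: once the stenosis resistance dominates, the higher-back-pressure subendocardial branch loses proportionally more flow.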
Modeling of air-gap membrane distillation process: A theoretical and experimental study
Alsaadi, Ahmad Salem
2013-06-03
A one dimensional (1-D) air gap membrane distillation (AGMD) model for flat sheet type modules has been developed. This model is based on mathematical equations that describe the heat and mass transfer mechanisms of a single-stage AGMD process. It can simulate AGMD modules in both co-current and counter-current flow regimes. The theoretical model was validated using AGMD experimental data obtained under different operating conditions and parameters. The predicted water vapor flux was compared to the flux measured at five different feed water temperatures, two different feed water salinities, three different air gap widths and two MD membranes with different average pore sizes. This comparison showed that the model flux predictions are strongly correlated with the experimental data, with model predictions being within ±10% of the experimentally determined values. The model was then used to study and analyze the parameters that have a significant effect on scaling up the AGMD process, such as the effect of increasing the membrane length and the feed and coolant flow rates. The model was also used to analyze the maximum thermal efficiency of the AGMD process by tracing changes in the water production rate and the heat input to the process along the membrane length. This was used to understand the gain in both process production and thermal efficiency for different membrane surface areas and the resultant increases in process capital and water unit cost. © 2013 Elsevier B.V.
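The driving force in the heat and mass transfer description above is the water vapor pressure difference across the air gap. A minimal sketch, assuming simple Fickian vapor diffusion, an Antoine saturation curve, and illustrative temperatures, gap width and diffusivity (not the paper's module parameters):

```python
def p_sat(T_c):
    """Antoine equation for water (T in deg C), returned in Pa."""
    p_mmHg = 10 ** (8.07131 - 1730.63 / (233.426 + T_c))
    return p_mmHg * 133.322

T_feed, T_cool = 70.0, 25.0    # membrane-side and condensing-film temperatures (C)
gap = 3e-3                     # air gap width (m), within the tested range
D_wa = 2.9e-5                  # water vapor diffusivity in air (m^2/s), approximate
M, R = 0.018, 8.314            # molar mass (kg/mol), gas constant (J/(mol K))
T_mean = 0.5 * (T_feed + T_cool) + 273.15

# Fickian estimate: J = D * M * (p_hot - p_cold) / (R * T * gap)   [kg/(m^2 s)]
J = D_wa * M * (p_sat(T_feed) - p_sat(T_cool)) / (R * T_mean * gap)
print(f"vapor flux ~ {J * 3600:.1f} kg/(m^2 h)")
```

This neglects the membrane resistance, Stefan (air counter-diffusion) correction and temperature polarization that the full 1-D model accounts for, but it lands in the right order of magnitude for MD flux and shows why flux falls as the gap widens.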
Theoretical model for mesoscopic-level scale-free self-organization of functional brain networks.
Piersa, Jaroslaw; Piekniewski, Filip; Schreiber, Tomasz
2010-11-01
In this paper, we provide theoretical and numerical analysis of a geometric activity flow network model which is aimed at explaining mathematically the scale-free functional graph self-organization phenomena emerging in complex nervous systems at a mesoscale level. In our model, each unit corresponds to a large number of neurons and may be roughly seen as abstracting the functional behavior exhibited by a single voxel under functional magnetic resonance imaging (fMRI). In the course of the dynamics, the units exchange portions of formal charge, which correspond to waves of activity in the underlying microscale neuronal circuit. The geometric model abstracts away the neuronal complexity and is mathematically tractable, which allows us to establish explicit results on its ground states and the resulting charge transfer graph modeling functional graph of the network. We show that, for a wide choice of parameters and geometrical setups, our model yields a scale-free functional connectivity with the exponent approaching 2, which is in agreement with previous empirical studies based on fMRI. The level of universality of the presented theory allows us to claim that the model does shed light on mesoscale functional self-organization phenomena of the nervous system, even without resorting to closer details of brain connectivity geometry which often remain unknown. The material presented here significantly extends our previous work where a simplified mean-field model in a similar spirit was constructed, ignoring the underlying network geometry.
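The quoted exponent approaching 2 is the kind of quantity one verifies against data with a maximum-likelihood fit. A sketch on synthetic samples from a continuous power law, using the standard Hill-type MLE, rather than the model's actual charge-transfer graph:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha_true, k_min, n = 2.0, 1.0, 50_000

# Inverse-CDF sampling for a continuous power law p(k) ~ k^(-alpha), k >= k_min
u = rng.uniform(size=n)
k = k_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

# Maximum-likelihood estimate: alpha_hat = 1 + n / sum(ln(k / k_min))
alpha_hat = 1.0 + n / np.log(k / k_min).sum()
print(f"estimated exponent: {alpha_hat:.3f}")
```

The MLE is preferable to fitting a line to a log-log histogram, whose slope estimates are notoriously biased for exactly the degree-distribution comparisons the paper draws against fMRI studies.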
A theoretical model for predicting the Peak Cutting Force of conical picks
Directory of Open Access Journals (Sweden)
Gao Kuidong
2014-01-01
Full Text Available In order to predict the peak cutting force (PCF) of a conical pick in the rock cutting process, a theoretical model is established based on elastic fracture mechanics theory. The vertical fracture model of the rock cutting fragment is also established based on the maximum tensile criterion. The relation between the vertical fracture angle and the associated parameters (the cutting parameter and the ratio B of rock compressive strength to tensile strength) is obtained by numerical analysis and polynomial regression, and the correctness of the rock vertical fracture model is verified through experiments. The linear regression coefficient between predicted and experimental PCF values is 0.81, and a significance level below 0.05 shows that the model for predicting the PCF is correct and reliable. A comparative analysis between the PCF obtained from this model and from the Evans model reveals that this prediction model is more reliable and accurate. The results of this work could provide some guidance for studying the rock cutting theory of the conical pick and designing the cutting mechanism.
Theoretical study on the inverse modeling of deep body temperature measurement.
Huang, Ming; Chen, Wenxi
2012-03-01
We evaluated the theoretical aspects of monitoring the deep body temperature distribution with the inverse modeling method. A two-dimensional model was built based on anatomical structure to simulate the human abdomen. By integrating biophysical and physiological information, the deep body temperature distribution was estimated from cutaneous surface temperature measurements using an inverse quasilinear method. Simulations were conducted with and without the heat effect of blood perfusion in the muscle and skin layers. The results of the simulations showed consistently that the noise characteristics and arrangement of the temperature sensors were the major factors affecting the accuracy of the inverse solution. With temperature sensors of 0.05 °C systematic error and an optimized 16-sensor arrangement, the inverse method could estimate the deep body temperature distribution with an average absolute error of less than 0.20 °C. The results of this theoretical study suggest that it is possible to reconstruct the deep body temperature distribution with the inverse method and that this approach merits further investigation.
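The inverse step can be sketched as a regularized linear inversion. The toy Gaussian forward kernel below stands in for the paper's biophysical abdomen model; only the sensor count (16) and systematic sensor error (0.05 °C) follow the abstract, and the deep temperature profile is assumed:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16                                        # an optimized 16-sensor arrangement
x = np.linspace(0.0, 1.0, n)
T_deep = 37.0 - 2.0 * (x - 0.5) ** 2          # assumed deep temperature profile (C)

# Forward model: each surface sensor sees a Gaussian-weighted average of deep temps
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.08 ** 2))
A /= A.sum(axis=1, keepdims=True)
T_surf = A @ T_deep + rng.normal(0.0, 0.05, n)  # 0.05 C sensor error

# Tikhonov-regularized inversion of the deviation from a 37 C prior, so the
# regularization damps noise without biasing the absolute level toward zero
T0, lam = 37.0, 5e-3
d = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ (T_surf - T0))
T_est = T0 + d
err = np.abs(T_est - T_deep).mean()
print(f"mean absolute error: {err:.3f} C")
```

Without the regularization term the smoothing kernel makes the system badly conditioned and sensor noise is amplified, which is the ill-posedness the paper's quasilinear method is designed to control.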
A THEORETICAL MODEL OF SOCIO-PSYCHOLOGICAL SUPPORT WORK PROCESSES FOR MANAGEMENT OF PRODUCTION TEAM
Directory of Open Access Journals (Sweden)
Tatyana Gennadevna Pronyushkina
2015-10-01
Full Text Available This article discusses the management of production teams, in particular a theoretical model of socio-psychological support for production team management processes. The author formulates the purpose and objectives of socio-psychological work on managing a production team. The theoretical model developed in the study aims to determine the conditions for, and identify the features of, effective enterprise management that takes into account the socio-psychological characteristics of the staff. Its tasks include defining the main characteristics of a production team and their severity, analyzing these characteristics and identifying opportunities for their transformation, and developing recommendations for managing the socio-psychological dimension of the enterprise collective. A practical study of the activities of several businesses showed the need to improve socio-psychological support for production team management processes: introducing social and psychological team planning, developing the practice of sociological research on the state of the team, smoothing relations between workers and management through periodic meetings, creating conditions for feedback, and maintaining healthy competition among team members.
Paggeot, Amy; Nelson, Sharon; Huprich, Steven
2017-10-12
The role of theoretical orientation in determining preference for different methods of diagnosis has been largely unexplored. The goal of the present study was to explore ratings of the usefulness of 4 diagnostic methods after applying them to a patient: prototype ratings derived from the SWAP-II, the DSM-5 Section III specific personality disorders, the DSM-5 Section III trait model, and prototype ratings derived from the Psychodynamic Diagnostic Manual (PDM). Three hundred and twenty-nine trainees in APA-accredited doctoral programs and internships rated one of their current patients with each of the 4 diagnostic methods. Individuals who classified their theoretical orientation as "cognitive-behavioral" displayed a significantly greater preference for the proposed DSM-5 personality disorder prototypes when compared to individuals who classified their orientation as "psychodynamic/psychoanalytic," while individuals who considered themselves psychodynamic or psychoanalytic rated the PDM as significantly more useful than those who considered themselves cognitive-behavioral. Individuals who classified their graduate program as a PsyD program were also more likely than individuals in PhD programs to rate the DSM-5 Section III and PDM models as more useful diagnostic methods. Implications and future directions will be discussed. © 2017 S. Karger AG, Basel.
Theoretical models to predict the mechanical behavior of thick composite tubes
Directory of Open Access Journals (Sweden)
Volnei Tita
2012-02-01
Full Text Available This paper presents theoretical models (analytical formulations) to predict the mechanical behavior of thick composite tubes and shows how certain parameters influence this behavior. First, analytical formulations were developed for a pressurized tube made of composite material with a single thick ply and only one lamination angle. For this case, the stress distribution and the displacement fields are investigated as a function of different lamination angles and reinforcement volume fractions. The results obtained by the theoretical model are physically consistent and coherent with information in the literature. The formulations are then extended to predict the mechanical behavior of a thick laminated tube. Both analytical formulations are implemented as a computational tool via Matlab code. The results obtained by the computational tool are compared with finite element analyses, and the stress distribution is found to be coherent. Moreover, the engineering computational tool is used to perform failure analysis, using different types of failure criteria, which identifies the damaged ply and the mode of failure.
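For an isotropic material, the single-ply pressurized-tube problem above reduces to the classical Lamé thick-walled cylinder solution. The sketch below shows that simplified isotropic case only (the paper's anisotropic laminate equations are not reproduced here), with illustrative geometry and pressure values:

```python
def lame_stresses(r, r_i, r_o, p_i, p_o=0.0):
    """Radial and hoop stress at radius r in a thick-walled isotropic
    cylinder under internal pressure p_i and external pressure p_o
    (classical Lame solution)."""
    A = (p_i * r_i**2 - p_o * r_o**2) / (r_o**2 - r_i**2)
    B = (p_i - p_o) * r_i**2 * r_o**2 / (r_o**2 - r_i**2)
    return A - B / r**2, A + B / r**2  # (sigma_r, sigma_theta)

# boundary checks: sigma_r = -p_i at the bore, 0 at a free outer wall
sigma_r_in, _ = lame_stresses(r=25e-3, r_i=25e-3, r_o=40e-3, p_i=10e6)
sigma_r_out, _ = lame_stresses(r=40e-3, r_i=25e-3, r_o=40e-3, p_i=10e6)
```

The same boundary checks (radial stress equal to minus the internal pressure at the bore, zero at a free outer surface) are the sanity checks one would apply to the laminated formulation.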
Linear regression models for quantitative assessment of left ...
African Journals Online (AJOL)
Changes in left ventricular structures and function have been reported in cardiomyopathies. No prediction models have been established in this environment. This study established regression models for prediction of left ventricular structures in normal subjects. A sample of normal subjects was drawn from a large urban ...
Evans, Jason; Sullivan, Jack
2011-01-01
A priori selection of models for use in phylogeny estimation from molecular sequence data is increasingly important as the number and complexity of available models increases. The Bayesian information criterion (BIC) and the derivative decision-theoretic (DT) approaches rely on a conservative approximation to estimate the posterior probability of a given model. Here, we extended the DT method by using reversible jump Markov chain Monte Carlo approaches to directly estimate model probabilities for an extended candidate pool of all 406 special cases of the general time reversible + Γ family. We analyzed 250 diverse data sets in order to evaluate the effectiveness of the BIC approximation for model selection under the BIC and DT approaches. Model choice under DT differed between the BIC approximation and direct estimation methods for 45% of the data sets (113/250), and differing model choice resulted in significantly different sets of trees in the posterior distributions for 26% of the data sets (64/250). The model with the lowest BIC score differed from the model with the highest posterior probability in 30% of the data sets (76/250). When the data indicate a clear model preference, the BIC approximation works well enough to result in the same model selection as with directly estimated model probabilities, but a substantial proportion of biological data sets lack this characteristic, which leads to selection of underparametrized models.
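The BIC comparison underlying both approaches is simple to compute: BIC = -2 ln L + k ln n, with the lowest score preferred. The sketch below uses hypothetical log-likelihoods for three nested substitution models, not data from the study:

```python
import math

def bic(log_likelihood, n_params, n_sites):
    """Bayesian information criterion; lower is better."""
    return -2.0 * log_likelihood + n_params * math.log(n_sites)

# hypothetical maximized log-likelihoods and free-parameter counts
candidates = {
    "JC69":  {"lnL": -5230.4, "k": 0},
    "HKY85": {"lnL": -5101.7, "k": 4},
    "GTR":   {"lnL": -5098.2, "k": 8},
}
n_sites = 1200
scores = {m: bic(v["lnL"], v["k"], n_sites) for m, v in candidates.items()}
best = min(scores, key=scores.get)
```

Here the penalty term k ln n overturns the small likelihood gain of GTR over HKY85, illustrating how BIC can favor an intermediate model.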
Food addiction spectrum: a theoretical model from normality to eating and overeating disorders.
Piccinni, Armando; Marazziti, Donatella; Vanelli, Federica; Franceschini, Caterina; Baroni, Stefano; Costanzo, Davide; Cremone, Ivan Mirko; Veltri, Antonello; Dell'Osso, Liliana
2015-01-01
The authors comment on the recently proposed food addiction spectrum, a theoretical model for understanding the continuum between several conditions ranging from normality to pathological states, including eating disorders and obesity, as well as why some individuals show a peculiar attachment to food that can become an addiction. Further, they review the possible neurobiological underpinnings of these conditions, which include dopaminergic neurotransmission and circuits that have long been implicated in drug addiction. This article also aims to stimulate a debate regarding the possible model of a food (or eating) addiction spectrum that may be helpful in the search for novel therapeutic approaches to different pathological states related to disturbed feeding or overeating.
Slater, Michael D
2006-01-01
While increasingly widespread use of behavior change theory is an advance for communication campaigns and their evaluation, such theories provide a necessary but not sufficient condition for theory-based communication interventions. Such interventions and their evaluations need to incorporate theoretical thinking about plausible mechanisms of message effect on health-related attitudes and behavior. Otherwise, strategic errors in message design and dissemination, and misspecified campaign logic models, insensitive to campaign effects, are likely to result. Implications of the elaboration likelihood model, attitude accessibility, attitude to the ad theory, exemplification, and framing are explored, and implications for campaign strategy and evaluation designs are briefly discussed. Initial propositions are advanced regarding a theory of campaign affect generalization derived from attitude to ad theory, and regarding a theory of reframing targeted health behaviors in those difficult contexts in which intended audiences are resistant to the advocated behavior or message.
A unified theoretical framework for mapping models for the multi-state Hamiltonian.
Liu, Jian
2016-11-28
We propose a new unified theoretical framework to construct equivalent representations of the multi-state Hamiltonian operator and present several approaches for the mapping onto the Cartesian phase space. After mapping an F-dimensional Hamiltonian onto an F+1 dimensional space, creation and annihilation operators are defined such that the F+1 dimensional space is complete for any combined excitation. Commutation and anti-commutation relations are then naturally derived, which show that the underlying degrees of freedom are neither bosons nor fermions. This sets the scene for developing equivalent expressions of the Hamiltonian operator in quantum mechanics and their classical/semiclassical counterparts. Six mapping models are presented as examples. The framework also offers a novel way to derive models such as the well-known Meyer-Miller model.
Theoretical modeling of a two-phase thermosyphon assuming a liquid reservoir
Energy Technology Data Exchange (ETDEWEB)
Zanardi, M.A. [UNESP, Guaratingueta, SP (Brazil). Faculdade de Engenharia. Dept. de Energia; Leite, N.G.C. [Universidade do Estado do Rio de Janeiro (UERJ), Resende, RJ (Brazil). Faculdade de Tecnologia. Dept. de Mecanica e Energia]. E-mail: nleite@fat.uerj.br
2007-06-15
A theoretical modeling, using the mass, momentum and energy conservation equations, of the intrinsic phenomena in the operation of a cylindrical two-phase thermosyphon operating vertically was performed. The conservation equations were solved for steady-state operation for all phases of the thermosyphon. The model also assumed the presence of a liquid reservoir, whose values of the heat transfer coefficient that determine its operation were obtained from correlations published in the literature. The set of conservation equations was solved using the finite volume method. The results were checked against experimental data from the literature and from specific experiments performed in the laboratory. Overall, the theoretical results matched the experiments reasonably well, and the observed deviations were attributed to an inadequate prediction by the reservoir model used, as well as to the assumption of a stable liquid level in the reservoir. (author)
Chiang, Cheng-Wei; Ramsey-Musolf, Michael J.; Senaha, Eibun
2018-01-01
We analyze the theoretical and phenomenological considerations for the electroweak phase transition and dark matter in an extension of the standard model with a complex scalar singlet (cxSM). In contrast with earlier studies, we use a renormalization group improved scalar potential and treat its thermal history in a gauge-invariant manner. We find that the parameter space consistent with a strong first-order electroweak phase transition (SFOEWPT) and present dark matter phenomenological constraints is significantly restricted compared to results of a conventional, gauge-noninvariant analysis. In the simplest variant of the cxSM, recent LUX data and a SFOEWPT require a dark matter mass close to half the mass of the standard model-like Higgs boson. We also comment on various caveats regarding the perturbative treatment of the phase transition dynamics.
Privman, Vladimir; Zavalov, Oleksandr; Halámková, Lenka; Moseley, Fiona; Halámek, Jan; Katz, Evgeny
2013-12-05
We report the first study of a network of connected enzyme-catalyzed reactions, with added chemical and enzymatic processes that incorporate the recently developed biochemical filtering steps into the functioning of this biocatalytic cascade. New theoretical expressions are derived to allow simple, few-parameter modeling of network components concatenated in such cascades, both with and without filtering. The derived expressions are tested against experimental data obtained for the realized network's responses, measured optically, to variations of its input chemicals' concentrations with and without filtering processes. We also describe how the present modeling approach captures and explains several observations and features identified in earlier studies of enzymatic processes when they were considered as potential network components for multistep information/signal processing systems.
Theoretical size distribution of fossil taxa: analysis of a null model
Directory of Open Access Journals (Sweden)
Hughes Barry D
2007-03-01
Full Text Available Abstract Background This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition, new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered, along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
Theoretical investigations of the new Cokriging method for variable-fidelity surrogate modeling
DEFF Research Database (Denmark)
Zimmermann, Ralf; Bertram, Anna
2017-01-01
Cokriging is a variable-fidelity surrogate modeling technique which emulates a target process based on the spatial correlation of sampled data of different levels of fidelity. In this work, we address two theoretical questions associated with the so-called new Cokriging method for variable fidelity...... matrices for mutually distinct sample points. However, in applications, low-fidelity information is often available at high-fidelity sample points and the Cokriging predictor may benefit from the additional information provided by such an inclusive sampling. We investigate the positive definiteness...... by the method of maximum likelihood estimation. For standard Kriging, closed-form optima of the model parameters along hyper-parameter profile lines are known. Yet, these do not readily transfer to the setting of Cokriging, since additional parameters arise, which exhibit a mutual dependence. In previous work...
Digital clocks: simple Boolean models can quantitatively describe circadian systems
Akman, Ozgur E.; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J.; Ghazal, Peter
2012-01-01
The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day–night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we
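The reduction to Boolean logic can be illustrated with a minimal two-gene negative-feedback loop, a toy stand-in for the clock circuits in the study rather than one of its fitted models. Under synchronous updates this loop cycles with a fixed period, the discrete analogue of a sustained oscillation:

```python
def step(state):
    """Synchronous Boolean update of a two-gene negative-feedback loop:
    gene A is repressed by B, gene B is activated by A."""
    a, b = state
    return (int(not b), a)

# iterate from an initial condition; the trajectory settles into a
# period-4 cycle: (1,0) -> (1,1) -> (0,1) -> (0,0) -> (1,0) -> ...
trajectory = [(1, 0)]
for _ in range(8):
    trajectory.append(step(trajectory[-1]))
```

The finite state space (here just four states) is what makes fitting and exhaustive analysis of such models tractable compared with differential-equation formulations.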
Theoretical models for designing a 220-GHz folded waveguide backward wave oscillator
Cai, Jin-Chi; Hu, Lin-Lin; Ma, Guo-Wu; Chen, Hong-Bin; Jin, Xiao; Chen, Huai-Bi
2015-06-01
In this paper, the basic equations of beam-wave interaction for designing the 220 GHz folded waveguide (FW) backward wave oscillator (BWO) are described. On the whole, these equations are mainly classified into small signal model (SSM), large signal model (LSM), and simplified small signal model (SSSM). Using these linear and nonlinear one-dimensional (1D) models, the oscillation characteristics of the FW BWO of a given configuration of slow wave structure (SWS) can be calculated by numerical iteration algorithm, which is more time efficient than three-dimensional (3D) particle-in-cell (PIC) simulation. The SSSM expressed by analytical formulas is innovatively derived for determining the initial values of the FW SWS conveniently. The dispersion characteristics of the FW are obtained by equivalent circuit analysis. The space charge effect, the end reflection effect, the lossy wall effect, and the relativistic effect are all considered in our models to offer more accurate results. The design process of the FW BWO tube with output power of watt scale in a frequency range between 215 GHz and 225 GHz based on these 1D models is demonstrated. The 3D PIC method is adopted to verify the theoretical design results, which shows that they are in good agreement with each other. Project supported by the Innovative Research Foundation of China Academy of Engineering Physics (Grant No. 426050502-2).
Toward a comprehensive, theoretical model of compassion fatigue: An integrative literature review.
Coetzee, Siedine K; Laschinger, Heather K S
2017-11-20
This study was an integrative literature review in relation to compassion fatigue models, appraising these models, and developing a comprehensive theoretical model of compassion fatigue. A systematic search on PubMed, EbscoHost (Academic Search Premier, E-Journals, Medline, PsycINFO, Health Source Nursing/Academic Edition, CINAHL, MasterFILE Premier and Health Source Consumer Edition), gray literature, and manual searches of included reference lists was conducted in 2016. The studies (n = 11) were analyzed, and the strengths and limitations of the compassion fatigue models identified. We further built on these models through the application of the conservation of resources theory and the social neuroscience of empathy. The compassion fatigue model shows that it is not empathy that puts nurses at risk of developing compassion fatigue, but rather a lack of resources, inadequate positive feedback, and the nurse's response to personal distress. By acting on these three aspects, the risk of developing compassion fatigue can be addressed, which could improve the retention of a compassionate and committed nurse workforce. © 2017 John Wiley & Sons Australia, Ltd.
Directory of Open Access Journals (Sweden)
Czoli Christine
2011-10-01
Full Text Available Abstract Physician-researchers are bound by professional obligations stemming from both the role of the physician and the role of the researcher. Currently, the dominant models for understanding the relationship between physician-researchers' clinical duties and research duties fit into three categories: the similarity position, the difference position and the middle ground. The law may be said to offer a fourth "model" that is independent from these three categories. These models frame the expectations placed upon physician-researchers by colleagues, regulators, patients and research participants. This paper examines the extent to which the data from semi-structured interviews with 30 physician-researchers at three major pediatric hospitals in Canada reflect these traditional models. It seeks to determine the extent to which existing models align with the described lived experience of the pediatric physician-researchers interviewed. Ultimately, we find that although some physician-researchers make references to something like the weak version of the similarity position, the pediatric-researchers interviewed in this study did not describe their dual roles in a way that tightly mirrors any of the existing theoretical frameworks. We thus conclude that either physician-researchers are in need of better training regarding the nature of the accountability relationships that flow from their dual roles or that models setting out these roles and relationships must be altered to better reflect what we can reasonably expect of physician-researchers in a real-world environment.
Lei, Tailong; Chen, Fu; Liu, Hui; Sun, Huiyong; Kang, Yu; Li, Dan; Li, Youyong; Hou, Tingjun
2017-07-03
As a dangerous end point, respiratory toxicity can cause serious adverse health effects and even death. Meanwhile, it is a common and traditional issue in occupational and environmental protection. Pharmaceutical and chemical industries have a strong urge to develop precise and convenient computational tools to evaluate the respiratory toxicity of compounds as early as possible. Most of the reported theoretical models were developed based on the respiratory toxicity data sets with one single symptom, such as respiratory sensitization, and therefore these models may not afford reliable predictions for toxic compounds with other respiratory symptoms, such as pneumonia or rhinitis. Here, based on a diverse data set of mouse intraperitoneal respiratory toxicity characterized by multiple symptoms, a number of quantitative and qualitative predictions models with high reliability were developed by machine learning approaches. First, a four-tier dimension reduction strategy was employed to find an optimal set of 20 molecular descriptors for model building. Then, six machine learning approaches were used to develop the prediction models, including relevance vector machine (RVM), support vector machine (SVM), regularized random forest (RRF), extreme gradient boosting (XGBoost), naïve Bayes (NB), and linear discriminant analysis (LDA). Among all of the models, the SVM regression model shows the most accurate quantitative predictions for the test set (q2ext = 0.707), and the XGBoost classification model achieves the most accurate qualitative predictions for the test set (MCC of 0.644, AUC of 0.893, and global accuracy of 82.62%). The application domains were analyzed, and all of the tested compounds fall within the application domain coverage. We also examined the structural features of the compounds and important fragments with large prediction errors. In conclusion, the SVM regression model and the XGBoost classification model can be employed as accurate prediction tools
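The qualitative models above are scored with metrics such as the Matthews correlation coefficient (MCC) and global accuracy, both of which follow directly from confusion-matrix counts. A minimal sketch (the counts used in the check are illustrative, not those of the study):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts;
    ranges from -1 (total disagreement) to +1 (perfect prediction)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def accuracy(tp, tn, fp, fn):
    """Global accuracy: fraction of correctly classified compounds."""
    return (tp + tn) / (tp + tn + fp + fn)
```

Unlike accuracy, MCC remains informative on the imbalanced class distributions common in toxicity data sets, which is presumably why it is reported alongside AUC here.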
Directory of Open Access Journals (Sweden)
Fatma E. El-Khouly
2017-10-01
Full Text Available Despite decades of clinical trials for diffuse intrinsic pontine glioma (DIPG, patient survival does not exceed 10% at two years post-diagnosis. Lack of benefit from systemic chemotherapy may be attributed to an intact blood-brain barrier (BBB. We aim to develop a theoretical model including relevant physicochemical properties in order to review whether applied chemotherapeutics are suitable for passive diffusion through an intact BBB or whether local administration via convection-enhanced delivery (CED may increase their therapeutic potential. Physicochemical properties (lipophilicity, molecular weight, and charge in physiological environment of anticancer drugs historically and currently administered to DIPG patients, that affect passive diffusion over the BBB, were included in the model. Subsequently, the likelihood of BBB passage of these drugs was ascertained, as well as their potential for intratumoral administration via CED. As only non-molecularly charged, lipophilic, and relatively small sized drugs are likely to passively diffuse through the BBB, out of 51 drugs modeled, only 8 (15% (carmustine, lomustine, erlotinib, vismodegib, lenalidomide, thalidomide, vorinostat, and mebendazole are theoretically qualified for systemic administration in DIPG. Local administration via CED might create more therapeutic options, excluding only positively charged drugs and drugs that are either prodrugs and/or only available as oral formulation. A wide variety of drugs have been administered systemically to DIPG patients. Our model shows that only few are likely to penetrate the BBB via passive diffusion, which may partly explain the lack of efficacy. Drug distribution via CED is less dependent on physicochemical properties and may increase the therapeutic options for DIPG.
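A screen of this kind is essentially a conjunction of property rules (neutral charge, small size, suitable lipophilicity). The sketch below illustrates the idea only; the numeric thresholds are assumptions for illustration, not the exact cut-offs used in the study:

```python
def likely_bbb_passive_diffusion(mol_weight, log_p, charged):
    """Heuristic screen for passive BBB diffusion.
    mol_weight in Da, log_p = lipophilicity, charged = molecular charge
    at physiological pH. Thresholds are illustrative assumptions."""
    if charged:                      # charged species are excluded
        return False
    if mol_weight > 450:             # assumed size cut-off (Da)
        return False
    if not (1.0 <= log_p <= 4.0):    # assumed lipophilicity window
        return False
    return True

# approximate published values for lomustine (~234 Da, logP ~2.8, neutral)
lomustine_ok = likely_bbb_passive_diffusion(234, 2.8, False)
```

Such a conjunction makes explicit why so few of the 51 modeled drugs pass: failing any single property rule disqualifies the compound.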
National Research Council Canada - National Science Library
Jerry Kudlats; William McDowell
2015-01-01
.... This research examines the literature on board composition and strategic decision making, and presents a theoretical model for board composition and firm performance in small and medium sized family firms...
Improved Mental Acuity Forecasting with an Individualized Quantitative Sleep Model
Winslow, Brent D.; Nam Nguyen; Venta, Kimberly E.
2017-01-01
Sleep impairment significantly alters human brain structure and cognitive function, but available evidence suggests that adults in developed nations are sleeping less. A growing body of research has sought to use sleep to forecast cognitive performance by modeling the relationship between the two, but has generally focused on vigilance rather than other cognitive constructs affected by sleep, such as reaction time, executive function, and working memory. Previous modeling efforts have also ut...
The Red Queen model of recombination hot-spot evolution: a theoretical investigation.
Latrille, Thibault; Duret, Laurent; Lartillot, Nicolas
2017-12-19
In humans and many other species, recombination events cluster in narrow and short-lived hot spots distributed across the genome, whose location is determined by the Zn-finger protein PRDM9. To explain these fast evolutionary dynamics, an intra-genomic Red Queen model has been proposed, based on the interplay between two antagonistic forces: biased gene conversion, mediated by double-strand breaks, resulting in hot-spot extinction, followed by positive selection favouring new PRDM9 alleles recognizing new sequence motifs. Thus far, however, this Red Queen model has not been formalized as a quantitative population-genetic model, fully accounting for the intricate interplay between biased gene conversion, mutation, selection, demography and genetic diversity at the PRDM9 locus. Here, we explore the population genetics of the Red Queen model of recombination. A Wright-Fisher simulator was implemented, allowing exploration of the behaviour of the model (mean equilibrium recombination rate, diversity at the PRDM9 locus or turnover rate) as a function of the parameters (effective population size, mutation and erosion rates). In a second step, analytical results based on self-consistent mean-field approximations were derived, reproducing the scaling relations observed in the simulations. Empirical fit of the model to current data from the mouse suggests both a high mutation rate at PRDM9 and strong biased gene conversion on its targets.This article is part of the themed issue 'Evolutionary causes and consequences of recombination rate variation in sexual organisms'. © 2017 The Authors.
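The Wright-Fisher machinery underlying the simulator can be sketched in a few lines. The version below models neutral drift of a single allele frequency only, without the biased gene conversion, selection, and PRDM9-specific terms of the full Red Queen model:

```python
import random

def wright_fisher(n_individuals, p0, generations, seed=0):
    """Neutral Wright-Fisher drift: each generation, 2N gene copies are
    resampled binomially from the current allele frequency p."""
    rng = random.Random(seed)
    p = p0
    freqs = [p]
    for _ in range(generations):
        n = 2 * n_individuals  # diploid gene copies
        count = sum(1 for _ in range(n) if rng.random() < p)
        p = count / n
        freqs.append(p)
    return freqs
```

Extending this skeleton with a frequency-dependent fitness term for PRDM9 alleles (favoring alleles whose targets are not yet eroded) is the kind of addition the study's simulator makes.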
A new theoretical model for cooperation in public health settings: the RDIC model.
de Rijk, Angelique; van Raak, Arno; van der Made, Jan
2007-10-01
The Resource Dependence Institutional Cooperation (RDIC) model was constructed from four combined theories: network, organizational behavior, resource dependence, and new institutional. The authors developed the model in an effort to better understand cooperation in public health settings, and tested its validity in two different types of networks related to occupational health. Two qualitative studies were performed in the Netherlands. The first study included 11 respondents dealing with the sickness absence of 4 employees. The second study included 11 respondents from 5 organizations involved in developing sickness absence policy. Document analyses and semistructured interviews were performed. The results indicate that the RDIC model coincided with empirical patterns of cooperation in both types of networks. Though they recommend further empirical research, the authors conclude that the model appears to be a valid instrument for understanding cooperation. They assert that the RDIC model can facilitate the management of cooperation in various public health settings.
A suite of models to support the quantitative assessment of spread in pest risk analysis
Robinet, C.; Kehlenbeck, H.; Werf, van der W.
2012-01-01
In the frame of the EU project PRATIQUE (KBBE-2007-212459 Enhancements of pest risk analysis techniques) a suite of models was developed to support the quantitative assessment of spread in pest risk analysis. This dataset contains the model codes (R language) for the four models in the suite. Three
Pargett, Michael; Umulis, David M
2013-07-15
Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data, however, much of the data available or even acquirable are not quantitative. Data that is not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended to both inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method to use for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.
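One of the techniques named above, optimal scaling, has a closed-form least-squares solution when the data are proportional to the modeled quantity up to an unknown factor, as with many non-absolute measurements. A minimal sketch:

```python
def optimal_scale(model, data):
    """Least-squares scale factor alpha minimizing sum((alpha*m - d)^2),
    i.e. alpha = sum(m*d) / sum(m*m)."""
    num = sum(m * d for m, d in zip(model, data))
    den = sum(m * m for m in model)
    return num / den

def scaled_sse(model, data):
    """Fitness value: residual error after optimally scaling the model."""
    a = optimal_scale(model, data)
    return sum((a * m - d) ** 2 for m, d in zip(model, data))
```

Scoring models by `scaled_sse` rather than raw error lets relative data (e.g. fluorescence intensities in arbitrary units) constrain parameters without pretending the measurements are absolute.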
Statistical analysis of probabilistic models of software product lines with quantitative constraints
DEFF Research Database (Denmark)
Beek, M.H. ter; Legay, A.; Lluch Lafuente, Alberto
2015-01-01
We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra...
Defense of Cyber Infrastructures Against Cyber-Physical Attacks Using Game-Theoretic Models.
Rao, Nageswara S V; Poole, Stephen W; Ma, Chris Y T; He, Fei; Zhuang, Jun; Yau, David K Y
2016-04-01
The operation of cyber infrastructures relies on both cyber and physical components, which are subject to incidental and intentional degradations of different kinds. Within the context of network and computing infrastructures, we study the strategic interactions between an attacker and a defender using game-theoretic models that take into account both cyber and physical components. The attacker and defender optimize their individual utilities, expressed as sums of cost and system terms. First, we consider a Boolean attack-defense model, wherein the cyber and physical subinfrastructures may be attacked and reinforced as individual units. Second, we consider a component attack-defense model wherein their components may be attacked and defended, and the infrastructure requires minimum numbers of both to function. We show that the Nash equilibrium under uniform costs in both cases is computable in polynomial time, and it provides high-level deterministic conditions for the infrastructure survival. When probabilities of successful attack and defense, and of incidental failures, are incorporated into the models, the results favor the attacker but otherwise remain qualitatively similar. This approach has been motivated and validated by our experiences with UltraScience Net infrastructure, which was built to support high-performance network experiments. The analytical results, however, are more general, and we apply them to simplified models of cloud and high-performance computing infrastructures. © 2015 Society for Risk Analysis.
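In the simplest finite case, the Boolean attack-defense analysis amounts to finding pure-strategy Nash equilibria of a payoff matrix by best-response enumeration. The payoffs below are illustrative, not taken from the study:

```python
import itertools

def pure_nash(att_payoff, def_payoff):
    """Enumerate pure-strategy Nash equilibria of a two-player matrix game.
    att_payoff[i][j] / def_payoff[i][j]: payoffs when the attacker plays
    strategy i and the defender plays strategy j. A cell is an equilibrium
    when neither player can gain by deviating unilaterally."""
    rows, cols = len(att_payoff), len(att_payoff[0])
    equilibria = []
    for i, j in itertools.product(range(rows), range(cols)):
        att_best = all(att_payoff[i][j] >= att_payoff[k][j] for k in range(rows))
        def_best = all(def_payoff[i][j] >= def_payoff[i][l] for l in range(cols))
        if att_best and def_best:
            equilibria.append((i, j))
    return equilibria

# illustrative 2x2 game: strategy 0 = "hold back", 1 = "attack"/"reinforce"
att = [[3, 0], [5, 1]]
dfd = [[3, 5], [0, 1]]
eqs = pure_nash(att, dfd)
```

Enumeration over all strategy pairs is what makes the uniform-cost equilibria of such Boolean models computable in polynomial time, consistent with the result stated above.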
Universe in the theoretical model «Evolving matter»
Directory of Open Access Journals (Sweden)
Bazaluk Oleg
2013-04-01
Full Text Available The article critically examines a modern model of the evolution of the Universe constructed by the efforts of a group of scientists (mathematicians, physicists and cosmologists) from the world's leading universities (Oxford, Cambridge, Yale, Columbia, New York, Rutgers and UC Santa Cruz). The author notes its strengths but also points to its shortcomings. He believes that this model does not take into account the most important achievements in the fields of biochemistry and biology (molecular, physical, developmental, etc.), as well as neuroscience and psychology. In his view, when constructing a model of the evolution of the Universe, scientists must take into account (with great reservations) the impact of living and intelligent matter on cosmic processes. As an example, the author presents his theoretical model «Evolving matter». In this model he shows not only the general dependence of the interaction of cosmic processes with inert, living and intelligent matter, but also attempts to show the direct influence of systems of living and intelligent matter on the acceleration of the Universe's expansion.
Applications of a theoretic model of information exposure to health interventions.
Donohew, L; Lorch, E P; Palmgreen, P
1998-03-01
This paper describes an individual-differences model of information exposure which reflects the needs for novelty and sensation likely inherited as survival behaviors from humankind's ancient past. The model grew out of an earlier activation model developed to explain exposure to information about public affairs. After the model's biological basis is explained, it is proposed as a theory in deductive-nomological form. Propositions are then deduced from its central assumptions, and a series of funded health communication studies guided by the model is described. Individual differences in the need for novelty form the basis both for identifying target audiences most likely to engage in health risk behaviors, such as drug and alcohol use and risky sex, and for guiding the design of appropriate and effective messages. Strategies based on the theoretical model have successfully induced attitudinal and behavioral changes in experimental studies. They have also reached at-risk audiences in field studies through televised public service announcements placed in appropriate television programming.
A theoretical-electron-density databank using a model of real and virtual spherical atoms.
Nassour, Ayoub; Domagala, Slawomir; Guillot, Benoit; Leduc, Theo; Lecomte, Claude; Jelsch, Christian
2017-08-01
A database describing the electron density of common chemical groups using combinations of real and virtual spherical atoms is proposed, as an alternative to the multipolar atom modelling of the molecular charge density. Theoretical structure factors were computed from periodic density functional theory calculations on 38 crystal structures of small molecules and the charge density was subsequently refined using a density model based on real spherical atoms and additional dummy charges on the covalent bonds and on electron lone-pair sites. The electron-density parameters of real and dummy atoms present in a similar chemical environment were averaged on all the molecules studied to build a database of transferable spherical atoms. Compared with the now-popular databases of transferable multipolar parameters, the spherical charge modelling needs fewer parameters to describe the molecular electron density and can be more easily incorporated in molecular modelling software for the computation of electrostatic properties. The construction method of the database is described. In order to analyse to what extent this modelling method can be used to derive meaningful molecular properties, it has been applied to the urea molecule and to biotin/streptavidin, a protein/ligand complex.
The phantom derivative method when a structure model is available: about its theoretical basis.
Burla, Maria Cristina; Cascarano, Giovanni Luca; Giacovazzo, Carmelo; Polidori, Giampiero
2017-05-01
This study clarifies why, in the phantom derivative (PhD) approach, randomly created structures can help in refining phases obtained by other methods. For this purpose the joint probability distribution of target, model, ancil and phantom derivative structure factors and its conditional distributions have been studied. Since PhD may use n phantom derivatives, with n ≥ 1, a more general distribution taking into account all the ancil and derivative structure factors has been considered, from which the conditional distribution of the target phase has been derived. The corresponding conclusive formula contains two components. The first is the classical Srinivasan & Ramachandran term, relating the phases of the target structure with the model phases. The second arises from the combination of two correlations: that between model and derivative (the first is a component of the second) and that between derivative and target. The second component mathematically codifies the information on the target phase arising from model and derivative electron-density maps. The result is new, and explains why a random structure, uncorrelated with the target structure, adds useful information on the target phases, provided a model structure is known. Some experimental tests aimed at checking if the second component really provides information on ϕ (the target phase) were performed; the favourable results confirm the correctness of the theoretical calculations and of the corresponding analysis.
MIP models for connected facility location: A theoretical and computational study.
Gollowitzer, Stefan; Ljubić, Ivana
2011-02-01
This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility, and connecting the open facilities via a Steiner tree. The costs needed for building the Steiner tree, facility opening costs and the assignment costs need to be minimized. We model ConFL using seven compact and three mixed integer programming formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of obtained bounds and the corresponding running time. We report optimal values for all but 16 instances for which the obtained gaps are below 0.6%.
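For intuition about the objective ConFL minimizes, here is a brute-force solver for a tiny instance (exhaustive search, nothing like the paper's branch-and-cut; all instance data below are made up). It enumerates which facilities to open, assigns each customer to its cheapest open facility, and finds an exact Steiner tree over the open facilities by trying every subset of inter-connection nodes and taking the MST of the induced subgraph.

```python
from itertools import combinations

def mst_cost(nodes, edges):
    """Kruskal MST cost of the subgraph induced by `nodes`;
    None if that subgraph is disconnected."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    cost, merged = 0, 0
    for w, u, v in sorted((w, u, v) for (u, v), w in edges.items()
                          if u in parent and v in parent):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            cost += w
            merged += 1
    return cost if merged == len(parent) - 1 else None

def confl_brute_force(facilities, open_cost, customers, assign_cost,
                      steiner_nodes, edges):
    """Exhaustive ConFL: opening + assignment + exact Steiner tree cost.

    Only feasible for tiny instances; real solvers use MIP formulations."""
    best = None
    for k in range(1, len(facilities) + 1):
        for opened in combinations(facilities, k):
            # each customer is assigned to its cheapest open facility
            assign = sum(min(assign_cost[c][f] for f in opened) for c in customers)
            # exact Steiner tree: try every subset of inter-connection nodes
            tree = None
            for m in range(len(steiner_nodes) + 1):
                for extra in combinations(steiner_nodes, m):
                    c = mst_cost(set(opened) | set(extra), edges)
                    if c is not None and (tree is None or c < tree):
                        tree = c
            if tree is None:
                continue
            total = sum(open_cost[f] for f in opened) + assign + tree
            if best is None or total < best[0]:
                best = (total, opened)
    return best

# Hypothetical instance: two facilities and one optional Steiner node "s".
facilities = ["f1", "f2"]
open_cost = {"f1": 1, "f2": 1}
customers = ["c1", "c2"]
assign_cost = {"c1": {"f1": 1, "f2": 9}, "c2": {"f1": 9, "f2": 1}}
steiner_nodes = ["s"]
edges = {("f1", "s"): 2, ("s", "f2"): 2, ("f1", "f2"): 5}
print(confl_brute_force(facilities, open_cost, customers, assign_cost,
                        steiner_nodes, edges))
```

In this instance opening both facilities wins because the detour through the Steiner node (cost 4) is cheaper than the direct edge (cost 5) and avoids the expensive cross-assignments.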
Quantitative phase-field modeling for wetting phenomena.
Badillo, Arnoldo
2015-03-01
A new phase-field model is developed for studying partial wetting. The introduction of a third phase representing a solid wall allows for the derivation of a new surface tension force that accounts for energy changes at the contact line. In contrast to other multi-phase-field formulations, the present model does not need the introduction of surface energies for the fluid-wall interactions. Instead, all wetting properties are included in a unique parameter known as the equilibrium contact angle θ_eq. The model requires the solution of a single elliptic phase-field equation, which, coupled to conservation laws for mass and linear momentum, admits the existence of steady and unsteady compact solutions (compactons). The representation of the wall by an additional phase field allows for the study of wetting phenomena on flat, rough, or patterned surfaces in a straightforward manner. The model contains only two free parameters: a measure of interface thickness, W, and β, which enters the definition of the mixture viscosity μ = μ_l ϕ_l + μ_v ϕ_v + β μ_l ϕ_w. The former controls the convergence towards the sharp-interface limit and the latter the energy dissipation at the contact line. Simulations on rough surfaces show that by taking values of β higher than 1, the model can reproduce, on average, the effects of pinning events of the contact line during its dynamic motion. The model is able to capture, in good agreement with experimental observations, many physical phenomena fundamental to wetting science, such as the wetting transition on micro-structured surfaces and droplet dynamics on solid substrates.
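The mixture-viscosity definition quoted above is simple to state in code; the sketch below just evaluates μ = μ_l ϕ_l + μ_v ϕ_v + β μ_l ϕ_w for illustrative parameter values (the numbers are not from the paper).

```python
def mixture_viscosity(phi_l, phi_v, phi_w, mu_l, mu_v, beta):
    """Mixture viscosity mu = mu_l*phi_l + mu_v*phi_v + beta*mu_l*phi_w.

    phi_l, phi_v, phi_w: phase fields of liquid, vapour and wall (sum to 1);
    beta > 1 increases the effective dissipation at the contact line."""
    assert abs(phi_l + phi_v + phi_w - 1.0) < 1e-12, "phase fields must sum to 1"
    return mu_l * phi_l + mu_v * phi_v + beta * mu_l * phi_w

# Illustrative numbers: a point near the contact line where all three
# phase fields overlap.
print(mixture_viscosity(0.5, 0.3, 0.2, mu_l=1.0, mu_v=0.01, beta=2.0))
```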
Improved Mental Acuity Forecasting with an Individualized Quantitative Sleep Model
Directory of Open Access Journals (Sweden)
Brent D. Winslow
2017-04-01
Sleep impairment significantly alters human brain structure and cognitive function, but available evidence suggests that adults in developed nations are sleeping less. A growing body of research has sought to use sleep to forecast cognitive performance by modeling the relationship between the two, but has generally focused on vigilance rather than other cognitive constructs affected by sleep, such as reaction time, executive function, and working memory. Previous modeling efforts have also utilized subjective, self-reported sleep durations and were restricted to laboratory environments. In the current effort, we addressed these limitations by employing wearable systems and mobile applications to gather objective sleep information, assess multi-construct cognitive performance, and model/predict changes to mental acuity. Thirty participants were recruited for the study, which lasted 1 week. Using the Fitbit Charge HR and a mobile version of the automated neuropsychological assessment metric called CogGauge, we gathered a series of features and utilized the unified model of performance to predict mental acuity based on sleep records. Our results suggest that individuals rate their sleep duration poorly, supporting the need for objective sleep metrics to model circadian changes to mental acuity. Participant compliance in using the wearable throughout the week and responding to the CogGauge assessments was 80%. Specific biases were identified in temporal metrics across mobile devices and operating systems and were excluded from the mental acuity metric development. Individualized prediction of mental acuity consistently outperformed group modeling. This effort indicates the feasibility of creating an individualized, mobile assessment and prediction of mental acuity, compatible with the majority of current mobile devices.
Exploiting linkage disequilibrium in statistical modelling in quantitative genomics
DEFF Research Database (Denmark)
Wang, Lei
Alleles at two loci are said to be in linkage disequilibrium (LD) when they are correlated or statistically dependent. Genomic prediction and gene mapping rely on the existence of LD between genetic markers and causal variants of complex traits. In the first part of the thesis, a novel method...... the recently proposed antedependence models, which treat neighbouring marker effects as correlated; another approach involves use of haplotype block information derived using the program Beagle. The overall conclusion is that taking LD information into account in genomic prediction models potentially improves...
Jones-Farrand, D. Todd; Fearer, Todd M.; Thogmartin, Wayne E.; Thompson, Frank R.; Nelson, Mark D.; Tirpak, John M.
2011-01-01
Selection of a modeling approach is an important step in the conservation planning process, but little guidance is available. We compared two statistical and three theoretical habitat modeling approaches representing those currently being used for avian conservation planning at landscape and regional scales: hierarchical spatial count (HSC), classification and regression tree (CRT), habitat suitability index (HSI), forest structure database (FS), and habitat association database (HA). We focused our comparison on models for five priority forest-breeding species in the Central Hardwoods Bird Conservation Region: Acadian Flycatcher, Cerulean Warbler, Prairie Warbler, Red-headed Woodpecker, and Worm-eating Warbler. Lacking complete knowledge of the distribution and abundance of each species with which we could illuminate differences between approaches and provide strong grounds for recommending one approach over another, we used two approaches to compare models: rank correlations among model outputs and comparison of spatial correspondence. In general, rank correlations were significantly positive among models for each species, indicating general agreement among the models. Worm-eating Warblers had the highest pairwise correlations, all of which were significant (P < 0.05). Red-headed Woodpeckers had the lowest agreement among models, suggesting greater uncertainty in the relative conservation value of areas within the region. We assessed model uncertainty by mapping the spatial congruence in priorities (i.e., top ranks) resulting from each model for each species and calculating the coefficient of variation across model ranks for each location. This allowed identification of areas more likely to be good targets of conservation effort for a species, those areas that were least likely, and those in between where uncertainty is higher and thus conservation action incorporates more risk. Based on our results, models developed independently for the same purpose
Pathways in coal thermolysis: a theoretical and experimental study with model compounds
Energy Technology Data Exchange (ETDEWEB)
Ekpenyong, I.A.; Virk, P.S.
1982-01-01
Fundamental aspects of coal thermolysis were investigated, including how the chemical structures of aromatics, hydroaromatics, and alcohols affect their reactivities as hydrogen donors and acceptors in coal processing. The susceptibilities of substructural entities in coals to fragmentation via a number of thermal pericyclic and free-radical mechanisms were probed, as were the factors governing relative reactivities within series of such coal model compounds. The theoretical part of the work applied perturbation molecular orbital (PMO) and frontier orbital theories, in conjunction with π- and pseudo-π MOs, to the study of model compound reactivity. This enabled prediction of reactivity patterns of H-donors, H-acceptors and coal-like structures as functions of their π- and σ-bond configurations, including heteroatomic effects. Experimentally, the liquid-phase reactions of the coal model compound PhOCH2Ph (benzyl phenyl ether, BPE) were detailed for the first time in each of four hydronaphthalene H-donor solvents in the temperature range 220° to 300°C. The thermolysis of BPE exhibited a pronounced dependence on solvent structure, both with respect to product selectivities and reaction kinetics. BPE thermolysis pathways were delineated as involving (a) rearrangement, leading to isomerization, (b) hydrogenations, leading ultimately to PhOH and PhCH3 products, and (c) addition reactions, engendering heavy products. Pathways (b) and (c) are competitive and, in each, self-reactions of BPE derivatives vie against reactions between these and the donor solvent. Of the detailed free-radical and pericyclic reaction mechanisms postulated, the latter rationalized many more facets of the BPE results than the former. The theoretical and experimental results were appraised against the previous coal thermolysis literature.
Plasmid stability analysis based on a new theoretical model employing stochastic simulations.
Directory of Open Access Journals (Sweden)
Olesia Werbowy
Here, we present a simple theoretical model to study plasmid stability, based on a single input parameter: the copy number of plasmids present in a host cell. A Monte Carlo approach was used to analyze random fluctuations affecting plasmid replication and segregation, leading to gradual reduction of the plasmid population within the host cell. This model was employed to investigate maintenance of pEC156 derivatives, a high-copy-number ColE1-type Escherichia coli plasmid that carries an EcoVIII restriction-modification system. Plasmid stability was examined in selected Escherichia coli strains (MG1655, wild-type; MG1655 pcnB; and the hyper-recombinogenic JC8679 sbcA). We compared the experimental data concerning plasmid maintenance with the simulations and found that the theoretical stability patterns exhibited excellent agreement with those empirically tested. In our simulations, we investigated the influence of replication failures (parameter α), of uneven partition as a consequence of multimer resolution failures (parameter δ), and of the post-segregational killing factor (parameter β). All of these factors act at the same time and affect plasmid inheritance at different levels. In the case of pEC156 derivatives we concluded that multimerization is a major determinant of plasmid stability. Our data indicate that even small changes in the fidelity of segregation can have serious effects on plasmid stability. Use of the proposed mathematical model can provide a valuable description of plasmid maintenance, as well as enable prediction of the probability of plasmid loss.
Ferguson, Eamonn
2013-01-01
This paper sets out the case that personality traits are central to health psychology. To achieve this, three aims need to be addressed. First, it is necessary to show that personality influences a broad range of health outcomes and mechanisms. Second, the simple descriptive account of Aim 1 is not sufficient, and a theoretical specification needs to be developed to explain the personality-health link and allow for future hypothesis generation. Third, once Aims 1 and 2 are met, it is necessary to demonstrate the clinical utility of personality. In this review I make the case that all three Aims are met. I develop a theoretical framework to understand the links between personality and health drawing on current theorising in the biology, evolution, and neuroscience of personality. I identify traits (i.e., alexithymia, Type D, hypochondriasis, and empathy) that are of particular concern to health psychology and set these within evolutionary cost-benefit analysis. The literature is reviewed within a three-level hierarchical model (individual, group, and organisational) and it is argued that health psychology needs to move from its traditional focus on the individual level to engage group and organisational levels. PMID:23772230
First principles pharmacokinetic modeling: A quantitative study on Cyclosporin
DEFF Research Database (Denmark)
Mošat', Andrej; Lueshen, Eric; Heitzig, Martina
2013-01-01
renal and hepatic clearances, elimination half-life, and mass transfer coefficients, to establish drug biodistribution dynamics in all organs and tissues. This multi-scale model satisfies first principles and conservation of mass, species and momentum. Prediction of organ drug bioaccumulation...
Hidden Markov Model for quantitative prediction of snowfall and ...
Indian Academy of Sciences (India)
than random forecast for both the days. The RMSE of the optimized model has also been found smaller than the persistence forecast and standard deviation for both the days. 1. Introduction. The Himalayan region, during winter is prone to severe weather due to large amount of snowfall. The snowfall occurs during ...
Essays on Quantitative Marketing Models and Monte Carlo Integration Methods
R.D. van Oest (Rutger)
2005-01-01
textabstractThe last few decades have led to an enormous increase in the availability of large detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for
Quantitative modeling of selective lysosomal targeting for drug design
DEFF Research Database (Denmark)
Trapp, Stefan; Rosania, G.; Horobin, R.W.
2008-01-01
Lysosomes are acidic organelles and are involved in various diseases, the most prominent is malaria. Accumulation of molecules in the cell by diffusion from the external solution into cytosol, lysosome and mitochondrium was calculated with the Fick–Nernst–Planck equation. The cell model considers...
Quantitative phase-field model of alloy solidification
Echebarria, Blas; Folch, Roger; Karma, Alain; Plapp, Mathis
2004-12-01
We present a detailed derivation and thin interface analysis of a phase-field model that can accurately simulate microstructural pattern formation for low-speed directional solidification of a dilute binary alloy. This advance with respect to previous phase-field models is achieved by the addition of a phenomenological “antitrapping” solute current in the mass conservation relation [A. Karma, Phys. Rev. Lett. 87, 115701 (2001)]. This antitrapping current counterbalances the physical, albeit artificially large, solute trapping effect generated when a mesoscopic interface thickness is used to simulate the interface evolution on experimental length and time scales. Furthermore, it provides additional freedom in the model to suppress other spurious effects that scale with this thickness when the diffusivity is unequal in solid and liquid [R. F. Almgren, SIAM J. Appl. Math. 59, 2086 (1999)], which include surface diffusion and a curvature correction to the Stefan condition. This freedom can also be exploited to make the kinetic undercooling of the interface arbitrarily small even for mesoscopic values of both the interface thickness and the phase-field relaxation time, as for the solidification of pure melts [A. Karma and W.-J. Rappel, Phys. Rev. E 53, R3017 (1996)]. The performance of the model is demonstrated by calculating accurately within a phase-field approach the Mullins-Sekerka stability spectrum of a planar interface and nonlinear cellular shapes for realistic alloy parameters and growth conditions.
A quantitative risk model for early lifecycle decision making
Feather, M. S.; Cornford, S. L.; Dunphy, J.; Hicks, K.
2002-01-01
Decisions made in the earliest phases of system development have the most leverage to influence the success of the entire development effort, and yet must be made when information is incomplete and uncertain. We have developed a scalable cost-benefit model to support this critical phase of early-lifecycle decision-making.
Directory of Open Access Journals (Sweden)
Pablo Gutierrez
2001-04-01
Cry11Bb is an insecticidal crystal protein produced by Bacillus thuringiensis subsp. medellin during its stationary phase; this δ-endotoxin is active against dipteran insects and has great potential for mosquito-borne disease control. Here, we report the first theoretical model of the three-dimensional structure of a Cry11 toxin. The three-dimensional structure of the Cry11Bb toxin was obtained by homology modelling on the structures of the Cry1Aa and Cry3Aa toxins. In this work we give a brief description of our model and hypothesize which residues of the Cry11Bb toxin could be important in receptor recognition and pore formation. This model will serve as a starting point for the design of mutagenesis experiments aimed at improving toxicity, and provides a new tool for the elucidation of the mechanism of action of these mosquitocidal proteins.
Quantitative phase-field model for phase transformations in multi-component alloys
Energy Technology Data Exchange (ETDEWEB)
Choudhury, Abhik Narayan
2013-08-01
Phase-field modeling has spread to a variety of applications involving phase transformations. While the method has wide applicability, the derivation of quantitative predictions requires a deeper understanding of the coupling between the system and model parameters. The present work highlights a novel phase-field model based on a grand-potential formalism, allowing for an elegant and efficient solution to problems in phase transformations. In particular, applications involving single- and multi-phase, multi-component solidification have been investigated, and the quantitative modeling of these problems has been thoroughly examined.
Directory of Open Access Journals (Sweden)
Saleh Alwahaishi
2013-03-01
The world has changed a lot in the past years. Rapid advances in technology and changing communication channels have altered the way people work and, for many, where they work from. The Internet and mobile technology, the two most dynamic technological forces in modern information and communications technology (ICT), are converging into one ubiquitous mobile Internet service, which will change the way we do business as well as how we deal with our daily routine activities. As the use of ICT expands globally, there is a need for further research into the cultural aspects and implications of ICT. The acceptance of information technology (IT) has become a fundamental part of the research plan for most organizations (Igbaria, 1993). In IT research, numerous theories are used to understand users' adoption of new technologies. Various models have been developed, including the Technology Acceptance Model, the Theory of Reasoned Action, the Theory of Planned Behavior, and, recently, the Unified Theory of Acceptance and Use of Technology (UTAUT). Each of these models seeks to identify the factors which influence a citizen's intention or actual use of information technology. Drawing on the UTAUT model and Flow Theory, this research composes a new hybrid theoretical framework to identify the factors affecting the acceptance and use of mobile Internet, as an ICT application, in a consumer context. The proposed model incorporates eight constructs: Performance Expectancy, Effort Expectancy, Facilitating Conditions, Social Influences, Perceived Value, Perceived Playfulness, Attention Focus, and Behavioral Intention. Data collected online from 238 respondents in Saudi Arabia were tested against the research model using the structural equation modeling approach. The proposed model was mostly supported by the empirical data. The findings of this study provide several crucial implications for ICT and, in particular, mobile Internet service practitioners and researchers.
Modeling the Earth's radiation belts. A review of quantitative data based electron and proton models
Vette, J. I.; Teague, M. J.; Sawyer, D. M.; Chan, K. W.
1979-01-01
The evolution of quantitative models of the trapped radiation belts is traced to show how the knowledge of the various features has developed, or been clarified, by performing the required analysis and synthesis. The Starfish electron injection introduced problems in the time behavior of the inner zone, but this residue decayed away, and a good model of this depletion now exists. The outer zone electrons were handled statistically by a log normal distribution such that above 5 Earth radii there are no long term changes over the solar cycle. The transition region between the two zones presents the most difficulty, therefore the behavior of individual substorms as well as long term changes must be studied. The latest corrections to the electron environment based on new data are outlined. The proton models have evolved to the point where the solar cycle effect at low altitudes is included. Trends for new models are discussed; the feasibility of predicting substorm injections and solar wind high-speed streams make the modeling of individual events a topical activity.
Bordon, Jure; Moskon, Miha; Zimic, Nikolaj; Mraz, Miha
2015-01-01
Quantitative modelling of biological systems has become an indispensable computational approach in the design of novel and analysis of existing biological systems. However, kinetic data that describe the system's dynamics need to be known in order to obtain relevant results with the conventional modelling techniques. These data are often hard or even impossible to obtain. Here, we present a quantitative fuzzy logic modelling approach that is able to cope with unknown kinetic data and thus produce relevant results even though kinetic data are incomplete or only vaguely defined. Moreover, the approach can be used in the combination with the existing state-of-the-art quantitative modelling techniques only in certain parts of the system, i.e., where kinetic data are missing. The case study of the approach proposed here is performed on the model of three-gene repressilator.
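As a minimal illustration of the fuzzy-logic idea, assuming simple triangular membership functions and weighted-average defuzzification (not the authors' actual rule base), a single repression rule can be written as:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def expression_rate(repressor):
    """Two fuzzy rules on a [0, 1]-normalised repressor level:
    IF repressor is LOW  THEN expression is HIGH (crisp value 1.0)
    IF repressor is HIGH THEN expression is LOW  (crisp value 0.0)
    combined by weighted-average defuzzification."""
    low = tri(repressor, -0.5, 0.0, 1.0)
    high = tri(repressor, 0.0, 1.0, 1.5)
    if low + high == 0.0:
        return 0.5  # no rule fires; fall back to an intermediate rate
    return (low * 1.0 + high * 0.0) / (low + high)

# Repressilator-style chaining: each gene's expression is the fuzzy
# output driven by the previous gene's level.
levels = [0.9, 0.2, 0.5]
print([expression_rate(x) for x in levels])
```

The point of the approach is that such rules need no kinetic constants: they can stand in for the parts of a model where rate data are missing, while the rest is simulated with conventional quantitative techniques.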
Quantitative properties of clustering within modern microscopic nuclear models
Energy Technology Data Exchange (ETDEWEB)
Volya, A. [Florida State University (United States); Tchuvil’sky, Yu. M., E-mail: tchuvl@nucl-th.sinp.msu.ru [Moscow State University, Skobeltsyn Institute of Nuclear Physics (Russian Federation)
2016-09-15
A method for studying cluster spectroscopic properties of nuclear fragmentation, such as spectroscopic amplitudes, cluster form factors, and spectroscopic factors, is developed on the basis of modern precision nuclear models that take into account the mixing of large-scale shell-model configurations. Alpha-cluster channels are considered as an example. A mathematical proof of the need for taking into account the channel-wave-function renormalization generated by exchange terms of the antisymmetrization operator (Fliessbach effect) is given. Examples where this effect is confirmed by a high quality of the description of experimental data are presented. By and large, the method in question extends substantially the possibilities for studying clustering phenomena in nuclei and for improving the quality of their description.
Quantitative Risk Modeling of Fire on the International Space Station
Castillo, Theresa; Haught, Megan
2014-01-01
The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.
Afference copy as a quantitative neurophysiological model for consciousness.
Cornelis, Hugo; Coop, Allan D
2014-06-01
Consciousness is a topic of considerable human curiosity with a long history of philosophical analysis and debate. We consider there is nothing particularly complicated about consciousness when viewed as a necessary process of the vertebrate nervous system. Here, we propose a physiological "explanatory gap" is created during each present moment by the temporal requirements of neuronal activity. The gap extends from the time exteroceptive and proprioceptive stimuli activate the nervous system until they emerge into consciousness. During this "moment", it is impossible for an organism to have any conscious knowledge of the ongoing evolution of its environment. In our schematic model, a mechanism of "afference copy" is employed to bridge the explanatory gap with consciously experienced percepts. These percepts are fabricated from the conjunction of the cumulative memory of previous relevant experience and the given stimuli. They are structured to provide the best possible prediction of the expected content of subjective conscious experience likely to occur during the period of the gap. The model is based on the proposition that the neural circuitry necessary to support consciousness is a product of sub/preconscious reflexive learning and recall processes. Based on a review of various psychological and neurophysiological findings, we develop a framework which contextualizes the model and briefly discuss further implications.
A theoretical reassessment of microbial maintenance and implications for microbial ecology modeling.
Wang, Gangsheng; Post, Wilfred M
2012-09-01
We attempted to reconcile three microbial maintenance models (Herbert, Pirt, and Compromise) through a theoretical reassessment. We provided a rigorous proof that the true growth yield coefficient (Y_G) is the ratio of the specific maintenance rate (a, in the Herbert model) to the maintenance coefficient (m, in the Pirt model). Other findings from this study include: (1) the Compromise model is identical to the Herbert model for computing microbial growth and substrate consumption, but it expresses the dependence of maintenance on both microbial biomass and substrate; (2) the maximum specific growth rate in the Herbert model (μ_max,H) is higher than those in the other two models (μ_max,P and μ_max,C), and the difference is the physiological maintenance factor (m_q = a); and (3) the overall maintenance coefficient (m_T) is more sensitive to m_q than to the specific growth rate (μ_G) and Y_G. Our critical reassessment of microbial maintenance provides a new approach for quantifying some important components in soil microbial ecology models. © This article is a US government work and is in the public domain in the USA.
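The identity at the heart of the reassessment, Y_G = a/m, can be checked numerically: under it, the Pirt and Herbert formulations predict identical specific substrate uptake for any growth rate. A minimal sketch with illustrative parameter values (not taken from the paper):

```python
# Specific substrate uptake rate q under two classical maintenance models.
# Pirt:    q = mu / Y_G + m       (maintenance as an extra substrate flux)
# Herbert: q = (mu + a) / Y_G     (maintenance as endogenous biomass decay)
# The paper's identity Y_G = a / m makes the two predictions coincide.

def q_pirt(mu, Y_G, m):
    """Specific substrate consumption rate, Pirt formulation."""
    return mu / Y_G + m

def q_herbert(mu, Y_G, a):
    """Specific substrate consumption rate, Herbert formulation."""
    return (mu + a) / Y_G

a = 0.02     # 1/h, specific maintenance rate (Herbert) -- illustrative
m = 0.04     # g substrate / g biomass / h (Pirt)       -- illustrative
Y_G = a / m  # true growth yield implied by the reassessment

for mu in (0.0, 0.1, 0.3):
    assert abs(q_pirt(mu, Y_G, m) - q_herbert(mu, Y_G, a)) < 1e-12
```

With any other choice of Y_G the two formulations diverge by a constant offset of a/Y_G - m, which is one way to read the proof sketched in the abstract.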
How Do Trading Firms Upgrade Skills and Technology: A Theoretical Model
Directory of Open Access Journals (Sweden)
Mojca Lindic
2015-12-01
Full Text Available This paper studies the mechanisms of skill upgrading in trading firms by developing a theoretical model that relates the individual’s incentives for acquiring higher skills to the profit-maximizing behaviour of trading firms. The model shows that only high-ability individuals have incentives for acquiring higher skills, as long as they are compensated with higher wages after entering employment. Furthermore, high-productivity firms have incentives to invest in higher technology, to employ high-skilled labour, and to engage in international trade. The decisions for technology upgrading and skill upgrading coincide with the firm’s decisions to start importing and exporting, as the latter require higher technology and high-skilled labour. Contributions of the paper are twofold: gaining new insights by combining fragments of models of individual and firm behaviour, and broadening the content of the Melitz (2003) model by introducing importers and controlling for skilled and unskilled labour.
A novel theoretical model for the temperature dependence of band gap energy in semiconductors
Geng, Peiji; Li, Weiguo; Zhang, Xianhe; Zhang, Xuyao; Deng, Yong; Kou, Haibo
2017-10-01
We report a novel theoretical model without any fitting parameters for the temperature dependence of band gap energy in semiconductors. This model relates the band gap energy at elevated temperature to that at an arbitrary reference temperature. As examples, the band gap energies of Si, Ge, AlN, GaN, InP, InAs, ZnO, ZnS, ZnSe and GaAs at temperatures below 400 K are calculated and are in good agreement with the experimental results. The band gap energies at high temperatures (T > 400 K) are also predicted; these exceed the experimental results, and a reasonable analysis of the discrepancy is provided. At low temperatures the effect of lattice expansion on the band gap energy is very small, but at high temperatures its influence is substantial. It is therefore necessary to account for lattice expansion at high temperatures, and a method for doing so is given. The model has distinct advantages over the widely quoted Varshni semi-empirical equation in terms of modeling approach, physical meaning and application. The study provides a convenient method to determine the band gap energy at different temperatures.
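For context, the Varshni semi-empirical baseline against which the model is compared can be evaluated in a few lines; the silicon parameters below are widely quoted literature values, not figures from this paper:

```python
def varshni(T, Eg0, alpha, beta):
    """Varshni semi-empirical band gap: E_g(T) = E_g(0) - alpha*T**2 / (T + beta)."""
    return Eg0 - alpha * T**2 / (T + beta)

# Silicon: E_g(0) ~ 1.17 eV, alpha ~ 4.73e-4 eV/K, beta ~ 636 K (literature values)
Eg_si_300 = varshni(300.0, 1.17, 4.73e-4, 636.0)
assert 1.10 < Eg_si_300 < 1.14  # roughly the accepted ~1.12 eV at room temperature
```

The abstract's point is that this baseline needs three fitted parameters per material, whereas the proposed model needs none beyond a reference-temperature gap.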
Anticipation in stuttering: A theoretical model of the nature of stutter prediction.
Garcia-Barrera, Mauricio A; Davidow, Jason H
2015-06-01
The fact that some people who stutter (PWS) have the ability to anticipate a stuttering moment is essential for several theories of stuttering and important for maximum effectiveness of many currently used treatment techniques. The "anticipation effect," however, is poorly understood despite much investigation into this phenomenon. In the present paper, we combine (1) behavioral evidence from the stuttering-anticipation literature, (2) speech production models, and (3) models of error detection to propose a theoretical model of anticipation. Integrating evidence from theories such as Damasio's Somatic Marker Hypothesis, Levelt's Perceptual Monitoring Theory, Guenther's Directions Into Velocities of Articulators (DIVA) model, and Postma's Covert Repair Hypothesis, among others, our central thesis is that the anticipation of a stuttering moment occurs as an outcome of the interactions between previous learning experiences (i.e., learnt associations between stuttered utterances and any self-experienced or environmental consequence) and error monitoring. Possible neurological mechanisms involved in generating conscious anticipation are also discussed, along with directions for future research. The reader will be able to: (a) describe historical theories that explain how PWS may learn to anticipate stuttering; (b) state some traditional sources of evidence of anticipation in stuttering; (c) describe how PWS may be sensitive to the detection of a stuttering moment; (d) state some of the neural correlates that may underlie anticipation in stuttering; and (e) describe some of the possible utilities of incorporating anticipation into stuttering interventions. Copyright © 2015 Elsevier Inc. All rights reserved.
Chen, Yun; Yang, Hui
2016-12-01
In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information-theoretic perspective that does not require assumptions about the data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among them. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.
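The mutual-information measure underlying the clustering step can be sketched with a simple plug-in histogram estimator. This is an illustrative stand-in, not the estimator or the Dirichlet-process machinery of the paper; it does show why MI, unlike linear correlation, picks up a purely nonlinear dependence such as y = x².

```python
import math
import random

random.seed(0)

def mutual_information(xs, ys, bins=8):
    """Plug-in MI estimate (nats) from a joint histogram -- a simple sketch."""
    lo_x, hi_x = min(xs), max(xs)
    lo_y, hi_y = min(ys), max(ys)

    def bucket(v, lo, hi):
        return min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)

    joint = {}
    for x, y in zip(xs, ys):
        key = (bucket(x, lo_x, hi_x), bucket(y, lo_y, hi_y))
        joint[key] = joint.get(key, 0) + 1
    n = len(xs)
    px, py = {}, {}
    for (i, j), c in joint.items():
        px[i] = px.get(i, 0) + c
        py[j] = py.get(j, 0) + c
    # MI = sum p(x,y) * log( p(x,y) / (p(x) p(y)) )
    return sum((c / n) * math.log(c * n / (px[i] * py[j]))
               for (i, j), c in joint.items())

xs = [random.gauss(0, 1) for _ in range(5000)]
dep = [x * x + 0.1 * random.gauss(0, 1) for x in xs]  # nonlinear dependence on xs
ind = [random.gauss(0, 1) for _ in xs]                # independent of xs
assert mutual_information(xs, dep) > mutual_information(xs, ind)
```

Note that the linear (Pearson) correlation between xs and dep is near zero, which is exactly the failure mode of correlation-based variable clustering the abstract alludes to.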
Fisher, Gary A.
2013-01-01
A mixed method study explored a theoretical model that employed, combined, and added to the theories of self-determination, the reading engagement perspective, and the four-phase model of interest to motivate adolescent struggling readers to read for pleasure. The model adds to the existing body of research because it specifies an instructional…
Theoretical model estimation of guest diffusion in Metal-Organic Frameworks (MOFs)
Zheng, Bin
2015-08-11
Characterizing molecule diffusion in nanoporous matrices is critical to understanding the novel chemical and physical properties of metal-organic frameworks (MOFs). In this paper, we developed a theoretical model to rapidly and accurately compute the diffusion rate of guest molecules in a zeolitic imidazolate framework-8 (ZIF-8). The ideal gas or equilibrium solution diffusion model was modified to include the effect of the periodic medium by introducing the probability of guests passing through the framework gate. The only input to our model is the energy barrier for guests passing through the MOF's gate. Molecular dynamics (MD) methods were employed to gather the guest density profile, which was then used to deduce the energy barrier values. This produced reliable results requiring a simulation time of only 5 picoseconds, far shorter than pure MD methods, which must reach the millisecond scale. We also used density functional theory (DFT) methods to obtain the energy profile of guests passing through gates, as this does not require specification of a force field for the MOF degrees of freedom. In the DFT calculation we considered only one gate of the MOF at a time, which greatly reduced the computational cost. Based on the obtained energy barrier values, we computed the diffusion rates of alkanes and alcohols in ZIF-8 using our model; these were in good agreement with experimental results and with values calculated from the standard MD model. Our model obtains accurate diffusion rates for guests in MOFs at lower computational cost and shorter calculation time. Thus, our analytic model is especially attractive for high-throughput computational screening of the dynamic performance of guests in a framework.
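The core idea of converting a gate-crossing energy barrier into a diffusion rate can be sketched with an Arrhenius hopping form; the attempt frequency, cage-to-cage jump length, and barrier values below are assumptions for illustration, not the paper's fitted quantities:

```python
import math

KB_EV = 8.617333e-5  # Boltzmann constant, eV/K

def hop_rate(e_barrier_ev, temp_k, attempt_hz=1e12):
    """Arrhenius rate for a guest crossing one framework gate (assumed prefactor)."""
    return attempt_hz * math.exp(-e_barrier_ev / (KB_EV * temp_k))

def diffusivity(e_barrier_ev, temp_k, jump_m=1.2e-9):
    """3-D cage-to-cage hopping estimate, D = a**2 * k / 6 (assumed jump length a)."""
    return jump_m ** 2 * hop_rate(e_barrier_ev, temp_k) / 6.0

# A higher gate barrier must give slower diffusion at fixed temperature,
# which is the qualitative content of a barrier-controlled transport model.
assert diffusivity(0.4, 300.0) < diffusivity(0.2, 300.0)
```

The paper's model replaces the assumed prefactor with quantities extracted from short MD density profiles, but the exponential sensitivity to the barrier is the same.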
A Theoretical Model for Digital Reverberations of City Spaces and Public Places
DEFF Research Database (Denmark)
Zimmerman, Chris; Hansen, Kjeld; Vatrapu, Ravi
2014-01-01
The increasing pervasiveness of Internet connected devices and services is altering the perception and practice of public spaces through the provisioning of location-specific digital information. Location-aware technologies allow people to access, annotate, address and attach information to locations, which transforms the space for other people who use the same services. Such locations acquire relevance and reshape social and spatial interactions through increased use on social media as people ‘check-in’ to places, photograph or ‘like’ them. Collectively the authors are marking-up the city … and Mobile technologies that describe and explain crowd-sourced socio-technical layers on the city landscape. The proposed integrated theoretical model describes the relevant information linkages between people and places in the online and offline worlds and introduces a new evaluation method …
A second gradient theoretical framework for hierarchical multiscale modeling of materials
Energy Technology Data Exchange (ETDEWEB)
Luscher, Darby J [Los Alamos National Laboratory]; Bronkhorst, Curt A [Los Alamos National Laboratory]; McDowell, David L [Georgia Tech]
2009-01-01
A theoretical framework for the hierarchical multiscale modeling of the inelastic response of heterogeneous materials has been presented. Within this multiscale framework, the second gradient is used as a nonlocal kinematic link between the response of a material point at the coarse scale and the response of a neighborhood of material points at the fine scale. Kinematic consistency between these scales results in specific requirements for constraints on the fluctuation field. The wryness tensor serves as a second-order measure of strain. The nature of the second-order strain induces anti-symmetry in the first-order stress at the coarse scale. The multiscale internal state variable (ISV) constitutive theory is couched in the coarse-scale intermediate configuration, from which an important new concept in scale transitions emerges, namely the scale invariance of dissipation. Finally, a strategy for developing meaningful kinematic ISVs and the proper free energy functions and evolution kinetics is presented.
Witkiewitz, Katie; Bowen, Sarah; Harrop, Erin N; Douglas, Haley; Enkema, Matthew; Sedgwick, Carly
2014-04-01
Mindfulness-based treatments are growing in popularity among addiction treatment providers, and several studies suggest the efficacy of incorporating mindfulness practices into the treatment of addiction, including the treatment of substance use disorders and behavioral addictions (i.e., gambling). The current paper provides a review of theoretical models of mindfulness in the treatment of addiction and several hypothesized mechanisms of change. We provide an overview of mindfulness-based relapse prevention (MBRP), including session content, treatment targets, and client feedback from participants who have received MBRP in the context of empirical studies. Future research directions regarding operationalization and measurement, identifying factors that moderate treatment effects, and protocol adaptations for specific populations are discussed.
Ultrasonic transit-time flowmeters modelled with theoretical velocity profiles: methodology
Moore, Pamela I.; Brown, Gregor J.; Stimpson, Brian P.
2000-12-01
Fully developed flow is well defined for most values of Reynolds number, but distorted flow is not. The velocity profile is the distribution of axial velocity over the cross-section of the pipe. This distribution is not usually uniform and can vary dramatically depending on the properties of the fluid and the configuration of the pipe in which it flows. Ultrasonic flowmeters are affected by such distortions in the flow profile, often resulting in erroneous measurements. Transit-time ultrasonic flowmeters are widely used in industry in distorted fluid flows; correction for, or prediction of, distorted profiles has therefore sparked great interest in the design and application of ultrasonic flowmeters. This document describes a method for modelling and analysing the effect of theoretical asymmetric flow profiles on ultrasonic flowmeters of the transit-time type, thus allowing an understanding of installation effects.
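The transit-time principle itself is compact: with path length L, path angle theta, and sound speed c, the difference between downstream and upstream transit times encodes the path-averaged axial velocity. A sketch under the uniform-profile idealization (which is exactly what installation effects break):

```python
import math

def transit_times(v, L, theta, c):
    """Downstream and upstream transit times along a chordal acoustic path."""
    comp = v * math.cos(theta)          # axial velocity component along the path
    return L / (c + comp), L / (c - comp)

def velocity_from_times(t_down, t_up, L, theta):
    """Invert the two transit times for the path-averaged axial velocity:
    1/t_down - 1/t_up = 2 v cos(theta) / L."""
    return (L / (2 * math.cos(theta))) * (1 / t_down - 1 / t_up)

# Illustrative water-like example: 0.3 m path at 45 degrees, c = 1480 m/s
L, theta, c, v_true = 0.3, math.radians(45), 1480.0, 2.5
t_d, t_u = transit_times(v_true, L, theta, c)
assert abs(velocity_from_times(t_d, t_u, L, theta) - v_true) < 1e-9
```

A distorted profile makes the measured path average differ from the true bulk mean velocity, which is the error the document's profile modelling quantifies.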
Directory of Open Access Journals (Sweden)
Florencia Stelzer
2014-01-01
Full Text Available Executive functions (EF) have been defined as a series of higher-order cognitive processes which allow the control of thought, behavior and affect in the pursuit of a goal. Such processes show a lengthy postnatal development which matures completely by the end of adolescence. In this article we review some of the main models of EF development during childhood. The aim of this work is to describe the state of the art on the topic, identifying the main theoretical difficulties and methodological limitations associated with the different proposed paradigms. Finally, some suggestions are given to cope with such difficulties, emphasizing that the development of an ontology of EF could be a viable alternative to counter them. We believe that future research should be directed toward the development of that ontology.
Modeling an Application's Theoretical Minimum and Average Transactional Response Times
Energy Technology Data Exchange (ETDEWEB)
Paiz, Mary Rose [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-04-01
The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
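The threshold idea can be illustrated with the Generalized Extreme Value quantile function: once GEV parameters are fitted to block extremes, a threshold is read off as a small quantile. The block below is a generic sketch with made-up parameters; the report's model is additionally non-stationary, which this sketch does not capture:

```python
import math

def gev_quantile(p, mu, sigma, xi):
    """p-quantile of the GEV distribution (xi = 0 is the Gumbel limit)."""
    if xi == 0.0:
        return mu - sigma * math.log(-math.log(p))
    return mu + sigma * ((-math.log(p)) ** (-xi) - 1.0) / xi

# e.g. a 1st-percentile lower threshold for daily minimum response times,
# using invented fitted parameters (milliseconds)
mu, sigma, xi = 120.0, 15.0, -0.1
lower_threshold = gev_quantile(0.01, mu, sigma, xi)
assert lower_threshold < gev_quantile(0.5, mu, sigma, xi)  # quantiles increase in p
```

Observations falling below such a threshold would flag the "unsuccessful transaction" anomalies the report describes; the symmetric construction on an upper quantile gives the upper threshold.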
Energy Technology Data Exchange (ETDEWEB)
Mantsinen, M. [Helsinki Univ. of Technology, Espoo (Finland). Dept. of Technical Physics
1999-06-01
Heating with electromagnetic waves in the ion cyclotron range of frequencies (ICRF) is a well-established method for auxiliary heating of present-day tokamak plasmas and is envisaged as one of the main heating techniques for the International Thermonuclear Experimental Reactor (ITER) and future reactor plasmas. In order to predict the performance of ICRF heating in future machines, it is important to benchmark present theoretical modelling against experimental results from present tokamaks. This thesis reports on the development and experimental evaluation of theoretical models for ICRF heating at the Joint European Torus (JET). Several ICRF physics effects and scenarios have been studied. Of direct importance to ITER is the theoretical analysis of ICRF heating experiments with deuterium-tritium (D-T) plasmas. These experiments clearly demonstrate the potential of ICRF heating for auxiliary heating of reactor plasmas. In particular, scenarios with potential for good bulk ion heating and enhanced D-T fusion reactivity have been identified. Good bulk ion heating is essential for reactor plasmas in order to obtain a high ion temperature and a high fusion reactivity. In JET, good bulk ion heating with ICRF waves has been achieved in high-performance discharges by adding ICRF heating to neutral beam injection. In these experiments, as in other JET discharges where damping at higher harmonics of the ion cyclotron frequency takes place, so-called finite Larmor radius (FLR) effects play an important role. Due to FLR effects, the resonating ion velocity distribution function can have a strong influence on the power deposition. Evidence for this effect has been obtained from the third harmonic deuterium heating experiments. Because of FLR effects, the wave-particle interaction can also become weak at certain ion energies, which prevents resonating ions from reaching higher energies. When interacting with the wave, an ion receives not only a change in energy but also a change in
Adulthood Social Class and Union Interest: A First Test of a Theoretical Model.
Mellor, Steven
2016-10-02
A serial mediation model of union interest was tested. Based on theoretical notes provided by Mellor and Golay (in press), adulthood social class was positioned as a predictor of willingness to join a labor union, with success/failure attributions at work and willingness to share work goals positioned as intervening variables. Data from U.S. nonunion employees (N = 560) suggested full mediation after effects were adjusted for childhood social class. In sequence, adulthood social class predicted success/failure attributions at work, success/failure attributions at work predicted willingness to share work goals, and willingness to share work goals predicted willingness to join. Implications for socioeconomic status (SES) research and union expansion are discussed.
Accidental naturalism: criticism of a theoretical model of socio-ecological legitimacy
Directory of Open Access Journals (Sweden)
Santiago M. Cruzada
2017-11-01
Full Text Available This article argues for a theoretical review of the current epistemological assumption that establishes the nature-society dichotomy as a cornerstone of a broad worldview in western contexts. We discuss the anthropological perspectives which assume that in these spaces, generically and without nuance, social practice and ideas are not constructed in close relationship with the environment, resting on the belief that nature exists outside the human will. We challenge the naive ethnological essentialism that positions naturalism as the central model of a socio-European worldview, characterized by dualistic patterns that have simultaneously enabled monistic paradigms of socio-ecological relationships to be established, in contrast with other parts of the world.
Concentric Coplanar Capacitive Sensor System with Quantitative Model
Bowler, Nicola (Inventor); Chen, Tianming (Inventor)
2014-01-01
A concentric coplanar capacitive sensor includes a charged central disc forming a first electrode, an outer annular ring coplanar with and outer to the charged central disc, the outer annular ring forming a second electrode, and a gap between the charged central disc and the outer annular ring. The first electrode and the second electrode may be attached to an insulative film. A method provides for determining transcapacitance between the first electrode and the second electrode and using the transcapacitance in a model that accounts for a dielectric test piece to determine inversely the properties of the dielectric test piece.
Quantitative description of realistic wealth distributions by kinetic trading models
Lammoglia, Nelson; Muñoz, Víctor; Rogan, José; Toledo, Benjamín; Zarama, Roberto; Valdivia, Juan Alejandro
2008-10-01
Data on wealth distributions in trading markets show a power-law behavior x^-(1+α) at the high end, where, in general, α is greater than 1 (Pareto's law). Models based on kinetic theory, in which a set of interacting agents trade money, yield power-law tails if agents are assigned a saving propensity. In this paper we solve the inverse problem, that is, finding the saving propensity distribution which yields a given wealth distribution over all wealth ranges. This is done explicitly for two recently published and comprehensive wealth datasets.
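The forward model being inverted here can be simulated directly: pairwise money exchanges in which each agent retains a fixed saving fraction (the Chatterjee-Chakrabarti-Manna setup with distributed saving propensities). A minimal sketch with arbitrary parameters; each trade conserves total wealth exactly, which is the kinetic-theory analogue of energy conservation:

```python
import random

random.seed(1)
N = 1000
lam = [random.random() for _ in range(N)]  # heterogeneous saving propensities
w = [1.0] * N                              # everyone starts with unit wealth
total0 = sum(w)

for _ in range(200 * N):                   # many random pairwise trades
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    eps = random.random()                  # random split of the traded pool
    pool = (1 - lam[i]) * w[i] + (1 - lam[j]) * w[j]
    w[i], w[j] = lam[i] * w[i] + eps * pool, lam[j] * w[j] + (1 - eps) * pool

assert abs(sum(w) - total0) < 1e-6 * total0  # trades conserve total wealth
```

With a distributed saving propensity this dynamics is known to produce a Pareto-like tail (agents with λ near 1 accumulate); the paper asks the reverse question of which λ-distribution reproduces a given empirical wealth distribution.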
Quantitative Models of Imperfect Deception in Network Security using Signaling Games with Evidence
Pawlick, Jeffrey; Zhu, Quanyan
2017-01-01
Deception plays a critical role in many interactions in communication and network security. Game-theoretic models called "cheap talk signaling games" capture the dynamic and information asymmetric nature of deceptive interactions. But signaling games inherently model undetectable deception. In this paper, we investigate a model of signaling games in which the receiver can detect deception with some probability. This model nests traditional signaling games and complete information Stackelberg ...
Directory of Open Access Journals (Sweden)
Igor Shuryak
Full Text Available Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil contaminated by nuclear waste at the Hanford site (USA); (2) fungi isolated from the Chernobyl nuclear power plant (Ukraine) buildings after the accident; (3) yeast subjected to continuous γ-irradiation in the laboratory, where radiation dose rate and cell removal rate were independently varied. We applied generalized linear mixed-effects models to describe the first two data sets, whereas the third data set was amenable to mechanistic modeling using differential equations. Machine learning and information-theoretic approaches were used to select the best-supported formalism(s) among biologically-plausible alternatives. Our analysis suggests the following: (1) Both radionuclides and co-occurring chemical contaminants (e.g. NO2) are important for explaining microbial responses to radioactive contamination. (2) Radionuclides may produce non-monotonic dose responses: stimulation of microbial growth at low concentrations vs. inhibition at higher ones. (3) The extinction-defining critical radiation dose rate is dramatically lowered by additional stressors. (4) Reproduction suppression by radiation can be more important for determining the critical dose rate than radiation-induced cell mortality. In conclusion, the modeling approaches used here on three diverse data sets provide insight into explaining and predicting multi-stressor effects on microbial communities: (1) the most severe effects (e.g. extinction) on microbial populations may occur when unfavorable environmental
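The third conclusion, that additional stressors lower the critical dose rate, can be made concrete with a toy growth-inactivation balance; this is not the paper's fitted model, and all rate constants below are invented for illustration:

```python
def critical_dose_rate(r, delta, h):
    """Dose rate D* at which net low-density growth r - delta*D - h reaches zero.

    r:     intrinsic growth rate (1/h)
    delta: radiation inactivation coefficient (per unit dose rate)
    h:     extra loss rate from an additional stressor, e.g. cell removal (1/h)
    """
    return (r - h) / delta

r, delta = 0.5, 0.02  # illustrative values only

# Adding a second stressor (h > 0) lowers the dose rate the population can survive.
assert critical_dose_rate(r, delta, h=0.2) < critical_dose_rate(r, delta, h=0.0)
```

In the same toy model, a stressor that suppresses reproduction (reducing r) shifts D* one-for-one scaled by 1/delta, which echoes conclusion (4) that reproduction suppression can dominate over direct mortality in setting the extinction boundary.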
Forsberg, Simon K. G.; Bloom, Joshua S.; Sadhu, Meru J.; Kruglyak, Leonid; Carlborg, Örjan
2017-01-01
Experiments in model organisms report abundant genetic interactions underlying biologically important traits, whereas quantitative genetics theory predicts, and data support, that most genetic variance in populations is additive. Here we describe networks of capacitating genetic interactions that contribute to quantitative trait variation in a large yeast intercross population. The additive variance explained by individual loci in a network is highly dependent on the allele frequencies of the...
Chude-Okonkwo, Uche A. K.; Malekian, Reza; Maharaj, B. T.
2015-12-01
Inspired by biological systems, molecular communication has been proposed as a new communication paradigm that uses biochemical signals to transfer information from one nanodevice to another over a short distance. The biochemical nature of the information transfer process implies that, for molecular communication purposes, the development of molecular channel models should take into consideration the diffusion phenomenon as well as the physical/biochemical kinetic possibilities of the process. The physical and biochemical kinetics arise at the interfaces between the diffusion channel and the transmitter/receiver units. These interfaces are herein termed molecular antennas. In this paper, we present a deterministic propagation model of the molecular communication between an immobilized nanotransmitter and nanoreceiver, where the emission and reception kinetics are taken into consideration. Specifically, we derived closed-form system-theoretic models and expressions for configurations that represent different communication systems based on the type of molecular antennas used. The antennas considered are the nanopores at the transmitter and the surface receptor proteins/enzymes at the receiver. The developed models are simulated to show the influence of parameters such as the receiver radius, surface receptor protein/enzyme concentration, and various reaction rate constants. Results show that the effective receiver surface area and the rate constants are important to the system's output performance. Assuming a high rate of catalysis, analysis of the frequency behavior of the developed propagation channels in the form of transfer functions shows significant differences introduced by the inclusion of the molecular antennas into the diffusion-only model. It is also shown that for t >> 0 and with the information molecules' concentration greater than the Michaelis-Menten kinetic constant of the systems, the inclusion of surface receptor proteins and enzymes in the models
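The diffusion-only baseline that the antenna-aware models are compared against is the free-space Green's function for an impulsive point release; the received concentration peaks at t = r²/(6D) in three dimensions. A sketch with illustrative parameter values:

```python
import math

def conc(r, t, D, Q=1.0):
    """3-D free-diffusion Green's function for an impulsive point release of Q
    molecules (no reactions, no antennas): c = Q/(4 pi D t)^{3/2} * exp(-r^2/(4 D t))."""
    return Q / (4 * math.pi * D * t) ** 1.5 * math.exp(-r * r / (4 * D * t))

D, r = 1e-10, 1e-6  # m^2/s diffusivity, m transmitter-receiver distance (illustrative)
t_peak = r * r / (6 * D)  # analytic peak time of the received pulse

samples = [t_peak * f for f in (0.5, 0.75, 1.0, 1.5, 2.0)]
values = [conc(r, t, D) for t in samples]
assert max(values) == conc(r, t_peak, D)  # concentration peaks at r^2 / (6 D)
```

The paper's contribution is what happens on top of this baseline once nanopore emission and Michaelis-Menten-type reception kinetics are added at the two ends of the channel.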
Nonsmooth optimization approaches to VDA of models with on/off parameterizations: Theoretical issues
Zhu, J.; Kamachi, M.; Zhou, G. Q.
2002-05-01
Some variational data assimilation problems for time- and space-discrete models with on/off parameterizations can be regarded as nonsmooth optimization problems. Several theoretical issues related to these problems are systematically addressed. One of the basic concepts in nonsmooth optimization is the subgradient, a generalized notion of the gradient of the cost function. First, it is shown that the concept of the subgradient leads to a clear definition of the adjoint variables in the conventional adjoint model at singular points caused by on/off switches. Using an illustrative example of a multi-layer diffusion model with convective adjustment, it is proved that the solution of the conventional adjoint model cannot be interpreted as a Gateaux derivative or a directional derivative at singular points, but can be interpreted as a subgradient of the cost function. Two existing smooth optimization approaches used in current data assimilation practice are then reviewed. The first is the conventional adjoint model combined with smooth optimization algorithms; conditions under which this approach can converge to the minimum are discussed. The second is the smoothing and regularization approach, which removes some thresholds in the physical parameterizations. Two nonsmooth optimization approaches are also reviewed. One is the subgradient method, which uses the conventional adjoint model; it is convergent but very slow. The other, the bundle method, is more efficient. Its main idea is to use the minimum-norm vector of the subdifferential, which is the convex hull of all subgradients, as the descent direction. However, finding all subgradients is difficult in general, so bundle methods are modified to use only the one subgradient that can be calculated by the conventional adjoint model. In order to develop an efficient bundle method, a set-valued adjoint model, a generalization of the conventional adjoint model, is proposed.
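The subgradient method's behavior, convergent but slow under diminishing step sizes, is easy to demonstrate on the prototypical nonsmooth cost f(x) = |x|, whose kink at zero plays the role of an on/off switch:

```python
def subgrad_abs(x):
    """A subgradient of f(x) = |x|; at the kink x = 0 any value in [-1, 1] is valid."""
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

x, best = 5.0, float("inf")
for k in range(1, 500):
    best = min(best, abs(x))
    # Diminishing, non-summable step sizes (sum 1/k diverges) guarantee
    # convergence of the best iterate, but progress slows as 1/k.
    x -= (1.0 / k) * subgrad_abs(x)

assert best < 0.05  # slow approach to the nonsmooth minimum at x = 0
```

Unlike gradient descent on a smooth cost, the iterates never settle: they oscillate around the kink with amplitude set by the current step size, which is why the abstract calls the method convergent but very slow and prefers bundle methods.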
Directory of Open Access Journals (Sweden)
Marko Hell
2014-03-01
Full Text Available This paper presents a highly formalized approach to strategy formulation and to the optimization of strategic performance through proper resource allocation. A stochastic quantitative model of strategic performance (SQMSP) is used to evaluate the efficiency of the strategy developed. The SQMSP follows the theoretical notions of the balanced scorecard (BSC) and strategy-map methodologies, initially developed by Kaplan and Norton. Parameters of the SQMSP are treated as random variables evaluated by experts, who give two-point (optimistic and pessimistic) and three-point (optimistic, most probable and pessimistic) estimates. The Monte Carlo method is used to simulate strategic performance. Implemented within a computer application and applied to a real problem (planning of an IT strategy at the Faculty of Economics, University of Split), the proposed approach demonstrated its high potential as a basis for the development of decision support tools related to strategic planning.
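The expert-elicitation step can be sketched by mapping each three-point estimate to a triangular distribution and Monte-Carlo sampling a weighted performance score. The KPI names, weights, and estimates below are invented for illustration and are not from the Split case study:

```python
import random

random.seed(7)

# Three-point expert estimates (pessimistic, most probable, optimistic)
# for two hypothetical strategy-map parameters, scored on [0, 1].
estimates = {"kpi_a": (0.2, 0.5, 0.9), "kpi_b": (0.1, 0.4, 0.7)}
weights = {"kpi_a": 0.6, "kpi_b": 0.4}

def simulate(n=20000):
    """Monte-Carlo sample of the weighted strategic-performance score."""
    scores = []
    for _ in range(n):
        s = sum(weights[k] * random.triangular(lo, hi, mode)
                for k, (lo, mode, hi) in estimates.items())
        scores.append(s)
    return scores

scores = simulate()
mean = sum(scores) / len(scores)
# The score is bounded by the weighted pessimistic and optimistic extremes.
assert 0.16 <= mean <= 0.82
```

Two-point estimates would map to a distribution without a mode (e.g. uniform) in the same scheme; the paper's decision-support use is to compare such simulated score distributions across candidate resource allocations.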
Theoretical models of Rashba spin splitting in asymmetric SrTiO3-based heterostructures
van Heeringen, L. W.; McCollam, A.; de Wijs, G. A.; Fasolino, A.
2017-04-01
Rashba spin splitting in two-dimensional (2D) semiconductor systems is generally calculated in a k·p Luttinger-Kohn approach, where the spin splitting due to asymmetry emerges naturally from the bulk band structure. In recent years, several new classes of 2D systems have been discovered where electronic correlations are believed to play an important role. In these correlated systems, the effects of asymmetry leading to Rashba splitting have typically been treated phenomenologically. We compare these two approaches for the case of 2D electron systems in SrTiO3-based heterostructures, and find that the two models produce fundamentally different behavior in regions of the Brillouin zone that are particularly relevant for magnetotransport. Our results demonstrate the importance of identifying the correct approach in the quantitative interpretation of experimental data, and are likely to be relevant to a range of 2D systems in correlated materials.
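The phenomenological starting point being compared against is the free-electron-like dispersion with a linear Rashba term, E(k) = ħ²k²/2m* ± αk, whose spin splitting 2αk grows linearly in k everywhere in the zone; the k·p treatment deviates from this away from the zone center. A sketch in eV-nm units with illustrative parameters:

```python
HB2_2M = 0.0381  # hbar^2 / (2 m_e) in eV nm^2, free-electron mass

def rashba_bands(k, alpha, m_rel=1.0):
    """(E_minus, E_plus) for the phenomenological 2-D Rashba model.
    k in 1/nm, alpha in eV nm, m_rel = m*/m_e."""
    e0 = HB2_2M * k * k / m_rel
    return e0 - alpha * k, e0 + alpha * k

alpha = 0.005  # eV nm, illustrative Rashba coupling
lo1, hi1 = rashba_bands(1.0, alpha)
lo2, hi2 = rashba_bands(2.0, alpha)
# Phenomenological hallmark: the splitting 2*alpha*k is exactly linear in k.
assert abs((hi2 - lo2) - 2 * (hi1 - lo1)) < 1e-12
```

It is precisely this rigid linear-in-k splitting that the abstract reports breaking down in SrTiO3-based systems when the band structure is treated properly.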
Rothes, Inês Areal; Henriques, Margarida Rangel
2017-12-01
In a helping relationship with a suicidal person, theoretical models of suicidality can be essential in guiding the health professional's comprehension of the client/patient. The objectives of this study were to identify health professionals' explanations of suicidal behaviors and to study the effects of professional group, theoretical intervention models, and patient suicide experience on professionals' representations. Two hundred and forty-two health professionals filled out a self-report questionnaire. Exploratory principal components analysis was used. Five explanatory models were identified: psychological suffering, affective cognitive, sociocommunicational, adverse life events, and psychopathological. Results indicated that the psychological suffering and psychopathological models were the most valued by the professionals, while the sociocommunicational model was seen as the least likely to explain suicidal behavior. Differences between professional groups were found. We concluded that training and reflection on theoretical models in general, and on communicative issues in particular, are needed in the education of health professionals.
A quantitative and dynamic model for plant stem cell regulation.
Directory of Open Access Journals (Sweden)
Florian Geier
Full Text Available Plants maintain pools of totipotent stem cells throughout their entire life. These stem cells are embedded within specialized tissues called meristems, which form the growing points of the organism. The shoot apical meristem of the reference plant Arabidopsis thaliana is subdivided into several distinct domains, which execute diverse biological functions, such as tissue organization, cell proliferation and differentiation. The number of cells required for growth and organ formation changes over the course of a plant's life, while the structure of the meristem remains remarkably constant. Thus, regulatory systems must be in place, which allow for an adaptation of cell proliferation within the shoot apical meristem, while maintaining the organization at the tissue level. To advance our understanding of this dynamic tissue behavior, we measured domain sizes as well as cell division rates of the shoot apical meristem under various environmental conditions, which cause adaptations in meristem size. Based on our results, we developed a mathematical model to explain the observed changes by a cell pool size dependent regulation of cell proliferation and differentiation, which is able to correctly predict CLV3 and WUS over-expression phenotypes. While the model shows stem cell homeostasis under constant growth conditions, it predicts a variation in stem cell number under changing conditions. Consistent with our experimental data this behavior is correlated with variations in cell proliferation. Therefore, we investigated different signaling mechanisms that could stabilize stem cell number despite variations in cell proliferation. Our results shed light on the dynamic constraints of stem cell pool maintenance in the shoot apical meristem of Arabidopsis in different environmental conditions and developmental states.
A new theoretical framework for modeling respiratory protection based on the beta distribution.
Klausner, Ziv; Fattal, Eyal
2014-08-01
The problem of modeling respiratory protection is well known and has been dealt with extensively in the literature. Often the efficiency of respiratory protection is quantified in terms of penetration, defined as the proportion of an ambient contaminant concentration that penetrates the respiratory protection equipment. Typically, the penetration modeling framework in the literature is based on the assumption that penetration measurements follow the lognormal distribution. However, the analysis in this study leads to the conclusion that the lognormal assumption is not always valid, making it less adequate for analyzing respiratory protection measurements. This work presents a formulation of the problem from first principles, leading to a stochastic differential equation whose solution is the probability density function of the beta distribution. The data of respiratory protection experiments were reexamined, and indeed the beta distribution was found to provide a better fit to the data than the lognormal. We conclude with a suggestion for a new theoretical framework for modeling respiratory protection. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
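The core claim, that penetration data are proportions bounded in (0, 1) and therefore naturally suit a beta distribution, can be illustrated with a simple method-of-moments fit. This is a sketch on hypothetical measurements, not the paper's stochastic-differential-equation derivation:

```python
from statistics import mean, variance

def beta_moments(samples):
    """Method-of-moments estimates (alpha, beta) for a beta distribution
    fitted to proportions in (0, 1). Valid when the sample variance is
    smaller than m * (1 - m)."""
    m = mean(samples)
    v = variance(samples)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

# Hypothetical penetration measurements: fractions of the ambient
# contaminant concentration that pass the respirator.
penetration = [0.02, 0.05, 0.03, 0.08, 0.04, 0.06, 0.05, 0.07]
a, b = beta_moments(penetration)
```

Unlike a lognormal, the fitted beta has bounded support [0, 1], matching the physical range of penetration, and its mean a / (a + b) reproduces the sample mean exactly.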
Evaluating Supply Chain Management: A Methodology Based on a Theoretical Model
Directory of Open Access Journals (Sweden)
Alexandre Tadeu Simon
2015-01-01
Full Text Available Despite the increasing interest in supply chain management (SCM) by researchers and practitioners, there is still a lack of academic literature concerning topics such as methodologies to guide and support SCM evaluation. Most developed methodologies have been provided by consulting companies and are restricted in their publication and use. This article presents a methodology for evaluating companies’ degree of adherence to a SCM conceptual model. The methodology is based on Cooper, Lambert and Pagh’s original contribution and involves analysis of eleven referential axes established from key business processes, horizontal structures, and initiatives & practices. We analyze the applicability of the proposed model based on findings from interviews with experts - academics and practitioners - as well as from case studies of three focal firms and their supply chains. In general terms, the methodology can be considered a diagnostic instrument that allows companies to evaluate their maturity regarding SCM practices. From this diagnosis, firms can identify and implement activities to improve their degree of adherence to the reference model and achieve SCM benefits. The methodology aims to contribute to SCM theory development. It is an initial, but structured, reference for translating a theoretical approach into practical aspects.
Komjathy, Attila; Zavorotny, Valery U.; Axelrad, Penina; Born, George H.; Garrison, James L.
2000-01-01
Global Positioning System (GPS) signals reflected from the ocean surface have potential use for various remote sensing purposes. Some possibilities are measurements of surface roughness characteristics from which wave height, wind speed, and direction could be determined. For this paper, GPS-reflected signal measurements collected at aircraft altitudes of 2 km to 5 km with a delay-Doppler mapping GPS receiver are used to explore the possibility of determining wind speed. To interpret the GPS data, a theoretical model has been developed that describes the power of the reflected GPS signals for different time delays and Doppler frequencies as a function of geometrical and environmental parameters. The results indicate a good agreement between the measured and the modeled normalized signal power waveforms during changing surface wind conditions. The estimated wind speed using surface-reflected GPS data, obtained by comparing actual and modeled waveforms, shows good agreement (within 2 m/s) with data obtained from a nearby buoy and independent wind speed measurements derived from the TOPEX/Poseidon altimetric satellite.
Induced airflow in flying insects I. A theoretical model of the induced flow.
Sane, Sanjay P
2006-01-01
A strong induced flow structure envelops the body of insects and birds during flight. This flow influences many physiological processes including delivery of odor and mechanical stimuli to the sensory organs, as well as mass flow processes including heat loss and gas exchange in flying animals. With recent advances in near-field aerodynamics of insect and bird flight, it is now possible to determine how wing kinematics affects induced flow over their body. In this paper, I develop a theoretical model based on rotor theory to estimate the mean induced flow over the body of flapping insects. This model is able to capture some key characteristics of mean induced flow over the body of a flying insect. Specifically, it predicts that induced flow is directly proportional to wing beat frequency and stroke amplitude and is also affected by a wing-shape-dependent parameter. The derivation of induced flow includes the determination of spanwise variation of circulation on flapping wings. These predictions are tested against the available data on the spanwise distribution of aerodynamic circulation along finite Drosophila melanogaster wings and mean flows over the body of Manduca sexta. To explicitly account for tip losses in finite wings, a formula previously proposed by Prandtl for a finite blade propeller system is tentatively included. Thus, the model described in this paper allows us to estimate how far-field flows are influenced by near-field events in flapping flight.
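The stated scaling, induced flow proportional to wingbeat frequency and stroke amplitude and modulated by a wing-shape term, can be sketched as a one-line model. The functional form and the placeholder shape factor below are illustrative assumptions, not the paper's full rotor-theory derivation with tip-loss corrections:

```python
def mean_induced_flow(freq_hz, stroke_amplitude_rad, wing_length_m,
                      shape_factor=0.5):
    """Mean induced velocity over the body, v ~ f * Phi * R * k.
    shape_factor stands in for the wing-shape-dependent parameter
    (hypothetical value here; in the paper it follows from the
    spanwise circulation distribution)."""
    return freq_hz * stroke_amplitude_rad * wing_length_m * shape_factor

# Doubling wingbeat frequency doubles the induced flow, as predicted.
v1 = mean_induced_flow(25.0, 2.0, 0.05)  # hawkmoth-like numbers
v2 = mean_induced_flow(50.0, 2.0, 0.05)
```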
Overcoming barriers in care for the dying: Theoretical analysis of an innovative program model.
Wallace, Cara L
2016-08-01
This article explores barriers to end-of-life (EOL) care (including development of a death denying culture, ongoing perceptions about EOL care, poor communication, delayed access, and benefit restrictions) through the theoretical lens of symbolic interactionism (SI), and applies general systems theory (GST) to a promising practice model appropriate for addressing these barriers. The Compassionate Care program is a practice model designed to bridge gaps in care for the dying and is one example of a program offering concurrent care, a recent focus of evaluation through the Affordable Care Act. Concurrent care involves offering curative care alongside palliative or hospice care. Additionally, the program offers comprehensive case management and online resources to enrollees in a national health plan (Spettell et al., 2009). SI and GST are compatible and interrelated theories that provide a relevant picture of barriers to end-of-life care and a practice model that might evoke change among multiple levels of systems. These theories promote insight into current challenges in EOL care, as well as point to areas of needed research and interventions to address them. The article concludes with implications for policy and practice, and discusses the important role of social work in impacting change within EOL care.
Simulation of Cellular Energy Restriction in Quiescence (ERiQ)-A Theoretical Model for Aging.
Alfego, David; Kriete, Andres
2017-12-12
Cellular responses to energy stress involve activation of pro-survival signaling nodes, compensation in regulatory pathways and adaptations in organelle function. Specifically, energy restriction in quiescent cells (ERiQ) through energetic perturbations causes adaptive changes in response to reduced ATP, NAD+ and NADP levels in a regulatory network spanned by AKT, NF-κB, p53 and mTOR. Based on the experimental ERiQ platform, we have constructed a minimalistic theoretical model consisting of feedback motifs that enable investigation of stress-signaling pathways. The computer simulations reveal responses to acute energetic perturbations, promoting cellular survival and recovery to homeostasis. We speculated that the very same stress mechanisms are activated during aging in post-mitotic cells. To test this hypothesis, we modified the model to be deficient in protein damage clearance and demonstrate the formation of energy stress. Contrasting the network's pro-survival role in acute energetic challenges, conflicting responses in aging disrupt mitochondrial maintenance and contribute to a lockstep progression of decline when chronically activated. The model was analyzed by a local sensitivity analysis with respect to lifespan and makes predictions consistent with inhibitory and gain-of-function experiments in aging.
Poole, Laura
2011-01-01
The acquisition and retention of both basic life support (BLS) theoretical knowledge and practical skills are regarded as essential for all healthcare professionals; including student nurses. Whilst there appears to be an abundance of literature regarding the practical and theoretical BLS competency of registered nurses, this appears to be less well documented amongst the student nurse population. The purpose of this study was to therefore, add to the existing body of knowledge regarding the ...
A quantitative model of population pressure and its potential use in development planning.
Soemarwoto, O
1985-12-01
An attempt is made to develop a quantitative model of the concept of population pressure, using the example of population pressure on land resources in agricultural societies. "The model shows that environmental quality is tied to population growth and that population pressure does not bear relationship with population density." The implications of the findings for development planning are considered. (summary in IND) excerpt
Quantitative hardware prediction modeling for hardware/software co-design
Meeuws, R.J.
2012-01-01
Hardware estimation is an important factor in Hardware/Software Co-design. In this dissertation, we present the Quipu Modeling Approach, a high-level quantitative prediction model for HW/SW Partitioning using statistical methods. Our approach uses linear regression between software complexity
Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van
1999-01-01
A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by
Kashyap, Upasana; Mathew, Santhosh
2017-01-01
The purpose of this study was to compare students' performances in a freshmen level quantitative reasoning course (QR) under three different instructional models. A cohort of 155 freshmen students was placed in one of the three models: needing a prerequisite course, corequisite (students enroll simultaneously in QR course and a course that…
Protopapa, Maria L
2009-02-01
Composite films consisting of a ceramic matrix with embedded metal nanoparticles have received increased interest due to their numerous potential applications in the field of optics and optoelectronics. Numerous studies have been dedicated to the fabrication of these composite materials and it has been shown that nanocermet films can be obtained by successive deposition of alternate dielectric and metal films of thicknesses opportunely chosen. In this case, stacks of dielectric layers alternated with layers of metal nanoclusters (NCs) are obtained. However, until now, optical characterization of these kinds of multilayer stacks has been used to retrieve mainly qualitative information on the dimension, shape, and geometric distribution of nanoparticles inside the dielectric matrix. An easy-to-handle model that quantitatively links the optical properties to the main features of the NCs embedded in the matrix is presented. This model can be applied to multilayer stacks of dielectric layers alternated with metal NC layers and is shown to be a valid alternative to a recently published model [Nanotechnology 19, 125709 (2008)] that was applied to the case of a three-layer structure (dielectric/metal:dielectric/dielectric).
Impact of implementation choices on quantitative predictions of cell-based computational models
Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.
2017-09-01
'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
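The time-step sensitivity the authors report is the generic behaviour of the explicit (forward-Euler) updates commonly used in vertex-model implementations. A one-dimensional overdamped toy (not the full polygonal model) shows how a step size that is too large destroys the very mechanics a small step resolves:

```python
def relax(x0, target, k, dt, steps):
    """Forward-Euler relaxation of a vertex toward its mechanical rest
    position, dx/dt = -k (x - target): the overdamped update scheme
    typical of vertex-model codes, reduced to one dimension."""
    x = x0
    for _ in range(steps):
        x += -k * (x - target) * dt
    return x

fine   = relax(1.0, 0.0, k=1.0, dt=0.01, steps=1000)  # stable: converges to 0
coarse = relax(1.0, 0.0, k=1.0, dt=2.5,  steps=1000)  # dt > 2/k: oscillates and diverges
```

The qualitative prediction (does the tissue relax at all?) changes with an implementation parameter that has no biological meaning, which is exactly the kind of sensitivity the study quantifies for vertex positions and summary statistics.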
Directory of Open Access Journals (Sweden)
Bojanc Rok
2012-11-01
Full Text Available The paper presents a mathematical model for the optimal security-technology investment evaluation and decision-making processes based on the quantitative analysis of security risks and digital asset assessments in an enterprise. The model makes use of the quantitative analysis of different security measures that counteract individual risks by identifying the information system processes in an enterprise and the potential threats. The model comprises the target security levels for all identified business processes and the probability of a security accident together with the possible loss the enterprise may suffer. The selection of security technology is based on the efficiency of selected security measures. Economic metrics are applied for the efficiency assessment and comparative analysis of different protection technologies. Unlike the existing models for evaluation of the security investment, the proposed model allows direct comparison and quantitative assessment of different security measures. The model allows deep analyses and computations providing quantitative assessments of different options for investments, which translate into recommendations facilitating the selection of the best solution and the decision-making thereof. The model was tested using empirical examples with data from real business environment.
Quantitative plant resistance in cultivar mixtures: wheat yellow rust as a modeling case study.
Sapoukhina, Natalia; Paillard, Sophie; Dedryver, Françoise; de Vallavieille-Pope, Claude
2013-11-01
Unlike qualitative plant resistance, which confers immunity to disease, quantitative resistance confers only a reduction in disease severity and this can be nonspecific. Consequently, the outcome of its deployment in cultivar mixtures is not easy to predict, as on the one hand it may reduce the heterogeneity of the mixture, but on the other it may induce competition between nonspecialized strains of the pathogen. To clarify the principles for the successful use of quantitative plant resistance in disease management, we built a parsimonious model describing the dynamics of competing pathogen strains spreading through a mixture of cultivars carrying nonspecific quantitative resistance. Using the parameterized model for a wheat-yellow rust system, we demonstrate that a more effective use of quantitative resistance in mixtures involves reinforcing the effect of the highly resistant cultivars rather than replacing them. We highlight the fact that the judicious deployment of the quantitative resistance in two- or three-component mixtures makes it possible to reduce disease severity using only small proportions of the highly resistant cultivar. Our results provide insights into the effects on pathogen dynamics of deploying quantitative plant resistance, and can provide guidance for choosing appropriate associations of cultivars and optimizing diversification strategies. © 2013 INRA. New Phytologist © 2013 New Phytologist Trust.
Modeling approaches for qualitative and semi-quantitative analysis of cellular signaling networks.
Samaga, Regina; Klamt, Steffen
2013-06-26
A central goal of systems biology is the construction of predictive models of bio-molecular networks. Cellular networks of moderate size have been modeled successfully in a quantitative way based on differential equations. However, in large-scale networks, knowledge of mechanistic details and kinetic parameters is often too limited to allow for the set-up of predictive quantitative models. Here, we review methodologies for qualitative and semi-quantitative modeling of cellular signal transduction networks. In particular, we focus on three different but related formalisms facilitating modeling of signaling processes with different levels of detail: interaction graphs, logical/Boolean networks, and logic-based ordinary differential equations (ODEs). Although they are the simplest possible models, interaction graphs allow the identification of important network properties such as signaling paths, feedback loops, or global interdependencies. Logical or Boolean models can be derived from interaction graphs by constraining the logical combination of edges. Logical models can be used to study the basic input-output behavior of the system under investigation and to analyze its qualitative dynamic properties by discrete simulations. They also provide a suitable framework to identify proper intervention strategies enforcing or repressing certain behaviors. Finally, as a third formalism, Boolean networks can be transformed into logic-based ODEs enabling studies on essential quantitative and dynamic features of a signaling network, where time and states are continuous. We describe and illustrate key methods and applications of the different modeling formalisms and discuss their relationships. In particular, as one important aspect for model reuse, we will show how these three modeling approaches can be combined to a modeling pipeline (or model hierarchy) allowing one to start with the simplest representation of a signaling network (interaction graph), which can later be refined to logical
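As a minimal illustration of the Boolean formalism (the wiring below is a hypothetical incoherent feed-forward motif, not a network from the review), a synchronous discrete simulation already exhibits a qualitative dynamic property, a transient output pulse, that the interaction graph alone cannot reveal:

```python
# Update rules of a toy 4-node signaling logic: the kinase follows the
# receptor, the inhibitor follows the kinase, and the output needs the
# kinase without the inhibitor (incoherent feed-forward).
RULES = {
    "receptor":  lambda s, ligand: ligand,
    "kinase":    lambda s, ligand: s["receptor"],
    "inhibitor": lambda s, ligand: s["kinase"],
    "output":    lambda s, ligand: s["kinase"] and not s["inhibitor"],
}

def simulate(ligand, steps):
    """Synchronous Boolean updates from the all-off state."""
    state = {n: False for n in RULES}
    trajectory = [dict(state)]
    for _ in range(steps):
        state = {n: rule(state, ligand) for n, rule in RULES.items()}
        trajectory.append(dict(state))
    return trajectory

trajectory = simulate(ligand=True, steps=6)
pulse = [s["output"] for s in trajectory]  # output fires once, then shuts off
```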
Energy Technology Data Exchange (ETDEWEB)
Gopinath, A.; Puhan, Sukumar; Nagarajan, G. [Internal Combustion Engineering Division, Department of Mechanical Engineering, Anna University, Chennai 600 025, Tamil Nadu (India)
2009-07-15
Biodiesel is an alternative fuel consisting of alkyl esters of fatty acids from vegetable oils or animal fats. The properties of biodiesel depend on the type of vegetable oil used for the transesterification process. The objective of the present work is to theoretically predict the iodine value and the saponification value of different biodiesels from their fatty acid methyl ester composition. The fatty acid ester compositions and the above values of different biodiesels were taken from the available published data. A multiple linear regression model was developed to predict the iodine value and saponification value of different biodiesels. The predicted results showed that the prediction errors were less than 3.4% compared to the available published data. The predicted values were also verified by substituting in the available published model which was developed to predict the higher heating values of biodiesel fuels from their iodine value and the saponification value. The resulting heating values of biodiesels were then compared with the published heating values and reported. (author)
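The regression itself is standard multiple linear least squares. The sketch below hand-rolls it via the normal equations on synthetic ester compositions with made-up per-ester contributions; the paper's actual coefficients and data are not reproduced here:

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_ols(X, y):
    """Least-squares coefficients from the normal equations (X'X) b = X'y."""
    p = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    return solve(XtX, Xty)

# Rows: [intercept, % methyl oleate, % methyl linoleate] (hypothetical
# compositions). Iodine values generated from made-up contributions of
# 0.9 and 1.7 per percentage point, so OLS should recover them exactly.
X = [[1, 60, 20], [1, 40, 35], [1, 70, 10], [1, 30, 50], [1, 50, 25]]
y = [88.0, 95.5, 80.0, 112.0, 87.5]
b0, b1, b2 = fit_ols(X, y)
```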
A theoretically based model of rat personality with implications for welfare.
Directory of Open Access Journals (Sweden)
Becca Franks
Full Text Available As animal personality research becomes more central to the study of animal behavior, there is increasing need for theoretical frameworks addressing its causes and consequences. We propose that regulatory focus theory (RFT) could serve as one such framework while also providing insights into how animal personality relates to welfare. RFT distinguishes between two types of approach motivation: promotion, the motivation to approach gains, and prevention, the motivation to approach or maintain safety. Decades of research have established the utility of RFT as a model of human behavior and recent evidence from zoo-housed primates and laboratory rats has suggested that it may be applicable to nonhuman animal behavior as well. Building on these initial studies, we collected data on 60 rats, Rattus norvegicus, navigating an automated maze that allowed individuals to maintain darkness (indicative of prevention/safety-approach motivation) and/or activate food rewards (indicative of promotion/gain-approach motivation). As predicted, both behaviors showed stable individual differences (Ps < 0.01) and were inversely associated with physiological signs of chronic stress, possibly indicating poor welfare (Ps < 0.05). Subsequently, half the rats were exposed to a manageable threat (noxious novel object) in the homecage. Re-testing in the maze revealed that threat exposure increased darkness time achieved (P < 0.05), suggesting a mechanism by which prevention motivation may be enhanced. These results point toward the potential utility of RFT as a model for animal behavior and welfare.
Golant, Stephen M
2017-08-01
A growing global population of older adults is potential consumers of a category of products referred to as smart technologies, but also known as telehealth, telecare, information and communication technologies, robotics, and gerontechnology. This paper constructs a theoretical model to explain whether older people will adopt smart technology options to cope with their discrepant individual or environmental circumstances, thereby enabling them to age in place. Its proposed constructs and relationships are drawn from multiple academic disciplines and professional specialties, and an extensive literature focused on the factors influencing the acceptance of these smart technologies. It specifically examines whether older adults will substitute these new technologies for traditional coping solutions that rely on informal and formal care assistance and low technology related products. The model argues that older people will more positively evaluate smart technology alternatives when they feel more stressed because of their unmet needs, have greater resilience (stronger perceptions of self-efficacy and greater openness to new information), and are more strongly persuaded by their sources of outside messaging (external information) and their past experiences (internal information). It proposes that older people distinguish three attributes of these coping options when they appraise them: perceived efficaciousness, perceived usability, and perceived collateral damages. The more positively older people evaluate these attributes, the more likely that they will adopt these smart technology products. Copyright © 2017 Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Bakry, A. [King Abdulaziz University, 80203, Department of Physics, Faculty of Science (Saudi Arabia); Abdulrhmann, S. [Jazan University, 114, Department of Physics, Faculty of Sciences (Saudi Arabia); Ahmed, M., E-mail: mostafa.farghal@mu.edu.eg [King Abdulaziz University, 80203, Department of Physics, Faculty of Science (Saudi Arabia)
2016-06-15
We theoretically model the dynamics of semiconductor lasers subject to the double-reflector feedback. The proposed model is a new modification of the time-delay rate equations of semiconductor lasers under the optical feedback to account for this type of the double-reflector feedback. We examine the influence of adding the second reflector to dynamical states induced by the single-reflector feedback: periodic oscillations, period doubling, and chaos. Regimes of both short and long external cavities are considered. The present analyses are done using the bifurcation diagram, temporal trajectory, phase portrait, and fast Fourier transform of the laser intensity. We show that adding the second reflector drives the periodic and period-doubling oscillations, as well as the chaos induced by the first reflector, toward a route to continuous-wave operation. During this operation, the periodic-oscillation frequency increases as the optical feedback strengthens. We show that the chaos induced by the double-reflector feedback is more irregular than that induced by the single-reflector feedback. The power spectrum of this chaos state does not reflect information on the geometry of the optical system, which then has potential for use in chaotic (secure) optical data encryption.
Lee, Jung Gil
2016-12-27
Developing a high flux and selective membrane is required to make membrane distillation (MD) a more attractive desalination process. Amongst other characteristics, membrane hydrophobicity is particularly important for achieving high vapor transport and low wettability. In this study, a laboratory fabricated carbon nanotubes (CNTs) composite electrospun (E-CNT) membrane was tested and showed a higher permeate flux compared to poly(vinylidene fluoride-co-hexafluoropropylene) (PH) electrospun membrane (E-PH membrane) in a direct contact MD (DCMD) configuration. Only 1% and 2% of CNTs incorporation resulted in an enhanced permeate flux with lower sensitivity to feed salinity while treating 35 and 70 g/L NaCl solutions. Experimental results and the mechanisms of the E-CNT membrane were validated by a proposed new step-modeling approach. The increased vapor transport in E-CNT membranes could not be explained by an enhancement of mass transfer alone at given physico-chemical properties. However, the theoretical modeling approach considering the heat and mass transfers simultaneously successfully explained the enhanced flux in the DCMD process using E-CNT membranes. This indicates that the enhanced vapor transport in the E-CNT membrane is attributable to improvements in both mass and heat transfer by the CNTs.
Dubois, Frédérique; Giraldeau, Luc-Alain; Hamilton, Ian M; Grant, James W A; Lefebvre, Louis
2004-08-01
Hawk-dove games have been extensively used to predict the conditions under which group-living animals should defend their resources against potential usurpers. Typically, game-theoretic models on aggression consider that resource defense may entail energetic and injury costs. However, intruders may also take advantage of owners who are busy fighting to sneak access to unguarded resources, imposing thereby an additional cost on the use of the escalated hawk strategy. In this article we modify the two-strategy hawk-dove game into a three-strategy hawk-dove-sneaker game that incorporates a distraction-sneaking tactic, allowing us to explore its consequences on the expected level of aggression within groups. Our model predicts a lower proportion of hawks and hence lower frequencies of aggressive interactions within groups than do previous two-strategy hawk-dove games. The extent to which distraction sneakers decrease the frequency of aggression within groups, however, depends on whether they search only for opportunities to join resources uncovered by other group members or for both unchallenged resources and opportunities to usurp.
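The two-strategy baseline that the authors modify has a closed-form mixed ESS: hawks and doves earn equal payoffs when the hawk fraction equals V/C. The sketch below checks that indifference condition; the three-strategy sneaker extension, which the paper shows lowers the hawk fraction, is not reproduced here:

```python
def payoff(strategy, opponent, V, C):
    """Classic hawk-dove payoff matrix (V = resource value, C = injury cost)."""
    return {("hawk", "hawk"): (V - C) / 2, ("hawk", "dove"): V,
            ("dove", "hawk"): 0.0, ("dove", "dove"): V / 2}[(strategy, opponent)]

def ess_hawk_fraction(V, C):
    """Mixed ESS of the two-strategy game: p = V/C when the injury cost
    exceeds the resource value, otherwise pure hawk."""
    return V / C if C > V else 1.0

V, C = 2.0, 8.0
p = ess_hawk_fraction(V, C)  # 0.25
# At the ESS, the expected payoffs of hawk and dove coincide.
hawk_pay = p * payoff("hawk", "hawk", V, C) + (1 - p) * payoff("hawk", "dove", V, C)
dove_pay = p * payoff("dove", "hawk", V, C) + (1 - p) * payoff("dove", "dove", V, C)
```

Any third strategy that profits from busy fighters, such as the distraction-sneaker tactic, undercuts the hawk payoff at this equilibrium, which is why the extended model predicts fewer hawks and less frequent aggression.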
A theoretical model of the evolution of actuarial senescence under environmental stress.
Watson, H; Cohen, A A; Isaksson, C
2015-11-01
Free-living organisms are exposed to a wide range of stressors, all of which can disrupt components of stress-related and detoxification physiology. The subsequent accumulation of somatic damage is widely believed to play a major role in the evolution of senescence. Organisms have evolved sophisticated physiological regulatory mechanisms to maintain homeostasis in response to environmental perturbations, but these systems are likely to be constrained in their ability to optimise robustness to multiple stressors due to functional correlations among related traits. While evolutionary change can accelerate due to human ecological impacts, it remains to be understood how exposure to multiple environmental stressors could affect senescence rates and subsequently population dynamics and fitness. We used a theoretical evolutionary framework to quantify the potential consequences for the evolution of actuarial senescence in response to exposure to simultaneous physiological stressors--one versus multiple and additive versus synergistic--in a hypothetical population of avian "urban adapters". In a model in which multiple stressors have additive effects on physiology, species may retain greater capacity to recover, or respond adaptively, to environmental challenges. However, in the presence of high synergy, physiological dysregulation suddenly occurs, leading to a rapid increase in age-dependent mortality and subsequent population collapse. Our results suggest that, if the synergistic model is correct, population crashes in environmentally-stressed species could happen quickly and with little warning, as physiological thresholds of stress resistance are overcome. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Villar, V. Ashley; Berger, Edo; Metzger, Brian D.; Guillochon, James
2017-11-01
The duration-luminosity phase space (DLPS) of optical transients is used, mostly heuristically, to compare various classes of transient events, to explore the origin of new transients, and to influence optical survey observing strategies. For example, several observational searches have been guided by intriguing voids and gaps in this phase space. However, we should ask, do we expect to find transients in these voids given our understanding of the various heating sources operating in astrophysical transients? In this work, we explore a broad range of theoretical models and empirical relations to generate optical light curves and to populate the DLPS. We explore transients powered by adiabatic expansion, radioactive decay, magnetar spin-down, and circumstellar interaction. For each heating source, we provide a concise summary of the basic physical processes, a physically motivated choice of model parameter ranges, an overall summary of the resulting light curves and their occupied range in the DLPS, and how the various model input parameters affect the light curves. We specifically explore the key voids discussed in the literature: the intermediate-luminosity gap between classical novae and supernovae, and short-duration transients (≲ 10 days). We find that few physical models lead to transients that occupy these voids. Moreover, we find that only relativistic expansion can produce fast and luminous transients, while for all other heating sources events with durations ≲ 10 days are dim (M_R ≳ -15 mag). Finally, we explore the detection potential of optical surveys (e.g., Large Synoptic Survey Telescope) in the DLPS and quantify the notion that short-duration and dim transients are exponentially more difficult to discover in untargeted surveys.
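Two of the heating sources surveyed, magnetar spin-down and 56Ni decay, have simple analytic input-luminosity forms that give a feel for where models land in the DLPS. The sketch below uses the standard expressions with illustrative parameter values (the spin-down luminosity L0 and timescale, and the Ni mass, are assumptions; real light curves additionally require radiative diffusion through the ejecta, e.g. an Arnett-type model):

```python
import numpy as np

DAY = 86400.0  # seconds

def magnetar_input(t, L0=1e45, t_sd=5 * DAY):
    """Magnetar spin-down input, L(t) = L0 / (1 + t/t_sd)^2, in erg/s."""
    return L0 / (1.0 + t / t_sd) ** 2

def ni56_input(t, m_ni_msun=0.1):
    """56Ni -> 56Co -> 56Fe radioactive heating (erg/s) for a given Ni mass.

    Uses the commonly quoted specific rates ~3.9e10 and ~6.78e9 erg/s/g
    with e-folding times of 8.8 d (Ni) and 111.3 d (Co).
    """
    t_d = t / DAY
    m = m_ni_msun * 1.989e33  # Ni mass in grams
    return m * (3.9e10 * np.exp(-t_d / 8.8)
                + 6.78e9 * (np.exp(-t_d / 111.3) - np.exp(-t_d / 8.8)))

t = np.linspace(0.1 * DAY, 200 * DAY, 5000)
L_mag = magnetar_input(t)
L_ni = ni56_input(t)

# Crude DLPS coordinates: peak input power and time spent above half-max.
for name, L in [("magnetar", L_mag), ("Ni-56", L_ni)]:
    above = t[L >= 0.5 * L.max()]
    print(f"{name}: L_peak = {L.max():.2e} erg/s, "
          f"half-max duration = {(above[-1] - above[0]) / DAY:.1f} d")
```

Sweeping the input parameters over physically motivated ranges and recording (duration, peak luminosity) pairs is the basic operation behind populating the DLPS.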
Scriven, P. N.; Bossuyt, P. M. M.
2010-01-01
The aim of this study was to develop and use theoretical models to investigate the accuracy of the fluorescence in situ hybridization (FISH) technique in testing a single nucleus from a preimplantation embryo without the complicating effect of mosaicism. Mathematical models were constructed for
Energy Technology Data Exchange (ETDEWEB)
Marc Vanderhaeghen
2007-04-01
The theoretical issues in the interpretation of the precision measurements of the nucleon-to-Delta transition by means of electromagnetic probes are highlighted. The results of these measurements are confronted with the state-of-the-art calculations based on chiral effective-field theories (EFT), lattice QCD, large-Nc relations, perturbative QCD, and QCD-inspired models. The link of the nucleon-to-Delta form factors to generalized parton distributions (GPDs) is also discussed.
Grigorev, V. Yu.; Grigoreva, L. D.; Salimov, I. E.
2017-08-01
Models of the quantitative structure-property relationship (QSPR) between the structure of 19 alkylammonium cations and the basal distances (d001) of Na+ montmorillonite modified with these cations are created. Seven descriptors characterizing intermolecular interaction, including new fractal descriptors, are used to describe the structure of the compounds. It is shown that equations obtained via multiple linear regression have good statistical characteristics, and the calculated d001 values agree with the results from experimental studies. The quantitative contribution from hydrogen bonds to the formation of interplanar spacing in Na+ montmorillonite is found by analyzing the QSPR models.
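The multiple-linear-regression machinery behind such QSPR equations is straightforward. The sketch below fits an ordinary-least-squares model to synthetic descriptor data; the descriptor values, coefficients, and noise level are invented stand-ins, not the paper's 19-cation data set:

```python
import numpy as np

# Synthetic stand-in for the QSPR data set: 19 "cations" described by
# 3 hypothetical descriptors (the real descriptors and d001 values are
# not reproduced here).
rng = np.random.default_rng(0)
n, k = 19, 3
X = rng.normal(size=(n, k))
true_beta = np.array([1.2, -0.5, 0.8])
y = 13.0 + X @ true_beta + rng.normal(scale=0.1, size=n)  # d001, angstrom

# Ordinary least squares via the design matrix [1 | X].
D = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)

y_hat = D @ beta
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("intercept:", round(float(beta[0]), 2), "R^2:", round(float(r2), 3))
```

With real descriptors in X and measured basal distances in y, the fitted coefficients play the role of the paper's QSPR equation, and R^2 is one of the "good statistical characteristics" reported.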
Energy Technology Data Exchange (ETDEWEB)
Kukkonen, I.; Suppala, I. [Geological Survey of Finland, Espoo (Finland)
1999-01-01
In situ measurements of thermal conductivity and diffusivity of bedrock were investigated with the aid of a literature survey and theoretical simulations of a measurement system. According to the surveyed literature, in situ methods can be divided into 'active' drill hole methods, and 'passive' indirect methods utilizing other drill hole measurements together with cutting samples and petrophysical relationships. The most common active drill hole method is a cylindrical heat-producing probe whose temperature is registered as a function of time. The temperature response can be calculated and interpreted with the aid of analytical solutions of the cylindrical heat conduction equation, particularly the solution for an infinite, perfectly conducting cylindrical probe in a homogeneous medium, and the solution for a line source of heat in a medium. Using both forward and inverse modelling, a theoretical measurement system was analysed with the aim of finding the basic parameters for construction of a practical measurement system. The results indicate that thermal conductivity can be relatively well estimated with borehole measurements, whereas thermal diffusivity is much more sensitive to various disturbing factors, such as thermal contact resistance and variations in probe parameters. In addition, the three-dimensional conduction effects were investigated to find out the magnitude of axial 'leak' of heat in long-duration experiments. The radius of influence of a drill hole measurement is mainly dependent on the duration of the experiment. Assuming typical conductivity and diffusivity values of crystalline rocks, the measurement yields information within less than a metre from the drill hole, when the experiment lasts about 24 hours. We propose the following factors to be taken as basic parameters in the construction of a practical measurement system: the probe length 1.5-2 m, heating power 5-20 W m⁻¹, temperature recording with 5-7 sensors placed along the probe, and
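The line-source interpretation mentioned above reduces to a linear fit against ln(t): at late times the temperature rise is approximately T(t) = (q/4πλ)[ln(4αt/r²) − γ], so the conductivity follows from the slope of T versus ln t. A minimal sketch with assumed probe and rock parameters (not the report's design):

```python
import numpy as np

# Assumed probe/rock parameters for illustration (not the report's design).
q = 10.0        # heating power per unit length, W/m
lam = 3.0       # "true" thermal conductivity, W/(m K)
alpha = 1.4e-6  # thermal diffusivity, m^2/s
r = 0.02        # sensor radius, m
gamma = 0.5772  # Euler's constant

# Late-time line-source rise: T = (q / 4 pi lam) * (ln(4 alpha t / r^2) - gamma)
t = np.linspace(3600.0, 24 * 3600.0, 200)  # 1 h .. 24 h
T = q / (4 * np.pi * lam) * (np.log(4 * alpha * t / r**2) - gamma)

# Conductivity from the slope of T against ln(t): lam = q / (4 pi slope).
slope = np.polyfit(np.log(t), T, 1)[0]
lam_est = q / (4 * np.pi * slope)
print("recovered conductivity:", round(float(lam_est), 3), "W/(m K)")
```

This also illustrates why diffusivity is the harder parameter: α enters only through the intercept, where it is entangled with contact resistance and probe properties, whereas λ sits cleanly in the slope.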
Dobbs-Dixon, Ian
numerous published papers, further work is needed to couple them self-consistently. Our theoretical studies focus on a number of objectives. We will start by incorporating our kinetic, non-equilibrium cloud model within BART, allowing us to obtain a consistent solution for cloud characteristics. We will further test simple parameterizations against the full solution to explore the reliability of simpler models. Utilizing Drift-RHD, we will explore the role of horizontal advection on cloud distribution, investigate the validity of 1D retrievals by comparing them to self-consistently generated 3D models, and develop a retrieval framework for wavelength-dependent phase curves. TEA will be enhanced with additional databases and the inclusion of condensates, providing realistic initial cloudy models for retrievals. To explore the importance of equilibrium chemistry and exclude non-plausible chemical compositions (often the outcome of many retrieval approaches), we will relax the assumption of non-equilibrium chemistry by utilizing an analytical chemical equilibrium approach in BART. To address observations, our OBS suite for generating synthetic observations will be adapted to interface with our models, allowing us both to compare to existing observations and to make predictions for future observations. With these tools, we are particularly well suited to understand discriminants between classes of models and to identify which particular set of observations could most readily distinguish cloud constituents and temperature features. The proposed research is directly relevant to the Planetary Science and Astrophysics goals through furthering our understanding of the compositions, dynamics, energetics, and chemical behaviors of exoplanetary atmospheres. In addition, to maximize NASA's investment and encourage open access, we have made and will continue to make all of our codes public and available to the community throughout the course of the research.
A Quantitative Geochemical Target for Modeling the Formation of the Earth and Moon
Boyce, Jeremy W.; Barnes, Jessica J.; McCubbin, Francis M.
2017-01-01
The past decade has been one of geochemical, isotopic, and computational advances that are bringing the laboratory measurement and computational modeling neighborhoods of the Earth-Moon community into ever closer proximity. We are now, however, in a position to become even better neighbors: modelers can generate testable hypotheses for geochemists, and geochemists can provide quantitative targets for modelers. Here we present a robust example of the latter based on Cl isotope measurements of mare basalts.
Occupational health purchasing behaviour by SMEs--a new theoretical model.
Harrison, J; Woods, A; Dickson, K
2013-10-01
Factors influencing corporate decisions to purchase occupational health (OH) services are unknown. We aimed to assist the marketing of OH services to small- and medium-sized enterprises (SMEs) by characterizing their purchasing behaviour. We developed a 2×2 model, based on published studies, to describe OH purchasing behaviour by SMEs. We tested the model by analysis of responses to a cross-sectional market research survey carried out in November 2007. The companies surveyed were SMEs employing 30-250 employees, within the localities of five UK National Health Service OH services: West London, Buckinghamshire, Cambridge, Portsmouth and York. We chose a sample representative of all SMEs for each location. The survey explored knowledge of OH and the perceived importance of a variety of services. We obtained responses from 387 companies (19%); 81% indicated that they knew about OH and 24% had purchased OH services. OH was rated 'very important' by 35%, and 65% rated it as 'quite' or 'very important'. Sickness absence and its business impact were monitored by 89%. Enterprises claiming OH understanding were significantly more likely to purchase OH services (odds ratio [OR] 3.5, 95% confidence interval [CI] 1.6-8.0). Companies employing fewer than 90 employees were significantly less likely to purchase such services than larger ones (OR 0.17, 95% CI 0.09-0.3). OH knowledge and company size are key determinants of SME purchasing behaviour. Our findings support our proposed theoretical model. However, more research could explore claimed knowledge of OH with respect to the proposed purchaser types and business benefits.
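The odds ratios quoted above come from standard 2×2 contingency analysis. A minimal sketch of the calculation with a Wald confidence interval follows; the cell counts below are hypothetical, chosen only to illustrate the arithmetic, since the survey's raw counts are not given in the abstract:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
       exposed:   a purchasers, b non-purchasers
       unexposed: c purchasers, d non-purchasers
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)    # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical cell counts, for illustration only:
or_, lo, hi = odds_ratio_ci(80, 220, 10, 77)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

The asymmetry of the reported intervals (e.g. 1.6-8.0 around 3.5) is characteristic of this log-scale construction.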
Directory of Open Access Journals (Sweden)
Sibylle Pennig
2014-01-01
In some regions the exposure to railway noise is extremely concentrated, which may lead to high residential annoyance. Nonacoustical factors contribute to these reactions, but there is limited evidence on the interrelations between the nonacoustical factors that influence railway noise annoyance. The aims of the present study were (1) to examine exposure-response relationships between long-term railway noise exposure and annoyance in a region severely affected by railway noise and (2) to determine a priori proposed interrelations between nonacoustical factors by structural equation analysis. Residents (n = 320) living close to railway tracks in the Middle Rhine Valley completed a socio-acoustic survey. Individual noise exposure levels were calculated by an acoustical simulation model for this area. The derived exposure-response relationships indicated considerably higher annoyance at the same noise exposure level than would have been predicted by the European Union standard curve, particularly for the night-time period. In the structural equation analysis, 72% of the variance in noise annoyance was explained by the noise exposure (Lden) and nonacoustical variables. The model provides insights into several causal mechanisms underlying the formation of railway noise annoyance, considering indirect and reciprocal effects. The concern about harmful effects of railway noise and railway traffic, the perceived control and coping capacity, and the individual noise sensitivity were the most important factors influencing noise annoyance. All effects of the nonacoustical factors on annoyance were mediated by the perceived control and coping capacity, and the indirect effects proposed in the theoretical model were additionally supported by the data.
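Exposure-response relationships of the kind derived here are typically logistic in the noise indicator. The sketch below shows the functional form only; the coefficients are invented for illustration and are not the study's estimates (which the abstract reports only as lying above the EU standard curve):

```python
import numpy as np

# Logistic exposure-response curve of the kind used in noise-annoyance
# surveys; the coefficients b0, b1 here are invented, not the study's.
def p_highly_annoyed(lden, b0=-12.0, b1=0.18):
    """Probability of being 'highly annoyed' at day-evening-night level Lden (dB)."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * lden)))

for lden in (50, 60, 70):
    print(f"Lden = {lden} dB: P(highly annoyed) = {float(p_highly_annoyed(lden)):.2f}")
```

Fitting b0 and b1 to survey responses against modelled Lden values yields the study-specific curve that is then compared with the EU standard.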
Modeling the economic impact of medication adherence in type 2 diabetes: a theoretical approach
Directory of Open Access Journals (Sweden)
David S Cobden
2010-08-01
David S Cobden,1 Louis W Niessen,2 Frans FH Rutten,1 W Ken Redekop1; 1Department of Health Policy and Management, Section of Health Economics – Medical Technology Assessment (HE-MTA), Erasmus MC, Erasmus University Rotterdam, The Netherlands; 2Department of International Health, Johns Hopkins University School of Public Health, Johns Hopkins Medical Institutions, Baltimore, MD, USA. Aims: While strong correlations exist between medication adherence and health economic outcomes in type 2 diabetes, current economic analyses do not adequately consider them. We propose a new approach to incorporate adherence in cost-effectiveness analysis. Methods: We describe a theoretical approach to incorporating the effect of adherence when estimating the long-term costs and effectiveness of an antidiabetic medication. This approach was applied in a Markov model which includes common diabetic health states. We compared two treatments using hypothetical patient cohorts: injectable insulin (IDM) and oral (OAD) medications. Two analyses were performed, one which ignored adherence (analysis 1) and one which incorporated it (analysis 2). Results from the two analyses were then compared to explore the extent to which adherence may impact incremental cost-effectiveness ratios. Results: In both analyses, IDM was more costly and more effective than OAD. When adherence was ignored, IDM generated an incremental cost-effectiveness ratio of $12,097 per quality-adjusted life-year (QALY) gained versus OAD. Incorporation of adherence resulted in a slightly higher ratio ($16,241/QALY). This increase was primarily due to better adherence with OAD than with IDM, and the higher direct medical costs for IDM. Conclusions: Incorporating medication adherence into economic analyses can meaningfully influence the estimated cost-effectiveness of type 2 diabetes treatments, and should therefore be considered in health care decision-making. Future work on the impact of adherence on health
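The Markov cohort structure behind such an analysis can be sketched compactly. Everything below is hypothetical, a toy three-state model with invented transition probabilities, costs and utilities, not the paper's calibrated diabetic health states; it only illustrates how an ICER falls out of two cohort runs:

```python
import numpy as np

# Toy three-state cohort model; every probability, cost and utility below
# is hypothetical, not a value calibrated in the paper. States:
# 0 = no complication, 1 = complication, 2 = dead.
def run_cohort(P, cost, utility, years=40, disc=0.03):
    x = np.array([1.0, 0.0, 0.0])   # cohort starts complication-free
    total_cost = total_qaly = 0.0
    for year in range(years):
        d = 1.0 / (1.0 + disc) ** year   # discounting
        total_cost += d * (x @ cost)
        total_qaly += d * (x @ utility)
        x = x @ P                        # one annual cycle
    return total_cost, total_qaly

# Adherence could be folded in by blending "on-treatment" and "untreated"
# transition rows according to the adherent fraction of the cohort.
P_oad = np.array([[0.92, 0.06, 0.02],
                  [0.00, 0.90, 0.10],
                  [0.00, 0.00, 1.00]])
P_idm = np.array([[0.94, 0.04, 0.02],
                  [0.00, 0.91, 0.09],
                  [0.00, 0.00, 1.00]])

costs_oad = np.array([1000.0, 5000.0, 0.0])   # annual cost per state
costs_idm = np.array([2500.0, 5000.0, 0.0])
utils = np.array([0.85, 0.60, 0.0])           # utility weight per state

c_oad, q_oad = run_cohort(P_oad, costs_oad, utils)
c_idm, q_idm = run_cohort(P_idm, costs_idm, utils)

icer = (c_idm - c_oad) / (q_idm - q_oad)
print(f"ICER (IDM vs OAD): ${icer:,.0f} per QALY gained")
```

The paper's adherence adjustment amounts to making the effective transition matrices depend on how much of each therapy patients actually take, which shifts both numerator and denominator of the ICER.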
Theoretical Models of Culture Shock and Adaptation in International Students in Higher Education
Zhou, Yuefang; Jindal-Snape, Divya; Topping, Keith; Todman, John
2008-01-01
Theoretical concepts of culture shock and adaptation are reviewed, as applied to the pedagogical adaptation of student sojourners in an unfamiliar culture. The historical development of "traditional" theories of culture shock led to the emergence of contemporary theoretical approaches, such as "culture learning", "stress and coping" and "social…
In-human subject-specific evaluation of a control-theoretic plasma volume regulation model.
Bighamian, Ramin; Kinsky, Michael; Kramer, George; Hahn, Jin-Oh
2017-12-01
The goal of this study was to conduct a subject-specific evaluation of a control-theoretic plasma volume regulation model in humans. We employed a set of clinical data collected from nine human subjects receiving fluid bolus with and without co-administration of an inotrope agent, including fluid infusion rate, plasma volume, and urine output. Once fitted to the data associated with each subject, the model accurately reproduced the fractional plasma volume change responses in all subjects: the error between actual and model-reproduced fractional plasma volume change responses was only 1.4 ± 1.6% and 1.2 ± 0.3% of the average fractional plasma volume change responses in the absence and presence of inotrope co-administration, respectively. In addition, the model parameters determined by the subject-specific fitting assumed physiologically plausible values: (i) initial plasma volume was estimated to be 36 ± 11 mL/kg and 37 ± 10 mL/kg in the absence and presence of inotrope infusion, respectively, which was comparable to its actual counterpart of 37 ± 4 mL/kg and 43 ± 6 mL/kg; (ii) the volume distribution ratio, specifying the ratio in which the infused fluid is distributed between the intra- and extravascular spaces, was estimated to be 3.5 ± 2.4 and 1.9 ± 0.5 in the absence and presence of inotrope infusion, respectively, which accorded with the experimental observation that inotropes can enhance plasma volume expansion in response to fluid infusion. We concluded that the model can reproduce plasma volume response to fluid infusion in humans with physiologically plausible model parameters, and that its validity may persist even under co-administration of inotropic agents. Copyright © 2017 Elsevier Ltd. All rights reserved.
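The role of the volume distribution ratio can be illustrated with a deliberately simplified lumped model. This is our own construction with invented parameters, not the paper's exact control-theoretic formulation: of the net fluid balance, a fraction 1/(1 + alpha) is taken as the intravascular target, toward which plasma volume relaxes with first-order dynamics.

```python
import numpy as np

# Simplified lumped sketch (our construction, all parameters invented):
# fraction 1/(1 + alpha) of net fluid balance is retained intravascularly,
# and plasma volume relaxes toward that target with time constant tau.
def simulate(infusion, urine, alpha=2.0, tau=15.0, v0=2600.0, dt=1.0):
    """infusion, urine: mL/min, one entry per minute; alpha: extra-/intra-
    vascular distribution ratio; tau: minutes; v0: initial plasma volume, mL."""
    dv, balance, frac = 0.0, 0.0, []
    for u_in, u_out in zip(infusion, urine):
        balance += (u_in - u_out) * dt           # net fluid given, mL
        target = balance / (1.0 + alpha)         # intravascular share
        dv += dt / tau * (target - dv)           # first-order response
        frac.append(dv / v0)
    return np.array(frac)

# 500 mL bolus over 30 min, then 150 min of observation, 1 mL/min urine.
minutes = 180
infusion = np.array([500.0 / 30.0 if m < 30 else 0.0 for m in range(minutes)])
urine = np.full(minutes, 1.0)
frac = simulate(infusion, urine)
print(f"peak fractional plasma volume change: {frac.max():.3f}")
```

Lowering alpha, as the fitted values suggest inotropes effectively do, raises the intravascular target and thus the simulated plasma volume expansion for the same bolus.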