Desai, Priyanka Subhash
Rheological properties are sensitive indicators of molecular structure and dynamics. The relationship between rheology and polymer dynamics is captured in the constitutive model, which, if accurate and robust, would greatly aid molecular design and polymer processing. This dissertation is thus focused on building accurate, quantitative constitutive models that can help predict linear and non-linear viscoelasticity. In this work, we have used a multi-pronged approach based on tube theory, coarse-grained slip-link simulations, and advanced polymer synthesis and characterization techniques to confront some of the outstanding problems in entangled polymer rheology. First, we modified simple tube-based constitutive equations in extensional rheology and developed functional forms to test the effect of Kuhn segment alignment on (a) tube diameter enlargement and (b) monomeric friction reduction between subchains. We then used these functional forms to model extensional viscosity data for polystyrene (PS) melts and solutions. We demonstrated that the idea of reduced segmental friction due to Kuhn alignment successfully explains the qualitative difference between melts and solutions in extension revealed by recent experiments on PS. Second, we compiled literature data and used it to develop a universal tube model parameter set, prescribing parameter values and uncertainties for 1,4-PBd by comparing linear viscoelastic G' and G" mastercurves for 1,4-PBds of various branching architectures. The high-frequency transition region of the mastercurves superposed very well for all the 1,4-PBds irrespective of their molecular weight and architecture, indicating universality in high-frequency behavior. Therefore, all three parameters of the tube model were extracted from this high-frequency transition region alone. Third, we compared predictions of two versions of the tube model, the Hierarchical model and the BoB model, against linear viscoelastic data of blends of 1,4-PBd
Quantitative Safety: Linking Proof-Based Verification with Model Checking for Probabilistic Systems
Ndukwu, Ukachukwu
2009-01-01
This paper presents a novel approach for augmenting proof-based verification with performance-style analysis of the kind employed in state-of-the-art model checking tools for probabilistic systems. Quantitative safety properties, usually specified as probabilistic system invariants and modeled in proof-based environments, are evaluated using bounded model checking techniques. Our specific contributions include the statement of a theorem that is central to model checking safety properties of proof-based systems, the establishment of a procedure, and its full implementation in a prototype system (YAGA) which readily transforms a probabilistic model specified in a proof-based environment into its equivalent verifiable PRISM model equipped with reward structures. The reward structures capture the exact interpretation of the probabilistic invariants and can reveal succinct information about the model during experimental investigations. Finally, we demonstrate the novelty of the technique on a probabilistic library cas...
Krueger, Robert F; Markon, Kristian E; Patrick, Christopher J; Benning, Stephen D; Kramer, Mark D
2007-11-01
Antisocial behavior, substance use, and impulsive and aggressive personality traits often co-occur, forming a coherent spectrum of personality and psychopathology. In the current research, the authors developed a novel quantitative model of this spectrum. Over 3 waves of iterative data collection, 1,787 adult participants selected to represent a range across the externalizing spectrum provided extensive data about specific externalizing behaviors. Statistical methods such as item response theory and semiparametric factor analysis were used to model these data. The model and assessment instrument that emerged from the research show how externalizing phenomena are organized hierarchically and cover a wide range of individual differences. The authors discuss the utility of this model for framing research on the correlates and the etiology of externalizing phenomena.
Wang, Chao; Durney, Krista M.; Fomovsky, Gregory; Ateshian, Gerard A.; Vukelic, Sinisa
2016-03-01
The onset of osteoarthritis (OA) in articular cartilage is characterized by degradation of extracellular matrix (ECM). Specifically, breakage of cross-links between collagen fibrils in the articular cartilage leads to loss of structural integrity of the bulk tissue. Since there are no broadly accepted, non-invasive, label-free tools for diagnosing OA at its early stage, Raman spectroscopy is proposed in this work as a novel, non-destructive diagnostic tool. In this study, collagen thin films were employed as a simplified model system of the cartilage collagen extracellular matrix. Cross-link formation was controlled via exposure to glutaraldehyde (GA), by varying exposure time and concentration levels, and Raman spectral information was collected to quantitatively characterize the cross-link assignments imparted to the collagen thin films during treatment. A novel, quantitative method was developed to analyze the Raman signal obtained from collagen thin films. Segments of the Raman signal were decomposed and modeled as the sum of individual bands, providing an optimization function for subsequent curve fitting against experimental findings. Relative changes in the concentration of the GA-induced pyridinium cross-links were extracted from the model as a function of the exposure to GA. Spatially resolved characterization enabled construction of spectral maps of the collagen thin films, which provided detailed information about the variation of cross-link formation at various locations on the specimen. Results showed that Raman spectral data correlate with glutaraldehyde treatment and therefore may be used as a proxy by which to measure loss of collagen cross-links in vivo. This study proposes a promising means of identifying the onset of OA and may enable early interventions that slow or prevent osteoarthritis progression.
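The band-decomposition step described in this abstract can be sketched numerically. The following is a minimal illustration, not the authors' actual pipeline: the Gaussian line shape, band centers, widths, and two-band synthetic spectrum are all assumptions chosen for the example. Because such a model is linear in the band amplitudes, the amplitudes can be recovered by ordinary least squares (here, a 2x2 normal-equations solve):

```python
import math

def gaussian(x, center, width):
    # Unit-amplitude Gaussian band (assumed line shape for this sketch)
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

def fit_two_band_amplitudes(xs, ys, band1, band2):
    """Least-squares amplitudes for y = a1*g1 + a2*g2.
    The model is linear in (a1, a2), so the normal equations
    reduce to a 2x2 linear system solved in closed form."""
    g1 = [gaussian(x, *band1) for x in xs]
    g2 = [gaussian(x, *band2) for x in xs]
    s11 = sum(a * a for a in g1)
    s12 = sum(a * b for a, b in zip(g1, g2))
    s22 = sum(b * b for b in g2)
    r1 = sum(a * y for a, y in zip(g1, ys))
    r2 = sum(b * y for b, y in zip(g2, ys))
    det = s11 * s22 - s12 * s12
    a1 = (s22 * r1 - s12 * r2) / det
    a2 = (s11 * r2 - s12 * r1) / det
    return a1, a2

# Synthetic "spectrum segment": two overlapping bands; the wavenumber
# positions (near 1660 and 1690 cm^-1) are illustrative assumptions.
xs = [1600 + i for i in range(120)]
true_bands = [(1660.0, 10.0, 3.0), (1690.0, 8.0, 1.5)]  # (center, width, amplitude)
ys = [sum(amp * gaussian(x, c, w) for c, w, amp in true_bands) for x in xs]

a1, a2 = fit_two_band_amplitudes(xs, ys, (1660.0, 10.0), (1690.0, 8.0))
ratio = a2 / a1  # a relative band intensity, the kind of quantity used as a cross-link proxy
```

In the study, a quantity like `ratio` would stand in for the relative pyridinium cross-link concentration tracked across GA exposures.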
Directory of Open Access Journals (Sweden)
Andreas N. Prokopiou
2015-01-01
This paper presents a computational model which estimates the postsynaptic conductance change of the mammalian Type I afferent peripheral process when airborne acoustic waves impact on the tympanic membrane. A model of the human auditory periphery is used to estimate the inner hair cell potential change in response to airborne sound. A generic and tunable topology of the mammalian synaptic ribbon is generated, and the voltage dependence of its substructures is used to calculate discrete and probabilistic neurotransmitter vesicle release. Results suggest an almost linear relationship between increasing sound level (in dB SPL) and the postsynaptic conductance for frequencies considered too high for neurons to phase lock with (i.e., a few kHz). Furthermore, coordinated vesicle release is shown for up to 300–400 Hz, and a mechanism of phase shifting the subharmonic content of a stimulating signal is suggested. Model outputs suggest that strong onset response and highly synchronised multivesicular release rely on compound fusion of ribbon-tethered vesicles.
Quantitative linking hypotheses for infant eye movements.
Directory of Open Access Journals (Sweden)
Daniel Yurovsky
The study of cognitive development hinges, largely, on the analysis of infant looking. But analyses of eye gaze data require the adoption of linking hypotheses: assumptions about the relationship between observed eye movements and underlying cognitive processes. We develop a general framework for constructing, testing, and comparing these hypotheses, and thus for producing new insights into early cognitive development. We first introduce the general framework--applicable to any infant gaze experiment--and then demonstrate its utility by analyzing data from a set of experiments investigating the role of attentional cues in infant learning. The new analysis uncovers significantly more structure in these data, finding evidence of learning that was not found in standard analyses and showing an unexpected relationship between cue use and learning rate. Finally, we discuss general implications for the construction and testing of quantitative linking hypotheses. MATLAB code for sample linking hypotheses can be found on the first author's website.
Response model parameter linking
Barrett, Michelle Derbenwick
2015-01-01
With a few exceptions, the problem of linking item response model parameters from different item calibrations has been conceptualized as an instance of the problem of equating observed scores on different test forms. This thesis argues, however, that the use of item response models does not require
Building a Database for a Quantitative Model
Kahn, C. Joseph; Kleinhammer, Roger
2014-01-01
A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to the data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian Updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
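The metadata-field linkage described in this abstract can be sketched in a few lines. This is a hypothetical illustration only: the keys, source names, base rates, and stressing factors below are invented for the example, not drawn from any real reliability model.

```python
# Hypothetical data-source table: each record is keyed by the same unique
# metadata field that appears next to the Basic Event in the model, and
# carries the raw failure rate plus a use-based stressing factor.
data_sources = {
    "DS-0042": {"source": "handbook entry (assumed)", "base_rate": 2.0e-6, "stress": 1.5},
    "DS-0107": {"source": "vendor test report (assumed)", "base_rate": 5.0e-7, "stress": 1.0},
}

# Hypothetical Basic Events, each pointing at one data-source record.
basic_events = [
    {"id": "BE-PUMP-FAIL", "data_key": "DS-0042"},
    {"id": "BE-VALVE-STUCK", "data_key": "DS-0107"},
]

def resolve_rate(event):
    """Follow the metadata key from a Basic Event to its data source
    and apply the stressing factor recorded alongside it."""
    record = data_sources[event["data_key"]]
    return record["base_rate"] * record["stress"]

# Every Basic Event's effective rate is now traceable to its source record.
rates = {event["id"]: resolve_rate(event) for event in basic_events}
```

The point of the key is exactly the traceability the abstract emphasizes: given any Basic Event, one lookup recovers both the raw data and every manipulation applied to it.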
Compositional and Quantitative Model Checking
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand
2010-01-01
This paper gives a survey of a compositional model checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based...
Quantitative Link Between Biological Evolution and Statistical Mechanics
Ray, Tane S.
A model of evolution called the modified Wright-Fisher model (MWF) is introduced. It is shown to exhibit a second-order phase transition, and a quantitative mapping is established between it and the mean-field Ising model. An equation of state and scaling function are derived for the MWF from the steady-state solution of the governing quasispecies equations. The critical exponents are identical to those of the mean-field Ising model. Simulation data for the MWF on a two-dimensional square lattice show good evidence for a critical point. The susceptibility exponent is estimated and is found, within the uncertainty of the simulation data, to be equal to that of the two-dimensional Ising model, suggesting that the two models are in the same universality class.
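The mean-field Ising side of the mapping can be illustrated independently of the MWF details by iterating the zero-field mean-field equation of state m = tanh(m/t), with t = T/Tc: below the critical point the iteration settles on a non-zero magnetization branch, above it the magnetization vanishes. A minimal sketch (the seed value and iteration count are arbitrary choices for the example):

```python
import math

def mean_field_magnetization(t, iters=10_000):
    """Fixed point of the zero-field mean-field Ising equation of state
    m = tanh(m / t), where t = T / Tc. Iterating from a non-zero seed
    finds the ordered branch when it exists (t < 1)."""
    m = 0.5  # non-zero seed so the iteration can reach the ordered branch
    for _ in range(iters):
        m = math.tanh(m / t)
    return m

below = mean_field_magnetization(0.8)   # ordered phase: m settles near 0.71
above = mean_field_magnetization(1.2)   # disordered phase: m decays to 0
```

This second-order vanishing of the order parameter at t = 1 is the transition the MWF is reported to share, with identical mean-field critical exponents.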
Decision support intended to improve ecosystem sustainability requires that we link stakeholder priorities directly to quantitative tools and measures of desired outcomes. Actions taken at the community level can have large impacts on production and delivery of ecosystem service...
Gene Essentiality Is a Quantitative Property Linked to Cellular Evolvability.
Liu, Gaowen; Yong, Mei Yun Jacy; Yurieva, Marina; Srinivasan, Kandhadayar Gopalan; Liu, Jaron; Lim, John Soon Yew; Poidinger, Michael; Wright, Graham Daniel; Zolezzi, Francesca; Choi, Hyungwon; Pavelka, Norman; Rancati, Giulia
2015-12-03
Gene essentiality is typically determined by assessing the viability of the corresponding mutant cells, but this definition fails to account for the ability of cells to adaptively evolve to genetic perturbations. Here, we performed a stringent screen to assess the degree to which Saccharomyces cerevisiae cells can survive the deletion of ~1,000 individual "essential" genes and found that ~9% of these genetic perturbations could in fact be overcome by adaptive evolution. Our analyses uncovered a genome-wide gradient of gene essentiality, with certain essential cellular functions being more "evolvable" than others. Ploidy changes were prevalent among the evolved mutant strains, and aneuploidy of a specific chromosome was adaptive for a class of evolvable nucleoporin mutants. These data justify a quantitative redefinition of gene essentiality that incorporates both viability and evolvability of the corresponding mutant cells and will enable selection of therapeutic targets associated with lower risk of emergence of drug resistance.
The mathematics of cancer: integrating quantitative models.
Altrock, Philipp M; Liu, Lin L; Michor, Franziska
2015-12-01
Mathematical modelling approaches have become increasingly abundant in cancer research. The complexity of cancer is well suited to quantitative approaches as it provides challenges and opportunities for new developments. In turn, mathematical modelling contributes to cancer research by helping to elucidate mechanisms and by providing quantitative predictions that can be validated. The recent expansion of quantitative models addresses many questions regarding tumour initiation, progression and metastases as well as intra-tumour heterogeneity, treatment responses and resistance. Mathematical models can complement experimental and clinical studies, but also challenge current paradigms, redefine our understanding of mechanisms driving tumorigenesis and shape future research in cancer biology.
Reconceptualizing the Linked Courses Model
Baxter, Mary
2008-01-01
To help students meet the demands of society, the University of Houston is using the framework of learning communities and constructivism to create a cross-disciplinary approach to teaching, providing media-rich, thematically linked courses that engage a diverse student population. A case study investigated three semesters of thematically linked…
A General Method for Targeted Quantitative Cross-Linking Mass Spectrometry
Chavez, Juan D.; Eng, Jimmy K.; Schweppe, Devin K.; Cilia, Michelle; Rivera, Keith; Zhong, Xuefei; Wu, Xia; Allen, Terrence; Khurgel, Moshe; Kumar, Akhilesh; Lampropoulos, Athanasios; Larsson, Mårten; Maity, Shuvadeep; Morozov, Yaroslav; Pathmasiri, Wimal; Perez-Neut, Mathew; Pineyro-Ruiz, Coriness; Polina, Elizabeth; Post, Stephanie; Rider, Mark; Tokmina-Roszyk, Dorota; Tyson, Katherine; Vieira Parrine Sant'Ana, Debora; Bruce, James E.
2016-01-01
Chemical cross-linking mass spectrometry (XL-MS) provides protein structural information by identifying covalently linked proximal amino acid residues on protein surfaces. The information gained by this technique is complementary to other structural biology methods such as X-ray crystallography, NMR and cryo-electron microscopy [1]. The extension of traditional quantitative proteomics methods with chemical cross-linking can provide information on the structural dynamics of protein structures and protein complexes. The identification and quantitation of cross-linked peptides remains challenging for the general community, requiring specialized expertise and ultimately limiting more widespread adoption of the technique. We describe a general method for targeted quantitative mass spectrometric analysis of cross-linked peptide pairs. We report the adaptation of the widely used, open-source software package Skyline for the analysis of quantitative XL-MS data as a means for data analysis and sharing of methods. We demonstrate the utility and robustness of the method with a cross-laboratory study and present data that are supported by and validate previously published data on quantified cross-linked peptide pairs. This advance provides an easy-to-use resource so that any lab with access to an LC-MS system capable of performing targeted quantitative analysis can quickly and accurately measure dynamic changes in protein structure and protein interactions. PMID:27997545
Quantitative structure - mesothelioma potency model ...
Cancer potencies of mineral and synthetic elongated particle (EP) mixtures, including asbestos fibers, are influenced by changes in fiber dose composition, bioavailability, and biodurability in combination with relevant cytotoxic dose-response relationships. A unique and comprehensive rat intra-pleural (IP) dose characterization data set with a wide variety of EP size, shape, crystallographic, chemical, and bio-durability properties facilitated extensive statistical analyses of 50 rat IP exposure test results for evaluation of alternative dose pleural mesothelioma response models. Utilizing logistic regression, maximum likelihood evaluations of thousands of alternative dose metrics based on hundreds of individual EP dimensional variations within each test sample, four major findings emerged: (1) data for simulations of short-term EP dose changes in vivo (mild acid leaching) provide superior predictions of tumor incidence compared to non-acid leached data; (2) the sum of the EP surface areas (ΣSA) from these mildly acid-leached samples provides the optimum holistic dose-response model; (3) progressive removal of dose associated with very short and/or thin EPs significantly degrades resultant ΣEP or ΣSA dose-based predictive model fits, as judged by Akaike’s Information Criterion (AIC); and (4) alternative, biologically plausible model adjustments provide evidence for reduced potency of EPs with length/width (aspect) ratios 80 µm. Regar
Compositional and Quantitative Model Checking
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand
2010-01-01
on the existence of a quotient construction, allowing a property phi of a parallel system A | B to be transformed into a sufficient and necessary quotient-property phi/A to be satisfied by the component B. Given a model checking problem involving a network P1 | ... | Pn and a property phi, the method gradually moves (by...
Link mining models, algorithms, and applications
Yu, Philip S; Faloutsos, Christos
2010-01-01
This book presents in-depth surveys and systematic discussions on models, algorithms and applications for link mining. Link mining is an important field of data mining. Traditional data mining focuses on 'flat' data in which each data object is represented as a fixed-length attribute vector. However, many real-world data sets are much richer in structure, involving objects of multiple types that are related to each other. Hence, link mining has recently become an emerging field of data mining, which has a high impact in various important applications such as text mining, social network analysi
Quantitative Modeling of Earth Surface Processes
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
Quantitative system validation in model driven design
DEFF Research Database (Denmark)
Hermanns, Hilger; Larsen, Kim Guldstrand; Raskin, Jean-Francois;
2010-01-01
The European STREP project Quasimodo develops theory, techniques and tool components for handling quantitative constraints in model-driven development of real-time embedded systems, covering in particular real-time, hybrid and stochastic aspects. This tutorial highlights the advances made, focus...
Recent trends in social systems quantitative theories and quantitative models
Hošková-Mayerová, Šárka; Soitu, Daniela-Tatiana; Kacprzyk, Janusz
2017-01-01
The papers collected in this volume focus on new perspectives on individuals, society, and science, specifically in the field of socio-economic systems. The book is the result of a scientific collaboration among experts from “Alexandru Ioan Cuza” University of Iaşi (Romania), “G. d’Annunzio” University of Chieti-Pescara (Italy), "University of Defence" of Brno (Czech Republic), and "Pablo de Olavide" University of Sevilla (Spain). The heterogeneity of the contributions presented in this volume reflects the variety and complexity of social phenomena. The book is divided into four Sections as follows. The first Section deals with recent trends in social decisions, aiming to understand the driving forces behind them. The second Section focuses on the social and public sphere, covering recent developments in social systems and control. Trends in quantitative theories and models are described in Section 3, where many new formal, mathematical-statistical to...
Pastorek, Jaroslav; Fencl, Martin; Stránský, David; Rieckermann, Jörg; Bareš, Vojtěch
2017-04-01
Reliable and representative rainfall data are crucial for urban runoff modelling. However, traditional precipitation measurement devices often fail to provide sufficient information about the spatial variability of rainfall, especially when heavy storm events (determining design of urban stormwater systems) are considered. Commercial microwave links (CMLs), typically very dense in urban areas, allow for indirect precipitation detection with the desired spatial and temporal resolution. Fencl et al. (2016) recognised the high bias in quantitative precipitation estimates (QPEs) from CMLs which significantly limits their usability and, in order to reduce the bias, suggested a novel method for adjusting the QPEs to existing rain gauge networks. Studies evaluating the potential of CMLs for rainfall detection have so far focused primarily on direct comparison of the QPEs from CMLs to ground observations. In contrast, this investigation evaluates the suitability of these innovative rainfall data for stormwater runoff modelling on a case study of a small urban catchment in Prague-Letňany, Czech Republic, ungauged in the long-term perspective (Fencl et al., 2016). We compare the runoff measured at the outlet from the catchment with the outputs of a rainfall-runoff model operated using (i) CML data adjusted by distant rain gauges, (ii) rainfall data from the distant gauges alone and (iii) data from a single temporary rain gauge located directly in the catchment, as is common practice in drainage engineering. Uncertainties of the simulated runoff are analysed using the Bayesian method for uncertainty evaluation incorporating a statistical bias description as formulated by Del Giudice et al. (2013). Our results show that adjusted CML data are able to yield reliable runoff modelling results, primarily for rainfall events with convective character. Performance statistics, most significantly the timing of maximal discharge, reach better (less uncertain) values with the adjusted CML data
Bailey, Ajay; Hutter, Inge
2008-10-01
With 3.1 million people estimated to be living with HIV/AIDS in India and 39.5 million globally, the epidemic has posed academics the challenge of identifying behaviours and their underlying beliefs in the effort to reduce the risk of HIV transmission. The Health Belief Model (HBM) is frequently used to identify risk behaviours and adherence behaviour in the field of HIV/AIDS. Risk behaviour studies that apply the HBM have been largely quantitative, and use of qualitative methodology is rare. The marriage of qualitative and quantitative methods has never been easy. The challenge is in triangulating the methods. Method triangulation has been largely used to combine insights from the qualitative and quantitative methods, but not to link the two methods. In this paper we suggest a linked trajectory of method triangulation (LTMT). The linked trajectory aims first to gather individual-level information through in-depth interviews and then to present that information as vignettes in focus group discussions. We thus validate information obtained from in-depth interviews and gather emic concepts that arise from the interaction. We thereby capture both the interpretation and the interaction angles of the qualitative method. Further, using the qualitative information gained, a survey is designed. In doing so, the survey questions are grounded and contextualized. We employed this linked trajectory of method triangulation in a study on the risk assessment of HIV/AIDS among migrant and mobile men. Fieldwork was carried out in Goa, India. Data come from two waves of studies: first, an explorative qualitative study (2003); second, a larger study (2004-2005) including in-depth interviews (25), focus group discussions (21) and a survey (n=1259). By employing the qualitative-to-quantitative LTMT we can not only contextualize the existing concepts of the HBM, but also validate new concepts and identify new risk groups.
Quantitative model validation techniques: new insights
Ling, You
2012-01-01
This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing...
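As one concrete example of the reliability-based flavor of validation metric discussed above, the probability that a stochastic model output falls within a tolerance of the observation can be estimated by Monte Carlo. This is a minimal sketch, not the paper's method: the Gaussian output model, tolerance, and acceptance threshold are all assumptions chosen for illustration.

```python
import random

def reliability_metric(model_samples, observed, eps):
    """Reliability-based validation metric: the estimated probability
    that the model prediction falls within a tolerance eps of the
    observed value."""
    hits = sum(1 for y in model_samples if abs(y - observed) < eps)
    return hits / len(model_samples)

random.seed(0)
# Stochastic model output: assumed Gaussian around a nominal prediction of 10.0
samples = [random.gauss(10.0, 0.5) for _ in range(10_000)]

r = reliability_metric(samples, observed=10.1, eps=1.0)
accept = r > 0.9  # hypothetical acceptance threshold for model validation
```

A threshold on `r` plays the same role as the model acceptance threshold discussed in the abstract: it trades off the risk of accepting a poor model against rejecting an adequate one.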
Quantitative versus qualitative modeling: a complementary approach in ecosystem study.
Bondavalli, C; Favilla, S; Bodini, A
2009-02-01
Natural disturbances and human perturbations act upon ecosystems by changing some dynamical parameters of one or more species. Foreseeing these modifications is necessary before embarking on an intervention: predictions may help to assess management options and define hypotheses for interventions. Models become valuable tools for studying and making predictions only when they capture the types of interactions and their magnitudes. Quantitative models are more precise and specific about a system but require a large effort in model construction. Because of this, ecological systems very often remain only partially specified, and one possible approach to their description and analysis comes from qualitative modelling. Qualitative models yield predictions as directions of change in species abundance, but in complex systems these predictions are often ambiguous, being the result of opposite actions exerted on the same species by way of multiple pathways of interaction. Again, to avoid such ambiguities one needs to know the intensity of all links in the system. One way to make link magnitude explicit in a form usable for qualitative analysis is described in this paper; it takes advantage of another type of ecosystem representation: ecological flow networks. These flow diagrams contain the structure, the relative position and the connections between the components of a system, and the quantity of matter flowing along every connection. In this paper it is shown how these ecological flow networks can be used to produce a quantitative model similar to the qualitative counterpart. Analyzed through the apparatus of loop analysis, this quantitative model yields predictions that are by no means ambiguous, solving in an elegant way the basic problem of qualitative analysis. The approach adopted in this work is still preliminary, and we must be careful in its application.
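A quantified counterpart of the loop-analysis predictions mentioned above can be sketched with a press-perturbation calculation: for a community matrix A of interaction strengths, the entries of -A^(-1) give each species' response to a sustained (press) input to another species, and their signs are the unambiguous directions of change. The two-species prey-predator system below is a toy assumption, not taken from the paper.

```python
def press_predictions(a):
    """Responses to sustained (press) perturbations, computed as -A^{-1}
    for a 2x2 community matrix A. Entry [i][j] is the response of species
    i to a positive press on species j's growth rate."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    inv = [[ a[1][1] / det, -a[0][1] / det],
           [-a[1][0] / det,  a[0][0] / det]]
    return [[-x for x in row] for row in inv]

# Toy prey-predator system: prey is self-limited (-1) and harmed by the
# predator (-0.5); the predator gains from the prey (+0.5).
A = [[-1.0, -0.5],
     [ 0.5,  0.0]]
P = press_predictions(A)
# Enrichment of the prey (press on column 0) feeds through to the
# predator (P[1][0] > 0) while prey abundance is unchanged (P[0][0] == 0),
# a classic donor-control result made unambiguous by the magnitudes in A.
```

With all link intensities specified, every sign in `P` is determinate, which is precisely the ambiguity-removal the quantified flow-network model is meant to provide.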
Quantitative Analysis of Polarimetric Model-Based Decomposition Methods
Directory of Open Access Journals (Sweden)
Qinghua Xie
2016-11-01
In this paper, we analyze the robustness of the parameter inversion provided by general polarimetric model-based decomposition methods from the perspective of a quantitative application. The general model and algorithm we have studied is the method proposed recently by Chen et al., which makes use of the complete polarimetric information and outperforms traditional decomposition methods in terms of feature extraction from land covers. Nevertheless, a quantitative analysis of the retrieved parameters from that approach suggests that further investigations are required in order to fully confirm the links between a physically-based model (i.e., approaches derived from the Freeman–Durden concept) and its outputs as intermediate products before any biophysical parameter retrieval is addressed. To this aim, we propose some modifications to the optimization algorithm employed for model inversion, including redefined boundary conditions, transformation of variables, and a different strategy for value initialization. A number of Monte Carlo simulation tests for typical scenarios are carried out and show that the parameter estimation accuracy of the proposed method is significantly increased with respect to the original implementation. Fully polarimetric airborne datasets at L-band acquired by the German Aerospace Center's (DLR's) experimental synthetic aperture radar (E-SAR) system were also used for testing purposes. The results show different qualitative descriptions of the same cover from six different model-based methods. According to the Bragg coefficient ratio (i.e., β), they are prone to provide wrong numerical inversion results, which could prevent any subsequent quantitative characterization of specific areas in the scene. Besides the particular improvements proposed over an existing polarimetric inversion method, this paper is aimed at pointing out the necessity of checking quantitatively the accuracy of model-based PolSAR techniques for a
Quantitative Models and Analysis for Reactive Systems
DEFF Research Database (Denmark)
Thrane, Claus
phones and websites. Acknowledging that now more than ever, systems come in contact with the physical world, we need to revise the way we construct models and verification algorithms, to take into account the behavior of systems in the presence of approximate, or quantitative information, provided...... by the environment in which they are embedded. This thesis studies the semantics and properties of a model-based framework for reactive systems, in which models and specifications are assumed to contain quantifiable information, such as references to time or energy. Our goal is to develop a theory of approximation......, by studying how small changes to our models affect the verification results. A key source of motivation for this work can be found in The Embedded Systems Design Challenge [HS06] posed by Thomas A. Henzinger and Joseph Sifakis. It contains a call for advances in the state-of-the-art of systems verification...
Link-based quantitative methods to identify differentially coexpressed genes and gene Pairs
Directory of Open Access Journals (Sweden)
Ye Zhi-Qiang
2011-08-01
Background: Differential coexpression analysis (DCEA) is increasingly used for investigating the global transcriptional mechanisms underlying phenotypic changes. Current DCEA methods mostly adopt a gene connectivity-based strategy to estimate differential coexpression, which is characterized by comparing the numbers of gene neighbors in different coexpression networks. Although it simplifies the calculation, this strategy mixes up the identities of a gene's different coexpression neighbors and fails to differentiate significant differential coexpression changes from trivial ones. In particular, correlation reversal is easily missed although it probably indicates remarkable biological significance. Results: We developed two link-based quantitative methods, DCp and DCe, to identify differentially coexpressed genes and gene pairs (links). Uniquely exploiting the quantitative coexpression change of each gene pair in the coexpression networks, both methods proved superior to currently popular methods in simulation studies. Re-mining a publicly available type 2 diabetes (T2D) expression dataset from the perspective of differential coexpression analysis led to discoveries beyond those from differential expression analysis. Conclusions: This work pointed out a critical weakness of current popular DCEA methods and proposed two link-based DCEA algorithms that will contribute to the development of DCEA and help extend it to a broader spectrum.
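The link-based strategy described above can be sketched in a few lines: score each gene pair by the change in its correlation between two conditions, and flag correlation reversals explicitly instead of collapsing neighbors into a single connectivity count. This is a minimal illustration under stated assumptions, not the published DCp/DCe implementation; the gene names, the 0.5 reversal cutoff, and the toy expression profiles are all invented for the example.

```python
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation of two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def link_dc(expr_a, expr_b, g1, g2):
    """Differential coexpression of one link (gene pair): the change in
    correlation between conditions A and B, with reversals flagged."""
    ra = pearson(expr_a[g1], expr_a[g2])
    rb = pearson(expr_b[g1], expr_b[g2])
    return {"delta": rb - ra,
            "reversed": ra * rb < 0 and min(abs(ra), abs(rb)) > 0.5}

# toy profiles: the pair correlates positively in A and negatively in B
A = {"g1": [1, 2, 3, 4, 5], "g2": [1.1, 2.0, 2.9, 4.2, 5.0]}
B = {"g1": [1, 2, 3, 4, 5], "g2": [5.0, 4.1, 3.0, 2.1, 0.9]}
print(link_dc(A, B, "g1", "g2"))
```

A connectivity-based method would count this pair once in each network and see little change; the link-based score makes the sign flip itself the signal.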
Quantitative Visualization of ChIP-chip Data by Using Linked Views
Energy Technology Data Exchange (ETDEWEB)
Huang, Min-Yu; Weber, Gunther; Li, Xiao-Yong; Biggin, Mark; Hamann, Bernd
2010-11-05
Most analyses of ChIP-chip in vivo DNA binding have focused on qualitative descriptions of whether genomic regions are bound or not. There is increasing evidence, however, that factors bind in a highly overlapping manner to the same genomic regions, and that quantitative differences in occupancy on these commonly bound regions are the critical determinants of the factors' different biological specificities. As a result, a tool facilitating the quantitative visualization of differences between transcription factors and the genomic regions they bind is critical for understanding each factor's unique role in the network. We have developed a framework that combines several visualizations via brushing-and-linking, allowing the user to interactively analyze and explore in vivo DNA binding data of multiple transcription factors. We describe these visualization types and discuss biological examples in this paper.
Quantitative bioluminescence imaging of mouse tumor models.
Tseng, Jen-Chieh; Kung, Andrew L
2015-01-05
Bioluminescence imaging (BLI) has become an essential technique for preclinical evaluation of anticancer therapeutics and provides sensitive and quantitative measurements of tumor burden in experimental cancer models. For light generation, a vector encoding firefly luciferase is introduced into human cancer cells that are grown as tumor xenografts in immunocompromised hosts, and the enzyme substrate luciferin is injected into the host. Alternatively, the reporter gene can be expressed in genetically engineered mouse models to determine the onset and progression of disease. In addition to expression of an ectopic luciferase enzyme, bioluminescence requires oxygen and ATP, thus only viable luciferase-expressing cells or tissues are capable of producing bioluminescence signals. Here, we summarize a BLI protocol that takes advantage of advances in hardware, especially the cooled charge-coupled device camera, to enable detection of bioluminescence in living animals with high sensitivity and a large dynamic range.
Quantitative assessment model for gastric cancer screening
Institute of Scientific and Technical Information of China (English)
Kun Chen; Wei-Ping Yu; Liang Song; Yi-Min Zhu
2005-01-01
AIM: To set up a mathematical model for gastric cancer screening and to evaluate its function in mass screening for gastric cancer. METHODS: A case-control study was carried out in 66 patients and 198 normal people, and the risk and protective factors of gastric cancer were determined, including heavy manual work, foods such as small yellow-fin tuna, dried small shrimps, squills and crabs, mothers suffering from gastric diseases, spouse alive, use of refrigerators, hot food, etc. According to principles and methods of probability and fuzzy mathematics, a quantitative assessment model was established as follows: first, we selected factors significant in statistics and calculated a weight coefficient for each one by two different methods; second, the population space was divided into a gastric cancer fuzzy subset and a non-gastric-cancer fuzzy subset, a mathematical model for each subset was established, and a mathematical expression of the attribute degree (AD) was obtained. RESULTS: Based on the data of 63 patients and 693 normal people, the AD of each subject was calculated. Considering the sensitivity and specificity, the thresholds of the calculated AD values were set at 0.20 and 0.17, respectively. According to these thresholds, the sensitivity and specificity of the quantitative model were about 69% and 63%. Moreover, a statistical test showed that the identification outcomes of the two different calculation methods were identical (P > 0.05). CONCLUSION: The validity of this method is satisfactory. It is convenient, feasible, economic, and can be used to determine individual and population risks of gastric cancer.
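Choosing an AD threshold by trading sensitivity against specificity can be sketched as below. The AD values here are hypothetical stand-ins, not the study's data; only the 0.20 threshold echoes the abstract.

```python
def sens_spec(scores_cases, scores_controls, threshold):
    """Classify a subject as high-risk when the attribute degree (AD)
    meets the threshold; sensitivity is measured on cases, specificity
    on controls."""
    tp = sum(s >= threshold for s in scores_cases)
    tn = sum(s < threshold for s in scores_controls)
    return tp / len(scores_cases), tn / len(scores_controls)

# hypothetical AD values, not the study's data
cases = [0.31, 0.25, 0.22, 0.18, 0.12, 0.28, 0.21, 0.24, 0.10, 0.27]
controls = [0.05, 0.15, 0.22, 0.08, 0.19, 0.11, 0.16, 0.25, 0.09, 0.13]
se, sp = sens_spec(cases, controls, 0.20)
print(f"sensitivity={se:.2f} specificity={sp:.2f}")
```

Sweeping the threshold over the observed AD range and picking the point with the best balance is the usual way such cutoffs are tuned.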
Global Quantitative Modeling of Chromatin Factor Interactions
Zhou, Jian; Troyanskaya, Olga G.
2014-01-01
Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the "chromatin codes") remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles; we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896
Quantitative Modeling of Landscape Evolution, Treatise on Geomorphology
Temme, A.J.A.M.; Schoorl, J.M.; Claessens, L.F.G.; Veldkamp, A.; Shroder, F.S.
2013-01-01
This chapter reviews quantitative modeling of landscape evolution – which means that not just model studies but also modeling concepts are discussed. Quantitative modeling is contrasted with conceptual or physical modeling, and four categories of model studies are presented. Procedural studies focus
Matsuda, H; Iwaisaki, H
2002-01-01
In the prediction of genetic values and quantitative trait loci (QTL) mapping via the mixed model method incorporating marker information in animal populations, it is important to model the genetic variance for individuals with an arbitrary pedigree structure. In this study, for a crossed population originating from different genetic groups such as breeds or outbred strains, the variance of additive genetic values for multiple linked QTLs contained in a chromosome segment, especially the segregation variance, is investigated assuming the use of marker data. The variance for a finite number of QTLs in one chromosomal segment is first examined for the crossed population with a general pedigree. Then, applying the concept of the expectation of identity-by-descent proportion, an approximation to the mean of the conditional probabilities for the linked QTLs over all loci is obtained, and using it an expression for the variance in the case of an infinite number of linked QTLs marked by flanking markers is derived. The presented approach appears useful in segment mapping and in the genetic evaluation of crosses with general pedigrees in the population of concern. The calculation of the segregation variance through the current approach is illustrated numerically, using a small data set.
The quantitative modelling of human spatial habitability
Wise, J. A.
1985-01-01
A model for the quantitative assessment of human spatial habitability is presented in the space station context. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the uses of space as a medium of social communication).
Directory of Open Access Journals (Sweden)
Shetty Vivekananda
2012-08-01
Background: In approximately 80% of patients, ovarian cancer is diagnosed when the patient is already in the advanced stages of the disease. CA125 is currently used as the marker for ovarian cancer; however, it lacks specificity and sensitivity for detecting early stage disease. There is a critical unmet need for sensitive and specific routine screening tests for early diagnosis that can reduce ovarian cancer lethality by reliably detecting the disease at its earliest and treatable stages. Results: In this study, we investigated the N-linked sialylated glycopeptides in serum samples from healthy and ovarian cancer patients using Lectin-directed Tandem Labeling (LTL) and iTRAQ quantitative proteomics methods. We identified 45 N-linked sialylated glycopeptides containing 46 glycosylation sites. Among those, ten sialylated glycopeptides were significantly up-regulated in ovarian cancer patients' serum samples. LC-MS/MS analysis of the non-glycosylated peptides from the same samples, western blot data using lectin-enriched glycoproteins of various ovarian cancer type samples, and PNGase F (+/−) treatment confirmed the sialylation changes in the ovarian cancer samples. Conclusion: Herein, we demonstrated that several proteins are aberrantly sialylated in N-linked glycopeptides in ovarian cancer, and detection of glycopeptides with abnormal sialylation changes may have the potential to serve as biomarkers for ovarian cancer.
Klump, J. F.; Huber, R.; Robertson, J.; Cox, S. J. D.; Woodcock, R.
2014-12-01
Despite the recent explosion of quantitative geological data, geology remains a fundamentally qualitative science. Numerical data constitute only a part of data collection in the geosciences. In many cases, geological observations are compiled as text into reports and annotations on drill cores, thin sections or drawings of outcrops. The observations are classified into concepts such as lithology, stratigraphy, geological structure, etc. These descriptions are semantically rich and are generally supported by more quantitative observations using geochemical analyses, XRD, hyperspectral scanning, etc., but the goal is geological semantics. In practice it has been difficult to bring the different observations together due to differing perception or granularity of classification in human observation, or the partial observation of only some characteristics by quantitative sensors. In past years, many geological classification schemas have been transferred into ontologies and vocabularies, formalized using RDF and OWL, and published through SPARQL endpoints. Several lithological ontologies were compiled by stratigraphy.net and published through a SPARQL endpoint. This work is complemented by the development of a Python API to integrate this vocabulary into Python-based text mining applications. The applications for the lithological vocabulary and Python API are automated semantic tagging of geochemical data and descriptions of drill cores, machine learning of geochemical compositions that are diagnostic for lithological classifications, and text mining for lithological concepts in reports and geological literature. This combination of applications can be used to identify anomalies in databases where composition and lithological classification do not match. It can also be used to identify lithological concepts in the literature and infer quantitative values. The resulting semantic tagging opens new possibilities for linking these diverse sources of data.
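The semantic-tagging step described above amounts to matching controlled-vocabulary terms in free text and attaching concept identifiers. A minimal sketch follows; the four-term vocabulary and the `lith:` URIs are invented stand-ins, where a real application would pull terms from the stratigraphy.net SPARQL endpoint.

```python
import re

# hypothetical mini-vocabulary; a real application would pull terms and
# concept URIs from the stratigraphy.net SPARQL endpoint
LITHOLOGY = {"basalt": "lith:basalt", "granite": "lith:granite",
             "sandstone": "lith:sandstone", "shale": "lith:shale"}

def tag_lithology(text):
    """Return (term, concept URI) pairs for vocabulary terms found in text."""
    pattern = re.compile(r"\b(" + "|".join(LITHOLOGY) + r")s?\b", re.IGNORECASE)
    return [(m.group(1).lower(), LITHOLOGY[m.group(1).lower()])
            for m in pattern.finditer(text)]

report = "Core 12 shows weathered Granite overlain by sandstones and shale."
print(tag_lithology(report))
```

Once descriptions carry concept URIs, mismatches between tagged lithology and measured geochemistry can be queried directly, which is the anomaly-detection use the abstract mentions.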
A Bayesian model for estimating population means using a link-tracing sampling design.
St Clair, Katherine; O'Connell, Daniel
2012-03-01
Link-tracing sampling designs can be used to study human populations that contain "hidden" groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197-205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link-tracing designs. We propose an addition to their model that will allow for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at-risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied.
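The mechanics of a link-tracing design can be illustrated with a small simulation: an initial simple random sample, then links traced from any hidden-domain members selected, which raises the sampling intensity in the hidden domain. This is a toy sketch (population size, domain fraction, and link counts are all invented), not the Bayesian estimator of the paper.

```python
import random

def link_tracing_sample(population, hidden, links, n, rng):
    """One wave of simple random sampling, then trace social links from any
    hidden-domain members selected, pulling more of the hidden domain in."""
    wave1 = set(rng.sample(sorted(population), n))
    traced = set()
    for unit in wave1 & hidden:
        traced.update(links.get(unit, []))
    return wave1, wave1 | traced

rng = random.Random(7)
population = set(range(1000))
hidden = set(rng.sample(sorted(population), 100))     # 10% "hidden" domain
# hidden members tend to be linked to other hidden members
links = {u: rng.sample(sorted(hidden - {u}), 3) for u in hidden}

wave1, final = link_tracing_sample(population, hidden, links, 80, rng)
print(len(wave1 & hidden), len(final & hidden))
```

By construction the traced units all belong to the hidden domain, so the final sample can only gain hidden-domain members relative to the initial wave; model-based estimation then has to correct for exactly this over-representation.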
Linking advanced fracture models to structural analysis
Energy Technology Data Exchange (ETDEWEB)
Chiesa, Matteo
2001-07-01
Shell structures with defects occur in many situations. The defects are usually introduced during the welding process necessary for joining different parts of the structure. Higher utilization of structural materials leads to a need for accurate numerical tools for reliable prediction of structural response. Direct discretization of a cracked shell structure with solid finite elements, in order to perform an integrity assessment of the structure in question, leads to large problems and makes such analyses infeasible in structural applications. In this study a link between local material models and structural analysis is outlined. An "ad hoc" element formulation is used in order to connect complex material models to the finite element framework used for structural analysis. An improved elasto-plastic line spring finite element formulation, used to take cracks into account, is linked to shell elements, which are further linked to beam elements. In this way one obtains a global model of the shell structure that also accounts for local flexibilities and fractures due to defects. An important advantage of such an approach is a direct fracture mechanics assessment, e.g. via the computed J-integral or CTOD. A recent development in this approach is the notion of two-parameter fracture assessment, in which the crack tip stress triaxiality (constraint) is employed in determining the corresponding fracture toughness, giving a much more realistic capacity of cracked structures. The present thesis is organized in six research articles and an introductory chapter that reviews important background literature related to this work. Papers I and II address the performance of shell and line spring finite elements as a cost-effective tool for performing the numerical calculations needed for a fracture assessment. In Paper II a failure assessment, based on the testing of a constraint-corrected fracture mechanics specimen under tension, is…
Linking agent-based models and stochastic models of financial markets.
Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H Eugene
2012-05-29
It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that "fat" tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting.
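The paper's central claim, that fat tails arise when traders share the same strategy, can be reproduced with a deliberately minimal herding sketch: most periods, agents trade independently and the aggregate return is small; occasionally they all copy one shared signal and the return is large. This is an illustration of the mechanism, not the authors' model; all parameters (100 agents, 5% herding probability) are invented.

```python
import random
from statistics import mean

def excess_kurtosis(xs):
    """Sample excess kurtosis; 0 for a Gaussian, > 0 for fat tails."""
    m = mean(xs)
    var = mean([(x - m) ** 2 for x in xs])
    m4 = mean([(x - m) ** 4 for x in xs])
    return m4 / var ** 2 - 3.0

def herding_returns(steps, n_agents, herd_prob, rng):
    """Each step, with probability herd_prob every agent copies one shared
    signal (a shared technical strategy); otherwise agents trade
    independently. Aggregate demand is taken as the period return."""
    returns = []
    for _ in range(steps):
        if rng.random() < herd_prob:
            trades = [rng.choice([-1, 1])] * n_agents   # herding -> large move
        else:
            trades = [rng.choice([-1, 1]) for _ in range(n_agents)]
        returns.append(sum(trades) / n_agents)
    return returns

rng = random.Random(42)
rets = herding_returns(5000, 100, 0.05, rng)
print(round(excess_kurtosis(rets), 1))
```

With `herd_prob = 0` the returns are near-Gaussian sums of independent trades; turning herding on mixes in rare large moves, which is exactly what inflates the fourth moment.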
Toward quantitative modeling of silicon phononic thermocrystals
Energy Technology Data Exchange (ETDEWEB)
Lacatena, V. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France); IEMN UMR CNRS 8520, Institut d'Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d'Ascq (France)]; Haras, M.; Robillard, J.-F., E-mail: jean-francois.robillard@isen.iemn.univ-lille1.fr; Dubois, E. [IEMN UMR CNRS 8520, Institut d'Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d'Ascq (France)]; Monfray, S.; Skotnicki, T. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France)]
2015-03-16
The wealth of technological patterning technologies of deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of 'thermocrystals' or 'nanophononic crystals' that introduce regular nano-scale inclusions using a pitch scale in between the thermal phonons mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced down to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known “electron crystal-phonon glass” dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution from bulk, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After having discussed the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.
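The Green-Kubo step mentioned above obtains the thermal conductivity as the time integral of the heat-flux autocorrelation function, κ = V/(k_B T²) ∫₀^∞ ⟨J(0)·J(t)⟩ dt. The sketch below applies that recipe to a synthetic, exponentially correlated flux signal rather than real molecular-dynamics output; the AR(1) stand-in, the unit prefactor, and the lag cutoff are all assumptions for illustration.

```python
import random
from statistics import mean

def acf(xs, lag):
    """Autocovariance of a series at a given lag."""
    m = mean(xs)
    return mean([(xs[t] - m) * (xs[t + lag] - m) for t in range(len(xs) - lag)])

def green_kubo(flux, dt, max_lag, prefactor=1.0):
    """kappa = prefactor * integral of <J(0)J(t)> dt (trapezoidal rule);
    prefactor stands in for V/(k_B T^2)."""
    c = [acf(flux, k) for k in range(max_lag + 1)]
    integral = dt * (c[0] / 2 + sum(c[1:-1]) + c[-1] / 2)
    return prefactor * integral

# synthetic, exponentially correlated heat-flux signal (AR(1) stand-in
# for what an MD run would produce)
rng = random.Random(0)
flux, j = [], 0.0
for _ in range(20000):
    j = 0.9 * j + rng.gauss(0.0, 1.0)
    flux.append(j)
print(green_kubo(flux, dt=1.0, max_lag=200))
```

In practice the integral is truncated once the autocorrelation has decayed into noise; choosing that cutoff is one of the main sources of uncertainty in Green-Kubo estimates.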
Rovatsos, Michail; Altmanová, Marie; Pokorná, Martina Johnson; Kratochvíl, Lukáš
2014-08-28
The green anole, Anolis carolinensis (ACA), is the model reptile for a vast array of biological disciplines. It was the first nonavian reptile to have its genome fully sequenced. During the genome project, the XX/XY system of sex chromosomes homologous to chicken chromosome 15 (GGA15) was revealed, and 106 X-linked genes were identified. We selected 38 genes located on eight scaffolds in ACA and having orthologs located on GGA15, then tested their linkage to ACA X chromosome by using comparative quantitative fluorescent real-time polymerase chain reaction applied to male and female genomic DNA. All tested genes appeared to be X-specific and not present on the Y chromosome. Assuming that all genes located on these scaffolds should be localized to the ACA X chromosome, we more than doubled the number of known X-linked genes in ACA, from 106 to 250. While demonstrating that the gene content of chromosome X in ACA and GGA15 is largely conserved, we nevertheless showed that numerous interchromosomal rearrangements had occurred since the splitting of the chicken and anole evolutionary lineages. The presence of many ACA X-specific genes localized to distinct contigs indicates that the ACA Y chromosome should be highly degenerated, having lost a large amount of its original gene content during evolution. The identification of novel genes linked to the X chromosome and absent on the Y chromosome in the model lizard species contributes to ongoing research as to the evolution of sex determination in reptiles and provides important information for future comparative and functional genomics.
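The X-specificity test rests on gene dosage: an X-linked gene absent from the degenerated Y is present in one copy in males (XY) but two in females (XX), so the male/female relative dose from quantitative PCR should be about 0.5. A minimal sketch using the standard delta-delta-Ct transformation follows; the Ct values and the 0.15 tolerance are hypothetical, and the study's exact pipeline may differ.

```python
def relative_dose(ct_target_m, ct_ref_m, ct_target_f, ct_ref_f):
    """Male/female gene dose from qPCR Ct values via the delta-delta-Ct
    method; an autosomal reference gene normalizes DNA input."""
    ddct = (ct_target_m - ct_ref_m) - (ct_target_f - ct_ref_f)
    return 2 ** -ddct

def is_x_specific(dose, tol=0.15):
    """X-specific genes absent from Y: one male copy vs two female copies."""
    return abs(dose - 0.5) <= tol

# hypothetical Ct values: the target amplifies one cycle later in males
dose = relative_dose(26.0, 22.0, 25.0, 22.0)
print(dose, is_x_specific(dose))  # 0.5 True
```

A pseudoautosomal or Y-retained gene would instead give a ratio near 1.0 and fail the test.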
Modeling conflict : research methods, quantitative modeling, and lessons learned.
Energy Technology Data Exchange (ETDEWEB)
Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.
2004-09-01
This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be for inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.
High male chimerism in the female breast shows quantitative links with cancer.
Dhimolea, Eugen; Denes, Viktoria; Lakk, Monika; Al-Bazzaz, Sana; Aziz-Zaman, Sonya; Pilichowska, Monika; Geck, Peter
2013-08-15
Clinical observations suggest that pregnancy provides protection against cancer. The mechanisms involved, however, remain unclear. Fetal cells are known to enter the mother's circulation during pregnancy and establish microchimerism. We investigated whether pregnancy-related embryonic/fetal stem cell integration plays a role in breast cancer. A high-sensitivity Y-chromosome assay was developed to trace male allogeneic cells (from a male fetus) in females. Fixed-embedded samples (n = 206) from both normal and breast cancer patients were screened for microchimerism. The results were combined with matching clinicopathological and histological parameters and processed statistically. The results show that in our samples (182 informative), more than half of healthy women (56%) carried male cells in their breast tissue for decades (n = 68), while only one out of five did in the cancer sample pool (21%) (n = 114) (odds ratio = 4.75, 95% CI 2.34-9.69; p = 0.0001). The data support the notion that a biological link may exist between chimerism and tissue integrity. The correlation, however, is non-linear, since male microchimerism in excess ("hyperchimerism") is also involved in cancer. The data suggest a link between hyperchimerism and HER2-type cancers, while decreased chimerism ("hypochimerism") associates with ER/PR-positive (luminal-type) breast cancers. Chimerism levels that correlate with protection appear to be non-random and share densities with the mammary progenitor components of the stem cell lineage in the breast. The results suggest that protection may involve stem/progenitor-level interactions and implicate novel quantitative mechanisms in chimerism biology. Copyright © 2013 UICC.
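The reported odds ratio can be reconstructed from the abstract's percentages. A sketch with a Wald confidence interval follows; note the 2x2 counts (38/68 and 24/114) are back-calculated from the stated 56% and 21%, an inference since the abstract does not print the table itself.

```python
from math import exp, log, sqrt

def odds_ratio(a, b, c, d):
    """2x2 table: a/b = carriers/non-carriers in group 1, c/d in group 2.
    Returns the odds ratio with a 95% Wald confidence interval."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = exp(log(or_) - 1.96 * se)
    hi = exp(log(or_) + 1.96 * se)
    return or_, (lo, hi)

# counts back-calculated from the reported rates: 56% of 68 healthy and
# 21% of 114 cancer samples carried male cells
print(odds_ratio(38, 30, 24, 90))
```

These counts reproduce the published OR of 4.75 exactly; the Wald interval (about 2.46-9.17) differs slightly from the abstract's 2.34-9.69, which was presumably computed by another method.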
Qualitative vs. quantitative software process simulation modelling: conversion and comparison
Zhang, He; Kitchenham, Barbara; Jeffery, Ross
2009-01-01
Software Process Simulation Modeling (SPSM) research has increased in the past two decades. However, most of these models are quantitative, requiring detailed understanding and accurate measurement. Continuing our previous studies on qualitative modeling of software processes, this paper aims to investigate the structural equivalence and model conversion between quantitative and qualitative process modeling, and to compare the characteristics and performance o...
Quantitative modelling of the biomechanics of the avian syrinx
Elemans, C.P.H.; Larsen, O.N.; Hoffmann, M.R.; Leeuwen, van J.L.
2003-01-01
We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e. the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts
Quantitative modelling of the biomechanics of the avian syrinx
DEFF Research Database (Denmark)
Elemans, Coen P. H.; Larsen, Ole Næsbye; Hoffmann, Marc R.
2003-01-01
We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e. the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts...
Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization
Directory of Open Access Journals (Sweden)
Xuefeng Yan
2013-01-01
The simulation and optimization of an actual physical system are usually constructed based on stochastic models, which inherently have both qualitative and quantitative characteristics. Most modeling specifications and frameworks find it difficult to describe a qualitative model directly. In order to deal with expert knowledge, uncertain reasoning, and other qualitative information, a combined qualitative and quantitative modeling specification was proposed based on a hierarchical model structure framework comprising the meta-meta model, the meta-model and the high-level model. A description logic system is defined for formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The result shows that the proposed method can describe a complex system more comprehensively, and that the survival probability of the target is higher when qualitative models are introduced into quantitative simulation.
Cross-link guided molecular modeling with ROSETTA.
Directory of Open Access Journals (Sweden)
Abdullah Kahraman
Chemical cross-links identified by mass spectrometry generate distance restraints that reveal low-resolution structural information on proteins and protein complexes. The technology to reliably generate such data has become mature and robust enough to shift the focus to the question of how these distance restraints can best be integrated into molecular modeling calculations. Here, we introduce three workflows for incorporating distance restraints generated by chemical cross-linking and mass spectrometry into ROSETTA protocols for comparative and de novo modeling and protein-protein docking. We demonstrate that the cross-link validation and visualization software Xwalk facilitates successful cross-link data integration. Besides the protocols, we introduce XLdb, a database of chemical cross-links from 14 different publications with 506 intra-protein and 62 inter-protein cross-links, where each cross-link can be mapped onto an experimental structure from the Protein Data Bank. Finally, we demonstrate on a protein-protein docking reference data set the impact of virtual cross-links on protein docking calculations and show that an inter-protein cross-link can reduce the RMSD of a docking prediction by 5.0 Å on average. The methods and results presented here provide guidelines for the effective integration of chemical cross-link data in molecular modeling calculations and should advance the structural analysis of particularly large and transient protein complexes via hybrid structural biology methods.
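At its core, cross-link validation of a model is a distance check of the kind Xwalk performs: each identified cross-link implies an upper bound on the Calpha-Calpha distance between the linked residues. A minimal sketch follows; the coordinates are invented, and the 30 Å cutoff is a commonly used figure for DSS-type linkers rather than a universal constant (Xwalk itself scores solvent-accessible surface distance, not the straight line used here).

```python
from math import dist  # Python 3.8+

def violated_crosslinks(restraints, ca_coords, max_dist=30.0):
    """Return cross-links whose Calpha-Calpha distance in a model exceeds
    the linker-dependent cutoff (roughly 30 A is a common choice for
    DSS-type linkers)."""
    return [(i, j) for i, j in restraints
            if dist(ca_coords[i], ca_coords[j]) > max_dist]

# toy model: residue index -> Calpha coordinate (angstroms)
ca = {5: (0.0, 0.0, 0.0), 40: (10.0, 0.0, 0.0), 80: (45.0, 0.0, 0.0)}
print(violated_crosslinks([(5, 40), (5, 80)], ca))  # [(5, 80)]
```

In a docking or modeling run, candidate models violating many restraints are filtered out or penalized, which is how the cross-links steer the calculation.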
Loopholes and missing links in protein modeling.
Rossi, Karen A; Weigelt, Carolyn A; Nayeem, Akbar; Krystek, Stanley R
2007-09-01
This paper provides an unbiased comparison of four commercially available programs for loop sampling, Prime, Modeler, ICM, and Sybyl, each of which uses a different modeling protocol. The study assesses the quality of results and examines the relative strengths and weaknesses of each method. The set of loops to be modeled varied in length from 4 to 12 amino acids. The approaches used for loop modeling can be classified into two methodologies: ab initio loop generation (Modeler and Prime) and database searches (Sybyl and ICM). Comparison of the modeled loops to the native structures was used to determine the accuracy of each method. All of the protocols returned similar results for short loop lengths (four to six residues), but as loop length increased, the quality of the results varied among the programs. Prime generated loops with the lowest RMSDs among the modeled loops.
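The accuracy metric behind such comparisons is the RMSD between modeled and native loop coordinates. As a minimal sketch (with invented coordinates, and assuming the structures are already superposed, as they would be after fixing the loop stems):

```python
from math import sqrt

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between matched coordinate sets,
    assuming the structures are already superposed."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return sqrt(sq / len(coords_a))

# toy loop: the model sits 1 A above the native conformation
native = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
model = [(0.0, 0.0, 1.0), (1.5, 0.0, 1.0), (3.0, 0.0, 1.0)]
print(rmsd(native, model))  # 1.0
```

Loop benchmarks typically report this over backbone or Calpha atoms only, so longer loops accumulate error mainly through their central residues.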
Quantitative models for sustainable supply chain management
DEFF Research Database (Denmark)
Brandenburg, M.; Govindan, Kannan; Sarkis, J.
2014-01-01
Sustainability, the consideration of environmental factors and social aspects, in supply chain management (SCM) has become a highly relevant topic for researchers and practitioners. The application of operations research methods and related models, i.e. formal modeling, for closed-loop SCM and reverse logistics has been effectively reviewed in previously published research. This situation is in contrast to the understanding and review of mathematical models that focus on environmental or social factors in forward supply chains (SC), which has seen less investigation. To evaluate developments…
Modeling HVDC links in composite reliability evaluation: issues and solutions
Energy Technology Data Exchange (ETDEWEB)
Reis, Lineu B. de [Sao Paulo Univ., SP (Brazil). Escola Politecnica; Ramos, Dorel S. [Centrais Eletricas de Sao Paulo, SP (Brazil); Morozowski Filho, Marciano [Santa Catarina Univ., Florianopolis, SC (Brazil)
1992-12-31
This paper deals with theoretical and practical aspects of HVDC link modeling for composite (generation and transmission) system reliability evaluation purposes. The conceptual framework used in the analysis, as well as the practical aspects, are illustrated through an application example. Initially, two distinct HVDC link operation models are described: synchronous and asynchronous. An analysis of the most significant internal failure modes and their effects on HVDC link transmission capability is presented, and a reliability model is proposed. Finally, historical performance data of the Itaipu HVDC system are shown. 6 refs., 5 figs., 8 tabs.
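Reliability models of this kind ultimately reduce component behaviour to availability figures. The toy two-state calculation below illustrates the idea; the MTTF/MTTR numbers are hypothetical and are not Itaipu data, and the actual model in the paper is more detailed.

```python
def availability(mttf, mttr):
    """Steady-state availability of a repairable two-state component,
    from mean time to failure and mean time to repair (hours)."""
    return mttf / (mttf + mttr)

pole = availability(4000.0, 40.0)  # one converter pole
bipole_both = pole ** 2            # both poles simultaneously available
```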
Directory of Open Access Journals (Sweden)
Brunel Jean-Claude
2007-07-01
In this study, the potential association of PrP genotypes with health and productive traits was investigated. Data were recorded on animals of the INRA 401 breed from the Bourges-La Sapinière INRA experimental farm. The population consisted of 30 rams and 852 ewes, which produced 1310 lambs. The animals were categorized into three PrP genotype classes: ARR homozygous, ARR heterozygous, and animals without any ARR allele. Two analyses differing in the approach considered were carried out. Firstly, the potential association of the PrP genotype with disease (Salmonella resistance and production (wool and carcass traits was studied. The data used included 1042, 1043 and 1013 genotyped animals for the Salmonella resistance, wool and carcass traits, respectively. The different traits were analyzed using an animal model, where the PrP genotype effect was included as a fixed effect. Association analyses do not indicate any evidence of an effect of PrP genotypes on traits studied in this breed. Secondly, a quantitative trait loci (QTL) detection approach using the PRNP gene as a marker was applied on ovine chromosome 13. Interval mapping was used. Evidence for one QTL affecting mean fiber diameter was found at 25 cM from the PRNP gene. However, a linkage between PRNP and this QTL does not imply unfavorable linkage disequilibrium for PRNP selection purposes.
A VGI data integration framework based on linked data model
Wan, Lin; Ren, Rongrong
2015-12-01
This paper presents a geographic data integration and sharing method for multiple online VGI data sets. We propose a semantic-enabled framework for an online VGI-source cooperative application environment to solve a target class of geospatial problems. Based on linked data technologies, one of the core components of the semantic web, we construct relationship links among geographic features distributed across diverse VGI platforms using linked data modeling methods, then deploy these semantic-enabled entities on the web, eventually forming an interconnected geographic data network that supports cooperative geospatial applications across multiple VGI data sources. The mapping and transformation from VGI sources to the RDF linked data model is presented to guarantee a uniform data representation model among different online social geographic data sources. We propose a mixed strategy that combines spatial distance similarity and feature name attribute similarity as the measure for comparing and matching geographic features across VGI data sets. Our work also focuses on applying Markov logic networks to interlink the same entities across different VGI-based linked data sets. The automatic generation of a co-reference object identification model for geographic linked data is discussed in more detail. The result is a large geographic linked data network spanning loosely coupled VGI web sites. Experiments built on our framework and an evaluation of our method show that the framework is reasonable and practicable.
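The mixed matching strategy described above, spatial distance similarity combined with name attribute similarity, can be sketched as a weighted score. Everything below is a simplified illustration: the decay scale `d0`, the equal weighting, and the use of a generic string ratio are assumptions, not the paper's actual measure.

```python
import math
from difflib import SequenceMatcher

def match_score(f1, f2, d0=100.0, w_space=0.5):
    """Combined similarity of two point features: exponentially
    decaying spatial similarity (d0 is a hypothetical 100 m scale)
    mixed with a string similarity of the name attribute."""
    d = math.dist(f1["xy"], f2["xy"])
    s_space = math.exp(-d / d0)
    s_name = SequenceMatcher(None, f1["name"].lower(),
                             f2["name"].lower()).ratio()
    return w_space * s_space + (1 - w_space) * s_name

a = {"xy": (0.0, 0.0), "name": "Central Park"}
b = {"xy": (0.0, 0.0), "name": "Central Park"}
c = {"xy": (500.0, 0.0), "name": "Hyde Park"}
```

Identical features score 1.0; features far apart with dissimilar names score lower, and a threshold on the score decides whether two features are declared co-referent.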
Quantitative magnetospheric models: results and perspectives.
Kuznetsova, M.; Hesse, M.; Gombosi, T.; Csem Team
Global magnetospheric models are an indispensable tool that allows multi-point measurements to be put into global context. Significant progress has been achieved in global MHD modeling of magnetosphere structure and dynamics. Medium-resolution simulations confirm the general topological picture suggested by Dungey. State-of-the-art global models with adaptive grids allow simulations with a highly resolved magnetopause and magnetotail current sheet. Advanced high-resolution models are capable of reproducing transient phenomena, such as FTEs associated with the formation of flux ropes or plasma bubbles embedded in the magnetopause, and demonstrate the generation of vortices at the magnetospheric flanks. On the other hand, there is still controversy about the global state of the magnetosphere predicted by MHD models, to the point of questioning the length of the magnetotail and the location of the reconnection sites within it. For example, for steady southward IMF driving conditions, resistive MHD simulations produce a steady configuration with an almost stationary near-Earth neutral line, while there is plenty of observational evidence of a periodic loading-unloading cycle during long periods of southward IMF. Successes and challenges in global modeling of magnetospheric dynamics will be addressed. One of the major challenges is to quantify the interaction between large-scale global magnetospheric dynamics and microphysical processes in diffusion regions near reconnection sites. Possible solutions to these controversies will be discussed.
Sun, Huabing; Fan, Heli; Peng, Xiaohua
2014-12-01
The coumarin analogues have been widely utilized in medicine, biology, biochemistry, and material sciences. Here, we report a detailed study on the reactivity of coumarins toward DNA. A series of coumarin analogues were synthesized and incorporated into oligodeoxynucleotides. A photoinduced [2 + 2] cycloaddition occurs between the coumarin moiety and the thymidine upon 350 nm irradiation forming both syn- and anti-cyclobutane adducts (17 and 18), which are photoreversible by 254/350 nm irradiation in DNA. Quantitative DNA interstrand cross-link (ICL) formation was observed with the coumarin moieties containing a flexible two-carbon or longer chain. DNA cross-linking by coumarins shows a kinetic preference when flanked by an A:T base pair as opposed to a G:C pair. An efficient photoinduced electron transfer between coumarin and dG slows down ICL formation. ICL formation quenches the fluorescence of coumarin, which, for the first time, enables fast, easy, and real-time monitoring of DNA cross-linking and photoreversibility via fluorescence spectroscopy. It can be used to detect the transversion mutation between pyrimidines and purines. Overall, this work provides new insights into the biochemical properties and possible toxicity of coumarins. A quantitative, fluorescence-detectable, and photoswitchable DNA cross-linking reaction of the coumarin moieties can potentially serve as mechanistic probes and tools for bioresearch without disrupting native biological environment.
Hazard Response Modeling Uncertainty (A Quantitative Method)
1988-10-01
C_p is the concentration predicted by some component or model. The variance of C_o/C_p is calculated and defined as var(Model I), where Model I could be
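The var(C_o/C_p) statistic referred to above, the variance of the ratio of observed to predicted concentrations, can be sketched directly; the concentration values below are hypothetical.

```python
import statistics

observed = [1.2, 0.8, 1.1, 0.9]    # hypothetical measured concentrations
predicted = [1.0, 1.0, 1.0, 1.0]   # model predictions at the same points

ratios = [o / p for o, p in zip(observed, predicted)]
var_model = statistics.variance(ratios)  # sample variance of C_o/C_p
```

A larger var(C_o/C_p) means the model's predictions scatter more widely around the observations, i.e. greater hazard-response modeling uncertainty.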
The quantitative modelling of human spatial habitability
Wise, James A.
1988-01-01
A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to any size or shape interior, at any scale of consideration, for the Space Station as a whole to an individual enclosure or work station. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the U. of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life support concerns.
A Qualitative and Quantitative Evaluation of 8 Clear Sky Models.
Bruneton, Eric
2016-10-27
We provide a qualitative and quantitative evaluation of 8 clear sky models used in Computer Graphics. We compare the models with each other as well as with measurements and with a reference model from the physics community. After a short summary of the physics of the problem, we present the measurements and the reference model, and how we "invert" it to get the model parameters. We then give an overview of each CG model, and detail its scope, its algorithmic complexity, and its results using the same parameters as in the reference model. We also compare the models with a perceptual study. Our quantitative results confirm that the fewer the simplifications and approximations used to solve the physical equations, the more accurate the results. We conclude with a discussion of the advantages and drawbacks of each model, and how to further improve their accuracy.
Letort, Veronique; Cournède, Paul-Henry; De Reffye, Philippe; Courtois, Brigitte; 10.1093/aob/mcm197
2010-01-01
Background and Aims: Prediction of phenotypic traits from new genotypes under untested environmental conditions is crucial to build simulations of breeding strategies to improve target traits. Although the plant response to environmental stresses is characterized by both architectural and functional plasticity, recent attempts to integrate biological knowledge into genetics models have mainly concerned specific physiological processes or crop models without architecture, and thus may prove limited when studying genotype x environment interactions. Consequently, this paper presents a simulation study introducing genetics into a functional-structural growth model, which gives access to more fundamental traits for quantitative trait loci (QTL) detection and thus to promising tools for yield optimization. Methods: The GreenLab model was selected as a reasonable choice to link growth model parameters to QTL. Virtual genes and virtual chromosomes were defined to build a simple genetic model that drove the settings ...
Larimer, J. E.; Yanites, B.
2016-12-01
River morphology is a consequence of the erosive forces acting on the channel boundary and the resisting forces that limit erosion. For bedrock rivers, the erosive forces are generated by the stresses exerted by impacting sediment and flowing water, while the resisting forces are controlled by the internal strength regime of the local rock. We investigate the susceptibility of different rock types to different erosional processes (i.e. abrasion and plucking) and how changes in channel morphology reflect rock strength properties across lithologic boundaries. The bedrock rivers in the Prescott National Forest, AZ flow over a number of rock types with variable strength, including sedimentary, igneous, and metamorphic lithologies, providing a natural experiment to quantify the influence of rock strength on channel morphology. We collected bedrock samples and channel surveys from 12 different rock types. Rock-strength and rock-mass properties include compressive strength, tensile strength, fatigue strength, decimeter-scale P-wave velocity (varies by 8-fold), Schmidt rebound value, fracture spacing, fracture aperture, and slake durability (as a proxy for weathering susceptibility). Morphological measurements include channel width, channel steepness (varies by 10-fold), and grain size distribution. To distinguish between the major mechanisms of erosion, we measure the bedrock surface roughness factor at the centimeter scale. Preliminary results show that channel steepness (ksn) increases with P-wave velocity, while normalized channel width (kwn) decreases with P-wave velocity. We use these data to quantify scaling relationships of channel geometry with rock strength properties. We consider the results in the context of the driving mechanistic process to develop new quantitative understandings of how rock strength properties influence the efficiency of erosion processes and how rock strength is reflected in river morphology. By comparing the results among different rock types in a
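Scaling relationships like ksn versus P-wave velocity are usually quantified as the slope of a log-log regression. A sketch with fabricated illustrative data (a perfect ksn proportional to Vp relation, not the study's measurements):

```python
import math

# Hypothetical (P-wave velocity [km/s], channel steepness ksn) pairs
vp = [1.0, 2.0, 4.0, 8.0]
ksn = [10.0, 20.0, 40.0, 80.0]  # exact ksn ~ Vp^1 for illustration

# Least-squares slope in log-log space gives the scaling exponent
x = [math.log(v) for v in vp]
y = [math.log(k) for k in ksn]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
exponent = sum((a - mx) * (b - my) for a, b in zip(x, y)) \
           / sum((a - mx) ** 2 for a in x)
```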
Scalable Text and Link Analysis with Mixed-Topic Link Models
Zhu, Yaojia; Getoor, Lise; Moore, Cristopher
2013-01-01
Many data sets contain rich information about objects, as well as pairwise relations between them. For instance, in networks of websites, scientific papers, and other documents, each node has content consisting of a collection of words, as well as hyperlinks or citations to other nodes. In order to perform inference on such data sets, and make predictions and recommendations, it is useful to have models that are able to capture the processes which generate the text at each node and the links between them. In this paper, we combine classic ideas in topic modeling with a variant of the mixed-membership block model recently developed in the statistical physics community. The resulting model has the advantage that its parameters, including the mixture of topics of each document and the resulting overlapping communities, can be inferred with a simple and scalable expectation-maximization algorithm. We test our model on three data sets, performing unsupervised topic classification and link prediction. For both task...
A Solved Model to Show Insufficiency of Quantitative Adiabatic Condition
Institute of Scientific and Technical Information of China (English)
LIU Long-Jiang; LIU Yu-Zhen; TONG Dian-Min
2009-01-01
The adiabatic theorem is a useful tool for treating slowly evolving quantum systems, but its practical application depends on the quantitative condition expressed in terms of the Hamiltonian's eigenvalues and eigenstates, which is usually taken as a sufficient condition. Recently, the sufficiency of this condition was questioned, and several counterexamples have been reported. Here we present a new solved model to show the insufficiency of the traditional quantitative adiabatic condition.
Extended model of restricted beam for FSO links
Poliak, Juraj; Wilfert, Otakar
2012-10-01
Modern wireless optical communication systems in many aspects surpass wire or radio communications. Their advantages are license-free operation and the broad bandwidth they offer. The medium in free-space optical (FSO) links is the atmosphere. Operation of outdoor FSO links struggles with many atmospheric phenomena that deteriorate the phase and amplitude of the transmitted optical beam. This beam originates in the transmitter and is affected by its individual parts, especially by the lens socket and the transmitter aperture, where attenuation and diffraction effects take place. Both of these phenomena unfavourably influence the beam and cause degradation of link availability, or its total malfunction. Therefore, both phenomena should be modelled and simulated, so that one can judge the link function prior to the realization of the system. Not only link availability and reliability are concerned, but also economic aspects. In addition, the transmitted beam is not, generally speaking, circularly symmetrical, which makes the link simulation more difficult. In a comprehensive model, it is necessary to take into account the ellipticity of the beam, which is restricted by a circularly symmetrical aperture where the attenuation and diffraction then occur. The general model is too computationally extensive; therefore, simplification of the calculations by means of analytical and numerical approaches will be discussed. The presented model is not only simulated on a computer but also experimentally verified. One can then judge the ability of the model to describe reality and estimate how far one can go with approximations, i.e. the limitations of the model are discussed.
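For the simplest special case, a circularly symmetric Gaussian beam truncated by a centred circular aperture (the general elliptical case discussed above has no such simple form), the transmitted power fraction has the closed form 1 - exp(-2a^2/w^2). A sketch with hypothetical radii:

```python
import math

def gaussian_aperture_transmission(a, w):
    """Fraction of a centred circular Gaussian beam's power passed by
    a circular aperture of radius a, for beam 1/e^2 radius w."""
    return 1.0 - math.exp(-2.0 * a * a / (w * w))

# Aperture radius equal to the beam radius passes about 86.5% of
# the power; the truncated 13.5% contributes to diffraction effects.
T = gaussian_aperture_transmission(0.05, 0.05)
```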
Quantitative sociodynamics stochastic methods and models of social interaction processes
Helbing, Dirk
1995-01-01
Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...
Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes
Helbing, Dirk
2010-01-01
This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...
Linking spatial and dynamic models for traffic maneuvers
DEFF Research Database (Denmark)
Olderog, Ernst-Rüdiger; Ravn, Anders Peter; Wisniewski, Rafal
2015-01-01
For traffic maneuvers of multiple vehicles on highways we build an abstract spatial and a concrete dynamic model. In the spatial model we show the safety (collision freedom) of lane-change maneuvers. By linking the spatial and dynamic model via suitable refinements of the spatial atoms to distance...
Quach, Thien; Tippens, Melissa; Szlam, Fania; Van Dyke, Rebecca; Levy, Jerrold H; Csete, Marie
2004-01-01
Analysis of the effectiveness of antifibrinolytic therapy for liver transplant recipients is hampered by lack of quantitative assays for assessing drug effects. We adapted chemical engineering tools used in polymerization studies to quantify fibrinogen cross-linking by plasma from liver transplant patients obtained before and after epsilon aminocaproic acid (EACA) therapy. A target fluorescein isothiocyanate-fibrinogen (FITC-fibrinogen) molecule was constructed; it fluoresces in a quantifiable pattern when in solution, and undergoes cross-linking in the presence of plasmin inhibitors. Cross-linking quenches the fluorescent signal, and the quenching is a quantifiable endpoint. Thus fluorescence from this reporter molecule can be used to assess functional improvement in fibrinogen cross-linking as a result of antifibrinolytic therapies, and it is sensitive to picomolar amounts of plasmin inhibitors and activators. Cross-linking of FITC-fibrinogen by patient plasma, before and after EACA therapy, was assessed using fluorescence spectrometry. Fluorescence patterns from FITC-fibrinogen indicated no significant cross-linking of the target fibrinogen as a consequence of EACA in posttreatment plasma. When the fibrinogen-FITC target was assayed without plasma in the presence of EACA at concentrations that bracket therapeutic levels (100 and 400 microg/ml), significant fluorescence quenching (target FITC-fibrinogen cross-linking) was achieved. These results suggest that fibrinogen-FITC fluorescence is sensitive enough to detect EACA activity in clinically relevant ranges, but that EACA given in usual doses is insufficient to promote fibrinogen cross-linking in patients with end-stage liver disease.
Modeling quantitative phase image formation under tilted illuminations.
Bon, Pierre; Wattellier, Benoit; Monneret, Serge
2012-05-15
A generalized product-of-convolution model for simulation of quantitative phase microscopy of thick heterogeneous specimen under tilted plane-wave illumination is presented. Actual simulations are checked against a much more time-consuming commercial finite-difference time-domain method. Then modeled data are compared with experimental measurements that were made with a quadriwave lateral shearing interferometer.
A general route diversity model for convergent terrestrial microwave links
Paulson, Kevin S.; Usman, Isa S.; Watson, Robert J.
2006-06-01
This research examines route diversity as a fade mitigation technique in the presence of rain for convergent, terrestrial, microwave links. A general model is derived which predicts the joint distribution of rain attenuation on arbitrary pairs of convergent microwave links, directly from the link parameters. It is assumed that pairs of links have joint rain attenuation distributions that are bilognormally distributed. Four of the five distribution parameters can be estimated from International Telecommunication Union recommendation models. A maximum likelihood estimation method was used in a previous paper to estimate the fifth parameter, that is, the covariance or correlation. In this paper an empirical model is reported, linking the correlation of log rain fade with the geometry and radio parameters of the pair of links. From these distributions, the advantage due to route diversity may be calculated for arbitrary fade margins. Furthermore, the predicted diversity statistics vary smoothly and yield plausible extrapolations into low-probability scenarios. Diversity improvement is calculated for a set of example link scenarios.
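The bilognormal assumption above can be illustrated by Monte Carlo: draw correlated log-normal fades on the two links and compare the single-link outage probability with the probability that both links fade simultaneously. All parameter values below (mu, sigma, rho, margin) are hypothetical, not fitted ITU values.

```python
import math
import random

random.seed(0)

def joint_outage(mu, sigma, rho, margin_db, n=200_000):
    """Monte Carlo estimate of single-link and joint (both links
    faded beyond the margin) outage probabilities for bilognormally
    distributed rain attenuation with log-fade correlation rho."""
    single = joint = 0
    for _ in range(n):
        z1 = random.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * random.gauss(0, 1)
        a1 = math.exp(mu + sigma * z1)  # fade on link 1 (dB)
        a2 = math.exp(mu + sigma * z2)  # fade on link 2 (dB)
        single += a1 > margin_db
        joint += (a1 > margin_db) and (a2 > margin_db)
    return single / n, joint / n

p1, p12 = joint_outage(mu=0.0, sigma=1.0, rho=0.5, margin_db=10.0)
# route diversity advantage: both-links-faded probability p12 is
# well below the single-link outage probability p1
```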
Generalized PSF modeling for optimized quantitation in PET imaging
Ashrafinia, Saeed; Mohy-ud-Din, Hassan; Karakatsanis, Nicolas A.; Jha, Abhinav K.; Casey, Michael E.; Kadrmas, Dan J.; Rahmim, Arman
2017-06-01
Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution-degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying modelled PSF kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF
Analysis of sensory ratings data with cumulative link models
DEFF Research Database (Denmark)
Christensen, Rune Haubo Bojesen; Brockhoff, Per B.
2013-01-01
Examples of categorical rating scales include discrete preference, liking and hedonic rating scales. Data obtained on these scales are often analyzed with normal linear regression methods or with omnibus Pearson chi2 tests. In this paper we propose to use cumulative link models that allow...... for regression methods similar to linear models while respecting the categorical nature of the observations. We describe how cumulative link models are related to the omnibus chi2 tests and how they can lead to more powerful tests in the non-replicated setting. For replicated categorical ratings data we present...... a quasi-likelihood approach and a mixed effects approach both being extensions of cumulative link models. We contrast population-average and subject-specific interpretations based on these models and discuss how different approaches lead to different tests. In replicated settings, naive tests that ignore...
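A cumulative link model as described above assigns category probabilities via P(Y <= j) = logistic(theta_j - eta), with ordered thresholds theta_j and a linear predictor eta. A minimal sketch with hypothetical thresholds (a hedonic scale collapsed to three categories):

```python
import math

def cumulative_link_probs(thresholds, eta):
    """Category probabilities under a cumulative logit (proportional
    odds) model: P(Y <= j) = logistic(theta_j - eta), with the last
    cumulative probability fixed at 1."""
    logistic = lambda x: 1.0 / (1.0 + math.exp(-x))
    cum = [logistic(t - eta) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Hypothetical thresholds for 3 ordered categories; eta = 0 is a
# neutral product, so the distribution is symmetric about the middle
p = cumulative_link_probs([-1.0, 1.0], eta=0.0)
```

Shifting eta (e.g. a product effect) moves probability mass toward higher or lower categories while respecting the ordering, which is exactly what a normal linear model on the raw scores cannot guarantee.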
Refining the quantitative pathway of the Pathways to Mathematics model.
Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda
2015-03-01
In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task. Copyright © 2014 Elsevier Inc. All rights reserved.
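Combining several correlated task scores into one composite via principal components analysis, as done above for the quantitative pathway, amounts to projecting standardized scores onto the leading eigenvector of their correlation matrix. The sketch below uses simulated scores (the N = 141 matches the study's sample size, but the data and correlation structure are fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 141  # sample size reported in the study

# Hypothetical scores on three correlated quantitative tasks
# (subitizing, counting, magnitude comparison), sharing one latent skill
latent = rng.normal(size=n)
scores = np.column_stack([latent + 0.5 * rng.normal(size=n)
                          for _ in range(3)])
scores = (scores - scores.mean(0)) / scores.std(0)  # standardize

# First principal component of the correlation matrix serves as the
# composite "quantitative pathway" score
evals, evecs = np.linalg.eigh(np.corrcoef(scores, rowvar=False))
pc1 = scores @ evecs[:, -1]
share = evals[-1] / evals.sum()  # variance explained by the composite
```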
Boersma, Kate S; Dee, Laura E; Miller, Steve J; Bogan, Michael T; Lytle, David A; Gitelman, Alix I
2016-03-01
Functional trait analysis is an appealing approach to study differences among biological communities because traits determine species' responses to the environment and their impacts on ecosystem functioning. Despite a rapidly expanding quantitative literature, it remains challenging to conceptualize concurrent changes in multiple trait dimensions ("trait space") and select quantitative functional diversity methods to test hypotheses prior to analysis. To address this need, we present a widely applicable framework for visualizing ecological phenomena in trait space to guide the selection, application, and interpretation of quantitative functional diversity methods. We describe five hypotheses that represent general patterns of responses to disturbance in functional community ecology and then apply a formal decision process to determine appropriate quantitative methods to test ecological hypotheses. As a part of this process, we devise a new statistical approach to test for functional turnover among communities. Our combination of hypotheses and metrics can be applied broadly to address ecological questions across a range of systems and study designs. We illustrate the framework with a case study of disturbance in freshwater communities. This hypothesis-driven approach will increase the rigor and transparency of applied functional trait studies.
Amplification efficiency: linking baseline and bias in the analysis of quantitative PCR data
Ruijter, J.M.; Ramakers, C.; Hoogaars, W.M.H.; Karlen, Y.; Bakker, O.; van den Hoff, M.J.B.; Moorman, A.F.M.
2009-01-01
Despite the central role of quantitative PCR (qPCR) in the quantification of mRNA transcripts, most analyses of qPCR data are still delegated to the software that comes with the qPCR apparatus. This is especially true for the handling of the fluorescence baseline. This article shows that baseline es
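Amplification efficiency is conventionally estimated as the slope of log(baseline-corrected fluorescence) versus cycle number in the exponential phase. A sketch with fabricated, perfectly doubling data (so the recovered efficiency is exactly 2); real data would first require the baseline subtraction the abstract is concerned with:

```python
import math

# Hypothetical baseline-corrected fluorescence in the exponential
# phase, one reading per cycle (perfect doubling each cycle)
cycles = [15, 16, 17, 18, 19]
fluor = [0.01 * 2 ** (c - 15) for c in cycles]

# Least-squares slope of log-fluorescence vs cycle gives log(E)
n = len(cycles)
mx = sum(cycles) / n
my = sum(math.log(f) for f in fluor) / n
slope = sum((c - mx) * (math.log(f) - my)
            for c, f in zip(cycles, fluor)) \
        / sum((c - mx) ** 2 for c in cycles)
efficiency = math.exp(slope)  # 2.0 means perfect doubling per cycle
```

An incorrectly set baseline biases this slope, and hence the efficiency and all downstream quantification, which is the bias-baseline link the title refers to.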
Fine mapping of quantitative trait loci using linkage disequilibria with closely linked marker loci
Meuwissen, T.H.E.; Goddard, M.E.
2000-01-01
A multimarker linkage disequilibrium mapping method was developed for the fine mapping of quantitative trait loci (QTL) using a dense marker map. The method compares the expected covariances between haplotype effects given a postulated QTL position to the covariances that are found in the data. The
Ghekiere, An; Fenske, Martina; Verslycke, Tim; Tyler, Charles; Janssen, Colin
2005-09-01
Mysid crustaceans have been put forward by several regulatory bodies as suitable test organisms to screen and test the potential effects of environmental endocrine disruptors. Despite the well-established use of mysid reproductive endpoints such as fecundity, egg development time, and time to first brood release in standard toxicity testing, little information exists on the hormonal regulation of these processes. Control of vitellogenesis is being studied intensively because yolk is an excellent model for studying mechanisms of hormonal control, and vitellogenesis can be chemically disrupted. Yolk protein, or vitellin, is a major source of nourishment during embryonic development of egg-laying invertebrates. The accumulation of vitellin during oocyte development is vital for the production of viable offspring. In this context, we developed a competitive enzyme-linked immunosorbent assay (ELISA) for vitellin of the estuarine mysid Neomysis integer. Mysid vitellin was isolated using gel filtration, and the purified vitellin was used to raise polyclonal antibodies. The ELISA was sensitive within a working range of 4 to 500 ng vitellin/mL. Serial dilutions of whole body homogenates from female N. integer and the vitellin standard showed parallel binding curves, validating the specificity of the ELISA. The intra- and interassay coefficients of variation were 8.2% and 13.8%, respectively. Mysid vitellin concentrations were determined from ovigerous females and eggs at different developmental stages. The availability of a quantitative mysid vitellin ELISA should stimulate further studies on the basic biology of this process in mysids. Furthermore, it could provide a means to better understand and predict chemically induced reproductive effects in mysids.
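The intra-assay coefficient of variation quoted above is simply the relative standard deviation of replicate readings. A sketch with hypothetical replicate vitellin measurements (not the study's data):

```python
import statistics

# Hypothetical replicate vitellin readings (ng/mL) for one sample
replicates = [102.0, 95.0, 98.0, 105.0]

mean = statistics.mean(replicates)
cv_percent = 100.0 * statistics.stdev(replicates) / mean
```

CVs in the single digits, like the 8.2% intra-assay value reported, indicate that replicate wells of the same sample agree closely.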
Model and Implementation of Communication Link Management Supporting High Availability
Institute of Scientific and Technical Information of China (English)
Luo Juan; Cao Yang; He Zheng; Li Feng
2004-01-01
Despite the rapid evolution of computer technology, both hardware and software remain prone to numerous failure conditions. In this paper, we analyze the failure characteristics of computer systems and the methods of constructing them, and propose a communication link management model supporting high availability for network applications, which greatly increases the availability of those applications. We then elaborate on the model's heartbeat-based service detection, fail-over, service take-over, switch-back, and error recovery processes. In constructing the communication link, we implement link management and service take-over under high-availability requirements, discuss the states and state transitions involved in building the communication link between hosts, and depict the message transfers and timer start-up. Finally, we apply the designed high-availability system to a network billing system and show how the system was constructed and implemented to satisfy the system requirements.
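A minimal sketch of the heartbeat-based failure detection that drives fail-over in such a scheme (the `HeartbeatMonitor` class and the 3 s timeout are illustrative assumptions, not the paper's implementation):

```python
import time

class HeartbeatMonitor:
    """Declare a peer failed when no heartbeat arrives within `timeout` seconds."""
    def __init__(self, timeout=3.0):
        self.timeout = timeout
        self.last_beat = {}   # host -> timestamp of its last heartbeat

    def beat(self, host, now=None):
        self.last_beat[host] = time.time() if now is None else now

    def failed_hosts(self, now=None):
        now = time.time() if now is None else now
        return [h for h, t in self.last_beat.items() if now - t > self.timeout]

# A primary that misses heartbeats is the trigger for take-over by a standby.
mon = HeartbeatMonitor(timeout=3.0)
mon.beat("primary", now=100.0)
mon.beat("standby", now=102.0)
print(mon.failed_hosts(now=104.5))   # primary silent for 4.5 s -> ["primary"]
```

In a real deployment the timestamps would come from UDP heartbeat packets and a failed primary would start the service take-over and switch-back sequence.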
Modeling of Atmospheric Turbulence Effect on Terrestrial FSO Link
Directory of Open Access Journals (Sweden)
A. Prokes
2009-04-01
Atmospheric turbulence produces many effects that cause fluctuations in the received optical power, and terrestrial laser-beam communication is affected above all by scintillation. The paper deals with modeling the influence of scintillation on link performance using the modified Rytov theory. The probability of correct signal detection in a direct-detection system is discussed as a function of parameters such as link distance, power link margin, and the refractive-index structure parameter, and different approaches to evaluating the scintillation effect are compared. The simulations are performed for horizontal-path propagation of a Gaussian-beam wave.
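For orientation, the weak-fluctuation plane-wave Rytov variance that such scintillation analyses start from can be computed as below; this is a simplified stand-in for the modified Rytov theory and Gaussian-beam treatment of the paper, and the parameter values are assumptions:

```python
import math

def rytov_variance(cn2, wavelength_m, path_m):
    """Plane-wave Rytov variance sigma_R^2 = 1.23 * Cn^2 * k^(7/6) * L^(11/6),
    valid for weak fluctuations on a horizontal path with constant Cn^2."""
    k = 2.0 * math.pi / wavelength_m          # optical wavenumber
    return 1.23 * cn2 * k ** (7.0 / 6.0) * path_m ** (11.0 / 6.0)

# Example: moderate turbulence (Cn^2 = 1e-14 m^(-2/3)), 1550 nm link over 2 km.
sigma2 = rytov_variance(cn2=1e-14, wavelength_m=1550e-9, path_m=2000.0)
print(round(sigma2, 3))
```

In the weak regime the scintillation index is approximately the Rytov variance; values approaching 1 signal the onset of moderate-to-strong fluctuations where the modified theory becomes necessary.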
Port-Based Modeling of a Flexible Link
Macchelli, A.; Melchiorri, C.; Stramigioli, S.
2007-01-01
In this paper, a simple way to model flexible robotic links is presented. It differs from classical approaches and from the Euler–Bernoulli and Timoshenko theories in that the proposed model is able to describe large deflections in 3-D space and does not rely on any finite-dimensional approximation.
Linking knowledge and action through mental models of sustainable agriculture.
Hoffman, Matthew; Lubell, Mark; Hillis, Vicken
2014-09-01
Linking knowledge to action requires understanding how decision-makers conceptualize sustainability. This paper empirically analyzes farmer "mental models" of sustainability from three winegrape-growing regions of California where local extension programs have focused on sustainable agriculture. The mental models are represented as networks where sustainability concepts are nodes, and links are established when a farmer mentions two concepts in their stated definition of sustainability. The results suggest that winegrape grower mental models of sustainability are hierarchically structured, relatively similar across regions, and strongly linked to participation in extension programs and adoption of sustainable farm practices. We discuss the implications of our findings for the debate over the meaning of sustainability, and the role of local extension programs in managing knowledge systems.
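The co-mention construction of such a mental-model network can be sketched as follows (the concept lists are hypothetical stand-ins, not the study's survey data):

```python
from itertools import combinations

def mental_model_network(definitions):
    """Build a concept network: nodes are sustainability concepts, and an
    undirected link is added whenever a farmer mentions two concepts in the
    same stated definition of sustainability."""
    edges = set()
    for concepts in definitions:
        for a, b in combinations(sorted(set(concepts)), 2):
            edges.add((a, b))
    return edges

# Hypothetical stated definitions from three growers.
defs = [
    ["economic viability", "soil health", "water use"],
    ["soil health", "biodiversity"],
    ["economic viability", "water use"],
]
net = mental_model_network(defs)
print(len(net))   # 4 distinct concept links
```

Aggregating such edge sets per region, and weighting edges by how many farmers co-mention a pair, yields the networks whose hierarchical structure the paper analyzes.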
Modeling Logistic Performance in Quantitative Microbial Risk Assessment
Rijgersberg, H.; Tromp, S.O.; Jacxsens, L.; Uyttendaele, M.
2010-01-01
In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage time...
Quantitative modelling in design and operation of food supply systems
Beek, van P.
2004-01-01
During the last two decades, food supply systems have drawn interest not only from food technologists but also from the fields of Operations Research and Management Science. Operations Research (OR) is concerned with quantitative modelling and can be used to gain insight into the optimal configuration and operation...
A GPGPU accelerated modeling environment for quantitatively characterizing karst systems
Myre, J. M.; Covington, M. D.; Luhmann, A. J.; Saar, M. O.
2011-12-01
The ability to derive quantitative information on the geometry of karst aquifer systems is highly desirable. Knowing the geometric makeup of a karst aquifer system enables quantitative characterization of the system's response to hydraulic events. However, the relationship between flow path geometry and karst aquifer response is not well understood. One way to improve this understanding is the use of high-speed modeling environments, which allow researchers to improve their understanding of the modeled karst aquifer through fast quantitative characterization. To that end, we have implemented a finite difference model using General Purpose Graphics Processing Units (GPGPUs). GPGPUs are special-purpose accelerators capable of high-speed and highly parallel computation, and their grid-like architecture makes them a natural fit for structured systems such as finite difference models. To characterize the highly complex nature of karst aquifer systems, our modeling environment uses an inverse method for parameter tuning, which reduces the volume of parameter space that must be searched to find a parameter set that fits the system well. Goodness of fit is determined by comparison to reference storm responses. To obtain these reference responses, we collected data from a series of data-loggers measuring water depth, temperature, and conductivity at locations along a cave stream with a known geometry in southeastern Minnesota. By comparing the modeled responses to the reference responses, the model parameters can be tuned to quantitatively characterize the geometry, and thus the response, of the karst system.
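The inverse method can be illustrated in miniature with a toy linear-reservoir recession standing in for the finite-difference karst model (the function names, the exponential response, and the candidate values are assumptions, not the authors' GPGPU code):

```python
from math import exp

def reservoir_response(tau, times, q0=1.0):
    """Toy storm response: exponential recession of a linear reservoir
    with residence time tau (a single tunable parameter)."""
    return [q0 * exp(-t / tau) for t in times]

def fit_tau(reference, times, candidates):
    """Inverse method in miniature: choose the parameter whose modeled
    response best matches the reference storm response (least squares)."""
    def sse(tau):
        return sum((m - r) ** 2
                   for m, r in zip(reservoir_response(tau, times), reference))
    return min(candidates, key=sse)

times = [0, 1, 2, 3, 4, 5]
reference = reservoir_response(2.5, times)       # synthetic "observed" storm response
best = fit_tau(reference, times, [0.5 * i for i in range(1, 11)])
print(best)   # -> 2.5
```

The real environment replaces the one-parameter recession with a full finite-difference flow model and searches a much larger parameter space, which is where GPGPU acceleration pays off.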
Link performance model for filter bank based multicarrier systems
Petrov, Dmitry; Oborina, Alexandra; Giupponi, Lorenza; Stitz, Tobias Hidalgo
2014-12-01
This paper presents a complete link-level abstraction model for link quality estimation at the system level of filter bank multicarrier (FBMC)-based networks. The application of the mean mutual information per coded bit (MMIB) approach is validated for FBMC systems. The quality measure considered for a resource element in FBMC transmission is the received signal-to-noise-plus-distortion ratio (SNDR). Simulation results show that the proposed link abstraction model estimates the block error rate (BLER) accurately, even when the signal propagates through channels with deep and frequent fades, as is the case for the 3GPP Hilly Terrain (3GPP-HT) and Enhanced Typical Urban (ETU) models. The FBMC link-level simulation results are compared with their cyclic prefix orthogonal frequency division multiplexing (CP-OFDM) analogs and are validated against publicly available reference results. Finally, the steps of the link-level abstraction algorithm for FBMC are formulated, and its application to system-level simulation of a professional mobile radio (PMR) network is discussed.
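A simplified sketch of the link-abstraction idea, with the Shannon capacity (capped at the modulation order) standing in for the modulation-specific MMIB mapping; all function names and numbers are illustrative assumptions:

```python
import math

def mutual_info_per_bit(sndr_linear, bits_per_symbol=2):
    """Stand-in MI mapping: capacity per symbol, capped at the modulation
    order and normalized per coded bit. The real MMIB approach uses
    modulation-specific numerical approximations instead."""
    mi_symbol = min(math.log2(1.0 + sndr_linear), bits_per_symbol)
    return mi_symbol / bits_per_symbol

def link_quality(sndr_per_re, bits_per_symbol=2):
    """Mean mutual information per coded bit over the allocated resource elements."""
    mibs = [mutual_info_per_bit(s, bits_per_symbol) for s in sndr_per_re]
    return sum(mibs) / len(mibs)

# Frequency-selective channel: one resource element sits in a deep fade.
sndr = [10 ** (x / 10.0) for x in [12, 9, -3, 7, 1]]   # per-RE SNDR, from dB
mmib = link_quality(sndr)
print(round(mmib, 3))
```

In a full abstraction chain this mean MI value would then be mapped through a calibrated curve to a BLER estimate for the chosen modulation and coding scheme.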
Synthesis and characterization of new 5-linked pinoresinol lignin models.
Yue, Fengxia; Lu, Fachuang; Sun, Runcang; Ralph, John
2012-12-14
Pinoresinol structures, featuring a β-β'-linkage between lignin monomer units, are important in softwood lignins and in dicots and monocots, particularly those that are downregulated in syringyl-specific genes. Although readily detected by NMR spectroscopy, pinoresinol structures largely escaped detection by β-ether-cleaving degradation analyses presumably due to the presence of the linkages at the 5 positions, in 5-5'- or 5-O-4'-structures. In this study, which is aimed at helping better understand 5-linked pinoresinol structures by providing the required data for NMR characterization, new lignin model compounds were synthesized through biomimetic peroxidase-mediated oxidative coupling reactions between pre-formed (free-phenolic) coniferyl alcohol 5-5'- or 5-O-4'-linked dimers and a coniferyl alcohol monomer. It was found that such dimers containing free-phenolic coniferyl alcohol moieties can cross-couple with the coniferyl alcohol producing pinoresinol-containing trimers (and higher oligomers) in addition to other homo- and cross-coupled products. Eight new lignin model compounds were obtained and characterized by NMR spectroscopy, and one tentatively identified cross-coupled β-O-4'-product was formed from a coniferyl alcohol 5-O-4'-linked dimer. It was demonstrated that the 5-5'- and 5-O-4'-linked pinoresinol structures could be readily differentiated by using heteronuclear multiple-bond correlation (HMBC) NMR spectroscopy. With appropriate modification (etherification or acetylation) to the newly obtained model compounds, it would be possible to identify the 5-5'- or 5-O-4'-linked pinoresinol structures in softwood lignins by 2D HMBC NMR spectroscopic methods. Identification of the cross-coupled dibenzodioxocin from a coniferyl alcohol 5-5'-linked moiety suggested that thioacidolysis or derivatization followed by reductive cleavage (DFRC) could be used to detect and identify whether the coniferyl alcohol itself undergoes 5-5'-cross-linking during
Cowan, C M; Dentine, M R; Ax, R L; Schuler, L A
1990-05-01
Digestion of genomic DNA with the restriction endonuclease AvaII disclosed a probable insertion/deletion of approximately 200 base pairs (bp) near the prolactin gene. Two alleles were apparent as three distinct hybridization patterns. These alleles were statistically associated with quantitative trait loci among sons of one elite Holstein sire family. The favorable genotype was correlated with the presence of a 1.15-kb hybridization band inherited from the sire when genomic DNA was probed with a full-length cDNA for prolactin. Pedigree estimates of genetic merit among genotypes were similar, differing by only 19.3 kg for milk in ancestor merit. Comparisons of genetic estimates for quantitative yield traits in offspring of this heterozygous sire showed significant (P < 0.05) differences for milk dollars, cheese yield dollars, and protein dollars. The estimated differences between homozygous genotypes for USDA Transmitting Abilities of PDM, PD$, Cheese Yield $, and Protein $ were 282.93 kg, $74.35, $48.58, and $53.67, respectively. However, the estimated breeding values from progeny ranged over 900 kg in transmitting ability for milk. Frequency of the favorable marker allele was estimated to be 0.231 in the elite cow population used as dams of sons. These results demonstrate the potential of molecular biological techniques to discriminate between individuals within a family and to predict breeding values for selection schemes.
Reservoir Stochastic Modeling Constrained by Quantitative Geological Conceptual Patterns
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
This paper discusses the principles of geologic constraints on reservoir stochastic modeling. By using the system science theory, two kinds of uncertainties, including random uncertainty and fuzzy uncertainty, are recognized. In order to improve the precision of stochastic modeling and reduce the uncertainty in realization, the fuzzy uncertainty should be stressed, and the "geological genesis-controlled modeling" is conducted under the guidance of a quantitative geological pattern. An example of the Pingqiao horizontal-well division of the Ansai Oilfield in the Ordos Basin is taken to expound the method of stochastic modeling.
Lessons Learned from Quantitative Dynamical Modeling in Systems Biology
Bachmann, Julie; Matteson, Andrew; Schelke, Max; Kaschek, Daniel; Hug, Sabine; Kreutz, Clemens; Harms, Brian D.; Theis, Fabian J.; Klingmüller, Ursula; Timmer, Jens
2013-01-01
Due to the high complexity of biological data, it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A Systems Biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and an increasing amount of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows one to determine the quality of experimental data in an efficient, objective and automated manner. Using this approach, data generated by different measurement techniques, and even data from single replicates, can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations, in combination with a multi-start strategy based on Latin hypercube sampling, outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open-source software package that implements the algorithms and examples compared here. PMID:24098642
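The Latin hypercube multi-start strategy can be sketched as follows, with a crude coordinate descent standing in for the derivative-based local optimizer used in the paper (the objective, bounds, and settings are illustrative assumptions):

```python
import random

def latin_hypercube(n, bounds, rng):
    """Latin hypercube sample: one point per stratum in every dimension."""
    cols = []
    for _ in bounds:
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return [[lo + cols[d][s] * (hi - lo) for d, (lo, hi) in enumerate(bounds)]
            for s in range(n)]

def local_descent(f, x, step=0.5, iters=60):
    """Crude coordinate descent standing in for derivative-based optimization."""
    x = list(x)
    for _ in range(iters):
        improved = False
        for d in range(len(x)):
            for delta in (step, -step):
                trial = x[:d] + [x[d] + delta] + x[d + 1:]
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            step *= 0.5       # refine once no move at this step size helps
    return x

def multistart(f, bounds, n_starts=8, seed=1):
    """Run a local search from each LHS start point and keep the best result."""
    rng = random.Random(seed)
    starts = latin_hypercube(n_starts, bounds, rng)
    return min((local_descent(f, s) for s in starts), key=f)

# Toy multi-modal objective with two global minima at (+/-2, 0).
f = lambda p: (p[0] ** 2 - 4.0) ** 2 + p[1] ** 2
best = multistart(f, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
print(round(f(best), 6))
```

The stratified starts are what let the multi-start scheme escape local optima; the paper's point is that pairing them with sensitivity-equation gradients makes the local searches fast and accurate.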
Dynamic modeling of flexible-links planar parallel robots
Institute of Scientific and Technical Information of China (English)
2008-01-01
This paper presents a finite element-based method for dynamic modeling of parallel robots with flexible links and a rigid moving platform. The elastic displacements of the flexible links are investigated while considering the coupling effects between links due to structural flexibility. The kinematic and dynamic constraint conditions for elastic displacements are presented. Considering the effects of distributed mass, lumped mass, shearing deformation, bending deformation, tensile deformation and lateral displacements, the Kineto-Elasto Dynamics (KED) theory and the Lagrange formula are used to derive the dynamic equations of planar flexible-links parallel robots. The dynamic behavior of the flexible-links planar parallel robot is illustrated through numerical simulation of a planar 3-RRR parallel robot. The numerical simulation results show good agreement with those of the finite element software SAMCEF. The flexibility of the links is demonstrated to have a significant impact on the position and orientation errors of the flexible-links planar parallel robot.
DEFF Research Database (Denmark)
Rønne, E; Behrendt, N; Ploug, M;
1994-01-01
Binding of the urokinase plasminogen activator (uPA) to a specific cell surface receptor (uPAR) plays a crucial role in proteolysis during tissue remodelling and cancer invasion. An immunosorbent assay for the quantitation of uPAR has now been developed. This assay is based on two monoclonal ... variant of uPAR, suPAR, has been constructed by recombinant technique and the protein content of a purified suPAR standard preparation was determined by amino acid composition analysis. The sensitivity of the assay (0.6 ng uPAR/ml) is sufficient to measure uPAR in extracts of cultured cells and cancer tissue. Recent studies have shown that a high uPA level in tumor extracts is in some cancers associated with poor prognosis. The present assay will now allow similar prognostic studies of uPAR levels.
Dijkstra, Hildebrand; Oudkerk, Matthijs; Kappert, Peter; Sijens, Paul E.
Purpose: To investigate if intravoxel incoherent motion (IVIM) modeled diffusion-weighted imaging (DWI) can be linked to contrast-enhanced (CE-)MRI in liver parenchyma and liver lesions. Methods: Twenty-five patients underwent IVIM-DWI followed by multiphase CE-MRI using Gd-EOB-DTPA (n = 20) or
Single photon time transfer link model for GNSS satellites
Vacek, Michael; Michalek, Vojtech; Peca, Marek; Prochazka, Ivan; Blazej, Josef
2015-05-01
The importance of optical time transfer as a complement to traditional microwave links has been attested for GNSSes and for scientific missions. Single photon time transfer (SPTT) is a process that allows the time readings of two distant clocks to be compared (subtracted). Such a comparison may then be used to synchronize a less accurate clock to a better reference, to perform clock characterization and calibration, or to calculate a mean time from an ensemble of several clocks displaced in space. Single-photon time transfer is well established in the field of space geodesy, being supported by passive retro-reflectors within the space segments of five known GNSSes. Truly two-way, active terminals operate aboard Jason-2 (T2L2, multiphoton operation) and GNSS Beidou (Compass, SPTT), and are to be launched within the recent ACES project (ELT, SPTT) and GNSS GLONASS (multiphoton operation). However, a comprehensive theoretical model of the two-way SPTT link (using a satellite receiver and retroreflector) is still missing, one incorporating all crucial parameters of the receivers (both ground- and space-segment), the transmitter, atmospheric effects on the uplink and downlink paths, and the influence of the retroreflector. The inputs to the calculation of SPTT link performance include, among others, the link budget (distance, power, apertures, beam divergence, attenuation, scattering), the propagating medium (atmospheric scintillation, beam wander, etc.), mutual Tx/Rx velocity, and wavelength. The SPTT model will first be evaluated without the properties of real components; these will be added in further development. The ground-to-space SPTT link performance of typical scenarios is modeled. This work is a part of the ESA study "Comparison of optical time-transfer links."
Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.
Moray, Neville; Groeger, John; Stanton, Neville
2017-02-01
This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement, and provide quantitative guidance for the design of safer railway speed limits and signal locations. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.
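The chained-probability style of estimate used in such analyses can be sketched as follows (the event decomposition and every number below are illustrative assumptions, not the Ladbroke Grove estimates):

```python
def p_spad(p_signal_visible, p_looked, p_identified_red, p_braked_in_time):
    """Probability the driver stops short of the conflict point, modeled as a
    chain of conditional events; the SPAD probability is its complement."""
    p_stop = p_signal_visible * p_looked * p_identified_red * p_braked_in_time
    return 1.0 - p_stop

# Illustrative numbers only: a poorly sited signal lowers the chance it is
# seen and correctly identified, inflating the SPAD probability.
risk = p_spad(0.9, 0.95, 0.8, 0.99)
print(round(risk, 3))
```

Eye-movement records constrain the "looked" and "identified" terms empirically, which is what turns the decomposition from a plausibility argument into a quantitative design tool.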
A quantitative comparison of Calvin-Benson cycle models.
Arnold, Anne; Nikoloski, Zoran
2011-12-01
The Calvin-Benson cycle (CBC) provides the precursors for biomass synthesis necessary for plant growth. The dynamic behavior and yield of the CBC depend on the environmental conditions and regulation of the cellular state. Accurate quantitative models hold the promise of identifying the key determinants of the tightly regulated CBC function and their effects on the responses in future climates. We provide an integrative analysis of the largest compendium of existing models for photosynthetic processes. Based on the proposed ranking, our framework facilitates the discovery of best-performing models with regard to metabolomics data and of candidates for metabolic engineering.
A mathematical model of N-linked glycoform biosynthesis.
Umaña, P; Bailey, J E
1997-09-20
Metabolic engineering of N-linked oligosaccharide biosynthesis to produce novel glycoforms or glycoform distributions of a recombinant glycoprotein can potentially lead to an improved therapeutic performance of the glycoprotein product. Effective engineering of this pathway to maximize the fractions of beneficial glycoforms within the glycoform population of a target glycoprotein can be aided by a mathematical model of the N-linked glycosylation process. A mathematical model is presented here, whose main function is to calculate the expected qualitative trends in the N-linked oligosaccharide distribution resulting from changes in the levels of one or more enzymes involved in the network of enzyme-catalyzed reactions that accomplish N-linked oligosaccharide biosynthesis. It consists of mass balances for 33 different oligosaccharide species N-linked to a specified protein that is being transported through the different compartments of the Golgi complex. Values of the model parameters describing Chinese hamster ovary (CHO) cells were estimated from literature information. A basal set of kinetic parameters for the enzyme-catalyzed reactions acting on free oligosaccharide substrates was also obtained from the literature. The solution of the system for this basal set of parameters gave a glycoform distribution consisting mainly of complex-galactosylated oligosaccharides distributed in structures with different numbers of antennae in a fashion similar to that observed for various recombinant proteins produced in CHO cells. Other simulations indicate that changes in the oligosaccharide distribution could easily result from alteration in glycoprotein productivity within the range currently attainable in industry. The overexpression of N-acetylglucosaminyltransferase III in CHO cells was simulated under different conditions to test the main function of the model. These simulations allow a comparison of different strategies, such as simultaneous overexpression of several
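The compartment mass balances can be illustrated with a toy well-mixed-compartments-in-series model for a single oligosaccharide species (the three-compartment structure, residence times, and lumped rate constants are assumptions, not the paper's 33-species reaction network):

```python
def golgi_chain(residence_times, rate_constants, c_in=1.0, dt=0.001, t_end=50.0):
    """Euler integration of the mass balances for one substrate glycoform in a
    chain of well-mixed Golgi compartments:
        dC_i/dt = (C_{i-1} - C_i)/tau_i - k_i * C_i
    where tau_i is the residence time and k_i a lumped enzymatic conversion rate."""
    n = len(residence_times)
    c = [0.0] * n
    t = 0.0
    while t < t_end:
        upstream = c_in
        for i in range(n):
            dc = (upstream - c[i]) / residence_times[i] - rate_constants[i] * c[i]
            upstream = c[i]          # pass the pre-update value downstream
            c[i] += dc * dt
        t += dt
    return c

# Three compartments (cis, medial, trans) with different enzyme loads.
c = golgi_chain(residence_times=[5.0, 5.0, 5.0], rate_constants=[0.1, 0.3, 0.05])
unreacted_fraction = c[-1] / 1.0
print(round(unreacted_fraction, 3))
```

The full model couples 33 such balances through the glycosyltransferase reaction network, so changing one enzyme level (e.g., overexpressing GnTIII) redistributes flux across the whole glycoform population.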
Quantitative metal magnetic memory reliability modeling for welded joints
Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng
2016-03-01
Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, the environmental magnetic field, and measurement noise make MMM data dispersive and difficult to evaluate quantitatively. To promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens were tested along longitudinal and horizontal lines with a TSC-2M-8 instrument during tensile fatigue experiments, with X-ray testing carried out synchronously to verify the MMM results. It was found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs was investigated, which shows that K_vs obeys a Gaussian distribution; K_vs is therefore a suitable MMM parameter on which to establish a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented, for the first time, based on improved stress-strength interference theory. The reliability degree R gradually decreases with decreasing residual life ratio T, and the maximal error between the predicted reliability degree R1 and the verification reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
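The classical stress-strength interference calculation that underlies such reliability models can be sketched as follows (the normal-distribution assumption and all numbers are illustrative, not the paper's improved formulation or data):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Classical stress-strength interference: R = P(strength > stress)
    for independent, normally distributed strength and stress."""
    z = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return normal_cdf(z)

# Illustrative values: as fatigue damage accumulates, the effective stress
# distribution shifts upward and the reliability degree R falls.
r = interference_reliability(400.0, 25.0, 320.0, 20.0)
print(round(r, 4))
```

In the MMM setting, the Gaussian-distributed K_vs plays the role of the "stress" variable, so its fitted mean and spread at a given residual life ratio feed directly into this interference integral.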
Model updating in flexible-link multibody systems
Belotti, R.; Caneva, G.; Palomba, I.; Richiedei, D.; Trevisani, A.
2016-09-01
The dynamic response of flexible-link multibody systems (FLMSs) can be predicted through nonlinear models based on finite elements that describe the coupling between rigid-body and elastic behaviour. Their accuracy should be as high as possible to synthesize controllers and observers; model updating based on experimental measurements is hence necessary. Taking advantage of experimental modal analysis, this work proposes a model updating procedure for FLMSs and applies it experimentally to a planar robot. Several peculiarities of FLMS models must be carefully tackled. On the one hand, nonlinear models of an FLMS should be linearized about static equilibrium configurations. On the other hand, the experimental mode shapes should be corrected to be consistent with the elastic displacements represented in the model, which are defined with respect to a fictitious moving reference (the equivalent rigid-link system). Then, since rotational degrees of freedom are also represented in the model, the experimental data should be interpolated to match the model displacement vector. Model updating is finally cast as an optimization problem in the presence of bounds on the feasible values, adopting methods to improve the numerical conditioning and to compute meaningful updated inertial and elastic parameters.
Mutual information model for link prediction in heterogeneous complex networks
Shakibian, Hadi; Moghadam Charkari, Nasrollah
2017-01-01
Recently, a number of meta-path based similarity indices, such as PathSim, HeteSim, and random walk, have been proposed for link prediction in heterogeneous complex networks. However, these indices suffer from two major drawbacks. First, they depend primarily on the connectivity degrees of node pairs without considering the further information provided by the given meta-path. Second, most of them require a single, usually symmetric, meta-path to be chosen in advance, so employing a set of different meta-paths is not straightforward. To tackle these problems, we propose a mutual information model for link prediction in heterogeneous complex networks. The proposed model, the Meta-path based Mutual Information Index (MMI), introduces meta-path based link entropy to estimate link likelihood and can operate on a set of available meta-paths. This estimation measures the amount of information carried along the paths rather than the amount of connectivity between the node pairs. Experimental results on a bibliography network show that the MMI obtains high prediction accuracy compared with other popular similarity indices. PMID:28344326
QuantUM: Quantitative Safety Analysis of UML Models
Directory of Open Access Journals (Sweden)
Florian Leitner-Fischer
2011-07-01
When developing a safety-critical system it is essential to obtain an assessment of different design alternatives. In particular, an early safety assessment of the architectural design of a system is desirable. In spite of the plethora of available formal quantitative analysis methods, it is still difficult for software and system architects to integrate these techniques into their everyday work. This is mainly due to the lack of methods that can be directly applied to architecture-level models, for instance given as UML diagrams. It is also necessary that the description methods used do not require a profound knowledge of formal methods. Our approach bridges this gap and improves the integration of quantitative safety analysis methods into the development process. All inputs of the analysis are specified at the level of a UML model. This model is then automatically translated into the analysis model, and the results of the analysis are represented back at the level of the UML model; thus the analysis model and the formal methods used during the analysis are hidden from the user. We illustrate the usefulness of our approach with an industrial-strength case study.
van Grootel, Leonie; van Wesel, Floryt; O'Mara-Eves, Alison; Thomas, James; Hox, Joop; Boeije, Hennie
2017-09-01
This study describes an approach for the use of a specific type of qualitative evidence synthesis in the matrix approach, a mixed studies reviewing method. The matrix approach compares quantitative and qualitative data on the review level by juxtaposing concrete recommendations from the qualitative evidence synthesis against interventions in primary quantitative studies. However, types of qualitative evidence syntheses that are associated with theory building generate theoretical models instead of recommendations. Therefore, the output from these types of qualitative evidence syntheses cannot directly be used for the matrix approach but requires transformation. This approach allows for the transformation of these types of output. The approach enables the inference of moderation effects instead of direct effects from the theoretical model developed in a qualitative evidence synthesis. Recommendations for practice are formulated on the basis of interactional relations inferred from the qualitative evidence synthesis. In doing so, we apply the realist perspective to model variables from the qualitative evidence synthesis according to the context-mechanism-outcome configuration. A worked example shows that it is possible to identify recommendations from a theory-building qualitative evidence synthesis using the realist perspective. We created subsets of the interventions from primary quantitative studies based on whether they matched the recommendations or not and compared the weighted mean effect sizes of the subsets. The comparison shows a slight difference in effect sizes between the groups of studies. The study concludes that the approach enhances the applicability of the matrix approach. Copyright © 2017 John Wiley & Sons, Ltd.
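The subset comparison of weighted mean effect sizes can be sketched as follows, using a fixed-effect (inverse-variance) weighting; the study effects and variances below are hypothetical, not the review's data:

```python
def weighted_mean_effect(effects, variances):
    """Fixed-effect (inverse-variance weighted) mean effect size."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical primary studies, split by whether the intervention matched a
# recommendation inferred from the qualitative evidence synthesis.
matched = weighted_mean_effect([0.40, 0.55, 0.30], [0.04, 0.09, 0.05])
unmatched = weighted_mean_effect([0.20, 0.35], [0.06, 0.08])
print(round(matched - unmatched, 3))
```

A positive difference, as here, is the pattern the matrix approach looks for: interventions aligned with the synthesis recommendations tend toward larger effects.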
Revisiting Link Prediction: Evolving Models and Real Data Findings
Mendoza, Marcelo
2016-01-01
The explosive growth of Web 2.0, which was characterized by the creation of online social networks, has reignited the study of factors that could help us understand the growth and dynamism of these networks. Various generative network models have been proposed, including the Barabasi-Albert and Watts-Strogatz models. In this study, we revisit the problem from a perspective that seeks to compare results obtained from these generative models with those from real networks. To this end, we consider the dating network Skout Inc. An analysis is performed on the topological characteristics of the network that could explain the creation of new network links. Afterwards, the results are contrasted with those obtained from the Barabasi-Albert and Watts-Strogatz generative models. We conclude that a key factor that could explain the creation of links originates in its cluster structure, where link recommendations are more precise in Watts-Strogatz segmented networks than in Barabasi-Albert hierarchical networks. This re...
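The cluster-structure finding above (link recommendations work better in segmented networks) is usually operationalized with neighborhood-based link-prediction scores. A minimal sketch of one such heuristic, the common-neighbors score, on an invented toy graph (node labels and edges are hypothetical, not from the Skout data):

```python
# Hypothetical sketch: rank candidate links by the number of shared
# neighbors, a heuristic that exploits cluster structure.

def common_neighbors_score(adj, u, v):
    """Number of neighbors shared by nodes u and v."""
    return len(adj[u] & adj[v])

# Toy undirected network stored as adjacency sets (invented data).
adj = {
    0: {1, 2},
    1: {0, 2},
    2: {0, 1, 3},
    3: {2, 4},
    4: {3},
}

# Rank all non-edges by their common-neighbor count.
nodes = sorted(adj)
candidates = [
    (u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]
    if v not in adj[u]
]
ranked = sorted(candidates,
                key=lambda e: common_neighbors_score(adj, *e),
                reverse=True)
print(ranked[0])  # → (0, 3)
```

In a strongly clustered (Watts-Strogatz-like) graph, many shared neighbors are strong evidence of a future link, which is consistent with the abstract's conclusion.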
Content Linking for UGC based on Word Embedding Model
Directory of Open Access Journals (Sweden)
Zhiqiao Gao
2015-09-01
Every day, huge amounts of User Generated Contents (UGCs) appear on social networks, consisting of authors' articles on different themes and readers' online comments. Generally, an article gives rise to thousands of reader comments, which relate to specific points of the originally published article or of previous comments. This suggests an urgent need for automated methods to perform the content linking task, which can also help other related applications, such as information retrieval, summarization, and content management. So far, content linking is still a relatively new issue. Because traditional approaches based on feature extraction have proved unsatisfactory, we look to deeper textual semantic analysis. The Word Embedding model based on deep learning has recently performed well in Natural Language Processing (NLP), especially in mining deep semantic information. Therefore, we further study the Word Embedding model trained by different neural network models, from which we can learn the structure, principles, and training methods of the neural network language model in more depth, to perform deep semantic feature extraction. With the aid of the semantic features, we investigate content linking between comments and their original articles from social networks, and finally verify the validity of the proposed method by comparison with traditional approaches based on feature extraction.
Quantitative magnetospheric models derived from spacecraft magnetometer data
Mead, G. D.; Fairfield, D. H.
1973-01-01
Quantitative models of the external magnetospheric field were derived by making least-squares fits to magnetic field measurements from four IMP satellites. The data were fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the models contain the effects of seasonal north-south asymmetries. The expansions are divergence-free, but unlike the usual scalar potential expansions, the models contain a nonzero curl representing currents distributed within the magnetosphere. Characteristics of four models are presented, representing different degrees of magnetic disturbance as determined by the range of Kp values. The latitude at the earth separating open polar cap field lines from field lines closing on the dayside is about 5 deg lower than that determined by previous theoretically-derived models. At times of high Kp, additional high latitude field lines are drawn back into the tail.
Munro, James L; Boon, Virginia A
2010-02-10
Recombinant bovine somatotropin (rbST), also known as growth hormone, is used to enhance production and development of animals within the agriculture and aquaculture industries. Its use is controversial because of its potential effects on human and animal health. To screen for rbST in shrimp feed, a competitive enzyme-linked immunosorbent assay (ELISA) with an inhibition step was developed. Sample and rbST antibody (rabbit anti-rbST) were incubated at room temperature for 30 min. Subsequently, this competitive reaction was transferred to a microplate coated with rbST, using goat anti-rabbit IgG linked with horseradish peroxidase as the secondary antibody. Substrates for peroxidase were added, and the absorbance at 410 nm was determined. The applicability of the method was assessed using rbST extracted from "spiked" shrimp feed samples. The assay was reproducible and linear with R(2) values greater than 0.98 over the standard curve range of 20-500 microg/g. The intra- and interday precisions expressed as relative standard deviations were 3.4 and 5.3%, respectively. The mean recovery from 15 spiked feed samples was 105%. This assay will be a valuable tool for quantitative detection of rbST by both governments and commercial companies and can be modified for other types of feed.
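The two validation figures quoted above (linearity with R² > 0.98, mean recovery of 105%) can be computed directly. A minimal sketch with invented standard-curve readings (the concentrations span the stated 20-500 microg/g range; the absorbance values are hypothetical):

```python
# Illustrative sketch (invented numbers): checking linearity of a standard
# curve via R^2 and computing percent recovery from a spiked sample.

def r_squared(xs, ys):
    """Coefficient of determination for a simple linear fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# Hypothetical standard curve: concentration (microg/g) vs. absorbance.
conc = [20, 50, 100, 200, 500]
absb = [0.10, 0.24, 0.49, 0.98, 2.40]

assert r_squared(conc, absb) > 0.98  # the assay's linearity criterion

# Percent recovery: measured amount relative to the spiked amount.
spiked, measured = 100.0, 105.0
recovery = 100.0 * measured / spiked
print(round(recovery, 1))  # 105.0
```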
Directory of Open Access Journals (Sweden)
Sungkyu Park
Cyclotides are a family of plant-derived proteins that are characterized by a cyclic backbone and a knotted disulfide topology. Their cyclic cystine knot (CCK) motif makes them exceptionally resistant to thermal, chemical, and enzymatic degradation. Cyclotides exert much of their biological activity via interactions with cell membranes. In this work, we qualitatively and quantitatively analyze the cytotoxic and anthelmintic membrane activities of cyclotides. The qualitative and quantitative models describe the potency of cyclotides using four simple physicochemical terms relevant to membrane contact. Specifically, surface areas of the cyclotides representing lipophilic and hydrogen-bond-donating properties were quantified and their distribution across the molecular surface was determined. The resulting quantitative structure-activity relationship (QSAR) models suggest that the activity of the cyclotides is proportional to their lipophilic and positively charged surface areas, provided that the distribution of these surfaces is asymmetric. In addition, we qualitatively analyzed the physicochemical differences between the various cyclotide subfamilies and their effects on the cyclotides' orientation on the membrane and membrane activity.
Asynchronous adaptive time step in quantitative cellular automata modeling
Directory of Open Access Journals (Sweden)
Sun Yan
2004-06-01
Background: The behaviors of cells in metazoans are context dependent, so large-scale multi-cellular modeling is often necessary, and cellular automata are natural candidates for it. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to address the heavy time consumption of simulation. Results: Building on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing an asynchronous adaptive time step in simulation that considerably improves efficiency without a significant sacrifice of accuracy. An average speedup of 4-5x is achieved in the given example. Conclusions: Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed and adaptive time step is a practical solution in a cellular automata environment.
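The adaptive time step idea can be sketched with a toy ODE standing in for the cellular kinetics (the decay rate, tolerance, and step-control factors below are all invented): each step is accepted only when a step-doubling error estimate stays below a tolerance, and the step size grows after accepted steps and shrinks after rejected ones.

```python
# Sketch of an adaptive time step for ODE-driven cell states (assumed toy
# dynamics): compare one full Euler step with two half steps to estimate
# the local error, then accept or reject the step.

def f(t, y):
    return -2.0 * y  # toy cellular kinetics: exponential decay

def adaptive_euler(y0, t0, t1, h0=0.5, tol=1e-3):
    t, y, h, steps = t0, y0, h0, 0
    while t < t1:
        h = min(h, t1 - t)
        full = y + h * f(t, y)                 # one step of size h
        half = y + (h / 2) * f(t, y)           # two steps of size h/2
        half = half + (h / 2) * f(t + h / 2, half)
        err = abs(full - half)                 # local error estimate
        if err <= tol:      # accept: advance, then try a larger step
            t, y = t + h, half
            h *= 1.5
        else:               # reject: retry with a smaller step
            h *= 0.5
        steps += 1
    return y, steps

y, steps = adaptive_euler(1.0, 0.0, 2.0)
print(y, steps)
```

The step size automatically becomes large where the solution is flat, which is the source of the speedup reported above; an asynchronous scheme additionally lets each cell carry its own step size.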
Modeling online social networks based on preferential linking
Institute of Scientific and Technical Information of China (English)
Hu Hai-Bo; Guo Jin-Li; Chen Jun
2012-01-01
We study the phenomena of preferential linking in a large-scale evolving online social network and find that the linear preference holds for preferential creation, preferential acceptance, and preferential attachment. Based on the linear preference, we propose an analyzable model, which illustrates the mechanism of network growth and reproduces the process of network evolution. Our simulations demonstrate that the degree distribution of the network produced by the model is in good agreement with that of the real network. This work provides a possible bridge between the micro-mechanisms of network growth and the macrostructures of online social networks.
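The linear-preference growth mechanism described above can be sketched in a few lines (a generic preferential-attachment simulation, not the paper's specific model; the network size and seed are invented): each new node links to an existing node with probability proportional to its current degree.

```python
import random

# Sketch of growth under linear preference (assumed mechanism): keeping a
# list in which node i appears degree[i] times makes a uniform draw from
# the list a degree-proportional choice.

def grow(n, seed=42):
    random.seed(seed)
    degree = {0: 1, 1: 1}     # start from a single link 0-1
    targets = [0, 1]          # node i appears degree[i] times
    for new in range(2, n):
        old = random.choice(targets)   # linear preferential choice
        degree[old] += 1
        degree[new] = 1
        targets.extend([old, new])
    return degree

deg = grow(1000)
# Every new node adds exactly one edge, so total degree = 2 * #edges.
assert sum(deg.values()) == 2 * 999
print(max(deg.values()))  # hubs emerge, giving a heavy-tailed distribution
```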
Model for Quantitative Evaluation of Enzyme Replacement Treatment
Directory of Open Access Journals (Sweden)
Radeva B.
2009-12-01
Gaucher disease is the most frequent lysosomal disorder. Its enzyme replacement treatment (ERT) is an advance of modern biotechnology that has been used successfully in recent years. Evaluating the optimal dose for each patient is important for both health and economic reasons: enzyme replacement is the most expensive treatment, and it must be administered continuously and without interruption. Since 2001, enzyme replacement therapy with Cerezyme (Genzyme) has been formally available in Bulgaria, but after some time it was interrupted for 1-2 months, and the patients' doses were not optimal. The aim of our work is to find a mathematical model for the quantitative evaluation of ERT in Gaucher disease. The model applies the software package "Statistika 6" to the individual data of 5-year-old children with Gaucher disease treated with Cerezyme. The output of the model enables quantitative evaluation of the individual trends in each child's disease course and their correlations. On the basis of these results, we can recommend suitable changes in ERT.
Quantitative modeling and data analysis of SELEX experiments
Djordjevic, Marko; Sengupta, Anirvan M.
2006-03-01
SELEX (systematic evolution of ligands by exponential enrichment) is an experimental procedure that allows the extraction, from an initially random pool of DNA, of those oligomers with high affinity for a given DNA-binding protein. We address what is a suitable experimental and computational procedure to infer parameters of transcription factor-DNA interaction from SELEX experiments. To answer this, we use a biophysical model of transcription factor-DNA interactions to quantitatively model SELEX. We show that a standard procedure is unsuitable for obtaining accurate interaction parameters. However, we theoretically show that a modified experiment in which chemical potential is fixed through different rounds of the experiment allows robust generation of an appropriate dataset. Based on our quantitative model, we propose a novel bioinformatic method of data analysis for such a modified experiment and apply it to extract the interaction parameters for a mammalian transcription factor CTF/NFI. From a practical point of view, our method results in a significantly improved false positive/false negative trade-off, as compared to both the standard information theory based method and a widely used empirically formulated procedure.
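The key modification discussed above, holding the chemical potential fixed across rounds, can be sketched with a toy selection step. In the biophysical framing, an oligomer with binding (free) energy E is retained with a Fermi-Dirac probability 1/(1 + exp(E - mu)); the pool size, energy distribution, and mu below are all invented for illustration.

```python
import math
import random

# Hedged sketch of SELEX rounds at fixed chemical potential mu
# (energies in units of kT; all numbers hypothetical).

def select(pool, mu):
    """Return the sub-pool surviving one round of selection."""
    return [E for E in pool
            if random.random() < 1.0 / (1.0 + math.exp(E - mu))]

random.seed(1)
pool = [random.gauss(0.0, 2.0) for _ in range(20000)]  # random initial pool
mu = -1.0
for _ in range(3):                                     # three rounds
    pool = select(pool, mu)

# Selection enriches high-affinity (low-energy) binders.
print(len(pool), sum(pool) / len(pool))
```

Because mu is the same in every round, the per-round retention probability of a given sequence is constant, which is what makes the enrichment statistics tractable for parameter inference.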
Knapp, S J
1991-03-01
To maximize parameter estimation efficiency and statistical power and to estimate epistasis, the parameters of multiple quantitative trait loci (QTLs) must be simultaneously estimated. If multiple QTL affect a trait, then estimates of means of QTL genotypes from individual locus models are statistically biased. In this paper, I describe methods for estimating means of QTL genotypes and recombination frequencies between marker and quantitative trait loci using multilocus backcross, doubled haploid, recombinant inbred, and testcross progeny models. Expected values of marker genotype means were defined using no double or multiple crossover frequencies and flanking markers for linked and unlinked quantitative trait loci. The expected values for a particular model comprise a system of nonlinear equations that can be solved using an iterative algorithm, e.g., the Gauss-Newton algorithm. The solutions are maximum likelihood estimates when the errors are normally distributed. A linear model for estimating the parameters of unlinked quantitative trait loci was found by transforming the nonlinear model. Recombination frequency estimators were defined using this linear model. Certain means of linked QTLs are less efficiently estimated than means of unlinked QTLs.
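The Gauss-Newton iteration mentioned above solves such nonlinear expectation equations by repeatedly linearizing the model and solving the normal equations. A self-contained sketch on a toy two-parameter model y = a·exp(b·x) (invented, noiseless data; the QTL systems in the paper are analogous but larger):

```python
import math

# Sketch of Gauss-Newton for nonlinear least squares on y = a * exp(b * x).

def gauss_newton(xs, ys, a, b, iters=20):
    for _ in range(iters):
        # residuals and Jacobian of the model at the current estimate
        r = [y - a * math.exp(b * x) for x, y in zip(xs, ys)]
        J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
        # solve the 2x2 normal equations  J^T J  delta = J^T r
        s11 = sum(j[0] * j[0] for j in J)
        s12 = sum(j[0] * j[1] for j in J)
        s22 = sum(j[1] * j[1] for j in J)
        g1 = sum(j[0] * ri for j, ri in zip(J, r))
        g2 = sum(j[1] * ri for j, ri in zip(J, r))
        det = s11 * s22 - s12 * s12
        a += (s22 * g1 - s12 * g2) / det
        b += (s11 * g2 - s12 * g1) / det
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(-0.8 * x) for x in xs]  # true a = 2, b = -0.8
a, b = gauss_newton(xs, ys, a=1.0, b=-0.1)
print(round(a, 3), round(b, 3))  # → 2.0 -0.8
```

With normally distributed errors, the converged solution is the maximum likelihood estimate, as the abstract notes.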
Quantitative Methods in Supply Chain Management Models and Algorithms
Christou, Ioannis T
2012-01-01
Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...
A Team Mental Model Perspective of Pre-Quantitative Risk
Cooper, Lynne P.
2011-01-01
This study was conducted to better understand how teams conceptualize risk before it can be quantified, and the processes by which a team forms a shared mental model of this pre-quantitative risk. Using an extreme case, this study analyzes seven months of team meeting transcripts, covering the entire lifetime of the team. Through an analysis of team discussions, a rich and varied structural model of risk emerges that goes significantly beyond classical representations of risk as the product of a negative consequence and a probability. In addition to those two fundamental components, the team conceptualization includes the ability to influence outcomes and probabilities, networks of goals, interaction effects, and qualitative judgments about the acceptability of risk, all affected by associated uncertainties. In moving from individual to team mental models, team members employ a number of strategies to gain group recognition of risks and to resolve or accept differences.
Quantitative risk assessment modeling for nonhomogeneous urban road tunnels.
Meng, Qiang; Qu, Xiaobo; Wang, Xinchang; Yuanita, Vivi; Wong, Siew Chee
2011-03-01
Urban road tunnels provide an increasingly cost-effective engineering solution, especially in compact cities like Singapore. For some urban road tunnels, tunnel characteristics such as configurations, geometries, provisions of electrical and mechanical systems, and traffic volumes may vary from one section to another. Urban road tunnels characterized by such nonuniform parameters are referred to as nonhomogeneous urban road tunnels. In this study, a novel quantitative risk assessment (QRA) model is proposed for nonhomogeneous urban road tunnels, because the existing QRA models for road tunnels are inapplicable to assessing the risks in them. This model uses a tunnel segmentation principle whereby a nonhomogeneous urban road tunnel is divided into various homogeneous sections. Individual risk for road tunnel sections as well as integrated risk indices for the entire road tunnel are defined. The article then proceeds to develop a new QRA model for each of the homogeneous sections. Compared to the existing QRA models for road tunnels, this section-based model incorporates one additional top event (toxic gases due to traffic congestion) and employs the Poisson regression method to estimate the vehicle accident frequencies of tunnel sections. This article further illustrates an aggregated QRA model for nonhomogeneous urban tunnels by integrating the section-based QRA models. Finally, a case study in Singapore is carried out.
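The segmentation-and-aggregation principle can be sketched numerically: each homogeneous section gets its own Poisson-modeled accident frequency, and section-level risks are summed over the tunnel. All section lengths, rates, and consequence weights below are invented; a real QRA would derive the rates from a fitted Poisson regression and use full event trees rather than a single consequence number.

```python
import math

# Hedged sketch of section-based risk aggregation (invented rates).

def poisson_pmf(k, lam):
    """Probability of k events when the expected count is lam."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# Hypothetical sections: (length in km, accidents per km-year, consequence)
sections = [
    (1.2, 0.8, 2.0),   # dense-traffic section
    (0.6, 0.3, 1.5),   # lightly loaded section
    (2.0, 0.5, 3.0),   # long section with the toxic-gas top event
]

# Expected annual accident frequency per section, then aggregated risk.
freqs = [length * rate for length, rate, _ in sections]
risk = sum(f * c for f, (_, _, c) in zip(freqs, sections))

print(risk)                          # expected consequence per year
print(poisson_pmf(0, sum(freqs)))    # chance of an accident-free year
```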
Quantitative modeling of a gene's expression from its intergenic sequence.
Directory of Open Access Journals (Sweden)
Md Abul Hassan Samee
2014-03-01
Modeling a gene's expression from its intergenic locus and trans-regulatory context is a fundamental goal in computational biology. Owing to the distributed nature of cis-regulatory information and the poorly understood mechanisms that integrate such information, gene locus modeling is a more challenging task than modeling individual enhancers. Here we report the first quantitative model of a gene's expression pattern as a function of its locus. We model the expression readout of a locus in two tiers: (1) combinatorial regulation by transcription factors bound to each enhancer is predicted by a thermodynamics-based model, and (2) independent contributions from multiple enhancers are linearly combined to fit the gene expression pattern. The model does not require any prior knowledge about enhancers contributing toward a gene's expression. We demonstrate that the model captures the complex multi-domain expression patterns of anterior-posterior patterning genes in the early Drosophila embryo. Altogether, we model the expression patterns of 27 genes; these include several gap genes, pair-rule genes, and anterior, posterior, trunk, and terminal genes. We find that the model-selected enhancers for each gene overlap strongly with its experimentally characterized enhancers. Our findings also suggest the presence of sequence segments in the locus that would contribute ectopic expression patterns and hence were "shut down" by the model. We applied our model to identify the transcription factors responsible for forming the stripe boundaries of the studied genes. The resulting network of regulatory interactions exhibits a high level of agreement with known regulatory influences on the target genes. Finally, we analyzed whether and why our assumption of enhancer independence was necessary for the genes we studied. We found a deterioration of expression when binding sites in one enhancer were allowed to influence the readout of another enhancer. Thus, interference
Defining Scenarios: Linking Integrated Models, Regional Concerns, and Stakeholders
Hartmann, H. C.; Stewart, S.; Liu, Y.; Mahmoud, M.
2007-05-01
Scenarios are important tools for long-term planning, and there is great interest in using integrated models in scenario studies. However, scenario definition and assessment are creative, as well as scientific, efforts. Using facilitated creative processes, we have worked with stakeholders to define regionally significant scenarios that encompass a broad range of hydroclimatic, socioeconomic, and institutional dimensions. The regional scenarios subsequently inform the definition of local scenarios that work with context-specific integrated models that, individually, can address only a subset of overall regional complexity. Based on concerns of stakeholders in the semi-arid US Southwest, we prioritized three dimensions that are especially important, yet highly uncertain, for long-term planning: hydroclimatic conditions (increased variability, persistent drought), development patterns (urban consolidation, distributed rural development), and the nature of public institutions (stressed, proactive). Linking across real-world decision contexts and integrated modeling efforts poses challenges of creatively connecting the conceptual models held by both the research and stakeholder communities.
Modeling Error in Quantitative Macro-Comparative Research
Directory of Open Access Journals (Sweden)
Salvatore J. Babones
2015-08-01
Much quantitative macro-comparative research (QMCR) relies on a common set of published data sources to answer similar research questions using a limited number of statistical tools. Since all researchers have access to much the same data, one might expect quick convergence of opinion on most topics. In reality, of course, differences of opinion abound and persist. Many of these differences can be traced, implicitly or explicitly, to the different ways researchers choose to model error in their analyses. Much careful attention has been paid in the political science literature to the error structures characteristic of time series cross-sectional (TSCE) data, but much less attention has been paid to the modeling of error in broadly cross-national research involving large panels of countries observed at limited numbers of time points. Here, and especially in the sociology literature, multilevel modeling has become a hegemonic but often poorly understood research tool. I argue that widely used types of multilevel models, commonly known as fixed effects models (FEMs) and random effects models (REMs), can produce wildly spurious results when applied to trended data due to mis-specification of error. I suggest that in most commonly encountered scenarios, difference models are more appropriate for use in QMCR.
A quantitative model for integrating landscape evolution and soil formation
Vanwalleghem, T.; Stockmann, U.; Minasny, B.; McBratney, Alex B.
2013-06-01
Landscape evolution is closely related to soil formation, so quantitative modeling of the dynamics of soils and landscapes should be integrated. This paper presents a model, named the Model for Integrated Landscape Evolution and Soil Development (MILESD), which describes the interaction between pedogenetic and geomorphic processes. This mechanistic model includes the most significant soil formation processes, ranging from weathering to clay translocation, and combines these with the lateral redistribution of soil particles through erosion and deposition. The model is spatially explicit and simulates the vertical variation in soil horizon depth as well as basic soil properties such as texture and organic matter content. In addition, sediment export and its properties are recorded. The model is applied to a 6.25 km2 area in Werrikimbe National Park, Australia, simulating soil development over a period of 60,000 years. Comparison with field observations shows that the model accurately predicts trends in total soil thickness along a catena. Soil texture and bulk density are predicted reasonably well, with errors on the order of 10%; however, field observations show a much higher organic carbon content than predicted. At the landscape scale, different scenarios with varying erosion intensity result in only small changes of landscape-averaged soil thickness, while the response of the total organic carbon stored in the system is larger. Rates of sediment export show a highly nonlinear response to soil development stage, with a threshold, corresponding to the depletion of the soil reservoir, beyond which sediment export drops significantly.
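The weathering-erosion coupling at the heart of such models is often written as a soil-production function: soil thickness grows by bedrock weathering that decays exponentially with depth and shrinks by erosion. A hedged sketch with invented rate constants (not MILESD's calibrated values), run over the same 60,000-year horizon:

```python
import math

# Illustrative sketch (assumed functional forms and invented rates):
# dh/dt = P0 * exp(-h / H0) - E, i.e. exponential soil production
# minus a constant erosion rate.

P0 = 0.0003   # bare-bedrock weathering rate, m/yr (hypothetical)
H0 = 0.5      # decay depth of the production function, m (hypothetical)
E = 0.0001    # erosion rate, m/yr (hypothetical)

def simulate(h=0.0, dt=10.0, years=60000):
    for _ in range(int(years / dt)):
        dh = P0 * math.exp(-h / H0) - E
        h = max(0.0, h + dt * dh)
    return h

h_final = simulate()
# Steady state where production balances erosion: h* = H0 * ln(P0 / E).
h_star = H0 * math.log(P0 / E)
print(h_final, h_star)
```

The closed-form steady state shows why landscape-averaged thickness is insensitive to moderate changes in E, and why depleting the soil reservoir (h driven to 0) produces the threshold behavior in sediment export noted above.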
Modeling Cancer Metastasis using Global, Quantitative and Integrative Network Biology
DEFF Research Database (Denmark)
Schoof, Erwin; Erler, Janine
phosphorylation dynamics in a given biological sample. In Chapter III, we move into Integrative Network Biology, where, by combining two fundamental technologies (MS & NGS), we can obtain more in-depth insights into the links between cellular phenotype and genotype. Article 4 describes the proof...... cancer networks using Network Biology. Technologies key to this, such as Mass Spectrometry (MS), Next-Generation Sequencing (NGS) and High-Content Screening (HCS) are briefly described. In Chapter II, we cover how signaling networks and mutational data can be modeled in order to gain a better...
Stepwise kinetic equilibrium models of quantitative polymerase chain reaction
Directory of Open Access Journals (Sweden)
Cobbs Gary
2012-08-01
Background: Numerous models for interpreting quantitative PCR (qPCR) data appear in the recent literature. The most commonly used models assume that amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a selected part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Results: Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes that annealing of complementary target strands and annealing of target and primers are both reversible reactions that reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of the kinetic models are the same for solutions that are identical except possibly for different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous nonlinear curve fitting, with the same rate-constant values applying to all curves and each curve having a unique value for the initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the
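The stepwise idea, recomputing the amplification efficiency each cycle from the state of the annealing reaction rather than assuming it constant, can be sketched with a deliberately simplified saturation rule (this is an invented stand-in, not the paper's equilibrium solution; `K`, the initial amounts, and the units are all hypothetical):

```python
# Hedged sketch: per-cycle efficiency falls as primers are consumed, so
# amplification is exponential only in the early cycles.

def qpcr(target0, primer0, K=50.0, cycles=40):
    """Return the target copy number after each cycle (arbitrary units)."""
    target, primer, curve = target0, primer0, []
    for _ in range(cycles):
        eff = primer / (primer + K)       # annealing-limited efficiency
        new = min(target * eff, primer)   # cannot exceed remaining primers
        target += new
        primer -= new
        curve.append(target)
    return curve

curve = qpcr(target0=1.0, primer0=1e6)
print(curve[9], curve[-1])  # near-doubling early, plateau late
```

Fitting such a stepwise model to several dilution curves simultaneously, with shared rate constants and per-curve initial targets, is exactly the multi-curve fitting strategy the abstract describes.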
Modeling Human Blockers in Millimeter Wave Radio Links
Institute of Scientific and Technical Information of China (English)
Jonathan S. Lu; Daniel Steinbach; Patrick Cabrol; Philip Pietraski
2012-01-01
In this paper, we investigate the loss caused by multiple humans blocking millimeter wave links. We model human blockers as absorbing screens of infinite height with two knife-edges. We take a physical optics approach to computing the diffraction around the absorbing screens; this approach differs from the geometric optics approach described in much of the literature. The blocking model is validated by measuring the gain from multiple-human blocking configurations on an indoor link. The blocking gains predicted using Piazzi's numerical integration method (a physical optics method) agree well with measurements from approximately 2.7 dB down to -50 dB. Therefore, this model is suitable for real human blockers. The mean prediction error of the method is approximately -1.2 dB, and the standard deviation is approximately 5 dB.
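For intuition on the magnitudes involved, a single knife-edge loss can be estimated with the well-known ITU-R P.526 approximation, J(v) ≈ 6.9 + 20·log10(√((v-0.1)² + 1) + v - 0.1) dB for v > -0.78. Note this is a simple substitute illustration, not the paper's two-edge physical optics (Piazzi) integration, and the link geometry below is invented.

```python
import math

# Hedged sketch: single knife-edge diffraction loss via the ITU-R P.526
# approximation (a simpler stand-in for the physical optics computation).

def knife_edge_loss_db(h, d1, d2, wavelength):
    """h: edge height above the line of sight (m); d1, d2: distances (m)."""
    v = h * math.sqrt(2.0 * (d1 + d2) / (wavelength * d1 * d2))
    if v <= -0.78:
        return 0.0  # negligible diffraction loss
    return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

# Hypothetical 60 GHz indoor link (wavelength ~5 mm): blocker midway on a
# 10 m path, with the edge 20 cm above the direct ray.
loss = knife_edge_loss_db(h=0.2, d1=5.0, d2=5.0, wavelength=0.005)
print(round(loss, 1))  # → 21.0
```

Even this crude single-edge estimate gives losses of tens of dB at millimeter wave frequencies, consistent with humans being severe blockers on such links.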
Directory of Open Access Journals (Sweden)
D. N. Huntzinger
2010-10-01
Given the large differences between biospheric model estimates of regional carbon exchange, there is a need to understand and reconcile the predicted spatial variability of fluxes across models. This paper presents a set of quantitative tools that can be applied for comparing flux estimates in light of the inherent differences in model formulation. The presented methods include variogram analysis, variable selection, and geostatistical regression. These methods are evaluated in terms of their ability to assess and identify differences in spatial variability in flux estimates across North America among a small subset of models, as well as differences in the environmental drivers that appear to have the greatest control over the spatial variability of predicted fluxes. The examined models are the Simple Biosphere model (SiB 3.0), the Carnegie Ames Stanford Approach (CASA), and CASA coupled with the Global Fire Emissions Database (CASA GFEDv2), and the analyses are performed on model-predicted net ecosystem exchange, gross primary production, and ecosystem respiration. Variogram analysis reveals consistent seasonal differences in spatial variability among modeled fluxes at a 1°×1° spatial resolution. However, significant differences are observed in the overall magnitude of the carbon flux spatial variability across models, in both net ecosystem exchange and component fluxes. Results of the variable selection and geostatistical regression analyses suggest fundamental differences between the models in terms of the factors that control the spatial variability of predicted flux. For example, carbon flux is more strongly correlated with percent land cover in CASA GFEDv2 than in SiB or CASA. Some of these factors can be linked back to model formulation, and would have been difficult to identify simply by comparing net fluxes between models. Overall, the quantitative approach presented here provides a set of tools for comparing predicted grid-scale fluxes across
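The variogram analysis mentioned above summarizes spatial structure as the half mean squared difference between values at a given separation (lag). A minimal empirical semivariogram on an invented 1-D transect of flux values (the real analysis works on 1°×1° gridded fields):

```python
# Sketch of an empirical semivariogram for a 1-D transect (invented data):
# gamma(h) = 0.5 * mean of squared differences between points at lag h.
# Rising gamma with lag indicates spatial structure in the field.

def semivariogram(values, lag):
    pairs = [(values[i], values[i + lag]) for i in range(len(values) - lag)]
    return 0.5 * sum((a - b) ** 2 for a, b in pairs) / len(pairs)

flux = [0.0, 0.2, 0.1, 0.5, 0.9, 1.1, 1.0, 1.4, 1.3, 1.7]  # trend-like field
gammas = [semivariogram(flux, h) for h in (1, 2, 3)]
print(gammas)
```

Comparing the shape and magnitude of such curves between models is what reveals the seasonal and cross-model differences in spatial variability reported above.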
Modeling logistic performance in quantitative microbial risk assessment.
Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke
2010-01-01
In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain with its queues (storages, shelves) and mechanisms for ordering products is usually not taken into account. As a consequence, storage times-mutually dependent in successive steps in the chain-cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
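The discrete-event idea, deriving storage times from explicit arrival and review events rather than sampling them independently, can be sketched with a toy storage node (arrival and review intervals are invented; a real QMRA chain would have several such queues in series and feed the storage times into a microbial growth model):

```python
import heapq

# Minimal discrete-event sketch (hypothetical rates): product lots arrive
# at a storage point, and a periodic review ships all stock on the shelf;
# the recorded storage times are the quantity of interest.

def simulate(horizon=100.0, arrive_every=1.0, review_every=24.0):
    events = [(0.0, "arrive"), (review_every, "review")]
    heapq.heapify(events)
    shelf, storage_times = [], []
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrive":
            shelf.append(t)  # remember when this lot arrived
            heapq.heappush(events, (t + arrive_every, "arrive"))
        else:  # periodic review: ship everything currently stored
            storage_times.extend(t - t_in for t_in in shelf)
            shelf.clear()
            heapq.heappush(events, (t + review_every, "review"))
    return storage_times

times = simulate()
print(max(times), sum(times) / len(times))
```

Because successive lots wait for the same review event, their storage times are strongly dependent, which is exactly the correlation structure that independent sampling of storage times misses and that shapes the tails of the risk distribution.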
Fusing Quantitative Requirements Analysis with Model-based Systems Engineering
Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven
2006-01-01
A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.
Quantitative model studies for interfaces in organic electronic devices
Gottfried, J. Michael
2016-11-01
In organic light-emitting diodes and similar devices, organic semiconductors are typically contacted by metal electrodes. Because the resulting metal/organic interfaces have a large impact on the performance of these devices, their quantitative understanding is indispensable for the further rational development of organic electronics. A study by Kröger et al (2016 New J. Phys. 18 113022) of an important single-crystal based model interface provides detailed insight into its geometric and electronic structure and delivers valuable benchmark data for computational studies. In view of the differences between typical surface-science model systems and real devices, a ‘materials gap’ is identified that needs to be addressed by future research to make the knowledge obtained from fundamental studies even more beneficial for real-world applications.
Quantitative identification of technological discontinuities using simulation modeling
Park, Hyunseok
2016-01-01
The aim of this paper is to develop and test metrics to quantitatively identify technological discontinuities in a knowledge network. We developed five metrics based on innovation theories and tested them on a simulation-model-based knowledge network with a hypothetically designed discontinuity. The designed discontinuity is modeled as a node which combines two different knowledge streams and whose knowledge is dominantly persistent in the knowledge network. The performance of the proposed metrics was evaluated by how well the metrics can distinguish the designed discontinuity from other nodes in the knowledge network. The simulation results show that persistence times the number of converging main paths provides the best performance in identifying the designed discontinuity: the designed discontinuity was identified as one of the top 3 patents with 96~99% probability by Metric 5 and, depending on the size of the domain, this is 12~34% better than the performance of the second-best metric. Beyond the simulation ...
A Quantitative Theory Model of a Photobleaching Mechanism
Institute of Scientific and Technical Information of China (English)
陈同生; 曾绍群; 周炜; 骆清铭
2003-01-01
A photobleaching model, D-P (dye-photon interaction) and D-O (dye-oxygen oxidative reaction) photobleaching theory, is proposed. The quantitative power dependences of photobleaching rates with both one- and two-photon excitation (1PE and TPE) are obtained. This photobleaching model can be used to elucidate our and other experimental results commendably. Experimental studies of the photobleaching rates for rhodamine B with TPE under unsaturation conditions reveal that the power dependences of photobleaching rates increase with increasing dye concentration, and that the photobleaching rate of a single molecule increases with the second power of the excitation intensity, which is different from the high-order (>3) nonlinear dependence of ensemble molecules.
Allan, Andrew; Barbour, Emily; Salehin, Mashfiqus; Munsur Rahman, Md.; Hutton, Craig; Lazar, Attila
2016-04-01
A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.
Allan, A.; Barbour, E.; Salehin, M.; Hutton, C.; Lázár, A. N.; Nicholls, R. J.; Rahman, M. M.
2015-12-01
A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.
An infinitesimal model for quantitative trait genomic value prediction.
Directory of Open Access Journals (Sweden)
Zhiqiu Hu
We developed a marker-based infinitesimal model for quantitative trait analysis. In contrast to the classical infinitesimal model, we now have new information about the segregation of every individual locus of the entire genome. Under this new model, we propose that the genetic effect of an individual locus is a function of the genome location (a continuous quantity). The overall genetic value of an individual is the weighted integral of the genetic effect function along the genome. Numerical integration is performed to find the integral, which requires partitioning the entire genome into a finite number of bins. Each bin may contain many markers. The integral is approximated by the weighted sum of all the bin effects. We thus turn the problem of marker analysis into bin analysis, so that the model dimension decreases from a virtual infinity to a finite number of bins. This new approach can efficiently handle a virtually unlimited number of markers without marker selection. The marker-based infinitesimal model requires high linkage disequilibrium of all markers within a bin. For populations with low or no linkage disequilibrium, we develop an adaptive infinitesimal model. Both the original and the adaptive models are tested using simulated data as well as beef cattle data. The simulated data analysis shows that there is always an optimal number of bins at which the predictability of the bin model is much greater than that of the original marker analysis. The beef cattle data analysis indicates that the bin model can increase the predictability from 10% (multiple marker analysis) to 33% (multiple bin analysis). The marker-based infinitesimal model paves the way toward genetic mapping and genomic selection using whole genome sequence data.
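The binning step (collapsing many markers into a few bin effects) can be sketched as follows; the equal-width bins and mean-based bin score are illustrative simplifications of the paper's weighted-integral approximation:

```python
import numpy as np

def bin_genotypes(markers, n_bins):
    """Collapse an (individuals x markers) genotype matrix into bin scores.
    Each bin's score is the mean of its markers -- a simple stand-in for the
    weighted sum that approximates the genome integral in the bin model."""
    bin_indices = np.array_split(np.arange(markers.shape[1]), n_bins)
    return np.column_stack([markers[:, idx].mean(axis=1) for idx in bin_indices])

rng = np.random.default_rng(1)
geno = rng.integers(0, 3, size=(50, 1000)).astype(float)  # 50 animals, 1000 markers (0/1/2)
X = bin_genotypes(geno, 20)  # model dimension drops from 1000 markers to 20 bins
```

A downstream regression would then fit 20 bin effects instead of 1000 marker effects, which is the dimension reduction the abstract describes.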
Quantitative modeling of the ionospheric response to geomagnetic activity
Directory of Open Access Journals (Sweden)
T. J. Fuller-Rowell
A physical model of the coupled thermosphere and ionosphere has been used to determine the accuracy of model predictions of the ionospheric response to geomagnetic activity, and assess our understanding of the physical processes. The physical model is driven by empirical descriptions of the high-latitude electric field and auroral precipitation, as measures of the strength of the magnetospheric sources of energy and momentum to the upper atmosphere. Both sources are keyed to the time-dependent TIROS/NOAA auroral power index. The output of the model is the departure of the ionospheric F region from the normal climatological mean. A 50-day interval towards the end of 1997 has been simulated with the model for two cases. The first simulation uses only the electric fields and auroral forcing from the empirical models, and the second has an additional source of random electric field variability. In both cases, output from the physical model is compared with F-region data from ionosonde stations. Quantitative model/data comparisons have been performed to move beyond the conventional "visual" scientific assessment, in order to determine the value of the predictions for operational use. For this study, the ionosphere at two ionosonde stations has been studied in depth, one each from the northern and southern mid-latitudes. The model clearly captures the seasonal dependence in the ionospheric response to geomagnetic activity at mid-latitude, reproducing the tendency for decreased ion density in the summer hemisphere and increased densities in winter. In contrast to the "visual" success of the model, the detailed quantitative comparisons, which are necessary for space weather applications, are less impressive. The accuracy, or value, of the model has been quantified by evaluating the daily standard deviation, the root-mean-square error, and the correlation coefficient between the data and model predictions. The modeled quiet-time variability, or standard
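The model/data skill measures named above (root-mean-square error and correlation coefficient) are standard; a minimal sketch with hypothetical observation and model series:

```python
import math

def rmse(obs, model):
    """Root-mean-square error between paired observed and modeled values."""
    return math.sqrt(sum((o - m) ** 2 for o, m in zip(obs, model)) / len(obs))

def pearson_r(obs, model):
    """Pearson correlation coefficient between paired series."""
    n = len(obs)
    mo, mm = sum(obs) / n, sum(model) / n
    cov = sum((o - mo) * (m - mm) for o, m in zip(obs, model))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sm = math.sqrt(sum((m - mm) ** 2 for m in model))
    return cov / (so * sm)

# Hypothetical daily F-region densities (observed vs. modeled departures).
obs = [1.0, 2.0, 3.0, 4.0]
mod = [1.1, 1.9, 3.2, 3.8]
```

A "visually" good model can still have a large RMSE; reporting both numbers, as the study does, separates bias from phase agreement.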
Automated quantitative gait analysis in animal models of movement disorders
Directory of Open Access Journals (Sweden)
Vandeputte Caroline
2010-08-01
Background: Accurate and reproducible behavioral tests in animal models are of major importance in the development and evaluation of new therapies for central nervous system disease. In this study we investigated for the first time gait parameters of rat models of Parkinson's disease (PD), Huntington's disease (HD), and stroke using the CatWalk method, a novel automated gait analysis test. Static and dynamic gait parameters were measured in all animal models, and these data were compared to readouts of established behavioral tests, such as the cylinder test in the PD and stroke rats and the rotarod test for the HD group. Results: Hemiparkinsonian rats were generated by unilateral injection of the neurotoxin 6-hydroxydopamine into the striatum or the medial forebrain bundle. For Huntington's disease, a transgenic rat model expressing a truncated huntingtin fragment with multiple CAG repeats was used. Thirdly, a stroke model was generated by a photothrombotically induced infarct in the right sensorimotor cortex. We found that multiple gait parameters were significantly altered in all three disease models compared to their respective controls. Behavioural deficits could be efficiently measured using the cylinder test in the PD and stroke animals, and in the case of the PD model, the deficits in gait essentially confirmed results obtained by the cylinder test. However, in the HD model and the stroke model the CatWalk analysis proved more sensitive than the rotarod test and also added new and more detailed information on specific gait parameters. Conclusion: The automated quantitative gait analysis test may be a useful tool to study both motor impairment and recovery associated with various neurological motor disorders.
A quantitative real-time PCR method using an X-linked gene for sex typing in pigs.
Ballester, Maria; Castelló, Anna; Ramayo-Caldas, Yuliaxis; Folch, Josep M
2013-06-01
At present, a wide range of molecular sex-typing protocols for wild and domestic animals are available. In pigs, most of these methods are based on PCR amplification of X-Y homologous genes followed by gel electrophoresis, which is time-consuming and in some cases expensive. In this paper, we describe, for the first time, a SYBR green-based quantitative real-time PCR (qPCR) assay using an X-linked gene, glycoprotein M6B, for genetic sexing of pigs. Taking into account the difference in glycoprotein M6B gene copy number between genders, we determined the correct sex of 54 pig samples, from either diaphragm or hair follicle, from different breeds using the 2^(−ΔΔCt) method for relative quantification. Our qPCR assay represents a quick, inexpensive, and reliable tool for sex determination in pigs. This new protocol could be easily adapted to other species in which sex determination is required.
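The 2^(−ΔΔCt) relative-quantification step can be written out directly; the Ct values below are hypothetical, chosen only to show how an X-linked gene separates the sexes (one X copy in males, two in females):

```python
def ddct_ratio(ct_target_sample, ct_ref_sample, ct_target_cal, ct_ref_cal):
    """Relative copy number by the 2^(-ddCt) method: normalize the target
    gene's Ct against a reference gene, then against a calibrator sample."""
    dct_sample = ct_target_sample - ct_ref_sample  # delta-Ct of the test sample
    dct_cal = ct_target_cal - ct_ref_cal           # delta-Ct of the calibrator
    return 2.0 ** (-(dct_sample - dct_cal))        # 2^(-ddCt)

# With a female calibrator, a female sample should give a ratio near 1.0
# and a male sample (half the X-linked copy number, ~1 extra Ct cycle)
# a ratio near 0.5. All Ct values here are hypothetical.
female = ddct_ratio(24.0, 20.0, 24.0, 20.0)  # -> 1.0
male = ddct_ratio(25.0, 20.0, 24.0, 20.0)    # -> 0.5
```

A simple threshold on the ratio (e.g. 0.75) would then assign the sex call.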
Roosa, Stéphanie; Wauven, Corinne Vander; Billon, Gabriel; Matthijs, Sandra; Wattiez, Ruddy; Gillan, David C
2014-10-01
Pseudomonas bacteria are ubiquitous Gram-negative and aerobic microorganisms that are known to harbor metal resistance mechanisms such as efflux pumps and intracellular redox enzymes. Specific Pseudomonas bacteria have been quantified in some metal-contaminated environments, but the entire Pseudomonas population has been poorly investigated under these conditions, and the link with metal bioavailability was not previously examined. In the present study, quantitative PCR and cell cultivation were used to monitor and characterize the Pseudomonas population at 4 different sediment sites contaminated with various levels of metals. At the same time, total metals and metal bioavailability (as estimated using an HCl 1 m extraction) were measured. It was found that the total level of Pseudomonas, as determined by qPCR using two different genes (oprI and the 16S rRNA gene), was positively and significantly correlated with total and HCl-extractable Cu, Co, Ni, Pb and Zn, with high correlation coefficients (>0.8). Metal-contaminated sediments featured isolates of the Pseudomonas putida, Pseudomonas fluorescens, Pseudomonas lutea and Pseudomonas aeruginosa groups, with other bacterial genera such as Mycobacterium, Klebsiella and Methylobacterium. It is concluded that Pseudomonas bacteria do proliferate in metal-contaminated sediments, but are still part of a complex community.
Quantitative model of the growth of floodplains by vertical accretion
Moody, J.A.; Troutman, B.M.
2000-01-01
A simple one-dimensional model is developed to quantitatively predict the change in elevation, over a period of decades, for vertically accreting floodplains. This unsteady model approximates the monotonic growth of a floodplain as an incremental but constant increase of net sediment deposition per flood for those floods of a partial duration series that exceed a threshold discharge corresponding to the elevation of the floodplain. Sediment deposition from each flood increases the elevation of the floodplain and consequently the magnitude of the threshold discharge, resulting in a decrease in the number of floods and in the growth rate of the floodplain. Floodplain growth curves predicted by this model are compared to empirical growth curves based on dendrochronology and to direct field measurements at five floodplain sites. The model was used to predict the value of net sediment deposition per flood which best fits (in a least squares sense) the empirical and field measurements; these values fall within the range of independent estimates of the net sediment deposition per flood based on empirical equations. These empirical equations permit the application of the model to estimate floodplain growth for other floodplains throughout the world that lack detailed data on sediment deposition during individual floods. Copyright (C) 2000 John Wiley and Sons, Ltd.
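The feedback the abstract describes (each overtopping flood raises the floodplain, which raises the threshold for later floods) can be sketched in a few lines; the exponential annual-peak distribution and parameter values are illustrative assumptions, not the paper's calibration:

```python
import random

def simulate_floodplain(years, deposit_per_flood, start_elev, seed=0):
    """Toy vertical-accretion model: each year draws a peak flood stage;
    floods whose stage exceeds the current floodplain elevation deposit a
    fixed increment of sediment, raising the threshold for later floods."""
    rng = random.Random(seed)
    elev = start_elev
    history = [elev]
    for _ in range(years):
        peak_stage = rng.expovariate(1.0)  # hypothetical annual peak stage (m)
        if peak_stage > elev:              # flood overtops the floodplain
            elev += deposit_per_flood
        history.append(elev)
    return history

h = simulate_floodplain(200, 0.02, 0.1)
```

Elevation grows monotonically, and because the threshold rises with elevation, overtopping floods become rarer with time, reproducing the decelerating growth curve the model predicts.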
Quantitative Model for Estimating Soil Erosion Rates Using 137Cs
Institute of Scientific and Technical Information of China (English)
YANG Hao; ZHANG Qing; et al.
1998-01-01
A quantitative model was developed to relate the amount of 137Cs loss from the soil profile to the rate of soil erosion. Following the mass balance model, the depth distribution pattern of 137Cs in the soil profile, the radioactive decay of 137Cs, the sampling year, and the difference in 137Cs fallout amount among years were taken into consideration. By introducing typical depth distribution functions of 137Cs into the model, detailed equations were obtained for different soils. The model shows that the rate of soil erosion is mainly controlled by the depth distribution pattern of 137Cs, the year of sampling, and the percentage reduction in total 137Cs. The relationship between the rate of soil loss and 137Cs depletion is neither linear nor logarithmic. The depth distribution pattern of 137Cs is a major factor for estimating the rate of soil loss, and the soil erosion rate is directly related to the fraction of 137Cs content near the soil surface. The influences of the radioactive decay of 137Cs, the sampling year, and the 137Cs input fraction are small compared with the others.
Model of the UIC link suspension for freight wagons
Energy Technology Data Exchange (ETDEWEB)
Piotrowski, J. [Warsaw Univ. of Technology, Inst. of Vehicles, ul. Narbutta Warszawa (Poland)
2003-12-01
The paper presents a model of the UIC link suspension for freight wagons with emphasis on its longitudinal and lateral characteristics, which influence the lateral dynamics of the vehicle. The functioning of the suspension in the horizontal plane is realised by a number of technical (pivoted) pendulums composing linkages. The main feature of the joints of linkages is internal rolling/sliding in the presence of dry friction. The dissipation of energy by dry friction in the joints is the only source of damping, which influences the lateral dynamics of the vehicle. After detailed modelling of the technical pendulum, phenomenological models of the suspension are built, which reproduce the characteristics of the suspension using simple elements. A three-parameter model with one dry-friction slider and two linear springs reproduces the lateral characteristic of the suspension. A nine-parameter model with four dry-friction sliders and five springs reproduces the longitudinal characteristic. The models, using a method of non-smooth mechanics, may be directly implemented to vehicle/track dynamic simulations. (orig.)
Goal relevance as a quantitative model of human task relevance.
Tanner, James; Itti, Laurent
2017-03-01
The concept of relevance is used ubiquitously in everyday life. However, a general quantitative definition of relevance has been lacking, especially as pertains to quantifying the relevance of sensory observations to one's goals. We propose a theoretical definition for the information value of data observations with respect to a goal, which we call "goal relevance." We consider the probability distribution of an agent's subjective beliefs over how a goal can be achieved. When new data are observed, their goal relevance is measured as the Kullback-Leibler divergence between belief distributions before and after the observation. Theoretical predictions about the relevance of different obstacles in simulated environments agreed with the majority response of 38 human participants in 83.5% of trials, beating multiple machine-learning models. Our new definition of goal relevance is general, quantitative, explicit, and allows one to put a number onto the previously elusive notion of relevance of observations to a goal. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
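The definition above is directly computable: goal relevance is the KL divergence from the prior belief distribution to the posterior. A minimal sketch with hypothetical belief distributions over three candidate routes to a goal:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in bits for discrete distributions over the same support."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def goal_relevance(prior, posterior):
    """Goal relevance of an observation: how much it shifts the agent's
    beliefs about how the goal can be achieved (KL of posterior vs. prior)."""
    return kl_divergence(posterior, prior)

prior = [1/3, 1/3, 1/3]                               # three candidate routes
irrelevant = goal_relevance(prior, [1/3, 1/3, 1/3])   # observation changed nothing
relevant = goal_relevance(prior, [0.8, 0.1, 0.1])     # observation favored route 1
```

An observation that leaves beliefs unchanged has zero goal relevance; the larger the belief shift, the larger the score.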
Virtual Models Linked with Physical Components in Construction
DEFF Research Database (Denmark)
Sørensen, Kristian Birch
The use of virtual models supports a fundamental change in the working practice of the construction industry. It changes the primary information carrier (drawings) from simple manually created depictions of the building under construction to visually realistic digital representations that also … components in the construction process and thereby improving the information handling. The present PhD project has examined the potential of establishing such a digital link between virtual models and physical components in construction. This is done by integrating knowledge of civil engineering, software … engineering and business development in an iterative and user needs centred system development process. The analysis of future business perspectives presents an extensive number of new working processes that can assist in solving major challenges in the construction industry. Three of the most promising...
DEFF Research Database (Denmark)
Oviedo, J M; Valiño, F; Plasencia, I
2001-01-01
We have developed an enzyme-linked immunosorbent assay (ELISA) that uses polyclonal or monoclonal anti-surfactant protein SP-B antibodies to quantitate purified SP-B in chloroform/methanol and in chloroform/methanol extracts of whole pulmonary surfactant at nanogram levels. This method has been...
Stockley, E W; Cole, H M; Brown, A D; Wheal, H V
1993-04-01
A system for accurately reconstructing neurones from optical sections taken at high magnification is described. Cells are digitised on a 68000-based microcomputer to form a database consisting of a series of linked nodes each consisting of x, y, z coordinates and an estimate of dendritic diameter. This database is used to generate three-dimensional (3-D) displays of the neurone and allows quantitative analysis of the cell volume, surface area and dendritic length. Images of the cell can be manipulated locally or transferred to an IBM 3090 mainframe where a wireframe model can be displayed on an IBM 5080 graphics terminal and rotated interactively in real time, allowing visualisation of the cell from all angles. Space-filling models can also be produced. Reconstructions can also provide morphological data for passive electrical simulations of hippocampal pyramidal cells.
A Quantitative Model to Estimate Drug Resistance in Pathogens
Directory of Open Access Journals (Sweden)
Frazier N. Baker
2016-12-01
Pneumocystis pneumonia (PCP) is an opportunistic infection that occurs in humans and other mammals with debilitated immune systems. These infections are caused by fungi in the genus Pneumocystis, which are not susceptible to standard antifungal agents. Despite decades of research and drug development, the primary treatment and prophylaxis for PCP remains a combination of trimethoprim (TMP) and sulfamethoxazole (SMX) that targets two enzymes in folic acid biosynthesis, dihydrofolate reductase (DHFR) and dihydropteroate synthase (DHPS), respectively. There is growing evidence of emerging resistance by Pneumocystis jirovecii (the species that infects humans) to TMP-SMX associated with mutations in the targeted enzymes. In the present study, we report the development of an accurate quantitative model to predict changes in the binding affinity of inhibitors (Ki, IC50) to the mutated proteins. The model is based on evolutionary information and amino acid covariance analysis. Predicted changes in binding affinity upon mutation correlate highly with the experimentally measured data. While trained on Pneumocystis jirovecii DHFR/TMP data, the model shows similar or better performance when evaluated on the resistance data for a different inhibitor of PjDHFR, another drug/target pair (PjDHPS/SMX), and another organism (Staphylococcus aureus DHFR/TMP). Therefore, we anticipate that the developed prediction model will be useful in the evaluation of possible resistance of newly sequenced variants of the pathogen and can be extended to other drug targets and organisms.
Quantitative Modeling of Human-Environment Interactions in Preindustrial Time
Sommer, Philipp S.; Kaplan, Jed O.
2017-04-01
Quantifying human-environment interactions and anthropogenic influences on the environment prior to the Industrial revolution is essential for understanding the current state of the earth system. This is particularly true for the terrestrial biosphere, but marine ecosystems and even climate were likely modified by human activities centuries to millennia ago. Direct observations are however very sparse in space and time, especially as one considers prehistory. Numerical models are therefore essential to produce a continuous picture of human-environment interactions in the past. Agent-based approaches, while widely applied to quantifying human influence on the environment in localized studies, are unsuitable for global spatial domains and Holocene timescales because of computational demands and large parameter uncertainty. Here we outline a new paradigm for the quantitative modeling of human-environment interactions in preindustrial time that is adapted to the global Holocene. Rather than attempting to simulate agency directly, the model is informed by a suite of characteristics describing those things about society that cannot be predicted on the basis of environment, e.g., diet, presence of agriculture, or range of animals exploited. These categorical data are combined with the properties of the physical environment in a coupled human-environment model. The model is, at its core, a dynamic global vegetation model with a module for simulating crop growth that is adapted for preindustrial agriculture. This allows us to simulate yield and calories for feeding both humans and their domesticated animals. We couple this basic caloric availability with a simple demographic model to calculate potential population, and, constrained by labor requirements and land limitations, we create scenarios of land use and land cover on a moderate-resolution grid. We further implement a feedback loop where anthropogenic activities lead to changes in the properties of the physical
Integrative modelling reveals mechanisms linking productivity and plant species richness.
Grace, James B; Anderson, T Michael; Seabloom, Eric W; Borer, Elizabeth T; Adler, Peter B; Harpole, W Stanley; Hautier, Yann; Hillebrand, Helmut; Lind, Eric M; Pärtel, Meelis; Bakker, Jonathan D; Buckley, Yvonne M; Crawley, Michael J; Damschen, Ellen I; Davies, Kendi F; Fay, Philip A; Firn, Jennifer; Gruner, Daniel S; Hector, Andy; Knops, Johannes M H; MacDougall, Andrew S; Melbourne, Brett A; Morgan, John W; Orrock, John L; Prober, Suzanne M; Smith, Melinda D
2016-01-21
How ecosystem productivity and species richness are interrelated is one of the most debated subjects in the history of ecology. Decades of intensive study have yet to discern the actual mechanisms behind observed global patterns. Here, by integrating the predictions from multiple theories into a single model and using data from 1,126 grassland plots spanning five continents, we detect the clear signals of numerous underlying mechanisms linking productivity and richness. We find that an integrative model has substantially higher explanatory power than traditional bivariate analyses. In addition, the specific results unveil several surprising findings that conflict with classical models. These include the isolation of a strong and consistent enhancement of productivity by richness, an effect in striking contrast with superficial data patterns. Also revealed is a consistent importance of competition across the full range of productivity values, in direct conflict with some (but not all) proposed models. The promotion of local richness by macroecological gradients in climatic favourability, generally seen as a competing hypothesis, is also found to be important in our analysis. The results demonstrate that an integrative modelling approach leads to a major advance in our ability to discern the underlying processes operating in ecological systems.
A simple model linking galaxy and dark matter evolution
Energy Technology Data Exchange (ETDEWEB)
Birrer, Simon; Lilly, Simon; Amara, Adam; Paranjape, Aseem; Refregier, Alexandre, E-mail: simon.birrer@phys.ethz.ch, E-mail: simon.lilly@phys.ethz.ch [Institute for Astronomy, Department of Physics, ETH Zurich, Wolfgang-Pauli-Strasse 27, 8093 Zurich (Switzerland)
2014-09-20
We construct a simple phenomenological model for the evolving galaxy population by incorporating predefined baryonic prescriptions into a dark matter hierarchical merger tree. The model is based on the simple gas-regulator model introduced by Lilly et al., coupled with the empirical quenching rules of Peng et al. The simplest model already does quite well in reproducing, without re-adjusting the input parameters, many observables, including the main sequence sSFR-mass relation, the faint end slope of the galaxy mass function, and the shape of the star forming and passive mass functions. Similar to observations and/or the recent phenomenological model of Behroozi et al., which was based on epoch-dependent abundance-matching, our model also qualitatively reproduces the evolution of the main sequence sSFR(z) and SFRD(z) star formation rate density relations, the M_s–M_h stellar-to-halo mass relation, and the SFR–M_h relation. Quantitatively the evolution of sSFR(z) and SFRD(z) is not steep enough, the M_s–M_h relation is not quite peaked enough, and, surprisingly, the ratio of quenched to star forming galaxies around M* is not quite high enough. We show that these deficiencies can simultaneously be solved by ad hoc allowing galaxies to re-ingest some of the gas previously expelled in winds, provided that this is done in a mass-dependent and epoch-dependent way. These allow the model galaxies to reduce an inherent tendency to saturate their star formation efficiency, which emphasizes how efficient galaxies around M* are in converting baryons into stars and highlights the fact that quenching occurs at the point when galaxies are rapidly approaching the maximum possible efficiency of converting baryons into stars.
Mechanics of neutrophil phagocytosis: experiments and quantitative models.
Herant, Marc; Heinrich, Volkmar; Dembo, Micah
2006-05-01
To quantitatively characterize the mechanical processes that drive phagocytosis, we observed the FcγR-driven engulfment of antibody-coated beads of diameters 3 μm to 11 μm by initially spherical neutrophils. In particular, the time course of cell morphology, of bead motion, and of cortical tension were determined. Here, we introduce a number of mechanistic models for phagocytosis and test their validity by comparing the experimental data with finite element computations for multiple bead sizes. We find that the optimal models involve two key mechanical interactions: a repulsion or pressure between cytoskeleton and free membrane that drives protrusion, and an attraction between cytoskeleton and membrane newly adherent to the bead that flattens the cell into a thin lamella. Other models such as cytoskeletal expansion or swelling appear to be ruled out as main drivers of phagocytosis because of the characteristics of bead motion during engulfment. We finally show that the protrusive force necessary for the engulfment of large beads points towards storage of strain energy in the cytoskeleton over a large distance from the leading edge (approximately 0.5 μm), and that the flattening force can plausibly be generated by the known concentrations of unconventional myosins at the leading edge.
The Linked Dual Representation model of vocal perception and production
Directory of Open Access Journals (Sweden)
Sean eHutchins
2013-11-01
The voice is one of the most important media for communication, yet there is a wide range of abilities in both the perception and production of the voice. In this article, we review this range of abilities, focusing on pitch accuracy as a particularly informative case, and look at the factors underlying these abilities. Several classes of models have been posited describing the relationship between vocal perception and production, and we review the evidence for and against each class of model. We look at how the voice is different from other musical instruments and review evidence about both the association and the dissociation between vocal perception and production abilities. Finally, we introduce the Linked Dual Representation model, a new approach which can account for the broad patterns in prior findings, including trends in the data which might seem to be countervailing. We discuss how this model interacts with higher-order cognition and examine its predictions about several aspects of vocal perception and production.
Quantitative Modeling of the Alternative Pathway of the Complement System.
Zewde, Nehemiah; Gorham, Ronald D; Dorado, Angel; Morikis, Dimitrios
2016-01-01
The complement system is an integral part of innate immunity that detects and eliminates invading pathogens through a cascade of reactions. The destructive effects of complement activation on host cells are inhibited through versatile regulators that are present in plasma and bound to membranes. Impairment in the capacity of these regulators to function properly results in autoimmune diseases. To better understand the delicate balance between complement activation and regulation, we have developed a comprehensive quantitative model of the alternative pathway. Our model incorporates a system of ordinary differential equations that describes the dynamics of the four steps of the alternative pathway under physiological conditions: (i) initiation (fluid phase), (ii) amplification (surfaces), (iii) termination (pathogen), and (iv) regulation (host cell and fluid phase). We have examined complement activation and regulation on different surfaces, using the cellular dimensions of a characteristic bacterium (E. coli) and host cell (human erythrocyte). In addition, we have incorporated neutrophil-secreted properdin into the model, highlighting the cross talk of neutrophils with the alternative pathway in coordinating innate immunity. Our study yields a series of time-dependent response data for all alternative pathway proteins, fragments, and complexes. We demonstrate the robustness of the alternative pathway on the surface of pathogens, where complement components were able to saturate the entire region in about 54 minutes while occupying less than one percent of the host cell surface over the same period. Our model reveals that tight regulation of complement starts in the fluid phase, where propagation of the alternative pathway is inhibited through the dismantling of fluid-phase convertases. Our model also depicts the intricate role that properdin released from neutrophils plays in initiating and propagating the alternative pathway during bacterial infection.
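The four-step ODE system itself is not reproduced in the abstract; a deliberately minimal caricature (one lumped "active fragment" species and made-up rate constants, not the authors' model) shows how the activation-regulation balance can saturate on a weakly regulated, pathogen-like surface while staying low on a strongly regulated, host-like one:

```python
# Toy sketch, not the published model: one lumped "active fragment" a with
# initiation (k_act), saturating amplification (k_amp) and regulation (k_reg),
# integrated with forward Euler.
def simulate(k_act=0.5, k_amp=2.0, k_reg=4.0, dt=0.001, t_end=10.0):
    a = 0.0
    for _ in range(int(t_end / dt)):
        da = k_act + k_amp * a * (1.0 - a) - k_reg * a
        a = max(0.0, a + dt * da)
    return a

weakly_regulated = simulate(k_reg=0.5)   # activation saturates (pathogen-like)
regulated = simulate(k_reg=4.0)          # regulators keep activation low (host-like)
print(round(weakly_regulated, 3), round(regulated, 3))
```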
Gnitetskaya, Tatyana; Ivanova, Elena
2016-08-01
An application of the graph model of inter-subject links to university courses of Physics and Chemistry is presented in this article. A part of the inter-subject space, with directions of inter-subject links from Physics to Chemistry within the group of physical concepts, is shown. The graph model of inter-subject links includes quantitative indicators; their numerical values are given in the article. The degree of connectedness between the Physics and Chemistry courses is discussed. The effect of course placement within a curriculum on the value of this connectedness is shown: the placement can provide for studying the courses at the same time or consecutively, with one course preceding the other.
A quantitative model of technological catch-up
Directory of Open Access Journals (Sweden)
Hossein Gholizadeh
2015-02-01
This article presents a quantitative model for the analysis of the technological gap. The rates of development of technological leaders and followers in nanotechnology are expressed in terms of coupled equations. On the basis of this model, the first step studies the comparative technological gap and its rate of change, allowing us to calculate the dynamics of the gap between leader and follower. In the second step, we estimate the technology gap using the metafrontier approach and then test the relationship between the technology gap and the dimensions of technological catch-up identified in the previous step. The usefulness of this approach is then demonstrated in the analysis of Iran's technological gap in nanotechnology relative to the leaders in the Middle East and the world. We present the behaviors of the technological leader and followers. Finally, Iran's position is identified, the effective dimensions of catch-up are discussed, and suggestions are offered that could form a foundation for Iran's long-term policies.
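The coupled equations are not given in the abstract; one plausible sketch, assuming logistic growth for both parties plus a spillover ("advantage of backwardness") term for the follower, shows how a gap can first widen and then close:

```python
# Assumed functional forms (not the article's equations): the leader grows
# logistically toward the frontier K; the follower additionally benefits from
# a spillover proportional to the gap, so the gap eventually narrows.
def technology_gap(r_leader=0.3, r_follower=0.25, spillover=0.5,
                   K=1.0, dt=0.01, t_end=40.0):
    leader, follower = 0.10, 0.01        # initial technology levels
    gaps = []
    for _ in range(int(t_end / dt)):
        d_leader = r_leader * leader * (1 - leader / K)
        d_follower = (r_follower * follower * (1 - follower / K)
                      + spillover * follower * (leader - follower))
        leader += dt * d_leader
        follower += dt * d_follower
        gaps.append(leader - follower)
    return gaps

gaps = technology_gap()
print(round(gaps[0], 3), round(max(gaps), 3), round(gaps[-1], 3))
```

With these assumed parameters the gap widens while the leader accelerates, then closes as spillover lets the follower catch up.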
A quantitative model for assessing community dynamics of pleistocene mammals.
Lyons, S Kathleen
2005-06-01
Previous studies have suggested that species responded individualistically to the climate change of the last glaciation, expanding and contracting their ranges independently. Consequently, many researchers have concluded that community composition is plastic over time. Here I quantitatively assess changes in community composition over broad timescales and assess the effect of range shifts on community composition. Data on Pleistocene mammal assemblages from the FAUNMAP database were divided into four time periods (preglacial, full glacial, postglacial, and modern). Simulation analyses were designed to determine whether the degree of change in community composition is consistent with independent range shifts, given the distribution of range shifts observed. Results indicate that many of the communities examined in the United States were more similar through time than expected if individual range shifts were completely independent. However, in each time transition examined, there were areas of nonanalogue communities. I conducted sensitivity analyses to explore how the results were affected by the assumptions of the null model. Conclusions about changes in mammalian distributions and community composition are robust with respect to the assumptions of the model. Thus, whether because of biotic interactions or because of common environmental requirements, community structure through time is more complex than previously thought.
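The simulation logic can be sketched as a Monte Carlo null model (toy one-dimensional ranges and made-up shift magnitudes, not the FAUNMAP data): shift each species' range independently, then record the expected community similarity at a site under fully individualistic responses.

```python
import random

def jaccard(a, b):
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def mean_null_similarity(ranges, site, shifts, trials=1000, seed=42):
    """ranges: {species: (lo, hi)}; shifts: observed shift magnitudes."""
    rng = random.Random(seed)
    before = {s for s, (lo, hi) in ranges.items() if lo <= site <= hi}
    total = 0.0
    for _ in range(trials):
        after = set()
        for s, (lo, hi) in ranges.items():
            shift = rng.choice(shifts) * rng.choice((-1, 1))  # independent shift
            if lo + shift <= site <= hi + shift:
                after.add(s)
        total += jaccard(before, after)
    return total / trials

ranges = {"sp%d" % i: (i * 3.0, i * 3.0 + 30.0) for i in range(10)}  # toy ranges
mean_null = mean_null_similarity(ranges, site=20.0, shifts=[5.0, 10.0, 20.0])
print(round(mean_null, 3))
```

Observed similarities significantly above this null mean would indicate, as in the study, that community composition is more conserved than independent range shifts predict.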
Linking Geomechanical Models with Observations of Microseismicity during CCS Operations
Verdon, J.; Kendall, J.; White, D.
2012-12-01
During CO2 injection for the purposes of carbon capture and storage (CCS), injection-induced fracturing of the overburden represents a key risk to storage integrity. Fractures in a caprock provide a pathway along which buoyant CO2 can rise and escape the storage zone. Therefore, the ability to link field-scale geomechanical models with field geophysical observations is of paramount importance in guaranteeing secure CO2 storage. Accurate location of microseismic events identifies where brittle failure has occurred on fracture planes. This is a manifestation of the deformation induced by CO2 injection. As the pore pressure is increased during injection, effective stress is decreased, leading to inflation of the reservoir and deformation of surrounding rocks, which creates microseismicity. The deformation induced by injection can be simulated using finite-element mechanical models. Such a model can be used to predict when and where microseismicity is expected to occur. However, typical elements in a field-scale mechanical model have decameter dimensions, while the rupture area of a microseismic event is typically of the order of 1 square meter. This means that mapping modeled stress changes to predictions of microseismic activity can be challenging. Where larger-scale faults have been identified, they can be included explicitly in the geomechanical model, and where movement is simulated along these discrete features, it can be assumed that microseismicity will occur. However, microseismic events typically occur on fracture networks that are too small to be simulated explicitly in a field-scale model. Therefore, the likelihood of microseismicity occurring must be estimated within a finite element that does not contain explicitly modeled discontinuities. This can be done in a number of ways, including the use of measures such as the closeness of the stress state to a predetermined failure criterion, either for planes with a defined orientation (the Mohr-Coulomb criterion) for
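For a plane of defined orientation, the Mohr-Coulomb proximity-to-failure measure mentioned above can be sketched as follows (the stress values, cohesion and friction coefficient are hypothetical):

```python
import math

def coulomb_failure_stress(sigma1, sigma3, theta_deg, cohesion, mu):
    """theta_deg: angle between the plane's normal and sigma1.
    A positive return value means the plane is at or past Mohr-Coulomb failure."""
    two_theta = math.radians(2.0 * theta_deg)
    sigma_n = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * math.cos(two_theta)
    tau = 0.5 * (sigma1 - sigma3) * math.sin(two_theta)
    return tau - (cohesion + mu * sigma_n)

# Hypothetical effective stresses in MPa: raising pore pressure by 5 MPa lowers
# both effective principal stresses and moves the plane toward failure.
before = coulomb_failure_stress(60.0, 30.0, theta_deg=30.0, cohesion=5.0, mu=0.6)
after = coulomb_failure_stress(55.0, 25.0, theta_deg=30.0, cohesion=5.0, mu=0.6)
print(round(before, 2), round(after, 2))
```

The increase of this quantity toward zero during injection is one way to translate modeled stress changes into a likelihood of microseismicity within an element.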
Quantitative comparisons of analogue models of brittle wedge dynamics
Schreurs, Guido
2010-05-01
Analogue model experiments are widely used to gain insights into the evolution of geological structures. In this study, we present a direct comparison of experimental results of 14 analogue modelling laboratories using prescribed set-ups. A quantitative analysis of the results will document the variability among models and will allow an appraisal of reproducibility and limits of interpretation. This has direct implications for comparisons between structures in analogue models and natural field examples. All laboratories used the same frictional analogue materials (quartz and corundum sand) and prescribed model-building techniques (sieving and levelling). Although each laboratory used its own experimental apparatus, the same type of self-adhesive foil was used to cover the base and all the walls of the experimental apparatus in order to guarantee identical boundary conditions (i.e. identical shear stresses at the base and walls). Three experimental set-ups using only brittle frictional materials were examined. In each of the three set-ups the model was shortened by a vertical wall, which moved with respect to the fixed base and the three remaining sidewalls. The minimum width of the model (the dimension parallel to the mobile wall) was also prescribed. In the first experimental set-up, a quartz sand wedge with a surface slope of ~20° was pushed by a mobile wall. All models conformed to the critical taper theory, maintained a stable surface slope and did not show internal deformation. In the next two experimental set-ups, a horizontal sand pack consisting of alternating quartz sand and corundum sand layers was shortened from one side by the mobile wall. In one of the set-ups a thin rigid sheet covered part of the model base and was attached to the mobile wall (i.e. a basal velocity discontinuity distant from the mobile wall). In the other set-up a basal rigid sheet was absent and the basal velocity discontinuity was located at the mobile wall. In both types of experiments
Quantitative property-structural relation modeling on polymeric dielectric materials
Wu, Ke
Nowadays, polymeric materials have attracted more and more attention in dielectric applications, but the search for a material with desired properties is still largely based on trial and error. To facilitate the development of new polymeric materials, heuristic models built using Quantitative Structure-Property Relationship (QSPR) techniques can provide reliable "working solutions". In this thesis, the application of QSPR to polymeric materials is studied from two angles: descriptors and algorithms. A novel set of descriptors, called infinite chain descriptors (ICD), is developed to encode the chemical features of pure polymers. ICD is designed to eliminate the uncertainty of polymer conformations and the inconsistency of molecular representations of polymers. Models for the dielectric constant, band gap, dielectric loss tangent and glass transition temperature of organic polymers are built with high prediction accuracy. Two new algorithms, the physics-enlightened learning method (PELM) and multi-mechanism detection, are designed to deal with two typical challenges in materials QSPR. PELM is a meta-algorithm that uses classic physical theory as guidance to construct the candidate learning function. It shows better out-of-domain prediction accuracy than a classic machine learning algorithm (the support vector machine). Multi-mechanism detection is built on a cluster-weighted mixing model similar to a Gaussian mixture model. The idea is to separate the data into subsets, each of which can be modeled by a much simpler model. The case study on glass transition temperature shows that this method can provide better overall prediction accuracy even though less data is available for each subset model. In addition, the techniques developed in this work are also applied to polymer nanocomposites (PNC). PNC are new materials with outstanding dielectric properties. As a key factor in determining the dispersion state of nanoparticles in the polymer matrix
Mustonen, Ville; Kinney, Justin; Callan, Curtis G; Lässig, Michael
2008-08-26
We present a genomewide cross-species analysis of regulation for broad-acting transcription factors in yeast. Our model for binding site evolution is founded on biophysics: the binding energy between transcription factor and site is a quantitative phenotype of regulatory function, and selection is given by a fitness landscape that depends on this phenotype. The model quantifies conservation, as well as loss and gain, of functional binding sites in a coherent way. Its predictions are supported by direct cross-species comparison between four yeast species. We find ubiquitous compensatory mutations within functional sites, such that the energy phenotype and the function of a site evolve in a significantly more constrained way than does its sequence. We also find evidence for substantial evolution of regulatory function involving point mutations as well as sequence insertions and deletions within binding sites. Genes lose their regulatory link to a given transcription factor at a rate similar to the neutral point mutation rate, from which we infer a moderate average fitness advantage of functional over nonfunctional sites. In a wider context, this study provides an example of inference of selection acting on a quantitative molecular trait.
Implementation of a vibrationally linked chemical reaction model for DSMC
Carlson, A. B.; Bird, Graeme A.
1994-01-01
A new procedure closely linking dissociation and exchange reactions in air to the vibrational levels of the diatomic molecules has been implemented in both one- and two-dimensional versions of Direct Simulation Monte Carlo (DSMC) programs. The previous modeling of chemical reactions with DSMC was based on the continuum reaction rates for the various possible reactions. The new method is more closely related to the actual physics of dissociation and is more appropriate to the particle nature of DSMC. Two cases are presented: the relaxation to equilibrium of undissociated air initially at 10,000 K, and the axisymmetric calculation of shuttle forebody heating during reentry at 92.35 km and 7500 m/s. Although reaction rates are not used in determining the dissociations or exchange reactions, the new method produces rates which agree astonishingly well with the published rates derived from experiment. The results for gas properties and surface properties also agree well with the results produced by earlier DSMC models, equilibrium air calculations, and experiment.
Quantitative Model for Supply Chain Visibility: Process Capability Perspective
Directory of Open Access Journals (Sweden)
Youngsu Lee
2016-01-01
Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives for achieving operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty lies in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods for increasing supply chain visibility are still ambiguous. Based on the extant research on supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model, using the Z score from Six Sigma methodology, to evaluate and improve the level of supply chain visibility.
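As an illustration of the Six Sigma Z score idea (the data and specification limit below are hypothetical, not from the study):

```python
import statistics

def z_score(samples, usl):
    """Six Sigma style Z: standard deviations between the process mean and the
    upper specification limit (USL); higher Z means a more capable process."""
    return (usl - statistics.fmean(samples)) / statistics.stdev(samples)

# Hypothetical visibility metric: hours until an order event becomes visible.
latency_hours = [2.0, 3.1, 2.6, 2.4, 3.5, 2.9, 2.2, 3.0]
z = z_score(latency_hours, usl=6.0)
print(round(z, 2))
```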
DEFF Research Database (Denmark)
Sørensen, Paul Haase; Baungaard, Jens Rane
1996-01-01
A general model for a rotating homogeneous flexible robot link is developed. The model describes two-dimensional transverse vibrations induced by the actuator due to misalignment of the actuator axis of rotation relative to the link symmetry axis and due to translational acceleration of the link...
FSO and radio link attenuation: meteorological models verified by experiment
Brazda, Vladimir; Fiser, Ondrej; Svoboda, Jaroslav
2011-09-01
The Institute of Atmospheric Physics of the Czech Academy of Sciences has measured atmospheric attenuation on a 60 m experimental FSO link at 830 and 1550 nm for more than three years. Visibility sensors, two 3D sonic anemometers at both the transmitting and receiving sites, a rain gauge and many sensors enabling computation of the refractive index are spaced along the optical link. Meteorological visibility, wind turbulent energy, sonic temperature, structure index and rain rate are correlated with the measured attenuation, and the dependence of FSO link attenuation on these parameters is analyzed. The paper also shows the basic statistical behavior of the long-term FSO signal level and a simulation of hybrid link techniques.
Linking nutrient loading and oxygen in the coastal ocean: A new global scale model
Reed, Daniel C.; Harrison, John A.
2016-03-01
Recent decades have witnessed an exponential spread of low-oxygen regions in the coastal ocean, due at least in part to enhanced terrestrial nutrient inputs. As oxygen deprivation is a major stressor on marine ecosystems, there is a great need to quantitatively link shifts in nutrient loading with changes in oxygen concentrations. To this end, we have developed and here describe, evaluate, and apply the Coastal Ocean Oxygen Linked to Benthic Exchange And Nutrient Supply (COOLBEANS) model, a first-of-its-kind, spatially explicit (152 coastal segments) global model of coastal oxygen and nutrient dynamics. In COOLBEANS, benthic oxygen demand (BOD) is calculated using empirical models for aerobic respiration, iron reduction, and sulfate reduction, while oxygen supply is represented by a simple parameterization of exchange between surface and bottom waters. A nutrient cycling component translates shifts in riverine nutrient inputs into changes in organic matter delivery to sediments and, ultimately, oxygen uptake. Modeled BOD reproduces observations reasonably well (Nash-Sutcliffe efficiency = 0.71), and estimates of exchange between surface and bottom waters correlate with stratification. The model examines the sensitivity of bottom-water oxygen to changes in nutrient inputs and vertical exchange between surface and bottom waters, highlighting the importance of this vertical exchange in defining a system's susceptibility to oxygen depletion. These sensitivities, along with estimated maximum hypoxic areas supported by present-day nutrient loads, are consistent with existing hypoxic regions. The sensitivities are put into context by applying historic changes in nitrogen loading observed in the Gulf of Mexico to the global coastal ocean, demonstrating that such loads would drive many systems anoxic or even sulfidic.
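The Nash-Sutcliffe efficiency quoted above is one minus the ratio of model error variance to the variance of the observations; a minimal sketch with made-up BOD values:

```python
def nash_sutcliffe(observed, simulated):
    """1 = perfect fit; 0 = no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sq_err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    variance = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sq_err / variance

obs = [3.2, 5.1, 4.4, 6.0, 2.8, 5.5]   # hypothetical BOD observations
sim = [3.0, 4.8, 4.9, 5.6, 3.1, 5.2]   # corresponding model output
nse = nash_sutcliffe(obs, sim)
print(round(nse, 2))
```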
Epistasis analysis for quantitative traits by functional regression model.
Zhang, Futao; Boerwinkle, Eric; Xiong, Momiao
2014-06-01
The critical barrier in interaction analysis for rare variants is that most traditional statistical methods for testing interactions were originally designed for common variants and are difficult to apply to rare variants because of their prohibitive computational time and low power. The great challenges for successful detection of interactions with next-generation sequencing (NGS) data are (1) the lack of methods for interaction analysis with rare variants, (2) severe multiple testing, and (3) time-consuming computations. To meet these challenges, we shift the paradigm from interaction analysis between two loci to interaction analysis between two sets of loci or genomic regions, collectively testing interactions between all possible pairs of SNPs within two genomic regions. In other words, we take a genome region as the basic unit of interaction analysis and use high-dimensional data reduction and functional data analysis techniques to develop a novel functional regression model that collectively tests interactions between all possible pairs of single nucleotide polymorphisms (SNPs) within two genome regions. Through intensive simulations, we demonstrate that the functional regression models for interaction analysis of a quantitative trait have the correct type 1 error rates and much higher power to detect interactions than current pairwise interaction analysis. The proposed method was applied to exome sequence data from the NHLBI's Exome Sequencing Project (ESP) and the CHARGE-S study. We discovered 27 pairs of genes showing significant interactions after applying the Bonferroni correction (P-values < 4.58 × 10^-10) in the ESP, and 11 were replicated in the CHARGE-S study.
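The reported Bonferroni threshold can be inverted to recover the implied number of tests (a back-of-envelope inference on our part, not a figure stated in the abstract):

```python
# Bonferroni: per-test cutoff = alpha / m, so m = alpha / cutoff.
alpha = 0.05
threshold = 4.58e-10               # reported Bonferroni-corrected cutoff
implied_tests = alpha / threshold  # implied number of gene-pair tests, ~1.1e8
print(f"{implied_tests:.3g}")
```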
LINK PREDICTION MODEL FOR PAGE RANKING OF BLOGS
Directory of Open Access Journals (Sweden)
S.Geetha
2012-11-01
Social network analysis is the mapping and measuring of relationships and flows of information between people, organizations, computers, or other information- or knowledge-processing entities. Social media systems such as blogs, LinkedIn, and YouTube allow users to share content. A blog is a social notepad service centered on user interactions. This paper studies link prediction and page ranking for blog websites using the MozRank algorithm, which estimates how websites on the internet link to each other using a large link-intelligence database. Because link data is also a component of search engine ranking, understanding a site's link profile helps explain its search engine positioning. Here, MozRank uses backlinks from blog websites and the quality of the linking websites: pages with many backlinks from good websites receive a high MozRank value. A web page's MozRank can be improved by getting many links from semi-popular pages or a few links from very popular pages. The page ranking algorithm must work differently, and MozRank is more comprehensive and accurate than Google's PageRank. Another tool, Open Site Explorer, can compare five URLs against each other; its Compare Link Metrics option measures both page-level and domain-level metrics and can generate a comparison chart of the important metrics for the pages, making it easy to compare the data between the five URLs.
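MozRank itself is proprietary, so the sketch below shows only the PageRank-style link-popularity idea it resembles (toy link graph and an assumed damping factor): a page's score is fed by the scores of the pages linking to it, split across their outgoing links.

```python
def link_rank(links, damping=0.85, iters=50):
    """links: {page: [pages it links to]}; returns a popularity score per page."""
    pages = set(links) | {t for outs in links.values() for t in outs}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outs in links.items():
            for target in outs:
                # each page splits its score evenly across its outgoing links
                new[target] += damping * rank[page] / len(outs)
        rank = new
    return rank

# Toy blog network: several blogs link to "hub", so it accumulates the most rank.
links = {"a": ["hub"], "b": ["hub"], "c": ["hub", "a"], "hub": ["a"]}
rank = link_rank(links)
print(max(rank, key=rank.get))
```

This is why a few links from very popular pages can raise a score as much as many links from semi-popular ones: the contribution is weighted by the linker's own rank.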
Modeling the Effect of Polychromatic Light in Quantitative Absorbance Spectroscopy
Smith, Rachel; Cantrell, Kevin
2007-01-01
A laboratory experiment is conducted to give students practical experience with the principles of electronic absorbance spectroscopy. This straightforward approach creates a powerful tool for exploring many aspects of quantitative absorbance spectroscopy.
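The effect being modeled follows directly from the Beer-Lambert law: if the nominally monochromatic beam actually contains two wavelengths with different molar absorptivities, the measured absorbance falls below the ideal linear calibration at high concentration (the absorptivity values below are hypothetical):

```python
import math

def apparent_absorbance(conc, eps1, eps2, path=1.0):
    """Two equal-intensity spectral components with absorptivities eps1, eps2."""
    t1 = 10.0 ** (-eps1 * path * conc)   # transmittance at wavelength 1
    t2 = 10.0 ** (-eps2 * path * conc)   # transmittance at wavelength 2
    return -math.log10(0.5 * (t1 + t2))  # the detector sums the two intensities

eps1, eps2 = 1.0, 0.2                    # hypothetical values, L mol^-1 cm^-1
for conc in (0.5, 2.0, 5.0):
    ideal = 0.5 * (eps1 + eps2) * conc   # strictly linear Beer-Lambert line
    print(conc, round(ideal, 3), round(apparent_absorbance(conc, eps1, eps2), 3))
```

At low concentration the two curves agree; the negative deviation grows with concentration because the weakly absorbed component dominates the transmitted intensity.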
DEFF Research Database (Denmark)
ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto
2015-01-01
We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLAN) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLAN semantics based on discrete-time Markov chains. The Maude implementation of PFLAN is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average...
Herd immunity and pneumococcal conjugate vaccine: a quantitative model.
Haber, Michael; Barskey, Albert; Baughman, Wendy; Barker, Lawrence; Whitney, Cynthia G; Shaw, Kate M; Orenstein, Walter; Stephens, David S
2007-07-20
Invasive pneumococcal disease in older children and adults declined markedly after introduction in 2000 of the pneumococcal conjugate vaccine for young children. An empirical quantitative model was developed to estimate the herd (indirect) effects on the incidence of invasive disease among persons ≥5 years of age induced by vaccination of young children with 1, 2, or ≥3 doses of the pneumococcal conjugate vaccine, Prevnar (PCV7), containing serotypes 4, 6B, 9V, 14, 18C, 19F and 23F. From 1994 to 2003, cases of invasive pneumococcal disease were prospectively identified in Georgia Health District-3 (eight metropolitan Atlanta counties) by Active Bacterial Core surveillance (ABCs). From 2000 to 2003, vaccine coverage levels of PCV7 for children aged 19-35 months in Fulton and DeKalb counties (of Atlanta) were estimated from the National Immunization Survey (NIS). Based on incidence data and the estimated average number of doses received by 15 months of age, a Poisson regression model was fit, describing the trend in invasive pneumococcal disease in groups not targeted for vaccination (i.e., adults and older children) before and after the introduction of PCV7. Highly significant declines in all the serotypes contained in PCV7 in all unvaccinated populations (5-19, 20-39, 40-64, and >64 years) from 2000 to 2003 were found under the model. No significant change in incidence was seen from 1994 to 1999, indicating rates were stable prior to vaccine introduction. Among unvaccinated persons ≥5 years of age, the modeled incidence of disease caused by PCV7 serotypes as a group dropped 38.4%, 62.0%, and 76.6% for 1, 2, and 3 doses, respectively, received on average by the population of children by the time they are 15 months of age. Incidence of serotypes 14 and 23F had consistent significant declines in all unvaccinated age groups. In contrast, the herd immunity effects on vaccine-related serotype 6A incidence were inconsistent. Increasing trends of non
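The three modeled declines are mutually consistent with a log-linear (Poisson regression) per-dose effect, as a quick back-of-envelope check shows (our arithmetic, not the authors' code): a constant per-dose rate ratio fitted to the 1-dose decline approximately recovers the reported 2- and 3-dose declines.

```python
import math

per_dose_ratio = 1.0 - 0.384          # rate ratio implied by the 38.4% 1-dose decline
beta = math.log(per_dose_ratio)       # log-linear (Poisson) coefficient per dose
declines = [1.0 - math.exp(beta * d) for d in (1, 2, 3)]
print([round(100 * x, 1) for x in declines])
```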
Linking changes in epithelial morphogenesis to cancer mutations using computational modeling.
Directory of Open Access Journals (Sweden)
Katarzyna A Rejniak
Most tumors arise from epithelial tissues, such as mammary glands and lobules, and their initiation is associated with the disruption of a finely defined epithelial architecture. Progression from intraductal to invasive tumors is related to genetic mutations that occur at a subcellular level but manifest themselves as functional and morphological changes at the cellular and tissue scales, respectively. Elevated proliferation and loss of epithelial polarization are the two most noticeable changes in cell phenotypes during this process. As a result, many three-dimensional cultures of tumorigenic clones show highly aberrant morphologies when compared to regular epithelial monolayers enclosing the hollow lumen (acini). In order to shed light on phenotypic changes associated with tumor cells, we applied the bio-mechanical IBCell model of normal epithelial morphogenesis quantitatively matched to data acquired from the non-tumorigenic human mammary cell line, MCF10A. We then used a high-throughput simulation study to reveal how modifications in model parameters influence changes in the simulated architecture. Three parameters have been considered in our study, which define cell sensitivity to proliferative, apoptotic and cell-ECM adhesive cues. By mapping experimental morphologies of four MCF10A-derived cell lines carrying different oncogenic mutations onto the model parameter space, we identified changes in cellular processes potentially underlying structural modifications of these mutants. As a case study, we focused on MCF10A cells expressing an oncogenic mutant HER2-YVMA to quantitatively assess changes in cell doubling time, cell apoptotic rate, and cell sensitivity to ECM accumulation when compared to the parental non-tumorigenic cell line. By mapping in vitro mutant morphologies onto in silico ones we have generated a means of linking the morphological and molecular scales via computational modeling. Thus, IBCell in combination with 3D acini
A Quantitative Model-Driven Comparison of Command Approaches in an Adversarial Process Model
2007-06-01
12th ICCRTS, “Adapting C2 to the 21st Century.” Lenahan identified metrics and techniques for adversarial C2 process modeling. We intend to further that work by developing a set of adversarial process models...
Improving Power System Modeling. A Tool to Link Capacity Expansion and Production Cost Models
Energy Technology Data Exchange (ETDEWEB)
Diakov, Victor [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cole, Wesley [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sullivan, Patrick [National Renewable Energy Lab. (NREL), Golden, CO (United States); Brinkman, Gregory [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2015-11-01
Capacity expansion models (CEM) provide a high-level, long-term view of the prospects of the evolving power system. In simulating the possibilities of long-term capacity expansion, it is important to maintain the viability of power system operation on short-term (daily, hourly and sub-hourly) scales. Production cost models (PCM) simulate routine power system operation on these shorter time scales using detailed load, transmission and generation fleet data, minimizing production costs while following reliability requirements. When based on CEM 'predictions' about generating unit retirements and buildup, PCM provide a more detailed simulation of short-term system operation and, consequently, may confirm the validity of the capacity expansion predictions. Further, production cost model simulations of a system based on a capacity expansion model solution are 'evolutionarily' sound: the generator mix is the result of a logical sequence of unit retirement and buildup resulting from policy and incentives. The above has motivated us to bridge CEM with PCM by building a capacity-expansion-to-production-cost-model Linking Tool (CEPCoLT). The Linking Tool is built to map capacity expansion model prescriptions onto production cost model inputs. NREL's ReEDS and Energy Exemplar's PLEXOS are the capacity expansion and production cost models, respectively. Via the Linking Tool, PLEXOS provides details of operation for the regionally defined ReEDS scenarios.
Rico-Herrero, María. Teresa; Giralt, Santiago; Valero-Garcés, Blas L.; Vega, José Carlos
2010-05-01
limnological towards the sediments. These relationships were studied using statistical approaches such as ordination analyses (Principal Component and Redundancy Analyses), time series (auto- and cross-correlation functions) and generalised linear models (GLM). The precipitation and temperature oscillations account for more than 75% of the total variance of the Tera River discharge, and precipitation alone explains more than 55%. The lake reacts immediately to changes in precipitation, as shown by the best correlation among the three variables occurring at zero lag. When exploring the possible relationships between the meteorological and the limnological and nutrient datasets, total phosphorus showed the best fit, with 28% of the total variance explained. The best correlation was also observed at zero lag, indicating that the main nutrient input occurs via the Tera River. Principal Component Analysis (PCA) on the XRF dataset showed that the first eigenvector explained more than 44% of the total variance and was related mainly to changes in organic matter. Oscillations of this first eigenvector have been interpreted in terms of fluctuations of the primary productivity of Lake Sanabria. The comparison between the reconstructed primary productivity and the total phosphorus highlighted that lakes generally act as low-pass filters, smoothing the climate signal when transferring it to the sediments. The variance explained between the smoothed reconstructed primary productivity and the total phosphorus is 24%, similar to that between the total phosphorus and the Tera River discharge. This study opens the possibility of a quantitative reconstruction of past climate data (temperature and precipitation) from high-resolution sedimentological datasets.
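The lag-correlation analysis this abstract describes can be sketched in a few lines. A minimal, illustrative Python example (the series values below are invented for demonstration, not data from the study); the squared correlation at the best lag plays the role of "explained variance":

```python
# Illustrative sketch: Pearson cross-correlation at integer lags between a
# driver series (e.g. precipitation) and a response series (e.g. discharge).
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

def cross_correlation(driver, response, max_lag=3):
    """r at each lag; positive lag means the response trails the driver."""
    out = {}
    for lag in range(max_lag + 1):
        x = driver[: len(driver) - lag] if lag else driver
        y = response[lag:]
        out[lag] = pearson(x, y)
    return out

precip = [10, 30, 5, 40, 20, 35, 8, 25]       # invented driver series
discharge = [12, 33, 7, 41, 22, 36, 9, 26]    # responds immediately
ccf = cross_correlation(precip, discharge)
best_lag = max(ccf, key=lambda k: ccf[k])     # 0 here: immediate response
explained = ccf[best_lag] ** 2                # r^2, "explained variance"
```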
ASSETS MANAGEMENT - A CONCEPTUAL MODEL DECOMPOSING VALUE FOR THE CUSTOMER AND A QUANTITATIVE MODEL
Directory of Open Access Journals (Sweden)
Susana Nicola
2015-03-01
Full Text Available In this paper we describe the application of a modeling framework, the so-called Conceptual Model Decomposing Value for the Customer (CMDVC), in a footwear industry case study, to ascertain the usefulness of this approach. Value networks were used to identify the participants, both tangible and intangible deliverables/endogenous and exogenous assets, and the analysis of their interactions as the indication for an adequate value proposition. The quantitative model of benefits and sacrifices, using the Fuzzy AHP method, enables the discussion of how the CMDVC can be applied and used in the enterprise environment and provided new relevant relations between perceived benefits (PBs.
Linking river management to species conservation using dynamic landscape scale models
Freeman, Mary C.; Buell, Gary R.; Hay, Lauren E.; Hughes, W. Brian; Jacobson, Robert B.; Jones, John W.; Jones, S.A.; LaFontaine, Jacob H.; Odom, Kenneth R.; Peterson, James T.; Riley, Jeffrey W.; Schindler, J. Stephen; Shea, C.; Weaver, J.D.
2013-01-01
Efforts to conserve stream and river biota could benefit from tools that allow managers to evaluate landscape-scale changes in species distributions in response to water management decisions. We present a framework and methods for integrating hydrology, geographic context and metapopulation processes to simulate effects of changes in streamflow on fish occupancy dynamics across a landscape of interconnected stream segments. We illustrate this approach using a 482 km2 catchment in the southeastern US supporting 50 or more stream fish species. A spatially distributed, deterministic and physically based hydrologic model is used to simulate daily streamflow for sub-basins composing the catchment. We use geographic data to characterize stream segments with respect to channel size, confinement, position and connectedness within the stream network. Simulated streamflow dynamics are then applied to model fish metapopulation dynamics in stream segments, using hypothesized effects of streamflow magnitude and variability on population processes, conditioned by channel characteristics. The resulting time series simulate spatially explicit, annual changes in species occurrences or assemblage metrics (e.g. species richness) across the catchment as outcomes of management scenarios. Sensitivity analyses using alternative, plausible links between streamflow components and metapopulation processes, or allowing for alternative modes of fish dispersal, demonstrate large effects of ecological uncertainty on model outcomes and highlight needed research and monitoring. Nonetheless, with uncertainties explicitly acknowledged, dynamic, landscape-scale simulations may prove useful for quantitatively comparing river management alternatives with respect to species conservation.
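A minimal sketch of the annual occupancy transition underlying metapopulation simulations of the kind described above. The colonization and extinction probabilities here are illustrative placeholders, not values from the study; in the full framework they would be conditioned on simulated streamflow and channel characteristics:

```python
# Illustrative annual occupancy transition for one stream segment.
import random

def occupancy_step(occupied, colonize_p, extinct_p, rng):
    """One annual transition: an occupied segment may go extinct,
    an empty one may be colonized from the connected network."""
    if occupied:
        return rng.random() >= extinct_p
    return rng.random() < colonize_p

def simulate(years, colonize_p, extinct_p, seed=0):
    """Time series of occupancy states for one segment (seeded for
    reproducibility, as in a stochastic sensitivity analysis)."""
    rng = random.Random(seed)
    occupied, history = True, []
    for _ in range(years):
        occupied = occupancy_step(occupied, colonize_p, extinct_p, rng)
        history.append(occupied)
    return history

history = simulate(10, colonize_p=0.4, extinct_p=0.2)
```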
Curing critical links in oscillator networks as power grid models
Rohden, Martin; Timme, Marc; Meyer-Ortmanns, Hildegard
2015-01-01
Modern societies crucially depend on a robust supply of electric energy, so blackouts of power grids can have far-reaching consequences. During a blackout, the failure of a single infrastructure element, such as a critical transmission line, often results in several subsequent failures that spread across large parts of the network. Preventing such large-scale outages is thus key to assuring a reliable power supply. Here we present a non-local curing strategy for oscillatory power grid networks based on the global collective redistribution of loads. We first identify critical links and compute residual capacities on alternative paths in the remaining network from the original flows. For each critical link, we upgrade lines that constitute bottlenecks on such paths. We demonstrate the viability of this strategy for random ensembles of network topologies as well as topologies derived from real transmission grids, and compare the nonlocal strategy against local back-ups of critical links. These strategies are indep...
Grootel, L. van; Wesel, F. van; O'Mara-Eves, A.; Thomas, J.; Hox, J.; Boeije, H.
2017-01-01
Background: This study describes an approach for the use of a specific type of qualitative evidence synthesis in the matrix approach, a mixed studies reviewing method. The matrix approach compares quantitative and qualitative data on the review level by juxtaposing concrete recommendations from the
Network link dimensioning : a measurement & modeling based approach
Meent, van de Remco
2006-01-01
Adequate network link dimensioning requires a thorough insight into the interrelationship between: (i) the traffic offered (in terms of the average load, but also its fluctuations), (ii) the desired level of performance, and (iii) the required bandwidth capacity. It is clear that more capacity is ne
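The dimensioning relationship the abstract alludes to, provisioning for the average load plus headroom for its fluctuations, can be written as a one-line rule. The square-root-of-variance form and the alpha safety parameter below are assumptions for illustration, not taken from the thesis text:

```python
# Illustrative link-dimensioning rule: mean load plus a fluctuation margin.
import math

def required_capacity(avg_load_mbps, traffic_variance, alpha=2.0):
    """Provision the mean load plus alpha standard deviations of headroom.
    alpha encodes the desired performance level: larger alpha, fewer
    intervals in which offered traffic exceeds capacity."""
    return avg_load_mbps + alpha * math.sqrt(traffic_variance)

cap = required_capacity(100.0, 400.0, alpha=2.0)  # 100 + 2 * 20 = 140 Mbps
```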
Simakov, Andrei N; Chacón, L
2008-09-05
Dissipation-independent, or "fast", magnetic reconnection has been observed computationally in Hall magnetohydrodynamics (MHD) and predicted analytically in electron MHD. However, a quantitative analytical theory of reconnection valid for arbitrary ion inertial lengths, d_i, has been lacking and is proposed here for the first time. The theory describes a two-dimensional reconnection diffusion region, provides expressions for reconnection rates, and derives a formal criterion for fast reconnection in terms of dissipation parameters and d_i. It also confirms the electron MHD prediction that both open and elongated diffusion regions allow fast reconnection, and reveals strong dependence of the reconnection rates on d_i.
Postma, Johannes A; Schurr, Ulrich; Fiorani, Fabio
2014-01-01
In recent years the study of root phenotypic plasticity in response to sub-optimal environmental factors and the genetic control of these responses have received renewed attention. As a path to increased productivity, in particular for low fertility soils, several applied research projects worldwide target the improvement of crop root traits both in plant breeding and biotechnology contexts. To assist these tasks and address the challenge of optimizing root growth and architecture for enhanced mineral resource use, the development of realistic simulation models is of great importance. We review this research field from a modeling perspective, focusing particularly on nutrient acquisition strategies for crop production on low-nitrogen and low-phosphorus soils. Soil heterogeneity and the dynamics of nutrient availability in the soil pose a challenging environment in which plants have to forage efficiently for nutrients in order to maintain their internal nutrient homeostasis throughout their life cycle. Mathematical models assist in understanding plant growth strategies and associated root phenes that have potential to be tested and introduced in physiological breeding programs. At the same time, we stress that it is necessary to carefully consider model assumptions and development from a whole plant-resource allocation perspective and to introduce or refine modules simulating explicitly root growth and architecture dynamics through ontogeny with reference to key factors that constrain root growth. In this view it is important to understand negative feedbacks such as plant-plant competition. We conclude by briefly touching on available and developing technologies for quantitative root phenotyping from lab to field, from quantification of partial root profiles in the field to 3D reconstruction of whole root systems. Finally, we discuss how these approaches can and should be tightly linked to modeling to explore the root phenome.
2011-05-18
... COMMISSION NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection... issued for public comment a document entitled: NUREG/CR-XXXX, ``Development of Quantitative Software... development of regulatory guidance for using risk information related to digital systems in the...
Sić, Siniša; Maier, Norbert M; Rizzi, Andreas M
2015-08-21
Investigation of oligosaccharides attached to proteins as post-translational modifications remains an important research field in glycoproteomics as well as in biotechnology. The development of new tools for qualitative and quantitative analysis of glycans has gained high importance in recent years. This is particularly true for O-glycans, for which quantitative data are still underrepresented in the literature, probably owing to the absence of an enzyme for general release of O-linked saccharides from glycoproteins and to their low ionization yield in mass spectrometry (MS). In this paper, a method is established aimed at improved qualitative and quantitative analysis of mucin-type O-glycans. A chemical reaction combining release and derivatization of O-glycans in one step is combined here with mass spectrometric quantification. For the purpose of improved quantitative analysis, stable-isotope coded labeling by d0/d5 1-phenyl-3-methyl-5-pyrazolidone (PMP) was performed; the "heavy" version of this label, penta-deutero (d5)-PMP, was synthesized for this purpose. Besides improving the reproducibility of quantitation, PMP derivatization contributed to an enhancement of ionization yields in MS. By introducing an internal standard (e.g. GlcNAc3), the reproducibility of quantification can be improved further. For higher abundant O-glycans a mean coefficient of variation (CV) of less than 6% could be attained; for very low abundant ones, CV values between 15% and 20% were obtained. For the determination of O-glycan profiles in mixtures, HPLC separation was combined with a high resolution Qq-oaTOF instrument. RP-type stationary phases were successful in separating glycan species, including some isomeric ones. This separation step was particularly useful for removing salts, thus avoiding the presence of various sodium clusters in the MS spectrum.
An independent pair-link model of simple fluids
Bonneville, Richard
2016-01-01
A new approach to the thermodynamics of simple fluids is presented. The partition function is first expressed in reciprocal space; it is argued that the links (p,q) between two molecules can, in the thermodynamical limit, reasonably be considered as a set of nearly independent objects characterized by their dynamical variables. It is then possible to derive an expression for the pair correlation function. The results, which are independent of the exact shape of the intermolecular potential, are applied to the simple case of hard-sphere fluids.
Photon-tissue interaction model for quantitative assessment of biological tissues
Lee, Seung Yup; Lloyd, William R.; Wilson, Robert H.; Chandra, Malavika; McKenna, Barbara; Simeone, Diane; Scheiman, James; Mycek, Mary-Ann
2014-02-01
In this study, we describe a direct fit photon-tissue interaction model to quantitatively analyze reflectance spectra of biological tissue samples. The model rapidly extracts biologically-relevant parameters associated with tissue optical scattering and absorption. This model was employed to analyze reflectance spectra acquired from freshly excised human pancreatic pre-cancerous tissues (intraductal papillary mucinous neoplasm (IPMN), a common precursor lesion to pancreatic cancer). Compared to previously reported models, the direct fit model improved fit accuracy and speed. Thus, these results suggest that such models could serve as real-time, quantitative tools to characterize biological tissues assessed with reflectance spectroscopy.
Modular System Modeling for Quantitative Reliability Evaluation of Technical Systems
Directory of Open Access Journals (Sweden)
Stephan Neumann
2016-01-01
Full Text Available In modern times, it is necessary to offer reliable products to match the statutory directives concerning product liability and the high expectations of customers for durable devices. Furthermore, to maintain a high competitiveness, engineers need to know as accurately as possible how long their product will last and how to influence the life expectancy without expensive and time-consuming testing. As the components of a system are responsible for the system reliability, this paper introduces and evaluates calculation methods for life expectancy of common machine elements in technical systems. Subsequently, a method for the quantitative evaluation of the reliability of technical systems is proposed and applied to a heavy-duty power shift transmission.
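A simple instance of composing component life-expectancy figures into a system-level reliability number. The series (weakest-link) product rule shown here is a textbook first-order model assumed for illustration, not the specific method of the paper:

```python
# Illustrative series-system reliability: the system works only if every
# component works, so reliabilities multiply (assuming independence).
def series_reliability(component_rels):
    """Product of component reliabilities for a series system."""
    r = 1.0
    for ri in component_rels:
        r *= ri
    return r

# e.g. three machine elements of a transmission at a given service life
r_sys = series_reliability([0.99, 0.95, 0.90])  # about 0.846
```

Note how quickly system reliability drops below the weakest component; this is why per-element life-expectancy calculations matter for the overall figure.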
Improvement of the ID model for quantitative network data
DEFF Research Database (Denmark)
Sørensen, Peter Borgen; Damgaard, Christian Frølund; Dupont, Yoko Luise
2015-01-01
Many interactions are often poorly registered or even unobserved in empirical quantitative networks. Hence, the output of the statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks.1 This presentation will illustrate the application of the ID method based on a data set which consists of counts of visits by 152 pollinator species to 16 plant species. The method is based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1) pi [...]. The method is designed to reproduce the high number of zero-valued cells in the data set and mimic the sampling distribution. 1 Sørensen et al., Journal of Pollination Ecology, 6(18), 2011, pp. 129-139
Quantitative assessment of meteorological and tropospheric Zenith Hydrostatic Delay models
Zhang, Di; Guo, Jiming; Chen, Ming; Shi, Junbo; Zhou, Lv
2016-09-01
Tropospheric delay has always been an important issue in GNSS/DORIS/VLBI/InSAR processing. The most commonly used empirical models for the determination of tropospheric Zenith Hydrostatic Delay (ZHD), comprising three meteorological models and two empirical ZHD models, are carefully analyzed in this paper. The meteorological models are UNB3m, GPT2 and GPT2w, while the ZHD models are Hopfield and Saastamoinen. By reference to in-situ meteorological measurements and ray-traced ZHD values at 91 globally distributed radiosonde sites over a four-year period (2010 to 2013), it is found that there is a strong correlation between the errors of model-derived values and latitude. Specifically, the Saastamoinen model shows a systematic error of about -3 mm. Therefore a modified Saastamoinen model is developed based on the "best average" refractivity constant, and is validated by radiosonde data. Among the different models, GPT2w and the modified Saastamoinen model perform best. ZHD values derived from their combination have a mean bias of -0.1 mm and a mean RMS of 13.9 mm. Limitations of the present models are discussed and suggestions for further improvements are given.
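The Saastamoinen ZHD model discussed above can be evaluated directly from surface pressure and station position. The constants below are the commonly published ones, and the bias_mm hook for applying a constant correction like the roughly -3 mm systematic error noted in the abstract is an illustrative assumption:

```python
# Illustrative evaluation of the standard Saastamoinen ZHD formula.
import math

def saastamoinen_zhd(pressure_hpa, lat_deg, height_m, bias_mm=0.0):
    """Zenith Hydrostatic Delay in metres.
    f accounts for gravity variation with latitude and height;
    bias_mm allows a constant correction in millimetres."""
    f = (1.0
         - 0.00266 * math.cos(2.0 * math.radians(lat_deg))
         - 0.00000028 * height_m)
    return 0.0022768 * pressure_hpa / f + bias_mm / 1000.0

# Sea-level standard atmosphere at 45 degrees latitude: about 2.31 m
zhd = saastamoinen_zhd(1013.25, 45.0, 0.0)
```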
Directory of Open Access Journals (Sweden)
Kanellopoulos AJ
2013-02-01
Full Text Available A. John Kanellopoulos,1,2 George Asimellis1; 1Laservision.gr Institute, Athens, Greece; 2New York University Medical School, New York, NY, USA. Purpose: To introduce a novel, noninvasive technique to determine the depth and extent of anterior corneal stroma changes induced by collagen cross-linking (CXL) using quantitative analysis of high-resolution anterior-segment optical coherence tomography (OCT) post-operative images. Setting: Private clinical ophthalmology practice. Patients and methods: Two groups of corneal cross-sectional images obtained with the OptoVue RTVue anterior-segment OCT system were studied: group A (control) consisted of unoperated, healthy corneas, with the exception of possible refractive errors; the second group consisted of keratoconic corneas previously operated on with CXL. The two groups were investigated for possible quantitative evidence of changes induced by the CXL: specifically, the depth, horizontal extent, and cross-sectional area of intrastromal hyper-reflective areas (defined in our study as the area consisting of pixels with luminosity greater than the mean + 2 × standard deviation of the entire stromal cross section) within the corneal stroma. Results: In all images of the second group (keratoconus patients treated with CXL) there was evidence of intrastromal hyper-reflective areas. The hyper-reflective areas ranged from 0.2% to 8.8% of the cross-sectional area (mean ± standard deviation: 3.46% ± 1.92%). The extent of the horizontal hyper-reflective area ranged from 4.42% to 99.2% (56.2% ± 23.35%) of the cornea image, while the axial extent (the vertical extent in the image) ranged from 40.00% to 86.67% (70.98% ± 7.85%). There was a statistically significant difference (P < 0.02) in these values compared to the control group, in which, by application of the same criteria, the same hyper-reflective area (owing to signal noise) ranged from 0.00% to 2.51% (0.74% ± 0.63%). Conclusion: Herein, we introduce a
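The hyper-reflectivity criterion defined in this abstract (pixels brighter than the stromal mean plus two standard deviations) is straightforward to compute. A sketch with made-up pixel values standing in for a segmented stromal cross section:

```python
# Illustrative mean + 2*sigma thresholding of stromal pixel luminosities.
from statistics import mean, pstdev

def hyper_reflective_fraction(stroma_pixels):
    """Percentage of stromal pixels brighter than mean + 2 * std,
    mirroring the criterion defined in the study."""
    mu, sigma = mean(stroma_pixels), pstdev(stroma_pixels)
    thresh = mu + 2.0 * sigma
    bright = sum(1 for p in stroma_pixels if p > thresh)
    return 100.0 * bright / len(stroma_pixels)

# Invented example: 97 background pixels plus 3 saturated ones
pixels = [50] * 97 + [255] * 3
frac = hyper_reflective_fraction(pixels)  # 3.0 (percent)
```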
Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...
Gao, Y.; Balaram, P.; Islam, S.
2009-12-01
Water issues and problems have bewildered humankind for a long time, yet a systematic approach for understanding such issues remains elusive. This is partly because many water-related problems are framed from a contested terrain in which many actors (individuals, communities, businesses, NGOs, states, and countries) compete to protect their own and often conflicting interests. We argue that the origin of many water problems may be understood as a dynamic consequence of competition, interconnections, and feedback among variables in the Natural and Societal Systems (NSSs). Within the natural system, we recognize that the triple constraints on water (water quantity (Q), water quality (P), and ecosystem (E)) and their interdependencies and feedback may lead to conflicts. Such inherent and multifaceted constraints of the natural water system are often exacerbated at the societal boundaries. Within the societal system, interdependencies and feedback among values and norms (V), economy (C), and governance (G) interact in various ways to create intractable contextual differences. The observation that natural and societal systems are linked is not novel. Our argument here, however, is that rigid disciplinary boundaries between these two domains will not produce solutions to the water problems we are facing today. The knowledge needed to address water problems needs to go beyond scientific assessment, in which societal variables (C, G, and V) are treated as exogenous or largely ignored, and beyond policy research that does not consider the impact of natural variables (E, P, and Q) and the coupling among them. Consequently, traditional quantitative methods alone are not appropriate to address the dynamics of water conflicts, because we cannot quantify the societal variables and the exact mathematical relationships among the variables are not fully known. On the other hand, conventional qualitative study in the societal domain has mainly been in the form of individual case studies and therefore
Quantitative statistical assessment of conditional models for synthetic aperture radar.
DeVore, Michael D; O'Sullivan, Joseph A
2004-02-01
Many applications of object recognition in the presence of pose uncertainty rely on statistical models, conditioned on pose, for observations. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level.
M.J. Becker (Martin); S. de Marie (Siem); D. Willemse; H.A. Verbrugh (Henri); I.A.J.M. Bakker-Woudenberg (Irma)
2000-01-01
Two diagnostic tests, an Aspergillus-specific PCR and an enzyme-linked immunosorbent assay (ELISA) for the quantitative determination of galactomannan, were compared for diagnosing and monitoring invasive pulmonary aspergillosis. Persistently neutropenic rat
A link based network route choice model with unrestricted choice set
DEFF Research Database (Denmark)
Fosgerau, Mogens; Frejinger, Emma; Karlstrom, Anders
2013-01-01
This paper considers the path choice problem, formulating and discussing an econometric random utility model for the choice of path in a network with no restriction on the choice set. Starting from a dynamic specification of link choices, we show that it is equivalent to a static model [...] additive. The model is applied to data recording path choices in a network with more than 3000 nodes and 7000 links.
Linking animal models of psychosis to computational models of dopamine function.
Smith, Andrew J; Li, Ming; Becker, Suzanna; Kapur, Shitij
2007-01-01
Psychosis is linked to dysregulation of the neuromodulator dopamine and antipsychotic drugs (APDs) work by blocking dopamine receptors. Dopamine-modulated disruption of latent inhibition (LI) and conditioned avoidance response (CAR) have served as standard animal models of psychosis and antipsychotic action, respectively. Meanwhile, the 'temporal difference' algorithm (TD) has emerged as the leading computational model of dopamine neuron firing. In this report TD is extended to include action at the level of dopamine receptors in order to explain a number of behavioral phenomena including the dose-dependent disruption of CAR by APDs, the temporal dissociation of the effects of APDs on receptors vs behavior, the facilitation of LI by APDs, and the disruption of LI by amphetamine. The model also predicts an APD-induced change to the latency profile of CAR--a novel prediction that is verified experimentally. The model's primary contribution is to link dopamine neuron firing, receptor manipulation, and behavior within a common formal framework that may offer insights into clinical observations.
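The temporal-difference (TD) update at the core of the computational model reads naturally as code: the prediction error delta is the quantity the abstract maps onto dopamine neuron firing. The learning-rate and discount values below are illustrative, and the receptor-level extension described in the abstract would further modulate delta:

```python
# Illustrative TD(0) value update; delta is the dopamine-like
# reward prediction error.
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference step on the state-value table V."""
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * delta
    return delta

V = {}
# A cue followed by reward 1.0 produces a positive prediction error
delta = td_update(V, "cue", 1.0, "end")
```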
A Quantitative Causal Model Theory of Conditional Reasoning
Fernbach, Philip M.; Erb, Christopher D.
2013-01-01
The authors propose and test a causal model theory of reasoning about conditional arguments with causal content. According to the theory, the acceptability of modus ponens (MP) and affirming the consequent (AC) reflect the conditional likelihood of causes and effects based on a probabilistic causal model of the scenario being judged. Acceptability…
Towards the quantitative evaluation of visual attention models.
Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K
2015-11-01
Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations.
Assessment of Quantitative Precipitation Forecasts from Operational NWP Models (Invited)
Sapiano, M. R.
2010-12-01
Previous work has shown that satellite and numerical model estimates of precipitation have complementary strengths, with satellites having greater skill at detecting convective precipitation events and model estimates having greater skill at detecting stratiform precipitation. This is due in part to the challenges associated with retrieving stratiform precipitation from satellites and the difficulty in resolving sub-grid scale processes in models. These complementary strengths can be exploited to obtain new merged satellite/model datasets, and several such datasets have been constructed using reanalysis data. Whilst reanalysis data are stable in a climate sense, they also have relatively coarse resolution compared to the satellite estimates (many of which are now commonly available at quarter degree resolution) and they necessarily use fixed forecast systems that are not state-of-the-art. An alternative to reanalysis data is to use Operational Numerical Weather Prediction (NWP) model estimates, which routinely produce precipitation with higher resolution and using the most modern techniques. Such estimates have not been combined with satellite precipitation and their relative skill has not been sufficiently assessed beyond model validation. The aim of this work is to assess the information content of the models relative to satellite estimates with the goal of improving techniques for merging these data types. To that end, several operational NWP precipitation forecasts have been compared to satellite and in situ data and their relative skill in forecasting precipitation has been assessed. In particular, the relationship between precipitation forecast skill and other model variables will be explored to see if these other model variables can be used to estimate the skill of the model at a particular time. Such relationships would provide a basis for determining weights and errors of any merged products.
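One simple way to weight estimates by relative skill when merging, as the abstract's final sentence envisions, is inverse-error-variance weighting. This particular scheme is an assumption for illustration, not the method of the work described:

```python
# Illustrative inverse-error-variance merge of precipitation estimates
# (e.g. one satellite and one NWP value for the same grid cell).
def merge_estimates(values, error_vars):
    """Weighted average with weights proportional to 1 / error variance,
    so the more skilful source dominates the merged value."""
    weights = [1.0 / v for v in error_vars]
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, values)) / total

# Equal skill -> plain mean; unequal skill pulls toward the better source
merged = merge_estimates([4.0, 6.0], [1.0, 1.0])  # 5.0
```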
Generalized Modelling of the Stabilizer Link and Static Simulation Using FEM
Cofaru, Nicolae Florin; Roman, Lucian Ion; Oleksik, Valentin; Pascu, Adrian
2016-12-01
This paper proposes an organological approach to one of the components of the front suspension, namely the anti-roll power link. A generalized 3D CAD model of this power link is built using Catia V5R20. The parameterized approach provides high flexibility in the design: dimensional and shape changes of the semi-power link are very easy to perform just by changing a few parameters. Several new versions of the anti-roll power link body are proposed. Finally, a static analysis is made of the semi-power link model used in the suspension of the OPEL ASTRA G, ZAFIRA and MERIVA vehicles, together with constructive optimization of its body.
Institute of Scientific and Technical Information of China (English)
Liang Guo; Kai Wang; Junyu Chen; Derun Huang; Yeyang Fan; Jieyun Zhuang
2013-01-01
Grain weight is a key determinant of grain yield in rice. Three sets of rice populations with overlapping segregating regions in isogenic backgrounds were established in the generations BC2F5, BC2F6 and BC2F7, derived from Zhenshan 97 and Milyang 46, and used for dissection of quantitative trait loci (QTL) for grain weight. Two QTL linked in repulsion phase on the long arm of chromosome 1 were separated. One was located between simple sequence repeat (SSR) markers RM11437 and RM11615, having a smaller additive effect with the enhancing allele from the maintainer line Zhenshan 97 and a partially dominant effect for increasing grain weight. The other was located between SSR markers RM11615 and RM11800, having a larger additive effect with the enhancing allele from the restorer line Milyang 46 and a partially dominant effect for increasing grain weight. When the two QTL segregated simultaneously, a residual additive effect with the enhancing allele from Milyang 46 and an over-dominance effect for increasing grain weight were detected. This suggests that dominant QTL linked in repulsion phase might play an important role in heterosis in rice. Our study also indicates that the use of populations with overlapping segregating regions in isogenic backgrounds is helpful for the dissection of minor linked QTL.
Quantitative Methods for Comparing Different Polyline Stream Network Models
Energy Technology Data Exchange (ETDEWEB)
Danny L. Anderson; Daniel P. Ames; Ping Yang
2014-04-01
Two techniques for exploring the relative horizontal accuracy of complex linear spatial features are described, and sample source code (pseudocode) is presented for this purpose. The first technique, relative sinuosity, is presented as a measure of the complexity or detail of a polyline network in comparison to a reference network. We term the second technique longitudinal root mean squared error (LRMSE) and present it as a means for quantitatively assessing the horizontal variance between two polyline data sets representing digitized (reference) and derived stream and river networks. Both relative sinuosity and LRMSE are shown to be suitable measures of horizontal stream network accuracy for assessing quality and variation in linear features. Both techniques have been used in two recent investigations involving the extraction of hydrographic features from LiDAR elevation data. One confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes yielded better stream network delineations, based on sinuosity and LRMSE, when using LiDAR-derived DEMs. The other demonstrated a new method of delineating stream channels directly from LiDAR point clouds, without the intermediate step of deriving a DEM, showing that direct delineation from LiDAR point clouds yielded an excellent and much closer match, as indicated by the LRMSE.
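Sinuosity and an RMSE between networks can be computed directly from polyline coordinates. This is a simplified sketch: true LRMSE as defined by the authors measures longitudinal offsets along the line, which the naive vertex-wise pairing below only approximates, and the coordinates are invented:

```python
# Illustrative polyline metrics: sinuosity and a vertex-wise RMSE.
import math

def polyline_length(pts):
    """Sum of segment lengths along a polyline of (x, y) points."""
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def sinuosity(pts):
    """Path length divided by straight-line distance between endpoints."""
    return polyline_length(pts) / math.dist(pts[0], pts[-1])

def relative_sinuosity(derived, reference):
    """Ratio of derived to reference sinuosity (1.0 = equal detail)."""
    return sinuosity(derived) / sinuosity(reference)

def vertex_rmse(derived, reference):
    """RMS distance between corresponding vertices; a crude stand-in
    for the longitudinal RMSE described in the abstract."""
    sq = [math.dist(a, b) ** 2 for a, b in zip(derived, reference)]
    return math.sqrt(sum(sq) / len(sq))

straight = [(0.0, 0.0), (4.0, 0.0)]                       # sinuosity 1.0
zigzag = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (3.0, 1.0), (4.0, 0.0)]
```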
Keong, Bun Poh; Siraj, Siti Shapor; Daud, Siti Khalijah; Panandam, Jothi Malar; Rahman, Arina Nadia Abdul
2014-02-15
A preliminary linkage map was constructed by applying a backcross and testcross strategy using microsatellite (SSR) markers developed for Xiphophorus and Poecilia reticulata in the ornamental fish molly, Poecilia sp. The linkage map, comprising 18 SSR loci, consisted of four linkage groups that spanned a map size of 516.1 cM. Association between genotypes and phenotypes was tested in a random fashion, and a QTL for dorsal fin length was found to be linked to locus Msb069 on linkage group 2. Coincidentally, locus Msb069 was also reported as a putative homologue primer pair containing an SSR repeat motif encoding hSMP-1, a sex-determining locus. Dorsal fin length, particularly in males of Poecilia latipinna, is an important feature during courtship display. Therefore, we speculate that dorsal fin length and the putative hSMP-1 gene lie in close proximity, linking this region to male sexual characteristics.
Salah, Ahmad M.; Nelson, E. James; Williams, Gustavious P.
2010-04-01
We present algorithms and tools we developed to automatically link an overland flow model to a hydrodynamic water quality model with different spatial and temporal discretizations. These tools run the linked models, which provide a stochastic simulation framework. We also briefly present the tools and algorithms we developed to facilitate and analyze stochastic simulations of the linked models. We demonstrate the algorithms by linking the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model for overland flow with the CE-QUAL-W2 model for water quality and reservoir hydrodynamics. GSSHA uses a two-dimensional horizontal grid while CE-QUAL-W2 uses a two-dimensional vertical grid. We implemented the algorithms and tools in the Watershed Modeling System (WMS), which allows modelers to easily create and use models. The algorithms are general and could be used for other models. Our tools create and analyze stochastic simulations to help understand uncertainty in the model application. While a number of examples of linked models exist, the ability to perform automatic, unassisted linking is a step forward and provides the framework to easily implement stochastic modeling studies.
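One core linking task, mapping outputs across different temporal discretizations, can be illustrated as follows. This is a hedged sketch, not the WMS implementation; the flux series is invented:

```python
# Hedged sketch of one linking task such tools must solve (illustrative,
# not the WMS implementation): aggregating a flux series from a fine model
# time step to the coarser step of the receiving model while conserving
# total volume.

def aggregate_flux(flux, dt_fine, dt_coarse):
    """Average fine-step fluxes (m^3/s) into coarse steps; volume-conservative."""
    ratio = int(dt_coarse / dt_fine)
    return [sum(flux[i:i + ratio]) / ratio for i in range(0, len(flux), ratio)]

fine = [1.0, 3.0, 2.0, 2.0, 4.0, 0.0]     # invented 10-min overland-flow outputs
coarse = aggregate_flux(fine, dt_fine=600, dt_coarse=1800)  # 30-min steps
print(coarse)  # [2.0, 2.0]
```

Averaging (rather than sampling) the fine-step fluxes keeps the total delivered volume identical on both grids, which is the usual requirement when passing mass between linked models.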
A Tiered Model for Linking Students to the Community
Meyer, Laura Landry; Gerard, Jean M.; Sturm, Michael R.; Wooldridge, Deborah G.
2016-01-01
A tiered practice model (introductory, pre-internship, and internship) embedded in the curriculum facilitates community engagement and creates relevance for students as they pursue a professional identity in Human Development and Family Studies. The tiered model integrates high-impact teaching practices (HIP) and student engagement pedagogies…
Digital clocks: simple Boolean models can quantitatively describe circadian systems.
Akman, Ozgur E; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J; Ghazal, Peter
2012-09-07
The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day-night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we anticipate
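The Boolean approach can be illustrated with a minimal example. The two-node negative-feedback loop below is a toy motif, not one of the paper's fitted clock models:

```python
# Hedged toy example (not one of the paper's fitted models): a two-node
# synchronous Boolean negative-feedback loop, the minimal logic motif
# behind oscillatory clock circuits. A activates B; B represses A; a light
# input forces A on, crudely mimicking entrainment.

def step(a, b, light):
    """One synchronous update: A_next = (NOT B) OR light, B_next = A."""
    return (not b) or light, a

state = (True, False)
trajectory = [state]
for t in range(8):                     # free-running: no light input
    state = step(*state, light=False)
    trajectory.append(state)
print(trajectory)
```

Even this two-gene loop produces a sustained period-4 oscillation under synchronous updating, which is why such small finite state spaces are cheap to fit and search compared with differential equation models.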
Design-oriented analytic model of phase and frequency modulated optical links
Monsurrò, Pietro; Saitto, Antonio; Tommasino, Pasquale; Trifiletti, Alessandro; Vannucci, Antonello; Cimmino, Rosario F.
2016-07-01
An analytic design-oriented model of phase and frequency modulated microwave optical links has been developed. The models are suitable for the design of broadband high dynamic range optical links for antenna remoting and optical beamforming, where noise and linearity of the subsystems are a concern. Digital filter design techniques have been applied to the design of optical filters working as frequency discriminators, which are the bottleneck in terms of linearity for these systems. The models of the frequency modulated, phase modulated, and coherent I/Q links have been used to compare the performance of the different architectures in terms of linearity and SFDR.
McCullough, Cathy Bolton
An innovative session was conducted to introduce session participants to a concept and researched model for linking organizational culture and performance. The session goals were as follows: (1) give participants a working knowledge of the link between business culture and key business performance indicators; (2) give participants a hands-on…
AddRemove: A new link model for use in QM/MM studies
Swart, M
2003-01-01
The division of a system under study into a quantum mechanical (QM) part and a classical (MM) part in QM/MM calculations is sometimes very natural, but a problem arises in the case of bonds crossing the QM/MM boundary. A new link model that uses a capping (link) atom to satisfy the valence
D1.3 -- Short Report on the First Draft Multi-link Channel Model
DEFF Research Database (Denmark)
Pedersen, Troels; Raulefs, Ronald; Steinboeck, Gerhard
-link large scale parameters, such as rms delay spread, from outdoor to indoor scenarios and for different carrier frequencies. Furthermore indoor radio propagation in in-room scenarios is considered and first modeling approaches, potentially suitable for multi-link channels are presented. A sparse estimator...
Modeling the video distribution link in the Next Generation Optical Access Networks
DEFF Research Database (Denmark)
Amaya, F.; Cárdenas, A.; Tafur Monroy, Idelfonso
2011-01-01
In this work we present a model for the design and optimization of the video distribution link in the next generation optical access network. We analyze the video distribution performance in a SCM-WDM link, including the noise, the distortion and the fiber optic nonlinearities. Additionally, we...
Derivation of free energy expressions for tube models from coarse-grained slip-link models
Steenbakkers, Rudi J. A.; Schieber, Jay D.
2012-07-01
We present the free energy of a single-chain mean-field model for polymer melt dynamics, which uses a continuous (tube-like) approximation to the discrete entanglements with surrounding chains, but, in contrast to previous tube models, includes fluctuations in the number density of Kuhn steps along the primitive path and in the degree of entanglement. The free energy is obtained from that of the slip-link model with fluctuating entanglement positions [J. D. Schieber and K. Horio, J. Chem. Phys. 132, 074905 (2010)], 10.1063/1.3314727 by taking the continuous limit of (functions of) the discrete Kuhn-step numbers and end-to-end vectors of the strands between entanglements. This coarse-graining from a more-detailed level of description has the advantage that no ad hoc arguments need to be introduced. Moreover, the thermodynamic consistency of the slip-link model [J. D. Schieber, J. Non-Equilib. Thermodyn. 28, 179 (2003)], 10.1515/JNETDY.2003.010 can be preserved. Fluctuations in the positions of entanglements lead to a harmonic bending term in the free energy of the continuous chain, similar to that derived by Read et al. [Macromolecules 41, 6843 (2008)], 10.1021/ma8009855 starting from a modified GLaMM model [R. S. Graham, A. E. Likhtman, T. C. B. McLeish, and S. T. Milner, J. Rheol. 47, 1171 (2003)], 10.1122/1.1595099. If these fluctuations are set to zero, the free energy becomes purely Gaussian and corresponds to the continuous limit of the original slip-link model, with affinely moving entanglements [J. D. Schieber, J. Chem. Phys. 118, 5162 (2003)], 10.1063/1.1553764. The free energy reduces to that of Read et al. under their assumptions of a homogeneous Kuhn-step number density and a constant degree of entanglement. Finally, we show how a transformation of the primitive-path coordinate can be applied to make the degree of entanglement an outcome of the model instead of a variable. In summary, this paper constitutes a first step towards a unified mathematical
Probabilistic Quantitative Precipitation Forecasting Using Ensemble Model Output Statistics
Scheuerer, Michael
2013-01-01
Statistical post-processing of dynamical forecast ensembles is an essential component of weather forecasting. In this article, we present a post-processing method that generates full predictive probability distributions for precipitation accumulations based on ensemble model output statistics (EMOS). We model precipitation amounts by a generalized extreme value distribution that is left-censored at zero. This distribution permits modelling precipitation on the original scale without prior transformation of the data. A closed form expression for its continuous rank probability score can be derived and permits computationally efficient model fitting. We discuss an extension of our approach that incorporates further statistics characterizing the spatial variability of precipitation amounts in the vicinity of the location of interest. The proposed EMOS method is applied to daily 18-h forecasts of 6-h accumulated precipitation over Germany in 2011 using the COSMO-DE ensemble prediction system operated by the Germa...
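The left-censored GEV idea can be sketched directly. The parameters below are illustrative, not EMOS-fitted values; in the method, location and scale would be linked to ensemble statistics:

```python
import math

# Hedged sketch of a GEV distribution left-censored at zero. Parameter
# values are illustrative; in EMOS they would be linked to ensemble
# statistics such as the ensemble mean and spread.

def gev_cdf(x, mu, sigma, xi):
    """CDF of the generalized extreme value distribution."""
    z = (x - mu) / sigma
    if abs(xi) < 1e-12:
        return math.exp(-math.exp(-z))   # Gumbel limit
    t = 1.0 + xi * z
    if t <= 0.0:
        return 0.0 if xi > 0 else 1.0    # outside the support
    return math.exp(-t ** (-1.0 / xi))

mu, sigma, xi = 1.0, 2.0, 0.2            # illustrative, not fitted values
p_dry = gev_cdf(0.0, mu, sigma, xi)              # censored mass: P(no precipitation)
p_above_5 = 1.0 - gev_cdf(5.0, mu, sigma, xi)    # P(accumulation > 5 mm)
print(round(p_dry, 3), round(p_above_5, 3))
```

Censoring at zero means all of the distribution's mass below zero is interpreted as the probability of a dry period, so no prior transformation of the precipitation data is needed.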
Quantitative modeling of degree-degree correlation in complex networks
Niño, Alfonso
2013-01-01
This paper presents an approach to the modeling of degree-degree correlation in complex networks. Thus, a simple function, Δ(k', k), describing specific degree-to-degree correlations is considered. The function is well suited to graphically depict assortative and disassortative variations within networks. To quantify degree correlation variations, the joint probability distribution between nodes with arbitrary degrees, P(k', k), is used. Introduction of the end-degree probability function as a basic variable allows using group theory to derive mathematical models for P(k', k). In this form, an expression, representing a family of seven models, is constructed with the needed normalization conditions. Applied to Δ(k', k), this expression predicts a nonuniform distribution of degree correlation in networks, organized in two assortative and two disassortative zones. This structure is actually observed in a set of four modeled, technological, social, and biological networks. A regression study performed...
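The empirical quantities the models target, the joint degree counts behind P(k', k) and the overall degree correlation, can be computed as follows. A hedged sketch on an invented toy graph, not the paper's group-theoretic construction:

```python
import math
from collections import Counter

# Hedged illustration (invented toy graph, not the paper's models):
# empirical joint degree counts over edges and the degree assortativity
# coefficient (Pearson correlation of degrees at the two ends of an edge).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]
deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

# joint degree counts underlying P(k', k), symmetrized over edge direction
joint = Counter()
for u, v in edges:
    joint[(deg[u], deg[v])] += 1
    joint[(deg[v], deg[u])] += 1

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

xs = [deg[u] for u, v in edges] + [deg[v] for u, v in edges]
ys = [deg[v] for u, v in edges] + [deg[u] for u, v in edges]
r = pearson(xs, ys)     # r < 0: disassortative, r > 0: assortative
print(dict(joint), round(r, 3))
```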
Quantitative modeling of selective lysosomal targeting for drug design
DEFF Research Database (Denmark)
Trapp, Stefan; Rosania, G.; Horobin, R.W.;
2008-01-01
Lysosomes are acidic organelles and are involved in various diseases, the most prominent of which is malaria. Accumulation of molecules in the cell by diffusion from the external solution into cytosol, lysosome and mitochondrion was calculated with the Fick–Nernst–Planck equation. The cell model considers… This demonstrates that the cell model can be a useful tool for the design of effective lysosome-targeting drugs with minimal off-target interactions…
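The ion-trapping mechanism behind lysosomal accumulation can be approximated with the classical pH-partition formula. This is a simplified stand-in for the Fick–Nernst–Planck cell model; the pKa and pH values are illustrative:

```python
# Hedged sketch (classical pH-partition/ion-trapping estimate, a simplified
# stand-in for the full Fick-Nernst-Planck cell model): steady-state
# accumulation ratio of a monobasic weak base in an acidic lysosome,
# assuming only the neutral species permeates the membrane.

def lysosome_accumulation(pka, ph_lysosome=4.7, ph_cytosol=7.2):
    """Total concentration ratio lysosome/cytosol for a weak base of given pKa."""
    return (1.0 + 10.0 ** (pka - ph_lysosome)) / (1.0 + 10.0 ** (pka - ph_cytosol))

ratio_val = lysosome_accumulation(pka=8.0)   # illustrative pKa
print(round(ratio_val, 1))
```

The steep dependence on pKa is why basic lipophilic drugs accumulate in lysosomes by orders of magnitude, the effect the full model quantifies alongside mitochondrial and cytosolic partitioning.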
Linking Complexity and Sustainability Theories: Implications for Modeling Sustainability Transitions
Directory of Open Access Journals (Sweden)
Camaren Peter
2014-03-01
Full Text Available In this paper, we deploy complexity theory as the foundation for integration of different theoretical approaches to sustainability and develop a rationale for a complexity-based framework for modeling transitions to sustainability. We propose a framework based on a comparison of complex systems’ properties that characterize the different theories that deal with transitions to sustainability. We argue that adopting a complexity-theory-based approach for modeling transitions requires going beyond deterministic frameworks, by adopting a probabilistic, integrative, inclusive and adaptive approach that can support transitions. We also illustrate how this complexity-based modeling framework can be implemented; i.e., how it can be used to select modeling techniques that address particular properties of complex systems that we need to understand in order to model transitions to sustainability. In doing so, we establish a complexity-based approach towards modeling sustainability transitions that caters for the broad range of complex systems’ properties that are required to model transitions to sustainability.
Strengthening the weak link: Built Environment modelling for loss analysis
Millinship, I.
2012-04-01
Methods to analyse insured losses from a range of natural perils, including pricing by primary insurers and catastrophe modelling by reinsurers, typically lack sufficient exposure information. Understanding the hazard intensity in terms of spatial severity and frequency is only the first step towards quantifying the risk of a catastrophic event. For any given event we need to know: Are any structures affected? What type of buildings are they? How much damage occurred? How much will the repairs cost? To achieve this, detailed exposure information is required to assess the likely damage and to effectively calculate the resultant loss. Modelling exposures in the Built Environment therefore plays as important a role in understanding re/insurance risk as characterising the physical hazard. Across both primary insurance books and aggregated reinsurance portfolios, the location of a property (a risk) and its monetary value is typically known. Exactly what that risk is in terms of detailed property descriptors including structure type and rebuild cost - and therefore its vulnerability to loss - is often omitted. This data deficiency is a primary source of variations between modelled losses and the actual claims value. Built Environment models are therefore required at a high resolution to describe building attributes that relate vulnerability to property damage. However, national-scale household-level datasets are often not computationally practical in catastrophe models and data must be aggregated. In order to provide more accurate risk analysis, we have developed and applied a methodology for Built Environment modelling for incorporation into a range of re/insurance applications, including operational models for different international regions and different perils and covering residential, commercial and industrial exposures. Illustrated examples are presented, including exposure modelling suitable for aggregated reinsurance analysis for the UK and bespoke high resolution
Modeling and control of a hydraulically actuated flexible-prismatic link robot
Energy Technology Data Exchange (ETDEWEB)
Love, L.; Kress, R.; Jansen, J.
1996-12-01
Most of the research related to flexible link manipulators to date has focused on single link, fixed length, single plane of vibration test beds. In addition, actuation has been predominantly based upon electromagnetic motors. Ironically, these elements are rarely found in the existing industrial long reach systems. This manuscript describes a new hydraulically actuated, long reach manipulator with a flexible prismatic link at Oak Ridge National Laboratory (ORNL). Focus is directed towards both modeling and control of hydraulic actuators as well as flexible links that have variable natural frequencies.
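The dependence of link flexibility on extension can be illustrated with a uniform cantilever approximation. This is not the ORNL model; the section properties are invented round numbers:

```python
import math

# Hedged illustration (a uniform cantilever approximation, not the ORNL
# manipulator model): the first bending natural frequency of a prismatic
# link, showing how the frequency falls as the link extends.
# Beam properties below are invented round numbers for a steel-like section.

def cantilever_f1(E, I, rho, A, L):
    """First bending natural frequency (Hz) of a uniform cantilever of length L."""
    lam1 = 1.8751                     # first root of cos(x)*cosh(x) = -1
    return (lam1 ** 2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L ** 4))

E, I, rho, A = 200e9, 8.3e-6, 7850.0, 1e-2   # Pa, m^4, kg/m^3, m^2 (invented)
freqs = {L: cantilever_f1(E, I, rho, A, L) for L in (2.0, 3.0, 4.0)}
for L, f in freqs.items():
    print(L, round(f, 2))
```

The frequency scales as 1/L^2, which is why a variable-length prismatic link has variable natural frequencies: doubling the extension cuts the first mode frequency by a factor of four, a key complication for the controller.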
Pargett, Michael; Umulis, David M
2013-07-15
Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data; however, much of the data available or even acquirable are not quantitative. Data that are not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended both to inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method to use for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data.
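Optimal scaling, one of the techniques discussed, has a closed-form least-squares solution when data are known only up to a multiplicative factor. A minimal sketch with invented data:

```python
# Hedged sketch of one technique of the kind the review discusses: optimal
# scaling, which rescales model predictions onto relative (non-absolute)
# data by the closed-form least-squares factor before computing a fitness
# value. Model predictions and data below are invented.

def optimal_scale_sse(model, data):
    """Scale model to data with s = sum(d*m)/sum(m*m); return (s, SSE)."""
    s = sum(d * m for d, m in zip(data, model)) / sum(m * m for m in model)
    sse = sum((d - s * m) ** 2 for d, m in zip(data, model))
    return s, sse

model = [1.0, 2.0, 3.0]    # predictions in arbitrary model units
data = [2.1, 3.9, 6.0]     # measurements in arbitrary intensity units
s, sse = optimal_scale_sse(model, data)
print(round(s, 4), round(sse, 5))
```

Because s minimizes the SSE analytically, the scale factor never needs to be added to the parameter set being optimized, which keeps the fitting problem smaller.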
Sonneveld, B.G.J.S.; Keyzer, M.A.; Stroosnijder, L.
2011-01-01
This paper tests the candidacy of one qualitative response model and two quantitative models for a nationwide water erosion hazard assessment in Ethiopia. After a descriptive comparison of model characteristics, the study conducts a statistical comparison to evaluate the explanatory power of the models
Statistical analysis of probabilistic models of software product lines with quantitative constraints
DEFF Research Database (Denmark)
Beek, M.H. ter; Legay, A.; Lluch Lafuente, Alberto
2015-01-01
We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra...
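The statistical-model-checking core, estimating the probability that a property holds by repeated simulation, can be sketched generically. The toy model below is invented and unrelated to QFLan:

```python
import math
import random

# Hedged illustration of the statistical-model-checking core (an invented
# toy model, not QFLan): estimate the probability that a property holds by
# Monte Carlo simulation, with a normal-approximation confidence interval.
# Toy model: a feature installation succeeds only if each of 3 probabilistic
# steps succeeds, each with probability 0.8.

def simulate(rng):
    return all(rng.random() < 0.8 for _ in range(3))   # property: all steps ok

rng = random.Random(42)
n = 20000
hits = sum(simulate(rng) for _ in range(n))
p_hat = hits / n
half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)       # 95% CI half-width
print(round(p_hat, 3), "+/-", round(half, 3))
```

Statistical model checking trades the exhaustive state-space exploration of probabilistic model checking for sampling, which is what makes rich quantitative constraints tractable.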
A suite of models to support the quantitative assessment of spread in pest risk analysis
Robinet, C.; Kehlenbeck, H.; Werf, van der W.
2012-01-01
In the frame of the EU project PRATIQUE (KBBE-2007-212459 Enhancements of pest risk analysis techniques) a suite of models was developed to support the quantitative assessment of spread in pest risk analysis. This dataset contains the model codes (R language) for the four models in the suite. Three
Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model
Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko
2015-04-01
One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: If one shoots an arrow, and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays, this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey of 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the grid size at which the model is applied has decreased over the past 17 years: from 0.5 to 2 degrees when the model was just developed, to 1/8 and even 1/32 degree nowadays. On the other hand, the literature survey showed that the time step at which the model is calibrated and/or validated remained the same over the last 17 years: mainly daily or monthly. Klemeš (1983) stresses the fact that space and time scales are connected, and therefore downscaling the spatial scale would also imply downscaling of the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1
The power of a good idea: quantitative modeling of the spread of ideas from epidemiological models
Bettencourt, Luís M. A.; Cintrón-Arias, Ariel; Kaiser, David I.; Castillo-Chávez, Carlos
2005-01-01
The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to estimate quantitatively population average parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. We estimate the effectiveness of adoption of the idea in the thr...
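A paradigmatic epidemic model of the kind applied in the paper can be sketched as a discrete-time SIR system. The parameters are illustrative, not the paper's estimates for Feynman-diagram adoption:

```python
# Hedged sketch: a discrete-time SIR-type model of idea adoption of the
# kind the paper fits (parameters illustrative, not their estimates).
# S: unaware, I: active spreaders of the idea, R: no longer spreading.

def sir_step(s, i, r, beta, gamma, dt=0.1):
    n = s + i + r
    new_adopt = beta * s * i / n * dt    # contact-driven adoption
    new_retire = gamma * i * dt          # spreaders going quiet
    return s - new_adopt, i + new_adopt - new_retire, r + new_retire

s, i, r = 999.0, 1.0, 0.0               # one initial "carrier" of the idea
peak = i
for _ in range(2000):                    # simulate 200 time units
    s, i, r = sir_step(s, i, r, beta=0.5, gamma=0.1)
    peak = max(peak, i)
print(round(r, 1), round(peak, 1))       # final adopters, peak active spreaders
```

With beta/gamma = 5, nearly the whole population eventually adopts; fitting beta and gamma to adoption time series is what yields the contact-rate and "infectious period" analogues the abstract mentions.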
Process of quantitative evaluation of validity of rock cutting model
Directory of Open Access Journals (Sweden)
Jozef Futó
2012-12-01
Full Text Available Most complex technical systems, including the rock cutting process, are very difficult to describe mathematically due to limited human recognition abilities, depending on the achieved state in natural sciences and technology. A confrontation between the conception (model) and the real system often arises in the investigation of the rock cutting process. Identification represents determination of the system based on its input and output in a specified system class, in a manner to obtain a determined system equivalent to the explored system. In the case of rock cutting, the qualities of the model derived from a conventional energy theory of rock cutting are compared to the qualities of non-standard models obtained by scanning of the acoustic signal, as an accompanying effect of the surroundings in the rock cutting process, by calculated characteristics of the acoustic signal. The paper focuses on optimization using the specific cutting energy and the possibility of optimization using the accompanying acoustic signal, namely by one of its characteristics, i.e. the volume of total signal M, representing the result of the system identification.
Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.
Beattie, Bradley J; Thorek, Daniel L J; Schmidtlein, Charles R; Pentlow, Keith S; Humm, John L; Hielscher, Andreas H
2012-01-01
There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use.
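The Frank-Tamm starting point can be turned into a photon-yield estimate directly. A hedged sketch with dispersion neglected and round illustrative inputs:

```python
import math

# Hedged sketch of the Frank-Tamm starting point the paper builds on:
# Cerenkov photons emitted per unit path length by a charged particle of
# speed beta (in units of c) in a medium of refractive index n, integrated
# over a wavelength band, with dispersion neglected.

ALPHA = 1.0 / 137.035999        # fine-structure constant

def cerenkov_photons_per_m(beta, n, lam1, lam2):
    """Photons per metre in [lam1, lam2] (metres); 0 below threshold."""
    if beta * n <= 1.0:
        return 0.0              # below the Cerenkov threshold beta > 1/n
    band = 1.0 / lam1 - 1.0 / lam2
    return 2.0 * math.pi * ALPHA * band * (1.0 - 1.0 / (beta ** 2 * n ** 2))

# illustrative case: electron at beta = 0.9 in water (n ~ 1.33), 400-700 nm
yield_per_cm = cerenkov_photons_per_m(0.9, 1.33, 400e-9, 700e-9) / 100.0
print(round(yield_per_cm, 1))
```

The hard threshold beta > 1/n is the reason some β-emitting radionuclides produce little or no CR: only the part of the emission spectrum above the threshold energy contributes.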
Exploiting linkage disequilibrium in statistical modelling in quantitative genomics
DEFF Research Database (Denmark)
Wang, Lei
Alleles at two loci are said to be in linkage disequilibrium (LD) when they are correlated or statistically dependent. Genomic prediction and gene mapping rely on the existence of LD between genetic markers and causal variants of complex traits. In the first part of the thesis, a novel method… to quantify and visualize local variation in LD along chromosomes is described, and applied to characterize LD patterns at the local and genome-wide scale in three Danish pig breeds. In the second part, different ways of taking LD into account in genomic prediction models are studied. One approach is to use… the recently proposed antedependence models, which treat neighbouring marker effects as correlated; another approach involves use of haplotype block information derived using the program Beagle. The overall conclusion is that taking LD information into account in genomic prediction models potentially improves…
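The LD statistics underlying the thesis can be computed directly from haplotype frequencies. The frequencies below are invented for illustration:

```python
# Hedged illustration of the standard two-locus LD statistics (D, D', r^2)
# computed from haplotype frequencies at two biallelic loci.
# The haplotype frequencies below are invented for the example.

def ld_stats(pAB, pAb, paB, pab):
    """Return (D, D', r^2) for haplotype frequencies summing to 1."""
    pA, pB = pAB + pAb, pAB + paB
    D = pAB - pA * pB                     # raw disequilibrium coefficient
    if D > 0:
        dmax = min(pA * (1 - pB), (1 - pA) * pB)
    else:
        dmax = min(pA * pB, (1 - pA) * (1 - pB))
    d_prime = D / dmax if dmax > 0 else 0.0
    r2 = D * D / (pA * (1 - pA) * pB * (1 - pB))
    return D, d_prime, r2

D, d_prime, r2 = ld_stats(pAB=0.4, pAb=0.1, paB=0.1, pab=0.4)
print(round(D, 3), round(d_prime, 3), round(r2, 3))
```

r^2 is the squared correlation between the allele indicators, which is why it directly bounds how much of a causal variant's signal a nearby marker can capture in genomic prediction.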
Quantitative phase-field modeling for wetting phenomena.
Badillo, Arnoldo
2015-03-01
A new phase-field model is developed for studying partial wetting. The introduction of a third phase representing a solid wall allows for the derivation of a new surface tension force that accounts for energy changes at the contact line. In contrast to other multi-phase-field formulations, the present model does not need the introduction of surface energies for the fluid-wall interactions. Instead, all wetting properties are included in a unique parameter known as the equilibrium contact angle θ_eq. The model requires the solution of a single elliptic phase-field equation, which, coupled to conservation laws for mass and linear momentum, admits the existence of steady and unsteady compact solutions (compactons). The representation of the wall by an additional phase field allows for the study of wetting phenomena on flat, rough, or patterned surfaces in a straightforward manner. The model contains only two free parameters: a measure of interface thickness W and β, which is used in the definition of the mixture viscosity μ = μ_l φ_l + μ_v φ_v + β μ_l φ_w. The former controls the convergence towards the sharp interface limit and the latter the energy dissipation at the contact line. Simulations on rough surfaces show that by taking values for β higher than 1, the model can reproduce, on average, the effects of pinning events of the contact line during its dynamic motion. The model is able to capture, in good agreement with experimental observations, many physical phenomena fundamental to wetting science, such as the wetting transition on micro-structured surfaces and droplet dynamics on solid substrates.
First principles pharmacokinetic modeling: A quantitative study on Cyclosporin
DEFF Research Database (Denmark)
Mošat', Andrej; Lueshen, Eric; Heitzig, Martina
2013-01-01
renal and hepatic clearances, elimination half-life, and mass transfer coefficients, to establish drug biodistribution dynamics in all organs and tissues. This multi-scale model satisfies first principles and conservation of mass, species and momentum. Prediction of organ drug bioaccumulation… as a function of cardiac output, physiology, pathology or administration route may be possible with the proposed PBPK framework. Successful application of our model-based drug development method may lead to more efficient preclinical trials, accelerated knowledge gain from animal experiments, and shortened time-to-market…
A Link Loss Model for the On-Body Propagation Channel for Binaural Hearing Aids
Chandra, Rohit; Johansson, Anders J.
2013-12-01
Binaural hearing aids communicate with each other through a wireless link for synchronization. A propagation model is needed to estimate the ear-to-ear link loss for such binaural hearing aids. The link loss is a critical parameter in a link budget to decide the sensitivity of the transceiver. In this paper, we have presented a model for the deterministic component of the ear-to-ear link loss. The model takes into account the dominant paths having most of the power of the creeping wave from the transceiver in one ear to the transceiver in other ear and the effect of the protruding part of the outer ear called pinna. Simulations are done to validate the model using in-the-ear (ITE) placement of antennas at 2.45 GHz on two heterogeneous phantoms of different age-group and body size. The model agrees with the simulations. The ear-to-ear link loss between the antennas for the binaural hearing aids in the homogeneous SAM phantom is compared with a heterogeneous phantom. It is found that the absence of the pinna and the lossless shell in the SAM phantom underestimate the link loss. This is verified by the measurements on a phantom where we have included the pinnas fabricated by 3D-printing.
Xin, Q.; Gong, P.; Li, W.
2015-02-01
Modeling vegetation photosynthesis is essential for understanding carbon exchanges between terrestrial ecosystems and the atmosphere. The radiative transfer process within plant canopies is one of the key drivers that regulate canopy photosynthesis. Most vegetation cover consists of discrete plant crowns, whose physical structure departs from the underlying assumption of a homogeneous and uniform medium in classic radiative transfer theory. Here we advance the Geometric Optical Radiative Transfer (GORT) model to simulate photosynthetic activity in discontinuous plant canopies. We separate radiation absorption into two components, absorbed by sunlit and shaded leaves respectively, and derive analytical solutions by integrating over the canopy layer. To model leaf-level and canopy-level photosynthesis, leaf light absorption is then linked to the biochemical process of gas diffusion through leaf stomata. The canopy gap probability derived from GORT differs from that of classic radiative transfer theory, especially when the leaf area index is high, due to leaf clumping effects. Tree characteristics such as tree density, crown shape, and canopy length affect leaf clumping and regulate radiation interception. Modeled gross primary production (GPP) for two deciduous forest stands could explain more than 80% of the variance of flux tower measurements at both near-hourly and daily time scales. We also demonstrate that the ambient CO2 concentration influences daytime vegetation photosynthesis, which needs to be considered in state-of-the-art biogeochemical models. The proposed model is complementary to classic radiative transfer theory and shows promise in modeling the radiative transfer process and photosynthetic activities over discontinuous forest canopies.
Du, Yi; May, Kimberly; Xu, Wei; Liu, Hongcheng
2012-07-01
The presence of N-linked oligosaccharides in the CH2 domain has a significant impact on the structure, stability, and biological functions of recombinant monoclonal antibodies. The impact is also highly dependent on the specific oligosaccharide structures. The absence of core fucose has been demonstrated to result in increased binding affinity to Fcγ receptors and, thus, enhanced antibody-dependent cellular cytotoxicity (ADCC). Therefore, a method that can specifically determine the level of oligosaccharides without the core fucose (afucosylation) is highly desired. In the current study, recombinant monoclonal antibodies and tryptic peptides from the antibodies were digested using endoglycosidases F2 and H, which cleave the glycosidic bond between the two primary GlcNAc residues. As a result, various oligosaccharides of either the complex type or the high-mannose type that are commonly observed for recombinant monoclonal antibodies are converted to either the GlcNAc residue only or GlcNAc with the core fucose. The level of GlcNAc represents the sum of all afucosylated oligosaccharides, whereas the level of GlcNAc with the core fucose represents the sum of all fucosylated oligosaccharides. LC-MS analysis of the enzymatically digested antibodies after reduction provided a quick estimate of the levels of afucosylation. An accurate determination of the level of afucosylation was obtained by LC-MS analysis of glycopeptides after trypsin digestion.
Quinn, Conrad P; Semenova, Vera A; Elie, Cheryl M; Romero-Steiner, Sandra; Greene, Carolyn; Li, Han; Stamey, Karen; Steward-Clark, Evelene; Schmidt, Daniel S; Mothershed, Elizabeth; Pruckler, Janet; Schwartz, Stephanie; Benson, Robert F; Helsel, Leta O; Holder, Patricia F; Johnson, Scott E; Kellum, Molly; Messmer, Trudy; Thacker, W Lanier; Besser, Lilah; Plikaytis, Brian D; Taylor, Thomas H; Freeman, Alison E; Wallace, Kelly J; Dull, Peter; Sejvar, Jim; Bruce, Erica; Moreno, Rosa; Schuchat, Anne; Lingappa, Jairam R; Martin, Sandra K; Walls, John; Bronsdon, Melinda; Carlone, George M; Bajani-Ari, Mary; Ashford, David A; Stephens, David S; Perkins, Bradley A
2002-10-01
The bioterrorism-associated human anthrax epidemic in the fall of 2001 highlighted the need for a sensitive, reproducible, and specific laboratory test for the confirmatory diagnosis of human anthrax. The Centers for Disease Control and Prevention developed, optimized, and rapidly qualified an enzyme-linked immunosorbent assay (ELISA) for immunoglobulin G (IgG) antibodies to Bacillus anthracis protective antigen (PA) in human serum. The qualified ELISA had a minimum detection limit of 0.06 micro g/mL, a reliable lower limit of detection of 0.09 micro g/mL, and a lower limit of quantification in undiluted serum specimens of 3.0 micro g/mL anti-PA IgG. The diagnostic sensitivity of the assay was 97.8%, and the diagnostic specificity was 97.6%. A competitive inhibition anti-PA IgG ELISA was also developed to enhance diagnostic specificity to 100%. The anti-PA ELISAs proved valuable for the confirmation of cases of cutaneous and inhalational anthrax and evaluation of patients in whom the diagnosis of anthrax was being considered.
Zhang, Shiwei; Lai, Xintian; Liu, Xiaoqing; Li, Yun; Li, Bifang; Huang, Xiuli; Zhang, Qinlei; Chen, Wei; Lin, Lin; Yang, Guowu
2013-01-01
The article presents a sandwich enzyme-linked immunosorbent assay (ELISA) for identification of edible bird's nest. The characteristic sialoglycoproteins were found by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and purified by liquid-phase isoelectric focusing (LIEF). According to the analysis, the molecular weight was 106-128 kDa and the isoelectric point was ≤ pH 3.0. Two anti-characteristic-sialoglycoprotein monoclonal antibodies were produced and examined by western blot assay. One of the monoclonal antibodies was used for coating and the other as the enzyme-labeled antibody after being coupled to horseradish peroxidase (HRP). Under the optimized conditions, the ELISA had an IC50 of 1.5 ng/mL and low cross-reactivity with various fake materials. The ELISA provided a suitable means for screening a large number of samples. The coefficients of variation were between 2.9% and 5.8%.
Energy Technology Data Exchange (ETDEWEB)
Duggirala, R.; Stern, M.P.; Reinhart, L.J. [Univ. of Texas Health Science Center, San Antonio, TX (United States)] [and others]
1996-09-01
Despite the evidence that human obesity has strong genetic determinants, efforts at identifying specific genes that influence human obesity have largely been unsuccessful. Using the sibship data obtained from 32 low-income Mexican American pedigrees ascertained on a type II diabetic proband and a multipoint variance-components method, we tested for linkage between various obesity-related traits plus associated metabolic traits and 15 markers on human chromosome 7. We found evidence for linkage between markers in the OB gene region and various traits, as follows: D7S514 and extremity skinfolds (LOD = 3.1), human carboxypeptidase A1 (HCPA1) and 32,33-split proinsulin level (LOD = 4.2), and HCPA1 and proinsulin level (LOD = 3.2). A putative susceptibility locus linked to the marker D7S514 explained 56% of the total phenotypic variation in extremity skinfolds. Variation at the HCPA1 locus explained 64% of the phenotypic variation in proinsulin level and approximately 73% of the phenotypic variation in split proinsulin concentration. Weaker evidence for linkage to several other obesity-related traits (e.g., waist circumference, body-mass index, fat mass by bioimpedance) was observed for a genetic location approximately 15 cM telomeric to OB. In conclusion, our study reveals that the OB region plays a significant role in determining the phenotypic variation of both insulin precursors and obesity-related traits, at least in Mexican Americans. 66 refs., 3 figs., 4 tabs.
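For context, the LOD scores reported above are base-10 logarithms of the likelihood ratio of linkage versus no linkage. A minimal sketch of that conversion, using illustrative log-likelihood values rather than anything from the study's variance-components analysis:

```python
import math

def lod_score(loglik_linked, loglik_unlinked):
    """LOD = log10 of the likelihood ratio of linkage vs. no linkage.

    Inputs are natural-log likelihoods; the difference is converted
    to base 10 by dividing by ln(10).
    """
    return (loglik_linked - loglik_unlinked) / math.log(10)

# A natural-log likelihood advantage of 7.1 corresponds to a LOD of
# about 3.08, just above the conventional LOD = 3 linkage threshold.
print(round(lod_score(-1000.0, -1007.1), 2))  # 3.08
```

A LOD of 3 thus means the data are about 1000 times more likely under linkage than under no linkage.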
Linking Experimental Characterization and Computational Modeling in Microstructural Evolution
Energy Technology Data Exchange (ETDEWEB)
Demirel, Melik Cumhar [Univ. of Pittsburgh, PA (United States)
2002-06-01
It is known that by controlling microstructural development, desirable properties of materials can be achieved. The main objective of our research is to understand and control interface dominated material properties, and finally, to verify experimental results with computer simulations. In order to accomplish this objective, we studied the grain growth in detail with experimental techniques and computational simulations. We obtained 5170-grain data from an aluminum film (120 μm thick) with a columnar grain structure from the Electron Backscattered Diffraction (EBSD) measurements. Experimentally obtained starting microstructure and grain boundary properties are input for the three-dimensional grain growth simulation. In the computational model, minimization of the interface energy is the driving force for the grain boundary motion. The computed evolved microstructure is compared with the final experimental microstructure, after annealing at 550 °C. Two different measures were introduced as methods of comparing experimental and computed microstructures. Modeling with anisotropic mobility explains a significant amount of mismatch between experiment and isotropic modeling. We have shown that isotropic modeling has very little predictive value. Microstructural evolution in columnar aluminum foils can be correctly modeled with anisotropic parameters. We observed a strong similarity between grain growth experiments and anisotropic three-dimensional simulations.
A quantitative magnetospheric model derived from spacecraft magnetometer data
Mead, G. D.; Fairfield, D. H.
1975-01-01
The model is derived by making least squares fits to magnetic field measurements from four Imp satellites. It includes four sets of coefficients, representing different degrees of magnetic disturbance as determined by the range of Kp values. The data are fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the effects of seasonal north-south asymmetries are contained. The expansion is divergence-free, but unlike the usual scalar potential expansion, the model contains a nonzero curl representing currents distributed within the magnetosphere. The latitude at the earth separating open polar cap field lines from field lines closing on the day side is about 5 deg lower than that determined by previous theoretically derived models. At times of high Kp, additional high-latitude field lines extend back into the tail. Near solstice, the separation latitude can be as low as 75 deg in the winter hemisphere. The average northward component of the external field is much smaller than that predicted by theoretical models; this finding indicates the important effects of distributed currents in the magnetosphere.
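The fitting procedure described here, least squares on a power-series expansion of the field, can be sketched on synthetic data. The low-order polynomial below is a hypothetical stand-in for the paper's solar-magnetic expansion, and the "measurements" are generated, not Imp data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for magnetometer samples: positions (x, z) in
# Earth radii and one field component Bz, generated from a known
# low-order polynomial plus Gaussian noise.
x = rng.uniform(-10.0, 10.0, 200)
z = rng.uniform(-5.0, 5.0, 200)
bz_true = 30.0 - 2.0 * x + 0.5 * z + 0.1 * x * z
bz = bz_true + rng.normal(0.0, 0.5, x.size)

# Design matrix for the expansion Bz ≈ c0 + c1*x + c2*z + c3*x*z,
# solved in the least-squares sense.
A = np.column_stack([np.ones_like(x), x, z, x * z])
coeffs, *_ = np.linalg.lstsq(A, bz, rcond=None)
print(np.round(coeffs, 2))  # close to [30.0, -2.0, 0.5, 0.1]
```

The actual model additionally sorts the data by Kp range and includes the solar wind-dipole tilt angle as an expansion variable, but the normal-equations machinery is the same.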
Quantitative Research: A Dispute Resolution Model for FTC Advertising Regulation.
Richards, Jef I.; Preston, Ivan L.
Noting the lack of a dispute mechanism for determining whether an advertising practice is truly deceptive without generating the costs and negative publicity produced by traditional Federal Trade Commission (FTC) procedures, this paper proposes a model based upon early termination of the issues through jointly commissioned behavioral research. The…
Essays on Quantitative Marketing Models and Monte Carlo Integration Methods
R.D. van Oest (Rutger)
2005-01-01
The last few decades have led to an enormous increase in the availability of large detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for addr...
Bordon, Jure; Moskon, Miha; Zimic, Nikolaj; Mraz, Miha
2015-01-01
Quantitative modelling of biological systems has become an indispensable computational approach in the design of novel and analysis of existing biological systems. However, kinetic data that describe the system's dynamics need to be known in order to obtain relevant results with the conventional modelling techniques. These data are often hard or even impossible to obtain. Here, we present a quantitative fuzzy logic modelling approach that is able to cope with unknown kinetic data and thus produce relevant results even though kinetic data are incomplete or only vaguely defined. Moreover, the approach can be used in the combination with the existing state-of-the-art quantitative modelling techniques only in certain parts of the system, i.e., where kinetic data are missing. The case study of the approach proposed here is performed on the model of three-gene repressilator.
Linking Fish Habitat Modelling and Sediment Transport in Running Waters
Institute of Scientific and Technical Information of China (English)
Andreas; EISNER; Silke; WIEPRECHT; Matthias; SCHNEIDER
2005-01-01
The assessment of ecological status for running waters is one of the major issues within integrated river basin management and plays a key role with respect to the implementation of the European Water Framework Directive (WFD). One of the tools supporting the development of sustainable river management is physical habitat modeling, e.g., for fish, because fish populations are one of the most important indicators for the ecological integrity of rivers. Within physical habitat models hydromorphological ...
2016-06-01
ENGINEERING METHODOLOGY FOR EMPLOYING ARCHITECTURE IN SYSTEM ANALYSIS: DEVELOPING SIMULATION MODELS USING SYSTEMS MODELING LANGUAGE PRODUCTS TO LINK...
...to model-based systems engineering (MBSE) by formally defining an MBSE methodology for employing architecture in system analysis (MEASA) that presents
Modeling the Earth's radiation belts. A review of quantitative data based electron and proton models
Vette, J. I.; Teague, M. J.; Sawyer, D. M.; Chan, K. W.
1979-01-01
The evolution of quantitative models of the trapped radiation belts is traced to show how the knowledge of the various features has developed, or been clarified, by performing the required analysis and synthesis. The Starfish electron injection introduced problems in the time behavior of the inner zone, but this residue decayed away, and a good model of this depletion now exists. The outer zone electrons were handled statistically by a log normal distribution such that above 5 Earth radii there are no long term changes over the solar cycle. The transition region between the two zones presents the most difficulty, therefore the behavior of individual substorms as well as long term changes must be studied. The latest corrections to the electron environment based on new data are outlined. The proton models have evolved to the point where the solar cycle effect at low altitudes is included. Trends for new models are discussed; the feasibility of predicting substorm injections and solar wind high-speed streams make the modeling of individual events a topical activity.
Quantitative comparisons of satellite observations and cloud models
Wang, Fang
Microwave radiation interacts directly with precipitating particles and can therefore be used to compare microphysical properties found in models with those found in nature. Lower frequencies ... minimization procedures but produce different CWP and RWP. The similarity in Tb can be attributed to comparable Total Water Path (TWP) between the two retrievals, while the disagreement in the microphysics is caused by their different degrees of constraint of the cloud/rain ratio by the observations. This situation occurs frequently, accounting for 46.9% of the one-month 1D-Var retrievals examined. To attain better-constrained cloud/rain ratios and improved retrieval quality, this study suggests the implementation of higher-frequency microwave channels in the 1D-Var algorithm. Cloud Resolving Models (CRMs) offer an important pathway to interpret satellite observations of the microphysical properties of storms. High-frequency microwave brightness temperatures (Tbs) respond to precipitating-sized ice particles and can, therefore, be compared with simulated Tbs at the same frequencies. By clustering the Tb vectors at these frequencies, the scene can be classified into distinct microphysical regimes, in other words, cloud types. The properties for each cloud type in the simulated scene are compared to those in the observed scene to identify the discrepancies in microphysics within that cloud type. A convective storm over the Amazon observed by the Tropical Rainfall Measuring Mission (TRMM) is simulated using the Regional Atmospheric Modeling System (RAMS) in a semi-ideal setting, and four regimes are defined within the scene using cluster analysis: the 'clear sky/thin cirrus' cluster, the 'cloudy' cluster, the 'stratiform anvil' cluster and the 'convective' cluster. The relationship between the Tb difference of 37 and 85 GHz and the Tb at 85 GHz is found to contain important information on microphysical properties such as hydrometeor species and size distributions. Cluster...
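The clustering step described above, grouping Tb vectors into microphysical regimes, can be sketched with plain k-means. The code below runs on synthetic two-channel data that loosely mimic two regimes; it is an illustration of the technique, not the study's TRMM/RAMS pipeline, and all Tb values are made up:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0, init=None):
    """Plain k-means on the rows of X; returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    if init is None:
        init = X[rng.choice(len(X), size=k, replace=False)]
    centroids = np.array(init, dtype=float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each Tb vector to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Toy scene: two synthetic "regimes" of (37 GHz, 85 GHz) Tb pairs,
# loosely mimicking clear-sky pixels vs. convective pixels with a
# strong 85 GHz scattering depression; the numbers are illustrative.
rng = np.random.default_rng(1)
clear = rng.normal([280.0, 275.0], 3.0, size=(100, 2))
convective = rng.normal([260.0, 180.0], 8.0, size=(100, 2))
X = np.vstack([clear, convective])
# Seed one centroid in each regime to make the demo deterministic.
_, labels = kmeans(X, k=2, init=X[[0, 100]])
print(labels[0] != labels[100])  # True: the two regimes separate
```

Once pixels are labeled, per-cluster statistics in the simulated and observed scenes can be compared regime by regime, as the abstract describes.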
Wu, Zujian; Pang, Wei; Coghill, George M
2015-01-01
Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework can learn the relationships between biochemical reactants qualitatively and make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
First attempts of linking modelling, Postharvest behaviour and Melon Genetics
Tijskens, L.M.M.; Santos, Don N.; Obando-Ulloa, J.M.; Moreno, E.; Schouten, R.E.
2008-01-01
The onset of climacteric is associated with the end of melon fruit shelf-life. The aim of this research was to develop practical and applicable models of fruit ripening changes (hardness, moisture loss) also able to discriminate between climacteric and non-climacteric behaviour. The decrease in firm
Linking Models for Assessing Agricultural Land Use Change
Janssen, S.J.C.; Athanasiadis, I.N.; Bezlepkina, I.; Knapen, M.J.R.; Li, H.; Dominguez, I.P.; Rizzoli, A.E.; Ittersum, van M.K.
2011-01-01
The ex-ante assessment of the likely impacts of policy changes and technological innovations on agriculture can provide insight into policy effects on land use and other resources and inform discussion on the desirability of such changes. Integrated assessment and modeling (IAM) is an approach that
Cross-language linking of news stories on the web using interlingual topic modelling
De Smet, Wim; Moens, Marie-Francine
2009-01-01
We have studied the problem of linking event information across different languages without the use of translation systems or dictionaries. The linking is based on interlingua information obtained through probabilistic topic models trained on comparable corpora written in two languages (in our case English and Dutch). To achieve this goal, we expand the Latent Dirichlet Allocation model to process documents in two languages. We demonstrate the validity of the learned interlingual topics in a...
Links between fluid mechanics and quantum mechanics: a model for information in economics?
Haven, Emmanuel
2016-05-28
This paper tallies the links between fluid mechanics and quantum mechanics, and attempts to show whether those links can aid in beginning to build a formal template which is usable in economics models where time is (a)symmetric and memory is absent or present. An objective of this paper is to contemplate whether those formalisms can allow us to model information in economics in a novel way.
Quantitative Risk Modeling of Fire on the International Space Station
Castillo, Theresa; Haught, Megan
2014-01-01
The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.
Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.
Directory of Open Access Journals (Sweden)
Bradley J Beattie
There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use.
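The Frank-Tamm-based production efficiency mentioned above can be illustrated with a back-of-envelope photon-yield estimate for an electron in water over the 400-700 nm band. This is a sketch of the standard formula only, not the paper's full Monte-Carlo model:

```python
import math

ALPHA = 1.0 / 137.036          # fine-structure constant
ELECTRON_REST_MEV = 0.511      # electron rest energy, MeV

def cerenkov_photons_per_cm(kinetic_mev, n=1.33,
                            lam1_nm=400.0, lam2_nm=700.0):
    """Frank-Tamm photon yield per cm of path for an electron.

    dN/dx = 2*pi*alpha * (1/lam1 - 1/lam2) * (1 - 1/(beta*n)**2),
    and zero below the Cerenkov threshold beta > 1/n.
    """
    gamma = 1.0 + kinetic_mev / ELECTRON_REST_MEV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    sin2_theta = 1.0 - 1.0 / (beta * n) ** 2
    if sin2_theta <= 0.0:
        return 0.0  # below threshold: no Cerenkov light
    per_nm = 2.0 * math.pi * ALPHA * (1.0 / lam1_nm - 1.0 / lam2_nm) * sin2_theta
    return per_nm * 1.0e7  # 1 cm = 1e7 nm

# In water (n = 1.33) the electron threshold is ~0.26 MeV, so a
# 1 MeV electron radiates while a 0.1 MeV electron does not.
print(cerenkov_photons_per_cm(1.0) > 0.0)    # True
print(cerenkov_photons_per_cm(0.1) == 0.0)   # True
```

The sharp sub-threshold cutoff is exactly why β emitters with soft spectra produce little or no CR directly, the point the abstract makes for Ac-225 and In-111.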
Software applications toward quantitative metabolic flux analysis and modeling.
Dandekar, Thomas; Fieselmann, Astrid; Majeed, Saman; Ahmed, Zeeshan
2014-01-01
Metabolites and their pathways are central for adaptation and survival. Metabolic modeling elucidates in silico all the possible flux pathways (flux balance analysis, FBA) and predicts the actual fluxes under a given situation, further refinement of these models is possible by including experimental isotopologue data. In this review, we initially introduce the key theoretical concepts and different analysis steps in the modeling process before comparing flux calculation and metabolite analysis programs such as C13, BioOpt, COBRA toolbox, Metatool, efmtool, FiatFlux, ReMatch, VANTED, iMAT and YANA. Their respective strengths and limitations are discussed and compared to alternative software. While data analysis of metabolites, calculation of metabolic fluxes, pathways and their condition-specific changes are all possible, we highlight the considerations that need to be taken into account before deciding on a specific software. Current challenges in the field include the computation of large-scale networks (in elementary mode analysis), regulatory interactions and detailed kinetics, and these are discussed in the light of powerful new approaches.
Modeling water quality, temperature, and flow in Link River, south-central Oregon
Sullivan, Annett B.; Rounds, Stewart A.
2016-09-09
The 2.1-km (1.3-mi) Link River connects Upper Klamath Lake to the Klamath River in south-central Oregon. A CE-QUAL-W2 flow and water-quality model of Link River was developed to provide a connection between an existing model of the upper Klamath River and any existing or future models of Upper Klamath Lake. Water-quality sampling at six locations in Link River was done during 2013–15 to support model development and to provide a better understanding of instream biogeochemical processes. The short reach and high velocities in Link River resulted in fast travel times and limited water-quality transformations, except for dissolved oxygen. Reaeration through the reach, especially at the falls in Link River, was particularly important in moderating dissolved oxygen concentrations that at times entered the reach at Link River Dam with marked supersaturation or subsaturation. This reaeration resulted in concentrations closer to saturation downstream at the mouth of Link River.
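The reaeration behaviour described above is commonly idealized as first-order relaxation of dissolved oxygen toward saturation, dC/dt = k2(Cs − C). A sketch of that idealization with hypothetical coefficients (CE-QUAL-W2's actual gas-exchange formulation is more detailed):

```python
import math

def do_downstream(c_in, c_sat, k2_per_day, travel_days):
    """First-order reaeration: C(t) = Cs + (C0 - Cs) * exp(-k2 * t)."""
    return c_sat + (c_in - c_sat) * math.exp(-k2_per_day * travel_days)

# Hypothetical values: supersaturated inflow (12 mg/L against a
# 9 mg/L saturation value), a large reaeration rate at the falls,
# and a short travel time through the 2.1-km reach.
c_out = do_downstream(c_in=12.0, c_sat=9.0, k2_per_day=40.0, travel_days=0.05)
print(round(c_out, 2))  # 9.41: pulled from 12 toward saturation
```

The same expression moderates subsaturated inflow upward, matching the abstract's observation that concentrations at the mouth of Link River end up closer to saturation.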
Bailey, Ajay; Hutter, Inge
2008-01-01
With 3.1 million people estimated to be living with HIV/AIDS in India and 39.5 million people globally, the epidemic has posed academics the challenge of identifying behaviours and their underlying beliefs in the effort to reduce the risk of HIV transmission. The Health Belief Model (HBM) is frequen
An integrative model linking feedback environment and organizational citizenship behavior.
Peng, Jei-Chen; Chiu, Su-Fen
2010-01-01
Past empirical evidence has suggested that a positive supervisor feedback environment may enhance employees' organizational citizenship behavior (OCB). In this study, we aim to extend previous research by proposing and testing an integrative model that examines the mediating processes underlying the relationship between supervisor feedback environment and employee OCB. Data were collected from 259 subordinate-supervisor dyads across a variety of organizations in Taiwan. We used structural equation modeling to test our hypotheses. The results demonstrated that supervisor feedback environment influenced employees' OCB indirectly through (1) both positive affective-cognition and positive attitude (i.e., person-organization fit and organizational commitment), and (2) both negative affective-cognition and negative attitude (i.e., role stressors and job burnout). Theoretical and practical implications are discussed.
Microbial Life in Soil - Linking Biophysical Models with Observations
Or, Dani; Tecon, Robin; Ebrahimi, Ali; Kleyer, Hannah; Ilie, Olga; Wang, Gang
2015-04-01
Microbial life in soil occurs within fragmented aquatic habitats formed in complex pore spaces where motility is restricted to short hydration windows (e.g., following rainfall). The limited range of self-dispersion and physical confinement promote spatial association among trophically interdependent microbial species. Competition and preferences for different nutrient resources and byproducts and their diffusion require a high level of spatial organization to sustain the functioning of multispecies communities. We report mechanistic modeling studies of competing multispecies microbial communities grown on hydrated surfaces and within artificial soil aggregates (represented by a 3-D pore network). Results show how trophic dependencies and cell-level interactions within patchy diffusion fields promote spatial self-organization of motile microbial cells. The spontaneously forming patterns of segregated, yet coexisting species were robust to spatial heterogeneities and to temporal perturbations (hydration dynamics), and respond primarily to the type of trophic dependencies. Such spatially self-organized consortia may reflect ecological templates that optimize substrate utilization and could form the basic architecture for more permanent surface-attached microbial colonies. Hydration dynamics affect the structure and spatial arrangement of aerobic and anaerobic microbial communities and their biogeochemical functions. Experiments with well-characterized artificial soil microbial assemblies grown on porous surfaces provide access to community dynamics during wetting and drying cycles detected through genetic fingerprinting. Experiments for visual observations of spatial associations of tagged bacterial species with known trophic dependencies on model porous surfaces are underway. Biophysical modeling provides a means for predicting hydration-mediated critical separation distances for activation of spatial self-organization. The study provides new modeling and observational tools
Mathematical problem solving, modelling, applications, and links to other subjects
Blum, Werner; Niss, Mogens
1989-01-01
The paper will consist of three parts. In part I we shall present some background considerations which are necessary as a basis for what follows. We shall try to clarify some basic concepts and notions, and we shall collect the most important arguments (and related goals) in favour of problem solving, modelling and applications to other subjects in mathematics instruction. In the main part II we shall review the present state, recent trends, and prospective lines of developm...
Modelling variability in black hole binaries: linking simulations to observations
Ingram, Adam
2011-01-01
Black hole accretion flows show rapid X-ray variability. The Power Spectral Density (PSD) of this variability is typically fitted by a phenomenological model of multiple Lorentzians for both the broad-band noise and Quasi-Periodic Oscillations (QPOs). Our previous paper (Ingram & Done 2011) developed the first physical model for the PSD and fit this to observational data. This was based on the same truncated disc/hot inner flow geometry which can explain the correlated properties of the energy spectra. This assumes that the broad-band noise arises from propagating fluctuations in mass accretion rate within the hot flow, while the QPO is produced by global Lense-Thirring precession of the same hot flow. Here we develop this model, making some significant improvements. Firstly, we specify that the viscous frequency (equivalently, surface density) in the hot flow has the same form as that measured from numerical simulations of precessing, tilted accretion flows. Secondly, we refine the statistical techniques which we use to fit...
Spine curve modeling for quantitative analysis of spinal curvature.
Hay, Ori; Hershkovitz, Israel; Rivlin, Ehud
2009-01-01
Spine curvature and posture are important for sustaining a healthy back. Incorrect spine configuration can add strain to muscles and put stress on the spine, leading to low back pain (LBP). We propose a new method for analyzing spine curvature in 3D, using CT imaging. The proposed method is based on two novel concepts: the spine curvature is derived from the spinal canal centerline, and evaluation of the curve is carried out against a model based on healthy individuals. We show results of curvature analysis of a healthy population, pathological (scoliosis) patients, and patients with nonspecific chronic LBP.
Quantitative Model of microRNA-mRNA interaction
Noorbakhsh, Javad; Lang, Alex; Mehta, Pankaj
2012-02-01
MicroRNAs are short RNA sequences that regulate gene expression and protein translation by binding to mRNA. Experimental data reveals the existence of a threshold linear output of protein based on the expression level of microRNA. To understand this behavior, we propose a mathematical model of the chemical kinetics of the interaction between mRNA and microRNA. Using this model we have been able to quantify the threshold linear behavior. Furthermore, we have studied the effect of internal noise, showing the existence of an intermediary regime where the expression level of mRNA and microRNA has the same order of magnitude. In this crossover regime the mRNA translation becomes sensitive to small changes in the level of microRNA, resulting in large fluctuations in protein levels. Our work shows that chemical kinetics parameters can be quantified by studying protein fluctuations. In the future, studying protein levels and their fluctuations can provide a powerful tool to study the competing endogenous RNA hypothesis (ceRNA), in which mRNA crosstalk occurs due to competition over a limited pool of microRNAs.
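The threshold-linear behavior this abstract describes can be reproduced with a minimal titration ("mutual degradation") sketch, in which free mRNA and microRNA co-degrade on binding. All parameter names and values below are illustrative, not taken from the paper's model:

```python
def steady_free_mrna(k_m, k_s=10.0, k_on=50.0, g_m=1.0, g_s=1.0,
                     dt=5e-4, t_end=50.0):
    """Euler-integrate a minimal mRNA/microRNA titration model to steady state.

    m: free mRNA (transcribed at rate k_m, degraded at g_m*m)
    s: free microRNA (produced at rate k_s, degraded at g_s*s)
    Binding at rate k_on*m*s removes one of each (co-degradation).
    All names and values are illustrative, not the authors' parameters.
    """
    m = s = 0.0
    for _ in range(int(t_end / dt)):
        bind = k_on * m * s
        m += (k_m - g_m * m - bind) * dt
        s += (k_s - g_s * s - bind) * dt
    return m

# Sweeping the mRNA transcription rate k_m across the microRNA production
# rate (k_s = 10) traces out the threshold: strong repression below k_s,
# roughly linear growth of free mRNA (and hence protein output) above it.
response = {k_m: steady_free_mrna(k_m) for k_m in (2.0, 8.0, 12.0, 20.0)}
```

The crossover regime near k_m ≈ k_s, where both species are comparable and output is most sensitive to small changes, is the regime the abstract identifies as showing large protein fluctuations.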
A Dual-Process Model of the Alcohol-Behavior Link for Social Drinking
Moss, Antony C.; Albery, Ian P.
2009-01-01
A dual-process model of the alcohol-behavior link is presented, synthesizing 2 of the major social-cognitive approaches: expectancy and myopia theories. Substantial evidence has accrued to support both of these models, and recent neurocognitive models of the effects of alcohol on thought and behavior have provided evidence to support both as well.…
Linking effort and fishing mortality in a mixed fisheries model
DEFF Research Database (Denmark)
Thøgersen, Thomas Talund; Hoff, Ayoe; Frost, Hans Staby
2012-01-01
in fish stocks has led to overcapacity in many fisheries, leading to incentives for overfishing. Recent research has shown that the allocation of effort among fleets can play an important role in mitigating overfishing when the targeting covers a range of species (multi-species, i.e., so-called mixed fisheries), while simultaneously optimising the overall economic performance of the fleets. The so-called FcubEcon model, in particular, has elucidated both the biologically and economically optimal method for allocating catches, and thus effort, between fishing fleets, while ensuring that the quotas...
Model analysis of the link between interest rates and crashes
Broga, Kristijonas M.; Viegas, Eduardo; Jensen, Henrik Jeldtoft
2016-09-01
We analyse the effect of distinct levels of interest rates on the stability of the financial network under our modelling framework. We demonstrate that banking failures are likely to emerge early on under sustained high interest rates, and at a much later stage, with higher probability, under a sustained low interest rate scenario. Moreover, we demonstrate that those bank failures are of a different nature: high interest rates tend to result in significantly more bankruptcies associated with credit losses, whereas lack of liquidity tends to be the primary cause of failures under lower rates.
Directory of Open Access Journals (Sweden)
George Kastellakis
2016-11-01
Memories are believed to be stored in distributed neuronal assemblies through activity-induced changes in synaptic and intrinsic properties. However, the specific mechanisms by which different memories become associated or linked remain a mystery. Here, we develop a simplified, biophysically inspired network model that incorporates multiple plasticity processes and explains linking of information at three different levels: (1) learning of a single associative memory, (2) rescuing of a weak memory when paired with a strong one, and (3) linking of multiple memories across time. By dissecting synaptic from intrinsic plasticity and neuron-wide from dendritically restricted protein capture, the model reveals a simple, unifying principle: linked memories share synaptic clusters within the dendrites of overlapping populations of neurons. The model generates numerous experimentally testable predictions regarding the cellular and sub-cellular properties of memory engrams as well as their spatiotemporal interactions.
A quantitative and dynamic model for plant stem cell regulation.
Directory of Open Access Journals (Sweden)
Florian Geier
Plants maintain pools of totipotent stem cells throughout their entire life. These stem cells are embedded within specialized tissues called meristems, which form the growing points of the organism. The shoot apical meristem of the reference plant Arabidopsis thaliana is subdivided into several distinct domains, which execute diverse biological functions, such as tissue organization, cell proliferation and differentiation. The number of cells required for growth and organ formation changes over the course of a plant's life, while the structure of the meristem remains remarkably constant. Thus, regulatory systems must be in place that allow for an adaptation of cell proliferation within the shoot apical meristem, while maintaining the organization at the tissue level. To advance our understanding of this dynamic tissue behavior, we measured domain sizes as well as cell division rates of the shoot apical meristem under various environmental conditions, which cause adaptations in meristem size. Based on our results we developed a mathematical model to explain the observed changes by a cell pool size dependent regulation of cell proliferation and differentiation, which is able to correctly predict CLV3 and WUS over-expression phenotypes. While the model shows stem cell homeostasis under constant growth conditions, it predicts a variation in stem cell number under changing conditions. Consistent with our experimental data, this behavior is correlated with variations in cell proliferation. Therefore, we investigated different signaling mechanisms which could stabilize stem cell number despite variations in cell proliferation. Our results shed light on the dynamic constraints of stem cell pool maintenance in the shoot apical meristem of Arabidopsis under different environmental conditions and developmental states.
A Link Loss Model for the On-body Propagation Channel for Binaural Hearing Aids
Chandra, Rohit
2013-01-01
Binaural hearing aids communicate with each other through a wireless link for synchronization. A propagation model is needed to estimate the ear-to-ear link loss for such binaural hearing aids. The link loss is a critical parameter in a link budget to decide the sensitivity of the transceiver. In this paper, we have presented a model for the deterministic component of the ear-to-ear link loss. The model takes into account the dominant paths carrying most of the power of the creeping wave from the transceiver in one ear to the transceiver in the other ear, and the effect of the protruding part of the outer ear, called the pinna. Simulations are done to validate the model using in-the-ear (ITE) placement of antennas at 2.45 GHz on two heterogeneous phantoms of different age groups and body sizes. The model agrees with the simulations. The ear-to-ear link loss between the antennas for the binaural hearing aids in the homogeneous SAM phantom is compared with a heterogeneous phantom. It is found that the absence of the pinna an...
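The role of link loss in a link budget, as mentioned in this abstract, is simple dB bookkeeping: the margin is the transmit power minus the link loss, compared against receiver sensitivity. The numeric values below are placeholders for illustration, not measurements from the paper:

```python
def link_margin_db(p_tx_dbm, link_loss_db, rx_sensitivity_dbm):
    """Link margin in dB: received power (transmit power minus link loss)
    relative to receiver sensitivity. A positive margin means the link closes."""
    p_rx_dbm = p_tx_dbm - link_loss_db
    return p_rx_dbm - rx_sensitivity_dbm

# Hypothetical ear-to-ear numbers: 0 dBm transmit, 80 dB link loss,
# -90 dBm receiver sensitivity.
margin = link_margin_db(0.0, 80.0, -90.0)
```

A deterministic link-loss model of the kind the paper proposes supplies the `link_loss_db` term, which in turn dictates how sensitive the transceiver must be.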
Application of non-quantitative modelling in the analysis of a network warfare environment
CSIR Research Space (South Africa)
Veerasamy, N
2008-07-01
of the various interacting components, a model to better understand the complexity in a network warfare environment would be beneficial. Non-quantitative modelling is a useful method to better characterize the field due to the rich ideas that can be generated...
Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van
1999-01-01
A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by
Quantitative hardware prediction modeling for hardware/software co-design
Meeuws, R.J.
2012-01-01
Hardware estimation is an important factor in Hardware/Software Co-design. In this dissertation, we present the Quipu Modeling Approach, a high-level quantitative prediction model for HW/SW Partitioning using statistical methods. Our approach uses linear regression between software complexity metric
Due to a shortage of available phosphorus (P) loss data sets, simulated data from a quantitative P transport model could be used to evaluate a P-index. However, the model would need to accurately predict the P loss data sets that are available. The objective of this study was to compare predictions ...
Douma, J.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Roques, A.; Werf, van der W.
2015-01-01
The aim of this report is to provide EFSA with probabilistic models for quantitative pathway analysis of plant pest introduction for the EU territory through non-edible plant products or plants. We first provide a conceptualization of two types of pathway models. The individual based PM simulates an
A quantitative analysis to objectively appraise drought indicators and model drought impacts
Bachmair, S.; Svensson, C.; Hannaford, J.; Barker, L. J.; Stahl, K.
2016-07-01
coverage. The predictions also provided insights into the EDII, in particular highlighting drought events where missing impact reports may reflect a lack of recording rather than true absence of impacts. Overall, the presented quantitative framework proved to be a useful tool for evaluating drought indicators, and to model impact occurrence. In summary, this study demonstrates the information gain for drought monitoring and early warning through impact data collection and analysis. It highlights the important role that quantitative analysis with impact data can have in providing "ground truth" for drought indicators, alongside more traditional stakeholder-led approaches.
A quantitative analysis to objectively appraise drought indicators and model drought impacts
Directory of Open Access Journals (Sweden)
S. Bachmair
2015-09-01
. The predictions also provided insights into the EDII, in particular highlighting drought events where missing impact reports reflect a lack of recording rather than true absence of impacts. Overall, the presented quantitative framework proved to be a useful tool for evaluating drought indicators, and to model impact occurrence. In summary, this study demonstrates the information gain for drought monitoring and early warning through impact data collection and analysis, and highlights the important role that quantitative analysis with impacts data can have in providing "ground truth" for drought indicators alongside more traditional stakeholder-led approaches.
A cell-based model system links chromothripsis with hyperploidy
DEFF Research Database (Denmark)
Mardin, Balca R; Drainas, Alexandros P; Waszak, Sebastian M;
2015-01-01
A remarkable observation emerging from recent cancer genome analyses is the identification of chromothripsis as a one-off genomic catastrophe, resulting in massive somatic DNA structural rearrangements (SRs). Largely due to a lack of suitable model systems, the mechanistic basis of chromothripsis has remained elusive. We developed an integrative method termed "complex alterations after selection and transformation (CAST)," enabling efficient in vitro generation of complex DNA rearrangements including chromothripsis, using cell perturbations coupled with a strong selection barrier followed by massively parallel sequencing. We employed this methodology to characterize catastrophic SR formation processes, their temporal sequence, and their impact on gene expression and cell division. Our in vitro system uncovered a propensity of chromothripsis to occur in cells with damaged telomeres, and in particular...
Energy Technology Data Exchange (ETDEWEB)
Chen, Baiyang, E-mail: poplar_chen@hotmail.com [Harbin Institute of Technology Shenzhen Graduate School, Shenzhen Key Laboratory of Water Resource Utilization and Environmental Pollution Control, Shenzhen 518055 (China); Zhang, Tian [Harbin Institute of Technology Shenzhen Graduate School, Shenzhen Key Laboratory of Water Resource Utilization and Environmental Pollution Control, Shenzhen 518055 (China); Bond, Tom [Department of Civil and Environmental Engineering, Imperial College, London SW7 2AZ (United Kingdom); Gan, Yiqun [Harbin Institute of Technology Shenzhen Graduate School, Shenzhen Key Laboratory of Water Resource Utilization and Environmental Pollution Control, Shenzhen 518055 (China)
2015-12-15
Quantitative structure–activity relationship (QSAR) models are tools for linking chemical activities with molecular structures and compositions. Due to the concern about the proliferating number of disinfection byproducts (DBPs) in water and the associated financial and technical burden, researchers have recently begun to develop QSAR models to investigate the toxicity, formation, property, and removal of DBPs. However, there are no standard procedures or best practices regarding how to develop QSAR models, which potentially limit their wide acceptance. In order to facilitate more frequent use of QSAR models in future DBP research, this article reviews the processes required for QSAR model development, summarizes recent trends in QSAR-DBP studies, and shares some important resources for QSAR development (e.g., free databases and QSAR programs). The paper follows the four steps of QSAR model development, i.e., data collection, descriptor filtration, algorithm selection, and model validation; and finishes by highlighting several research needs. Because QSAR models may have an important role in progressing our understanding of DBP issues, it is hoped that this paper will encourage their future use for this application.
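The four-step QSAR workflow the review outlines (data collection, descriptor filtration, algorithm selection, model validation) can be sketched with a toy regression example. The data are entirely synthetic and the filter and model choices are simplistic stand-ins for the methods surveyed:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Data collection: a synthetic descriptor matrix (rows = compounds,
#    columns = molecular descriptors) and an activity vector. This stands
#    in for a curated DBP data set; no real chemistry is implied.
X = rng.normal(size=(60, 6))
y = 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.3, size=60)

# 2. Descriptor filtration: drop descriptors nearly collinear with an
#    earlier one (|correlation| >= 0.95).
corr = np.corrcoef(X, rowvar=False)
keep = [j for j in range(X.shape[1])
        if all(abs(corr[j, k]) < 0.95 for k in range(j))]

# 3. Algorithm selection: ordinary least squares on the kept descriptors.
train, test = slice(0, 45), slice(45, 60)
Xk = np.column_stack([np.ones(60), X[:, keep]])
beta, *_ = np.linalg.lstsq(Xk[train], y[train], rcond=None)

# 4. Model validation: R^2 on held-out compounds.
pred = Xk[test] @ beta
ss_res = np.sum((y[test] - pred) ** 2)
ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

Real QSAR-DBP studies substitute curated toxicity or formation data, chemically meaningful descriptors, and more rigorous validation (e.g., external test sets), but the skeleton of the four steps is the same.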
Ball, R D
2001-11-01
We describe an approximate method for the analysis of quantitative trait loci (QTL) based on model selection from multiple regression models with trait values regressed on marker genotypes, using a modification of the easily calculated Bayesian information criterion to estimate the posterior probability of models with various subsets of markers as variables. The BIC-delta criterion, with the parameter delta increasing the penalty for additional variables in a model, is further modified to incorporate prior information, and missing values are handled by multiple imputation. Marginal probabilities for model sizes are calculated, and the posterior probability of nonzero model size is interpreted as the posterior probability of existence of a QTL linked to one or more markers. The method is demonstrated on analysis of associations between wood density and markers on two linkage groups in Pinus radiata. Selection bias, which is the bias that results from using the same data to both select the variables in a model and estimate the coefficients, is shown to be a problem for commonly used non-Bayesian methods for QTL mapping, which do not average over alternative possible models that are consistent with the data.
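The core idea in this abstract, turning a penalized information criterion into approximate posterior model probabilities over marker subsets, can be sketched as follows. The exp(-BIC/2) weighting is the standard BIC approximation; the exact form of the delta penalty and all data are illustrative, not the paper's:

```python
import itertools
import numpy as np

def bic_delta(y, X, subset, delta=2.0):
    """n*log(RSS/n) + k*(log(n) + delta): BIC with an extra per-variable
    penalty delta (an illustrative rendering of the BIC-delta idea)."""
    n = len(y)
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = float(np.sum((y - Xs @ beta) ** 2))
    return n * np.log(rss / n) + len(subset) * (np.log(n) + delta)

def posterior_model_probs(y, X, delta=2.0):
    """Approximate P(model | data) proportional to exp(-BIC/2), computed
    over all subsets of candidate markers and normalized."""
    p = X.shape[1]
    subsets = [s for r in range(p + 1)
               for s in itertools.combinations(range(p), r)]
    logw = np.array([-0.5 * bic_delta(y, X, s, delta) for s in subsets])
    w = np.exp(logw - logw.max())   # stabilize before exponentiating
    w /= w.sum()
    return dict(zip(subsets, w))

# Synthetic example: marker 0 is truly linked to the trait.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 3))
y = 2.0 * X[:, 0] + rng.normal(size=120)
probs = posterior_model_probs(y, X)
p_qtl = 1.0 - probs[()]  # posterior probability of a nonzero model size
```

Averaging over models in this way is what sidesteps the selection bias the abstract describes: no single subset is both selected and then treated as if it were chosen a priori.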
High-response piezoelectricity modeled quantitatively near a phase boundary
Newns, Dennis M.; Kuroda, Marcelo A.; Cipcigan, Flaviu S.; Crain, Jason; Martyna, Glenn J.
2017-01-01
Interconversion of mechanical and electrical energy via the piezoelectric effect is fundamental to a wide range of technologies. The discovery in the 1990s of giant piezoelectric responses in certain materials has therefore opened new application spaces, but the origin of these properties remains a challenge to our understanding. A key role is played by the presence of a structural instability in these materials at compositions near the "morphotropic phase boundary" (MPB) where the crystal structure changes abruptly and the electromechanical responses are maximal. Here we formulate a simple, unified theoretical description which accounts for extreme piezoelectric response, its observation at compositions near the MPB, accompanied by ultrahigh dielectric constant and mechanical compliances with rather large anisotropies. The resulting model, based upon a Landau free energy expression, is capable of treating the important domain engineered materials and is found to be predictive while maintaining simplicity. It therefore offers a general and powerful means of accounting for the full set of signature characteristics in these functional materials including volume conserving sum rules and strong substrate clamping effects.
A quantitative confidence signal detection model: 1. Fitting psychometric functions.
Yi, Yongwoo; Merfeld, Daniel M
2016-04-01
Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. Copyright © 2016 the American Physiological Society.
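The conventional forced-choice analysis the authors benchmark against can be sketched as a maximum-likelihood fit of a cumulative-Gaussian psychometric function. The grid-search fitter and every number below are illustrative only, not the paper's procedure:

```python
import math
import random

def psychometric(x, mu, sigma):
    """Cumulative Gaussian: probability of a 'positive' forced-choice response."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def fit_mle(stims, resps):
    """Grid-search maximum-likelihood estimate of (mu, sigma), a toy stand-in
    for conventional psychometric-function fitting."""
    best, best_ll = None, -float("inf")
    for mu in (i * 0.05 for i in range(-40, 41)):        # -2.0 .. 2.0
        for sigma in (0.1 + i * 0.05 for i in range(1, 60)):  # 0.15 .. 3.05
            ll = 0.0
            for x, r in zip(stims, resps):
                p = min(max(psychometric(x, mu, sigma), 1e-9), 1 - 1e-9)
                ll += math.log(p) if r else math.log(1 - p)
            if ll > best_ll:
                best, best_ll = (mu, sigma), ll
    return best

# Simulate 400 forced-choice trials from a known psychometric function,
# then recover its parameters. mu is the bias, sigma sets the threshold.
random.seed(1)
true_mu, true_sigma = 0.2, 0.8
stims = [random.uniform(-2, 2) for _ in range(400)]
resps = [random.random() < psychometric(x, true_mu, true_sigma) for x in stims]
mu_hat, sigma_hat = fit_mle(stims, resps)
```

The paper's point is that augmenting each binary response with a confidence probability judgment can match this precision with far fewer trials; the sketch above only shows the baseline being improved upon.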
Toward a quantitative model of metamorphic nucleation and growth
Gaidies, F.; Pattison, D. R. M.; de Capitani, C.
2011-11-01
The formation of metamorphic garnet during isobaric heating is simulated on the basis of the classical nucleation and reaction rate theories and Gibbs free energy dissipation in a multi-component model system. The relative influences of interfacial energy, chemical mobility at the surface of garnet clusters, heating rate and pressure on interface-controlled garnet nucleation and growth kinetics are studied. It is found that the interfacial energy controls the departure from equilibrium required to nucleate garnet if attachment and detachment processes at the surface of garnet limit the overall crystallization rate. The interfacial energy for nucleation of garnet in a metapelite of the aureole of the Nelson Batholith, BC, is estimated to range between 0.03 and 0.3 J/m2 at a pressure of ca. 3,500 bar. This corresponds to a thermal overstep of the garnet-forming reaction of ca. 30°C. The influence of the heating rate on thermal overstepping is negligible. A significant feedback is predicted between chemical fractionation associated with garnet formation and the kinetics of nucleation and crystal growth of garnet, giving rise to its lognormal-shaped crystal size distribution.
Sarwal, Aarti; Cartwright, Michael S.; Walker, Francis O.; Mitchell, Erin; Buj-Bello, Anna; Beggs, Alan H.; Childers, Martin K.
2014-01-01
Introduction: We tested the feasibility of using neuromuscular ultrasound for non-invasive real-time assessment of diaphragmatic structure and function in a canine model of X-Linked Myotubular Myopathy (XLMTM). Methods: Ultrasound images in 3 dogs (Wild Type (WT), n=1; XLMTM untreated, n=1; XLMTM post AAV8-mediated MTM1 gene replacement, n=1) were analyzed for diaphragm thickness, change in thickness with respiration, muscle echogenicity, and diaphragm excursion amplitude during spontaneous breathing. Results: Quantitative parameters of diaphragm structure differed among the animals. The WT diaphragm was thicker and less echogenic than that of the XLMTM control, whereas the diaphragm measurements of the MTM1-treated XLMTM dog were comparable to the WT dog. Discussion: This pilot study demonstrates the feasibility of using ultrasound for quantitative assessment of the diaphragm in a canine model. Ultrasonography may potentially replace invasive measures of diaphragm function in canine models and in humans in the future, for non-invasive respiratory monitoring and evaluation of neuromuscular disease. PMID:24861988
Impact of implementation choices on quantitative predictions of cell-based computational models
Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.
2017-09-01
'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
An analysis of single-index model with monotonic link function
Institute of Scientific and Technical Information of China (English)
ZHU Li-ping; YANG Xiao-yan; YU Zhou; LIU Xiang-rong
2008-01-01
The single-index model with monotonic link function is investigated. First, it is shown that the link function h(·) can be visualized by a graphical method: the plot with the fitted response on the horizontal axis and the observed y on the vertical axis can be used to visualize the link function. This graphical approach remains applicable even when the link function is not monotonic. Note that many existing nonparametric smoothers can also be used to assess h(·). The present work therefore investigates the I-spline approximation of the link function via maximizing the covariance function with a penalty function. The consistency of the criterion is established. A small simulation is carried out to demonstrate the efficiency of the proposed approach.
A geometric approach to modeling of four- and five-link planar snake-like robot
Directory of Open Access Journals (Sweden)
Tomáš Lipták
2016-10-01
The article deals with the use of geometric mechanics tools in modelling nonholonomic systems. The introductory part of the article covers the fiber bundle theory that we use in creating a mathematical model of a nonholonomic locomotion system with undulatory movement. Next, the derivation of a general mathematical model for an n-link snake-like robot using nonholonomic constraints is presented. The relation between changes of shape and position variables was expressed using the local connection, which was used to analyze and control system movement by vector fields. The effect of the number of links of the snake-like robot on its mathematical model was investigated. The last part of the article consists of a detailed description of modeling the reconstruction equation for four- and five-link snake-like robots.
Modeling and Representing National Climate Assessment Information using Linked Data
Zheng, J.; Tilmes, C.; Smith, A.; Zednik, S.; Fox, P. A.
2012-12-01
Every four years, earth scientists work together on a National Climate Assessment (NCA) report, which integrates, evaluates, and interprets the findings of climate change and its impacts on affected industries such as agriculture, the natural environment, and energy production and use. Given the amount of information presented in each report, and the wide range of information sources and topics, it can be difficult for users to find and identify desired information. To ease the user effort of information discovery, well-structured metadata is needed that describes the report's key statements and conclusions and provides traceable provenance of the data sources used. We present an assessment ontology developed to describe the terms, concepts and relations required for the NCA metadata. Wherever possible, the assessment ontology reuses terms from well-known ontologies such as the Semantic Web for Earth and Environmental Terminology (SWEET) ontology and the Dublin Core (DC) vocabulary. We have generated sample National Climate Assessment metadata conforming to our assessment ontology and publicly exposed it via a SPARQL endpoint and website. We have also modeled provenance information for the NCA writing activities using the W3C Candidate Recommendation PROV-O ontology. Using this provenance, users will be able to trace the sources of information used in the assessment and therefore make trust decisions. In the future, we are planning to implement a faceted browser over the metadata to enhance metadata traversal and information discovery.
Directory of Open Access Journals (Sweden)
Q. Xin
2015-02-01
Modeling vegetation photosynthesis is essential for understanding carbon exchanges between terrestrial ecosystems and the atmosphere. The radiative transfer process within plant canopies is one of the key drivers that regulate canopy photosynthesis. Most vegetation cover consists of discrete plant crowns, whose physical structure departs from the underlying assumption of a homogeneous and uniform medium in classic radiative transfer theory. Here we advance the Geometric Optical Radiative Transfer (GORT) model to simulate photosynthetic activity for discontinuous plant canopies. We separate radiation absorption into two components absorbed by sunlit and shaded leaves, and derive analytical solutions by integrating over the canopy layer. To model leaf-level and canopy-level photosynthesis, leaf light absorption is then linked to the biochemical process of gas diffusion through leaf stomata. The canopy gap probability derived from GORT differs from classic radiative transfer theory, especially when the leaf area index is high, due to leaf clumping effects. Tree characteristics such as tree density, crown shape, and canopy length affect leaf clumping and regulate radiation interception. Modeled gross primary production (GPP) for two deciduous forest stands could explain more than 80% of the variance of flux tower measurements at both near-hourly and daily time scales. We also demonstrate that the ambient CO2 concentration influences daytime vegetation photosynthesis, which needs to be considered in state-of-the-art biogeochemical models. The proposed model is complementary to classic radiative transfer theory and shows promise in modeling the radiative transfer process and photosynthetic activities over discontinuous forest canopies.
Simulating the link between ENSO and summer drought in Southern Africa using regional climate models
Meque, Arlindo; Abiodun, Babatunde J.
2015-04-01
This study evaluates the capability of regional climate models (RCMs) in simulating the link between the El Niño Southern Oscillation (ENSO) and Southern African droughts. It uses the Standardized Precipitation-Evapotranspiration Index (SPEI, computed using rainfall and temperature data) to identify 3-month droughts over Southern Africa, and compares the observed and simulated correlation between ENSO and SPEI. The observation data are from the Climatic Research Unit, while the simulation data are from ten RCMs (ARPEGE, CCLM, HIRHAM, RACMO, REMO, PRECIS, RegCM3, RCA, WRF, and CRCM) that participated in the Coordinated Regional Climate Downscaling Experiment (CORDEX) project. The study analysed the rainy season (December-February) data for 19 years (1989-2008). The results show a strong link between ENSO and droughts (SPEI) over Southern Africa. The link is due to the influence of ENSO on both rainfall and temperature fields, but the correlation between ENSO and temperature is stronger than the correlation between ENSO and rainfall. Hence, using only rainfall to monitor droughts in Southern Africa may underestimate the influence of ENSO on the droughts. Only a few CORDEX RCMs simulate the influence of ENSO on Southern African drought as observed. In this regard, the ARPEGE model shows the best simulation, while CRCM shows the worst. The difference in performance may be due to their lateral boundary conditions. The RCA-simulated link between ENSO and Southern African droughts is sensitive to the global dataset used as the lateral boundary conditions. In some cases, using RCA to downscale global circulation model (GCM) simulations adds value to the simulated link between ENSO and the droughts, but in other cases the downscaling adds no value to the link. The added value of RCA to the simulated link decreases as the capability of the GCM to simulate the link increases. This study suggests that downscaling GCM simulations with RCMs over Southern Africa may improve or degrade the...
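The ENSO-drought link described above boils down to standardizing a climatic water balance (precipitation minus potential evapotranspiration) and correlating it with an ENSO index. A minimal sketch, with made-up December-February numbers and a plain z-score standing in for the full SPEI distribution fit:

```python
import math

def standardized_index(values):
    """Standardize a water-balance series (P - PET) to zero mean, unit
    variance -- a simplified stand-in for SPEI, which actually fits a
    probability distribution to the series first."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / sd for v in values]

def pearson(x, y):
    """Pearson correlation, used here to quantify the ENSO-drought link."""
    zx, zy = standardized_index(x), standardized_index(y)
    return sum(a * b for a, b in zip(zx, zy)) / len(zx)

# Toy DJF series: a crude ENSO index and a water balance (P - PET, mm)
enso = [1.2, -0.5, 0.3, 2.1, -1.4, 0.0, 1.8, -0.9]
balance = [-30.0, 12.0, -5.0, -55.0, 40.0, 3.0, -42.0, 25.0]
spei_like = standardized_index(balance)
r = pearson(enso, spei_like)  # negative r: El Nino years tend to be drier
```

A negative correlation here corresponds to the observed tendency for El Niño summers to be dry over Southern Africa; the numbers are illustrative only.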
Leandro, Jorge; Martins, Ricardo
2016-01-01
Pluvial flooding in urban areas is characterized by a gradually varying inundation process caused by surcharge of the sewer manholes. Therefore, urban flood models need to simulate the interaction between the sewer network and the overland flow in order to accurately predict flood inundation extents. In this work we present a methodology for linking 2D overland flow models with the storm sewer model SWMM 5. SWMM 5 is a well-known free open-source code originally developed in 1971. The latest major release saw its structure re-written in C++, allowing it to be compiled as a command-line executable or called through a series of functions inside a dynamic link library (DLL). The methodology developed herein is written inside the same DLL in C++, and is able to simulate the bi-directional interaction between both models during simulation. Validation is done in a real case study with an existing urban flood coupled model. The novelty herein is that the new methodology can be added to SWMM without the need to edit SWMM's original code. Furthermore, it is directly applicable to other coupled overland flow models aiming to use SWMM 5 as the sewer network model.
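The bidirectional exchange at a manhole that such a coupling must compute can be illustrated with a weir-type interface formula. This is a schematic sketch under assumed coefficients, not SWMM's actual internal routine or API:

```python
def exchange_discharge(h_sewer, h_surface, crest_level, c_weir=1.5, width=1.0):
    """Weir-type interface discharge at a manhole linking a 1D sewer model
    and a 2D overland flow model (illustrative coefficients).
    Positive = surcharge onto the surface, negative = drainage into the sewer."""
    upstream = max(h_sewer, h_surface)
    if upstream <= crest_level:
        return 0.0  # neither water level is above the manhole rim
    # Effective head over the rim or over the downstream water level
    head = upstream - max(min(h_sewer, h_surface), crest_level)
    q = c_weir * width * head ** 1.5
    return q if h_sewer > h_surface else -q
```

At each time step a coupled model would pass the sewer head and surface depth across the interface, apply this discharge to both domains, and so conserve the exchanged volume.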
Neuro-Sliding-Mode Control of Flexible-Link Manipulators Based on Singularly Perturbed Model
Institute of Scientific and Technical Information of China (English)
ZHANG Yu; YANG Tangwen; SUN Zengqi
2009-01-01
A neuro-sliding-mode control (NSMC) strategy was developed to handle the complex nonlinear dynamics and model uncertainties of flexible-link manipulators. A composite controller was designed based on a singularly perturbed model of flexible-link manipulators in which the rigid motion and flexible motion are decoupled. The NSMC is employed to control the slow subsystem to track a desired trajectory, with a traditional sliding-mode controller to stabilize the fast subsystem, which represents the link vibrations. A stability analysis of the flexible modes is also given. Simulations confirm that the NSMC performs better than the traditional sliding-mode control for controlling flexible-link manipulators. The control strategy not only gives good tracking performance for the joint angle, but also effectively suppresses endpoint vibrations. The simulations also show that the control strategy has a strong self-adaptive ability for controlling manipulators with different parameters.
Modeling and Simulation of Link-16 Based on QualNet
Institute of Scientific and Technical Information of China (English)
禹华钢; 周安栋; 刘宏波
2008-01-01
Targeting the technical characteristics of the Link-16 data link, and with reference to the Tactical Data Link Reference Model (TDLRM), a protocol architecture model suited to Link-16 was designed. A realistic military network scenario was built on the QualNet simulation platform, and the Link-16 model was simulated; the number of received message words in the simulation results was statistically analyzed. The simulation model can be used to analyze Link-16 performance parameters under various conditions and provides a reference for further study of Link-16.
A quantitative quantum chemical model of the Dewar-Knott color rule for cationic diarylmethanes
Olsen, Seth
2012-04-01
We document the quantitative manifestation of the Dewar-Knott color rule in a four-electron, three-orbital state-averaged complete active space self-consistent field (SA-CASSCF) model of a series of bridge-substituted cationic diarylmethanes. We show that the lowest excitation energies calculated using multireference perturbation theory based on the model are linearly correlated with the development of hole density in an orbital localized on the bridge, and the depletion of pair density in the same orbital. We quantitatively express the correlation in the form of a generalized Hammett equation.
Simulation Model of the Future Nordic Power Grid Considering the Impact of HVDC Links
Aas, Even Strand
2016-01-01
As Europe is shifting to an increasingly larger share of non-dispatchable renewable energy sources, the cross-border power flow changes. This thesis considers further development of an existing PowerFactory simulation model designed to fit with new power flow situations influencing the Nordic power system. Today, there are many HVDC links connecting Europe to the Nordic grid, and there are several new links being built and planned. The thesis work is a continuation of an earlier specialisatio...
Instantaneous thermal modeling of the DC-link capacitor in PhotoVoltaic systems
DEFF Research Database (Denmark)
Yang, Yongheng; Ma, Ke; Wang, Huai
2015-01-01
Capacitors have been witnessed as one of the weak points in grid-connected PhotoVoltaic (PV) applications, and thus efforts have been devoted to the design of reliable DC-link capacitors in PV applications. Since the hot-spot temperature of the capacitor is one of the failure inducers, instantaneous thermal modeling approaches considering mission profiles for the DC-link capacitor in single-phase PV systems are explored in this paper. These thermal modelling approaches are based on: a) fast Fourier transform, b) look-up tables, and c) ripple current reconstruction. Moreover, the thermal modelling approaches for the DC-link capacitors take into account the instantaneous thermal characteristics, which are more challenging to the capacitor reliability during operation. Such instantaneous thermal modeling approaches enable a translation of instantaneous capacitor power losses to capacitor...
A quantitative model of human DNA base excision repair. I. mechanistic insights
Sokhansanj, Bahrad A.; Rodrigue, Garry R.; Fitch, J. Patrick; David M Wilson
2002-01-01
Base excision repair (BER) is a multistep process involving the sequential activity of several proteins that cope with spontaneous and environmentally induced mutagenic and cytotoxic DNA damage. Quantitative kinetic data on single proteins of BER have been used here to develop a mathematical model of the BER pathway. This model was then employed to evaluate mechanistic issues and to determine the sensitivity of pathway throughput to altered enzyme kinetics. Notably, the model predicts conside...
A Quantitative Geochemical Target for Modeling the Formation of the Earth and Moon
Boyce, Jeremy W.; Barnes, Jessica J.; McCubbin, Francis M.
2017-01-01
The past decade has been one of geochemical, isotopic, and computational advances that are bringing the laboratory measurement and computational modeling neighborhoods of the Earth-Moon community into ever closer proximity. We are now, however, in a position to become even better neighbors: modelers can generate testable hypotheses for geochemists, and geochemists can provide quantitative targets for modelers. Here we present a robust example of the latter based on Cl isotope measurements of mare basalts.
A Simple Forecasting Model Linking Macroeconomic Policy to Industrial Employment Demand.
Malley, James R.; Hady, Thomas F.
A study further detailed a model linking monetary and fiscal policy to industrial employment in metropolitan and nonmetropolitan areas of four United States regions. The model was used to simulate the impacts on area and regional employment of three events in the economy: changing real gross national product (GNP) via monetary policy, holding the…
Roesthuis, Roy; Misra, Sarthak
2016-01-01
Accurate closed-loop control of continuum manipulators requires integration of both models that describe their motion and methods to evaluate manipulator shape. This work presents a model that approximates the continuous shape of a continuum manipulator by a serial chain of rigid links, connected by
Analysis of a generic model for a bottleneck link in an integrated services communications network
Al-Begain, K.; Heindl, A.; Telek, M.; Litjens, R.; Boucherie, R.J.
2007-01-01
We develop and analyse a generic model for performance evaluation, parameter optimisation and dimensioning of a bottleneck link in an integrated services communications network. Possible application areas include IP, ATM and GSM/GPRS networks. The model enables analytical evaluation for a scenario o...
Low-Energy Effective Theories of Quantum Link and Quantum Spin Models
Schlittgen, B
2001-01-01
Quantum spin and quantum link models provide an unconventional regularization of field theory in which classical fields arise via dimensional reduction of discrete variables. This D-theory regularization leads to the same continuum theories as the conventional approach. We show this by deriving the low-energy effective Lagrangians of D-theory models using coherent state path integral techniques. We illustrate our method for the $(2+1)$-d Heisenberg quantum spin model, which is the D-theory regularization of the 2-d O(3) model. Similarly, we prove that in the continuum limit a $(2+1)$-d quantum spin model with $SU(N)_L \times SU(N)_R \times U(1)_{L=R}$ symmetry is equivalent to the 2-d principal chiral model. Finally, we show that $(4+1)$-d SU(N) quantum link models reduce to ordinary 4-d Yang-Mills theory.
Quantitative photoacoustic tomography using forward and adjoint Monte Carlo models of radiance
Hochuli, Roman; Arridge, Simon; Cox, Ben
2016-01-01
Forward and adjoint Monte Carlo (MC) models of radiance are proposed for use in model-based quantitative photoacoustic tomography. A 2D radiance MC model using a harmonic angular basis is introduced and validated against analytic solutions for the radiance in heterogeneous media. A gradient-based optimisation scheme is then used to recover 2D absorption and scattering coefficient distributions from simulated photoacoustic measurements. It is shown that the functional gradients, which are a challenge to compute efficiently using MC models, can be calculated directly from the coefficients of the harmonic angular basis used in the forward and adjoint models. This work establishes a framework for transport-based quantitative photoacoustic tomography that can fully exploit emerging highly parallel computing architectures.
Cumulative t-link threshold models for the genetic analysis of calving ease scores
Directory of Open Access Journals (Sweden)
Tempelman Robert J
2003-09-01
In this study, a hierarchical threshold mixed model based on a cumulative t-link specification for the analysis of ordinal data, or more specifically calving ease scores, was developed. The validation of this model and the Markov chain Monte Carlo (MCMC) algorithm was carried out on simulated data from normally and t4 (i.e. a t-distribution with four degrees of freedom) distributed populations, using the deviance information criterion (DIC) and a pseudo Bayes factor (PBF) measure to validate recently proposed model choice criteria. The simulation study indicated that although inference on the degrees of freedom parameter is possible, MCMC mixing was problematic. Nevertheless, the DIC and PBF were validated to be satisfactory measures of model fit to data. A sire and maternal grandsire cumulative t-link model was applied to a calving ease dataset from 8847 Italian Piemontese first-parity dams. The cumulative t-link model was shown to lead to posterior means of direct and maternal heritabilities (0.40 ± 0.06, 0.11 ± 0.04) and a direct-maternal genetic correlation (-0.58 ± 0.15) that were not different from the corresponding posterior means of the heritabilities (0.42 ± 0.07, 0.14 ± 0.04) and the genetic correlation (-0.55 ± 0.14) inferred under the conventional cumulative probit link threshold model. Furthermore, the correlation (> 0.99) between posterior means of sire progeny merit from the two models suggested no meaningful rerankings. Nevertheless, the cumulative t-link model was decisively chosen as the better fitting model for this calving ease data using DIC and PBF.
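The threshold-model machinery compared above maps a latent liability through ordered cutpoints into category probabilities. A minimal sketch of the conventional cumulative probit link (the t-link variant would swap in a Student-t CDF for the normal one); the cutpoints and liability value below are hypothetical:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def category_probs(eta, thresholds):
    """Cumulative probit link: P(Y = j) = Phi(tau_j - eta) - Phi(tau_{j-1} - eta),
    where eta is the latent liability (e.g. sum of sire and herd effects) and
    thresholds are the ordered cutpoints tau_1 < ... < tau_{J-1}."""
    taus = [-math.inf] + list(thresholds) + [math.inf]
    return [norm_cdf(taus[j + 1] - eta) - norm_cdf(taus[j] - eta)
            for j in range(len(taus) - 1)]

# Hypothetical 4-category calving ease scale with cutpoints at -0.5, 0.8, 1.9
probs = category_probs(eta=0.3, thresholds=[-0.5, 0.8, 1.9])
```

Raising the liability eta shifts probability mass toward the higher (more difficult) calving ease categories, which is how genetic effects enter the model.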
Pyramidal Edge Detection Method Based on AWFM Filtering and Fuzzy Linking Model
Institute of Scientific and Technical Information of China (English)
(no author listed)
2002-01-01
A novel multiresolution pyramidal edge detector, based on adaptive weighted fuzzy mean (AWFM) filtering and a fuzzy linking model, is presented in this paper. The algorithm first constructs a pyramidal structure by repetitive AWFM filtering and subsampling of the original image. It then utilizes multiple heuristic linking criteria between the edge nodes of two adjacent levels and treats the linkage as a fuzzy model, which is trained offline. Through this fuzzy linking model, the boundaries detected at coarse resolution are propagated and refined down to the bottom level in a coarse-to-fine edge detection. The validation experiment results demonstrate that the proposed approach has superior performance compared with a standard fixed-resolution detector and a previous multiresolution approach, especially in impulse-noise environments.
Pritzkow, Sandra; Wagenführ, Katja; Daus, Martin L.; Boerner, Susann; Lemmer, Karin; Thomzig, Achim; Mielke, Martin; Beekes, Michael
2011-01-01
Prions are pathogens with an unusually high tolerance to inactivation and constitute a complex challenge to the re-processing of surgical instruments. On the other hand, however, they provide an informative paradigm which has been exploited successfully for the development of novel broad-range disinfectants simultaneously active also against bacteria, viruses and fungi. Here we report on the development of a methodological platform that further facilitates the use of scrapie prions as model pathogens for disinfection. We used specifically adapted serial protein misfolding cyclic amplification (PMCA) for the quantitative detection, on steel wires providing model carriers for decontamination, of 263K scrapie seeding activity converting normal protease-sensitive into abnormal protease-resistant prion protein. Reference steel wires carrying defined amounts of scrapie infectivity were used for assay calibration, while scrapie-contaminated test steel wires were subjected to fifteen different procedures for disinfection that yielded scrapie titre reductions of ≤10^1- to ≥10^5.5-fold. As confirmed by titration in hamsters the residual scrapie infectivity on test wires could be reliably deduced for all examined disinfection procedures, from our quantitative seeding activity assay. Furthermore, we found that scrapie seeding activity present in 263K hamster brain homogenate or multiplied by PMCA of scrapie-contaminated steel wires both triggered accumulation of protease-resistant prion protein and was further propagated in a novel cell assay for 263K scrapie prions, i.e., cerebral glial cell cultures from hamsters. The findings from our PMCA- and glial cell culture assays revealed scrapie seeding activity as a biochemically and biologically replicative principle in vitro, with the former being quantitatively linked to prion infectivity detected on steel wires in vivo. When combined, our in vitro assays provide an alternative to titrations of biological scrapie infectivity
Directory of Open Access Journals (Sweden)
Valentina Lo Schiavo
Cell adhesion is mediated by numerous membrane receptors. It is desirable to derive the outcome of a cell-surface encounter from the molecular properties of the interacting receptors and ligands. However, conventional parameters such as affinity or kinetic constants are often insufficient to account for receptor efficiency. Avidity is a qualitative concept frequently used to describe biomolecule interactions: this includes incompletely defined properties such as the capacity to form multivalent attachments. The aim of this study is to produce a working description of monovalent attachments formed by a model system, then to measure and interpret the behavior of divalent attachments under force. We investigated attachments between antibody-coated microspheres and surfaces coated with sparse monomeric or dimeric ligands. When bonds were subjected to a pulling force, they exhibited both a force-dependent dissociation consistent with Bell's empirical formula and a force- and time-dependent strengthening well described by a single parameter. Divalent attachments were stronger and less dependent on forces than monovalent ones. The proportion of divalent attachments resisting a force of 30 piconewtons for at least 5 s was 3.7-fold higher than that of monovalent attachments. Quantitative modeling showed that this required rebinding, i.e. additional bond formation between surfaces linked by divalent receptors forming only one bond. Further, experimental data were compatible with, but did not require, stress sharing between bonds within divalent attachments. Thus many ligand-receptor interactions do not behave as single-step reactions on the millisecond-to-second timescale. Rather, they exhibit progressive stabilization. This explains the high efficiency of multimerized or clustered receptors even when bonds are only subjected to moderate forces. Our approach provides a quantitative way of relating binding avidity to measurable parameters including bond...
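Bell's empirical formula invoked above is compact enough to sketch. The zero-force rate and bond length below are illustrative, not the paper's fitted values; the sketch also shows that, without rebinding, adding bonds only multiplies survival probabilities, which is why the observed divalent strengthening requires rebinding:

```python
import math

KBT = 4.11e-21  # thermal energy k_B * T at ~298 K, in joules

def bell_off_rate(force_pn, k0=1.0, xb_nm=0.5):
    """Bell's empirical formula: k_off(F) = k0 * exp(F * x_b / (k_B * T)).
    k0 is the zero-force dissociation rate (1/s), x_b the distance to the
    transition state; both values here are illustrative assumptions."""
    f = force_pn * 1e-12   # pN -> N
    xb = xb_nm * 1e-9      # nm -> m
    return k0 * math.exp(f * xb / KBT)

def survival(force_pn, t, n_bonds=1, **kw):
    """Probability that an attachment of n independent bonds (no rebinding,
    no stress sharing) still holds at time t under constant force."""
    k = bell_off_rate(force_pn, **kw)
    return math.exp(-k * t) ** n_bonds

p1 = survival(30.0, 5.0, n_bonds=1)  # monovalent, 30 pN held for 5 s
p2 = survival(30.0, 5.0, n_bonds=2)  # divalent, still without rebinding
```

With these parameters p2 < p1, i.e. naive multivalency without rebinding predicts weaker, not stronger, attachments under sustained force, consistent with the paper's conclusion that rebinding is essential.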
Respizzi, Stefano; Covelli, Elisabetta
2015-01-01
The emotional coaching model uses quantitative and qualitative elements to demonstrate some assumptions relevant to new methods of treatment in physical rehabilitation, considering emotional, cognitive and behavioral aspects in patients, whether or not they are sportsmen. Through quantitative tools (Tampa Kinesiophobia Scale, Emotional Interview Test, Previous Re-Injury Test, and reports on test scores) and qualitative tools (training contracts and relationships of emotional alliance or "contagion"), we investigate initial assumptions regarding: the presence of a cognitive and emotional mental state of impasse in patients at the beginning of the rehabilitation pathway; the curative value of the emotional alliance or "emotional contagion" relationship between healthcare provider and patient; the link between the patient's pathology and type of contact with his own body and emotions; analysis of the psychosocial variables for the prediction of possible cases of re-injury for patients who have undergone or are afraid to undergo reconstruction of the anterior cruciate ligament (ACL). Although this approach is still in the experimental stage, the scores of the administered tests show the possibility of integrating quantitative and qualitative tools to investigate and develop a patient's physical, mental and emotional resources during the course of his rehabilitation. Furthermore, it seems possible to identify many elements characterizing patients likely to undergo episodes of re-injury or to withdraw totally from sporting activity. In particular, such patients are competitive athletes, who fear or have previously undergone ACL reconstruction. The theories referred to (the transactional analysis theory, self-determination theory) and the tools used demonstrate the usefulness of continuing this research in order to build a shared coaching model treatment aimed at all patients, sportspeople or otherwise, which is not only physical but also emotional, cognitive and
Arnold, J.; Gutmann, E. D.; Clark, M. P.; Nijssen, B.; Vano, J. A.; Addor, N.; Wood, A.; Newman, A. J.; Mizukami, N.; Brekke, L. D.; Rasmussen, R.; Mendoza, P. A.
2016-12-01
Climate change narratives for water-resource applications must represent the change signals contextualized by hydroclimatic process variability and uncertainty at multiple scales. Building narratives of plausible change includes assessing uncertainties across GCM structure, internal climate variability, climate downscaling methods, and hydrologic models. Work with this linked modeling chain has dealt mostly with GCM sampling directed separately to either model fidelity (does the model correctly reproduce the physical processes in the world?) or sensitivity (of different model responses to CO2 forcings) or diversity (of model type, structure, and complexity). This leaves unaddressed any interactions among those measures and with other components in the modeling chain used to identify water-resource vulnerabilities to specific climate threats. However, time-sensitive, real-world vulnerability studies typically cannot accommodate a full uncertainty ensemble across the whole modeling chain, so a gap has opened between current scientific knowledge and most routine applications for climate-changed hydrology. To close that gap, the US Army Corps of Engineers, the Bureau of Reclamation, and the National Center for Atmospheric Research are working on techniques to subsample uncertainties objectively across modeling chain components and to integrate results into quantitative hydrologic storylines of climate-changed futures. Importantly, these quantitative storylines are not drawn from a small sample of models or components. Rather, they stem from the more comprehensive characterization of the full uncertainty space for each component. Equally important from the perspective of water-resource practitioners, these quantitative hydrologic storylines are anchored in actual design and operations decisions potentially affected by climate change. This talk will describe part of our work characterizing variability and uncertainty across modeling chain components and their
A Novel Quantitative Analysis Model for Information System Survivability Based on Conflict Analysis
Institute of Scientific and Technical Information of China (English)
WANG Jian; WANG Huiqiang; ZHAO Guosheng
2007-01-01
This paper describes a novel quantitative analysis model for system survivability based on conflict analysis, which provides a direct view of the survivability situation. Based on the three-dimensional state space of the conflict, each player's efficiency matrix on its credible motion set can be obtained. The player with the strongest desire initiates the move, and the overall state transition matrix of the information system can be obtained. In addition, the process of modeling and stability analysis of the conflict can be converted into a Markov analysis process, so the obtained results, with the occurrence probability of each feasible situation, help the players quantitatively judge the probability of reaching their desired situations in the conflict. Compared with existing methods, which are limited to post-hoc explanation of the system's survivability situation, the proposed model is suitable for quantitatively analyzing and forecasting the future development of system survivability. The experimental results show that the model can be effectively applied to quantitative survivability analysis. Moreover, it shows a good application prospect in practice.
Recursive Lagrangian dynamic modeling and simulation of multi-link spatial flexible manipulator arms
Institute of Scientific and Technical Information of China (English)
Ding-guo ZHANG
2009-01-01
The dynamics of multi-link spatial flexible manipulator arms consisting of n links and n rotary joints are investigated. The kinematics of both the rotary-joint motion and the link deformation is described by 4×4 homogeneous transformation matrices, and the Lagrangian equations are used to derive the governing equations of motion of the system. In the modeling, a recursive strategy for the kinematics is adopted to improve computational efficiency. Both the bending and torsional flexibility of the links are taken into account. Based on the present method, a general-purpose software package for dynamic simulation is developed. Dynamic simulation of a spatial flexible manipulator arm is given as an example to validate the algorithm.
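The 4×4 homogeneous-transform bookkeeping described above can be sketched for the rigid part of the kinematics. Link lengths and joint angles below are arbitrary, and the small elastic deformation transforms that the full model interleaves per link are omitted:

```python
import math

def rot_z(theta):
    """4x4 homogeneous transform: rotation by theta about the joint z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def trans_x(a):
    """4x4 homogeneous transform: translation by link length a along x."""
    return [[1, 0, 0, a], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(thetas, lengths):
    """Recursively accumulate joint/link transforms T = T1 * T2 * ... * Tn,
    the same recursive strategy used for the rigid kinematics (a flexible
    model would insert a deformation transform after each link)."""
    t = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, a in zip(thetas, lengths):
        t = matmul(t, matmul(rot_z(theta), trans_x(a)))
    return t

# Two-link planar example: unit links, joints at +90 and -90 degrees
tip = forward_kinematics([math.pi / 2, -math.pi / 2], [1.0, 1.0])
```

For this configuration the end-effector sits at (1, 1, 0) with the original orientation, which is easy to verify by hand.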
Modeling of the ground-to-SSFMB link networking features using SPW
Watson, John C.
1993-01-01
This report describes the modeling and simulation of the networking features of the ground-to-Space Station Freedom manned base (SSFMB) link using COMDISCO signal processing work-system (SPW). The networking features modeled include the implementation of Consultative Committee for Space Data Systems (CCSDS) protocols in the multiplexing of digitized audio and core data into virtual channel data units (VCDU's) in the control center complex and the demultiplexing of VCDU's in the onboard baseband signal processor. The emphasis of this work has been placed on techniques for modeling the CCSDS networking features using SPW. The objectives for developing the SPW models are to test the suitability of SPW for modeling networking features and to develop SPW simulation models of the control center complex and space station baseband signal processor for use in end-to-end testing of the ground-to-SSFMB S-band single access forward (SSAF) link.
Wang, Chunkao; Da, Yang
2014-01-01
The traditional quantitative genetics model was used as the unifying approach to derive six existing and new definitions of genomic additive and dominance relationships. The theoretical differences between these definitions lay in the assumptions of equal SNP effects (equivalent to across-SNP standardization), equal SNP variances (equivalent to within-SNP standardization), and expected or sample SNP additive and dominance variances. The six definitions of genomic additive and dominance relationships on average were consistent with the pedigree relationships, but had individual genomic specificity and large variations not observed from pedigree relationships. These large variations may make it possible to find the least related genomes, even within the same family, for minimizing genomic relatedness among breeding individuals. The six definitions of genomic relationships generally had similar numerical results in genomic best linear unbiased predictions of additive effects (GBLUP) and similar genomic REML (GREML) estimates of additive heritability. Predicted SNP dominance effects and GREML estimates of dominance heritability were similar within definitions assuming equal SNP effects or within definitions assuming equal SNP variance, but differed between these two groups of definitions. We proposed a new measure of genomic inbreeding coefficient based on the parental genomic co-ancestry coefficient and genomic additive correlation as a genomic approach for predicting offspring inbreeding level. This genomic inbreeding coefficient had the highest correlation with the pedigree inbreeding coefficient among the four methods evaluated for calculating genomic inbreeding coefficients in a Holstein sample and a swine sample.
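One of the definitions discussed, the equal-SNP-effects (across-SNP standardization) genomic additive relationship, is commonly written as G = WW' / (2 Σ p_k(1 - p_k)) with W = M - 2P. A toy sketch, assuming a small 0/1/2 genotype matrix and sample allele frequencies (real data would involve many more SNPs and individuals):

```python
def genomic_relationship(genotypes):
    """Genomic additive relationship matrix under the equal-SNP-effects
    assumption: G = W W' / (2 * sum_k p_k (1 - p_k)), W = M - 2P, where M
    holds minor-allele counts (0/1/2) and p_k are sample allele frequencies."""
    n = len(genotypes)        # individuals
    m = len(genotypes[0])     # SNPs
    p = [sum(g[k] for g in genotypes) / (2.0 * n) for k in range(m)]
    w = [[g[k] - 2.0 * p[k] for k in range(m)] for g in genotypes]
    denom = 2.0 * sum(pk * (1.0 - pk) for pk in p)
    return [[sum(w[i][k] * w[j][k] for k in range(m)) / denom
             for j in range(n)] for i in range(n)]

# Four individuals, five SNPs coded as minor-allele counts (toy data)
M = [[0, 1, 2, 1, 0],
     [1, 1, 2, 0, 0],
     [2, 0, 0, 1, 1],
     [1, 2, 1, 2, 1]]
G = genomic_relationship(M)
```

Because each column of W is centered, every row of G sums to zero in this sample-frequency formulation, a property that distinguishes genomic from pedigree relationships.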
Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.
2014-04-01
Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)^-1; cardiac output = 3, 5, 8 L min^-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability, and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially distributed model, and the adiabatic approximation to the tissue homogeneity model) and a qualitative slope-based method. In total, over 11,000 time-attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam-hardening correction, the slope method consistently underestimated flow by on average 47.5%, while the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This suggests that
Pike, Richard J.
2002-01-01
Terrain modeling, the practice of ground-surface quantification, is an amalgam of Earth science, mathematics, engineering, and computer science. The discipline is known variously as geomorphometry (or simply morphometry), terrain analysis, and quantitative geomorphology. It continues to grow through myriad applications to hydrology, geohazards mapping, tectonics, sea-floor and planetary exploration, and other fields. Dating nominally to the co-founders of academic geography, Alexander von Humboldt (1808, 1817) and Carl Ritter (1826, 1828), the field was revolutionized late in the 20th Century by the computer manipulation of spatial arrays of terrain heights, or digital elevation models (DEMs), which can quantify and portray ground-surface form over large areas (Maune, 2001). Morphometric procedures are implemented routinely by commercial geographic information systems (GIS) as well as specialized software (Harvey and Eash, 1996; Köthe and others, 1996; ESRI, 1997; Drzewiecki et al., 1999; Dikau and Saurer, 1999; Djokic and Maidment, 2000; Wilson and Gallant, 2000; Breuer, 2001; Guth, 2001; Eastman, 2002). The new Earth Surface edition of the Journal of Geophysical Research, specializing in surficial processes, is the latest of many publication venues for terrain modeling. This is the fourth update of a bibliography and introduction to terrain modeling (Pike, 1993, 1995, 1996, 1999) designed to collect the diverse, scattered literature on surface measurement as a resource for the research community. The use of DEMs in science and technology continues to accelerate and diversify (Pike, 2000a). New work appears so frequently that a sampling must suffice to represent the vast literature. This report adds 1636 entries to the 4374 in the four earlier publications. Forty-eight additional entries correct dead Internet links and other errors found in the prior listings. Chronicling the history of terrain modeling, many entries in this report predate the 1999 supplement
Relay-Linking Models for Prominence and Obsolescence in Evolving Networks
Singh, Mayank; Goyal, Pawan; Mukherjee, Animesh; Chakrabarti, Soumen
2016-01-01
The rate at which nodes in evolving social networks acquire links (friends, citations) shows complex temporal dynamics. Elegant and simple models, such as preferential attachment and link copying, capture only rich-gets-richer effects, not aging and decline. Recent aging models are complex and heavily parameterized; most involve estimating 1-3 parameters per node. These parameters are intrinsic: they explain decline in terms of events in the past of the same node, and do not explain, using the network, where the linking attention might go instead. We argue that traditional network characterizations, or even per-node linking dynamics, are insufficient to judge the faithfulness of models. We propose a new temporal sketch of an evolving graph, and introduce three new characterizations of a network's temporal dynamics. Then we propose a new family of frugal aging models with no per-node parameters and only 2-3 global parameters. Our model is based on a surprising inversion or undoing of triangle completion, where an...
A minimal model for stabilization of biomolecules by hydrocarbon cross-linking
Hamacher, K.; Hübsch, A.; McCammon, J. A.
2006-04-01
Programmed cell death regulating protein motifs play an essential role in the development of an organism, its immune response, and disease-related cellular mechanisms. Among those motifs the BH3 domain of the BCL-2 family is found to be of crucial importance. Recent experiments showed how the isolated, otherwise unstructured BH3 peptide can be modified by a hydrocarbon linkage to regain function. We parametrized a reduced, dynamic model for the stability effects of such covalent cross-linking and confirmed that the model reproduces the reinforcement of the structural stability of the BH3 motif by cross-linking. We show that an analytically solvable model for thermostability around the native state is not capable of reproducing the stabilization effect. This points to the crucial importance of the peptide dynamics and the fluctuations neglected in the analytic model for the cross-linking system to function properly. This conclusion is supported by a thorough analysis of a simulated Gō model. The resulting model is suitable for rational design of generic cross-linking systems in silico.
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott; Morris, Richard V.; Ehlmann, Bethany; Dyar, M. Darby
2017-03-01
Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the laser-induced breakdown spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares (PLS) regression, is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
Global Stability Analysis for an Internet Congestion Control Model with a Time-Varying Link Capacity
Rezaie, B; Analoui, M; Khorsandi, S
2009-01-01
In this paper, a global stability analysis is given for a rate-based congestion control system modeled by a nonlinear delayed differential equation. The model determines the dynamics of a single-source, single-link network with time-varying link capacity and a fixed communication delay. We obtain sufficient delay-independent conditions on system parameters under which global asymptotic stability of the system is guaranteed. The proof is based on an extension of the Lyapunov-Krasovskii theorem for a class of nonlinear time-delay systems. Numerical simulations for a typical scenario confirm the theoretical results.
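The single-source, single-link setting can be made concrete with a toy delayed rate-control equation integrated by forward Euler. The form dx/dt = k(c(t) - x(t - tau)) and every parameter value below are illustrative assumptions, not the equations analyzed in the paper.

```python
# Illustrative Euler integration of a generic delayed rate-control
# equation dx/dt = k * (c(t) - x(t - tau)), with a slowly time-varying
# link capacity c(t). All forms and values here are assumptions.
import math

def simulate(k=0.5, tau=1.0, dt=0.01, T=60.0):
    steps = int(round(T / dt))
    lag = int(round(tau / dt))
    x = [0.0] * (steps + 1)                        # source rate history
    for n in range(steps):
        c = 10.0 + 2.0 * math.sin(0.1 * n * dt)    # time-varying capacity
        x_delayed = x[n - lag] if n >= lag else x[0]
        x[n + 1] = x[n] + dt * k * (c - x_delayed)
    return x

rates = simulate()
# with k * tau = 0.5 the delayed loop is stable and the rate tracks
# the slowly varying capacity around 10
```

The delay-independent flavor of the paper's result corresponds to conditions on k alone that keep such a loop stable for any tau.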
ON THE DYNAMIC MODELING AND CONTROL OF 2-DOF PLANAR PARALLEL MECHANISM WITH FLEXIBLE LINKS
Institute of Scientific and Technical Information of China (English)
Luo Lei; Wang Shigang; Mo Jinqiu; Cai Jianguo
2005-01-01
This work studies dynamic modeling and control of a 2-degree-of-freedom (DOF) planar parallel mechanism (PM) with flexible links. The kinematic and dynamic equations are established according to the characteristics of the mixed rigid and flexible structure. By using the singular perturbation approach (SPA), the model of the mechanism can be separated into slow and fast subsystems. Based on feedback linearization theory and the input shaping technique, the large-scale rigid motion controller and the flexible link vibration controller can be designed separately to achieve fast and accurate positioning of the PM.
Hendriks, A Jan; Traas, Theo P; Huijbregts, Mark A J
2005-05-01
To protect thousands of species from thousands of chemicals released in the environment, various risk assessment tools have been developed. Here, we link quantitative structure-activity relationships (QSARs) for response concentrations in water (LC50) to critical concentrations in organisms (C50) by a model for accumulation in lipid or non-lipid phases versus water (Kpw). The model indicates that affinity for neutral body components such as storage fat yields steep Kpw-Kow relationships, whereas slopes for accumulation in polar phases such as proteins are gentle. This pattern is confirmed by LC50 QSARs for different modes of action, such as neutral versus polar narcotics and organochlorine versus organophosphorus insecticides. LC50 QSARs were all between 0.00002 and 0.2 Kow^(-1). After calibrating the model with the intercepts and, for the first time, also with the slopes of the LC50 QSARs, critical concentrations in organisms (C50) are calculated and compared to an independent validation data set. About 60% of the variability in lethal body burdens (C50) is explained by the model. Explanations for differences between estimated and measured levels for 11 modes of action are discussed. In particular, relationships between the critical concentrations in organisms (C50) and chemical (Kow) or species (lipid content) characteristics are specified and tested. The analysis combines different models proposed before and provides a substantial extension of the data set in comparison to previous work. Moreover, the concept is applied to species (e.g., plants, lean animals) and substances (e.g., specific modes of action) that have scarcely been studied quantitatively so far.
Timoshenko Beam Theory based Dynamic Modeling of Lightweight Flexible Link Robotic Manipulators
Loudini, Malik
2010-01-01
An investigation into the development of flexible link robot manipulators mathematical models, with a high modeling accuracy, using Timoshenko beam theory concepts has been presented. The emphasis has been, essentially, set on obtaining accurate and complete equations of motion that display the most relevant aspects of structural properties inherent to the modeled lightweight flexible robotic structure. In particular, two important damping mechanisms: internal structural viscoelasticity effec...
Selection and mutation in X-linked recessive diseases epidemiological model.
Verrilli, Francesca; Kebriaei, Hamed; Glielmo, Luigi; Corless, Martin; Del Vecchio, Carmen
2015-01-01
To describe the epidemiology of X-linked recessive diseases, we developed a discrete-time, structured, nonlinear mathematical model. The model allows for de novo mutations (i.e., affected siblings born to unaffected parents) and selection (i.e., distinct fitness rates depending on individuals' health conditions). Applying Lyapunov's direct method, we found the domain of attraction of the model's equilibrium point and studied the convergence properties of the degenerate equilibrium where only affected individuals survive.
The Chain-Link Fence Model: A Framework for Creating Security Procedures
Houghton, Robert F.
2013-01-01
A long-standing problem in information technology security is how to reduce the security footprint. Many specific proposals exist to address specific problems in information technology security. Most information technology solutions need to be repeatable throughout the course of an information system's lifecycle. The Chain-Link Fence Model is a new model for creating and implementing information technology procedures. This model was validated by two different methods: the first being int...
Mathematical Model of Bridge-Linked Photovoltaic Arrays Operating Under Irregular Conditions
Juan D. Bastidas-Rodríguez; Carlos A. Ramos-Paja; Luz A. Trejos-Grisales
2013-01-01
This paper presents a mathematical procedure to model a photovoltaic array (N rows and M columns) in bridge-linked configuration operating under regular and irregular conditions. The proposed procedure uses the ideal single-diode model to represent each photovoltaic module and the Shockley equation to represent each bypass diode. To pose the system of NxM non-linear equations required to obtain the voltages of each module of the array, the proposed model applies Kirchhoff's current law ...
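The ideal single-diode module representation mentioned above takes the form I = Iph - I0(exp(V/(n Ns Vt)) - 1). A minimal sketch follows, with purely illustrative parameter values (photocurrent, saturation current, ideality factor, cell count), not values taken from the paper.

```python
# Minimal sketch of the ideal single-diode model for one PV module:
#   I = Iph - I0 * (exp(V / (n * Ns * Vt)) - 1)
# Parameter values are illustrative assumptions, not from the paper.
import math

def module_current(V, Iph=8.0, I0=1e-9, n=1.3, Ns=60, Vt=0.02585):
    """Terminal current of a module of Ns series cells at voltage V."""
    return Iph - I0 * (math.exp(V / (n * Ns * Vt)) - 1.0)

# near short circuit the current equals the photocurrent, and it falls
# toward zero as V approaches the open-circuit voltage (~46 V here)
isc = module_current(0.0)
i_near_voc = module_current(46.0)
```

Solving a bridge-linked array then amounts to writing Kirchhoff's current law at each interior node with this expression for every module, which yields the NxM nonlinear system the abstract describes.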
Institute of Scientific and Technical Information of China (English)
WANG Ya-lin; MA Jie; GUI Wei-hua; YANG Chun-hua; ZHANG Chuan-fu
2006-01-01
A multi-objective intelligent coordinating optimization strategy based on a qualitative and quantitative synthetic model for the Pb-Zn sintering blending process was proposed to obtain the optimal mixture ratio. Mechanism and neural network quantitative models for predicting compositions, and rule models for expert reasoning, were constructed based on statistical data and empirical knowledge. An expert reasoning method based on these models was proposed to solve the blending optimization problem, including multi-objective optimization for the first blending process and area optimization for the second blending process, and to determine the optimal mixture ratio that meets the requirement of intelligent coordination. The results show that the qualified rates of agglomerate Pb, Zn, and S compositions are increased by 7.1%, 6.5%, and 6.9%, respectively, and the fluctuation of sintering permeability is reduced by 7.0%, which effectively stabilizes the agglomerate compositions and the permeability.
Dick, Daniel G; Maxwell, Erin E
2015-07-01
The role of niche specialization and narrowing in the evolution and extinction of the ichthyosaurs has been widely discussed in the literature. However, previous studies have concentrated on a qualitative discussion of these variables only. Here, we use the recently developed approach of quantitative ecospace modelling to provide a high-resolution quantitative examination of the changes in dietary and ecological niche experienced by the ichthyosaurs throughout their evolution in the Mesozoic. In particular, we demonstrate that despite recent discoveries increasing our understanding of taxonomic diversity among the ichthyosaurs in the Cretaceous, when viewed from the perspective of ecospace modelling, a clear trend of ecological contraction is visible as early as the Middle Jurassic. We suggest that this ecospace redundancy, if carried through to the Late Cretaceous, could have contributed to the extinction of the ichthyosaurs. Additionally, our results suggest a novel model to explain ecospace change, termed the 'migration model'.
Li, Zhenping; Zhang, Xiang-Sun; Wang, Rui-Sheng; Liu, Hongwei; Zhang, Shihua
2013-01-01
Identification of communities in complex networks is an important topic and issue in many fields such as sociology, biology, and computer science. Communities are often defined as groups of related nodes or links that correspond to functional subunits in the corresponding complex systems. While most conventional approaches have focused on discovering communities of nodes, some recent studies start partitioning links to find overlapping communities straightforwardly. In this paper, we propose a new quantity function for link community identification in complex networks. Based on this quantity function we formulate the link community partition problem into an integer programming model which allows us to partition a complex network into overlapping communities. We further propose a genetic algorithm for link community detection which can partition a network into overlapping communities without knowing the number of communities. We test our model and algorithm on both artificial networks and real-world networks. The results demonstrate that the model and algorithm are efficient in detecting overlapping community structure in complex networks.
Quantitative Verification of a Force-based Model for Pedestrian Dynamics
Chraibi, Mohcine; Schadschneider, Andreas; Mackens, Wolfgang
2009-01-01
This paper introduces a spatially continuous force-based model for simulating pedestrian dynamics. The main intention of this work is the quantitative description of pedestrian movement through bottlenecks and in corridors. Measurements of flow and density at bottlenecks are presented and compared with empirical data. Furthermore, the fundamental diagram for movement in a corridor is reproduced. The results of the proposed model show good agreement with empirical data.
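A force-based update of this general kind can be sketched in one dimension: a driving force relaxing each pedestrian toward a desired speed, plus an exponential repulsion from the pedestrian ahead. The functional form and all parameter values below are generic social-force-style assumptions, not the paper's calibrated model.

```python
# One-dimensional sketch of a force-based pedestrian update: driving
# term (v0 - v)/tau plus exponential repulsion A*exp(-gap/B) from the
# pedestrian ahead. Parameters are illustrative assumptions.
import math

def step(positions, speeds, v0=1.34, tau=0.5, A=2.0, B=0.3, dt=0.05):
    """Advance pedestrians walking in +x; positions must be sorted."""
    n = len(positions)
    new_v = []
    for i in range(n):
        f = (v0 - speeds[i]) / tau                  # driving force
        if i + 1 < n:                               # repulsion from leader
            gap = positions[i + 1] - positions[i]
            f -= A * math.exp(-gap / B)
        new_v.append(max(0.0, speeds[i] + dt * f))  # no backward walking
    return [x + dt * v for x, v in zip(positions, new_v)], new_v

pos, vel = [0.0, 0.5, 1.0], [0.0, 0.0, 0.0]
for _ in range(200):                                # 10 s of simulated time
    pos, vel = step(pos, vel)
```

Measuring flow and density in such a simulation at a virtual bottleneck is the kind of quantitative comparison with empirical fundamental diagrams that the abstract describes, though the paper's own force terms differ.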
Human judgment vs. quantitative models for the management of ecological resources.
Holden, Matthew H; Ellner, Stephen P
2016-07-01
Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed
Fang, Jiansong; Pang, Xiaocong; Wu, Ping; Yan, Rong; Gao, Li; Li, Chao; Lian, Wenwen; Wang, Qi; Liu, Ai-lin; Du, Guan-hua
2016-05-01
A dataset of 67 berberine derivatives for the inhibition of butyrylcholinesterase (BuChE) was studied based on a combination of quantitative structure-activity relationship models, molecular docking, and molecular dynamics methods. First, a series of berberine derivatives were reported, and their inhibitory activities toward BuChE were evaluated. In 2D quantitative structure-activity relationship studies, the best model, built by partial least squares, had a conventional correlation coefficient for the training set (R^2) of 0.883, a cross-validation correlation coefficient (Qcv^2) of 0.777, and a conventional correlation coefficient for the test set (Rpred^2) of 0.775. The model was also confirmed by Y-randomization examination. In addition, molecular docking and molecular dynamics simulation were performed to better elucidate the inhibitory mechanism of three typical berberine derivatives (berberine, C2, and C55) toward BuChE. The predicted binding free energy results were consistent with the experimental data and showed that the van der Waals energy term (ΔEvdw) difference played the most important role in differentiating the activity among the three inhibitors (berberine, C2, and C55). The developed quantitative structure-activity relationship models provide details on the fine relationship linking structure and activity and offer clues for structural modifications, and the molecular simulation helps in understanding the inhibitory mechanism of the three typical inhibitors. In conclusion, the results of this study provide useful clues for new drug design and discovery of BuChE inhibitors from berberine derivatives.
Queueing model for an ATM multiplexer with unequal input/output link capacities
Long, Y. H.; Ho, T. K.; Rad, A. B.; Lam, S. P. S.
1998-10-01
We present a queueing model for an ATM multiplexer with unequal input/output link capacities. This model can be used to analyze the buffer behavior of an ATM multiplexer that multiplexes low-speed input links into a high-speed output link. For this queueing model, we assume that the input and output slot times are not equal; this is quite different from most analyses of discrete-time queues for ATM multiplexers/switches. In the queueing analysis, we adopt a correlated arrival process represented by the discrete-time batch Markovian arrival process (DBMAP). The analysis is based upon the M/G/1-type queue technique, which enables easy numerical computation. Queue length distributions observed at different epochs, and the queue length distribution seen by an arbitrary arriving cell when it enters the buffer, are given.
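The buffering effect of multiplexing slow inputs into one fast output can be illustrated with a toy slot-level simulation. Note this sketch uses independent Bernoulli arrivals on each input link, not the correlated DBMAP arrival process the paper actually analyzes, and all rates are illustrative.

```python
# Toy discrete-time simulation of a multiplexer buffer: n_inputs slow
# links each deliver a cell with probability p_arrival per slot; the
# output link serves serve_per_slot cells per slot. Illustrative only.
import random

def simulate_buffer(n_inputs=8, p_arrival=0.1, serve_per_slot=1,
                    slots=100000, seed=42):
    rng = random.Random(seed)
    q = 0
    total = 0
    for _ in range(slots):
        arrivals = sum(1 for _ in range(n_inputs)
                       if rng.random() < p_arrival)
        q = max(0, q + arrivals - serve_per_slot)   # serve, never negative
        total += q
    return total / slots                            # mean queue length

mean_q = simulate_buffer()                          # load = 0.8 here
```

The paper's matrix-analytic M/G/1-type solution gives such queue length distributions exactly rather than by Monte Carlo, and additionally handles the unequal input/output slot times.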
Directory of Open Access Journals (Sweden)
Schook Lawrence B
2000-07-01
A strategy of multi-step minimal conditional regression analysis has been developed to provide statistical tests and parameter estimates for a quantitative trait locus (QTL) that are unaffected by linked QTLs. The estimation of marker-QTL recombination frequency needs to consider only three cases: (1) the chromosome has only one QTL; (2) one side of the target QTL has one or more QTLs; and (3) either side of the target QTL has one or more QTLs. An analytical formula was derived to estimate marker-QTL recombination frequency for each of the three cases. The formula involves two flanking markers for case 1, two flanking markers plus a conditional marker for case 2, and two flanking markers plus two conditional markers for case 3. Each QTL variance and effect, and the total QTL variance, were also estimated using analytical formulae. Simulation data show that the formulae for estimating marker-QTL recombination frequency could be a useful statistical tool for fine QTL mapping. With 1000 observations, a QTL could be mapped to a narrow chromosome region of 1.5 cM if no linked QTL is present, and to a 2.8 cM chromosome region if either side of the target QTL has at least one linked QTL.
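For orientation, recombination frequency and map distance (the cM figures quoted above) are related by standard map functions. The sketch below uses Haldane's map function, a textbook relation and not the paper's conditional-regression formulae.

```python
# Haldane's map function: a standard textbook relation between map
# distance (in Morgans) and recombination frequency, shown here only
# to connect the cM figures in the abstract to recombination rates.
import math

def haldane_r(d_morgans):
    """Recombination frequency for map distance d (Morgans)."""
    return 0.5 * (1.0 - math.exp(-2.0 * d_morgans))

def haldane_d(r):
    """Inverse: map distance (Morgans) for recombination frequency r < 0.5."""
    return -0.5 * math.log(1.0 - 2.0 * r)

# a 1.5 cM region (0.015 M) corresponds to r of roughly 0.0148
r_fine = haldane_r(0.015)
```

At such small distances r is nearly equal to the map distance itself, which is why fine-mapping results are often quoted interchangeably in cM or recombination units.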
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLMs) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLMs. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLMs with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J^2) statistics can be applied directly. In a simulation study, TG, HL, and J^2 were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J^2 were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J^2.
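The links compared in the simulation study differ only in the inverse-link function mapping the linear predictor eta to a probability. A plain-Python sketch of the four inverse links, for illustration only:

```python
# Inverse link functions for binary-outcome GLMs: the canonical logit
# and three noncanonical alternatives mentioned in the abstract.
import math

def inv_logit(eta):            # canonical link: p = 1 / (1 + e^-eta)
    return 1.0 / (1.0 + math.exp(-eta))

def inv_probit(eta):           # probit: standard normal CDF
    return 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))

def inv_cloglog(eta):          # complementary log-log
    return 1.0 - math.exp(-math.exp(eta))

def inv_log(eta):              # log link: p = e^eta, valid for eta <= 0
    return math.exp(eta)

# the four links give different probabilities at the same eta,
# which is why a GOF test must not assume the canonical form
probs = [f(0.0) for f in (inv_logit, inv_probit, inv_cloglog, inv_log)]
```

A grouped GOF statistic such as HL or TG then compares observed and expected counts within bins of these fitted probabilities, whichever inverse link produced them.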
A Classifier Model based on the Features Quantitative Analysis for Facial Expression Recognition
Directory of Open Access Journals (Sweden)
Amir Jamshidnezhad
2011-01-01
In recent decades, computer technology has developed considerably in the use of intelligent systems for classification. The development of HCI systems depends heavily on an accurate understanding of emotions. However, facial expressions are difficult to classify with mathematical models because of their natural variability. In this paper, quantitative analysis is used to find the most effective feature movements between the selected facial feature points. The features are therefore extracted not only based on psychological studies, but also based on quantitative methods, to raise the accuracy of recognition. In this model, fuzzy logic and a genetic algorithm are also used to classify facial expressions. The genetic algorithm is an exclusive attribute of the proposed model and is used for tuning membership functions and increasing the accuracy.
Energy Technology Data Exchange (ETDEWEB)
Put, R. [FABI, Department of Analytical Chemistry and Pharmaceutical Technology, Pharmaceutical Institute, Vrije Universiteit Brussel (VUB), Laarbeeklaan 103, B-1090 Brussels (Belgium); Vander Heyden, Y. [FABI, Department of Analytical Chemistry and Pharmaceutical Technology, Pharmaceutical Institute, Vrije Universiteit Brussel (VUB), Laarbeeklaan 103, B-1090 Brussels (Belgium)], E-mail: yvanvdh@vub.ac.be
2007-10-29
In the literature, an increasing interest in quantitative structure-retention relationships (QSRR) can be observed. After a short introduction on QSRR and other strategies proposed to deal with the starting-point selection problem prior to method development in reversed-phase liquid chromatography, a number of interesting papers dealing with QSRR models for reversed-phase liquid chromatography are reviewed. The main focus of this review is on the different modelling methodologies applied and the molecular descriptors used in the QSRR approaches. Besides two semi-quantitative approaches (i.e., principal component analysis and decision trees), these methodologies include artificial neural networks, partial least squares, uninformative variable elimination partial least squares, stochastic gradient boosting for tree-based models, random forests, genetic algorithms, multivariate adaptive regression splines, and two-step multivariate adaptive regression splines.
Manganelli, Serena; Benfenati, Emilio; Manganaro, Alberto; Kulkarni, Sunil; Barton-Maclaren, Tara S; Honma, Masamitsu
2016-10-01
Existing quantitative structure-activity relationship (QSAR) models have limited predictive capability for aromatic azo compounds. In this study, two new models were built to predict the Ames mutagenicity of this class of compounds. The first made use of descriptors based on the simplified molecular input-line entry system (SMILES), calculated with the CORAL software. The second model was based on the k-nearest neighbors algorithm. The statistical quality of the predictions from the single models was satisfactory. The performance further improved when the predictions from these models were combined. The prediction results from other QSAR models for mutagenicity were also evaluated. Most of the existing models were found to be good at finding toxic compounds but resulted in many false positive predictions. The two new models specific to this class of compounds avoid this problem thanks to a larger set of related compounds in the training set and improved algorithms.
What Works Clearinghouse, 2012
2012-01-01
The study, "A Model for Success: CART's Linked Learning Program Increases College Enrollment" examined whether students who enrolled in courses at a high school that combined academics and technical education had higher college enrollment rates than students who did not. The research described in this report does not meet What Works…
Modeling radio link performance in UMTS W-CDMA network simulations
DEFF Research Database (Denmark)
Klingenbrunn, Thomas; Mogensen, Preben Elgaard
2000-01-01
This article presents a method to model the W-CDMA radio receiver performance, which is usable in network simulation tools for third generation mobile cellular systems. The method represents a technique to combine link level simulations with network level simulations. The method is derived from [1...
Chaotic dynamics in the Volterra predator-prey model via linked twist maps
Directory of Open Access Journals (Sweden)
Marina Pireddu
2008-01-01
We prove the existence of infinitely many periodic solutions and complicated dynamics, due to the presence of a topological horseshoe, for the classical Volterra predator-prey model with a periodic harvesting. The proof relies on some recent results about chaotic planar maps combined with the study of geometric features which are typical of linked twist maps.
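The periodically harvested Volterra system can be integrated numerically to observe the forcing at work. This is a sketch only: the harvesting form h(t) and all parameter values are illustrative assumptions, not those of the paper.

```python
import numpy as np

def volterra_harvest(t, z, a=1.0, b=1.0, c=1.0, d=1.0, eps=0.2, omega=2 * np.pi):
    # Classical Volterra predator-prey with a periodic harvesting term
    # h(t) = eps*(1 + sin(omega*t)) applied to both species (one common
    # choice; parameters here are placeholders).
    x, y = z
    h = eps * (1.0 + np.sin(omega * t))
    return np.array([x * (a - b * y) - h * x,
                     y * (-c + d * x) - h * y])

def rk4(f, z0, t0, t1, n):
    # Fixed-step fourth-order Runge-Kutta integrator.
    z, t = np.array(z0, float), t0
    dt = (t1 - t0) / n
    for _ in range(n):
        k1 = f(t, z)
        k2 = f(t + dt / 2, z + dt / 2 * k1)
        k3 = f(t + dt / 2, z + dt / 2 * k2)
        k4 = f(t + dt, z + dt * k3)
        z = z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return z
```

With eps = 0 the unforced equilibrium (c/d, a/b) is stationary; switching the harvesting on perturbs the orbits, which is the regime the paper analyses via linked twist maps.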
Large-Sample Theory for Generalized Linear Models with Non-natural Link and Random Variates
Institute of Scientific and Technical Information of China (English)
Jie-li Ding; Xi-ru Chen
2006-01-01
For generalized linear models (GLM), in the case that the regressors are stochastic and have different distributions and the observations of the responses may have different dimensionality, the asymptotic theory of the maximum likelihood estimate (MLE) of the parameters is studied under the assumption of a non-natural link function.
Art or science? The challenges of publishing peer reviewed papers based on linked models
Burrell, A.M.
2008-01-01
The methodology used in a linked model system is generally too voluminous and of insufficient interest to form the basis of a peer-reviewed journal article. To be readily acceptable to an economics journal, the simulation results should provide economic insight and contribute to the economics literature…
Vlachos, Dion G.
2002-01-01
The focus of this presentation is on multiscale modeling in order to link processing, microstructure, and properties of materials. Overview of problems we study includes: Growth mechanisms in chemical and physical vapor epitaxy; thin films of zeolites for separation and sensing; thin Pd films for hydrogen separation and pattern formation by self-regulation routes.
The Chain-Link Fence Model: A Framework for Creating Security Procedures
Houghton, Robert F.
2013-01-01
A long standing problem in information technology security is how to help reduce the security footprint. Many specific proposals exist to address specific problems in information technology security. Most information technology solutions need to be repeatable throughout the course of an information systems lifecycle. The Chain-Link Fence Model is…
Ontologies to Support RFID-Based Link between Virtual Models and Construction Components
DEFF Research Database (Denmark)
Sørensen, Kristian Birch; Christiansson, Per; Svidt, Kjeld
2010-01-01
A link between the virtual models and the physical components in the construction process can improve the information handling and sharing in construction and building operation management. Such a link can be created by means of Radio Frequency Identification (RFID) technology. Ontologies play an important role...
Linking HR strategy, e-HR goals, architectures, and outcomes: a model and case study evidence.
Reddington, Martin; Martin, Graeme; Bondarouk, T.V.; Bondarouk, Tatiana; Ruel, H.; Ruel, Hubertus Johannes Maria; Looise, J.C.; Looise, Jan C.
2011-01-01
Building on our earlier model of the links between HR strategy, e-HR goals, architectures, and outcomes, we illustrate the relationship between some of these elements with data from three global organizations. In doing so, we aim to help academics and practitioners understand this increasingly…
The Gender-Linked Language Effect: An Empirical Test of a General Process Model
Mulac, Anthony; Giles, Howard; Bradac, James J.; Palomares, Nicholas A.
2013-01-01
The gender-linked language effect (GLLE) is a phenomenon in which transcripts of female communicators are rated higher on Socio-Intellectual Status and Aesthetic Quality and male communicators are rated higher on Dynamism. This study proposed and tested a new general process model explanation for the GLLE, a central mediating element of which…
Quantitative 3D investigation of Neuronal network in mouse spinal cord model
Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.
2017-01-01
The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.
Graves, Philip L.
1989-01-01
A method of formulating the dynamical equations of a flexible, serial manipulator is presented, using the Method of Kinematic Influence. The resulting equations account for rigid body motion, structural motion due to link and joint flexibilities, and the coupling between these two motions. Nonlinear inertial loads are included in the equations. A finite order mode summation method is used to model flexibilities. The structural data may be obtained from experimental, finite element, or analytical methods. Nonlinear flexibilities may be included in the model.
[Study on temperature correction models of quantitative analysis with near infrared spectroscopy].
Zhang, Jun; Chen, Hua-cai; Chen, Xing-dan
2005-06-01
The effect of environment temperature on near infrared spectroscopic quantitative analysis was studied. The temperature correction model was calibrated with 45 wheat samples at different environment temperatures, with temperature as an external variable. The constant temperature model was calibrated with 45 wheat samples at the same temperature. The predicted results of the two models for the protein contents of wheat samples at different temperatures were compared. The results showed that the mean standard error of prediction (SEP) of the temperature correction model was 0.333, whereas the SEP of the constant temperature (22 degrees C) model increased as the temperature difference enlarged, rising to 0.602 when the model was used at 4 degrees C. This suggests that the temperature correction model improves the analysis precision.
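The contrast between a constant-temperature calibration and one that carries temperature as an external variable can be reproduced on synthetic data. Everything below (the one-feature "spectrum", the drift coefficient, the sample ranges) is an assumed toy setup, not the wheat data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the NIR wheat data: one absorbance feature whose
# reading drifts with sample temperature (all coefficients assumed).
n = 45
protein = rng.uniform(10.0, 16.0, n)   # reference protein content, %
temp = rng.uniform(4.0, 30.0, n)       # environment temperature, deg C
absorb = 0.05 * protein + 0.002 * (temp - 22.0) + rng.normal(0.0, 1e-4, n)

def fit(X, y):
    A = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def sep(X, y, coef):
    # Standard error of prediction (root mean squared residual).
    A = np.hstack([X, np.ones((len(X), 1))])
    return float(np.sqrt(np.mean((A @ coef - y) ** 2)))

# Constant-temperature model: absorbance only.
sep_const = sep(absorb[:, None], protein, fit(absorb[:, None], protein))
# Temperature-correction model: temperature as an external variable.
X_corr = np.column_stack([absorb, temp])
sep_corr = sep(X_corr, protein, fit(X_corr, protein))
```

Because the absorbance reading drifts with temperature, the model that regresses on absorbance alone inherits that drift as error, while the temperature-corrected model absorbs it, mirroring the SEP gap reported in the abstract.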
Directory of Open Access Journals (Sweden)
Sorana D. Bolboaca
2009-01-01
Quantitative structure-activity relationship (QSAR) models are used to understand how the structure and activity of chemical compounds relate. In the present study, 37 carboquinone derivatives were evaluated and two different QSAR models were developed using members of the Molecular Descriptors Family (MDF) and the Molecular Descriptors Family on Vertices (MDFV). The usual parameters of regression models and the following estimators were defined and calculated in order to analyze the validity and to compare the models: Akaike's information criteria (three parameters), Schwarz (or Bayesian) information criterion, Amemiya prediction criterion, Hannan-Quinn criterion, Kubinyi function, Steiger's Z test, and Akaike's weights. The MDF and MDFV models proved to have the same goodness-of-fit estimation ability according to Steiger's Z test. The MDFV model proved to be the best model for the considered carboquinone derivatives according to the defined information and prediction criteria, Kubinyi function, and Akaike's weights.
Orfanos, Stelios
2010-01-01
In Greek traditional teaching, many significant concepts are introduced in a sequence that does not provide the students with all the information necessary to comprehend them. We consider that understanding concepts and the relations among them is greatly facilitated by the use of modelling tools, given that the modelling process forces students to turn their vague, imprecise ideas into explicit causal relationships. It is not uncommon to find students who are able to solve problems by using complicated relations without getting a qualitative, in-depth grip on them. Researchers have already shown that students often have formal mathematical and physical knowledge without a qualitative understanding of basic concepts and relations. The aim of this communication is to present some of the results of our investigation into modelling activities related to kinematical concepts. For this purpose, we used ModellingSpace, an environment that was especially designed to allow students from eleven to seventeen years old to express their ideas and gradually develop them. ModellingSpace enables students to build their own models and offers the choice of directly observing simulations of real objects and/or all the other alternative forms of representation (tables of values, graphic representations and bar-charts). In order to answer the questions, the students formulate hypotheses, create models, compare their hypotheses with the representations of their models, and modify or create other models when their hypotheses do not agree with the representations. In traditional teaching, students are trained to treat formulas as the most important strategy. Students often recall formulas in order to use them, without an in-depth understanding of them. They commonly use quantitative reasoning, since it is the type primarily used in teaching, although it may not be fully understood by them.
Polymorphic ethyl alcohol as a model system for the quantitative study of glassy behaviour
Energy Technology Data Exchange (ETDEWEB)
Fischer, H.E.; Schober, H.; Gonzalez, M.A. [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France); Bermejo, F.J.; Fayos, R.; Dawidowski, J. [Consejo Superior de Investigaciones Cientificas, Madrid (Spain); Ramos, M.A.; Vieira, S. [Universidad Autonoma de Madrid (Spain)
1997-04-01
The nearly universal transport and dynamical properties of amorphous materials or glasses are investigated. Reasonably successful phenomenological models have been developed to account for these properties as well as the behaviour near the glass-transition, but quantitative microscopic models have had limited success. One hindrance to these investigations has been the lack of a material which exhibits glass-like properties in more than one phase at a given temperature. This report presents results of neutron-scattering experiments for one such material, ordinary ethyl alcohol, which promises to be a model system for future investigations of glassy behaviour. (author). 8 refs.
Quantitative explanation of circuit experiments and real traffic using the optimal velocity model
Nakayama, Akihiro; Kikuchi, Macoto; Shibata, Akihiro; Sugiyama, Yuki; Tadaki, Shin-ichi; Yukawa, Satoshi
2016-04-01
We have experimentally confirmed that the occurrence of a traffic jam is a dynamical phase transition (Tadaki et al 2013 New J. Phys. 15 103034, Sugiyama et al 2008 New J. Phys. 10 033001). In this study, we investigate whether the optimal velocity (OV) model can quantitatively explain the results of experiments. The occurrence and non-occurrence of jammed flow in our experiments agree with the predictions of the OV model. We also propose a scaling rule for the parameters of the model. Using this rule, we obtain critical density as a function of a single parameter. The obtained critical density is consistent with the observed values for highway traffic.
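The optimal velocity model referenced above is compact enough to sketch. This is a minimal ring-road version with an illustrative tanh optimal-velocity function; the coefficients and the simple Euler discretization are assumptions, not the calibrated values of the study.

```python
import numpy as np

def ov(dx, vmax=2.0):
    # Optimal velocity function; a tanh form is standard for the OV model
    # (the coefficients here are illustrative).
    return vmax * (np.tanh(dx - 2.0) + np.tanh(2.0)) / 2.0

def step(x, v, L, a=1.0, dt=0.01):
    # One Euler step of dv_i/dt = a * (V(headway_i) - v_i) for cars on a
    # ring road of length L; headway is the gap to the car ahead.
    headway = (np.roll(x, -1) - x) % L
    v_new = v + dt * a * (ov(headway) - v)
    x_new = (x + dt * v) % L
    return x_new, v_new
```

Uniform flow (equal headways, each car at the optimal velocity for that headway) is a steady state of these equations; jam formation in the model corresponds to this state losing stability as the density crosses a critical value.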
Monogenic mouse models of autism spectrum disorders: Common mechanisms and missing links.
Hulbert, S W; Jiang, Y-H
2016-05-03
Autism spectrum disorders (ASDs) present unique challenges in the fields of genetics and neurobiology because of the clinical and molecular heterogeneity underlying these disorders. Genetic mutations found in ASD patients provide opportunities to dissect the molecular and circuit mechanisms underlying autistic behaviors using animal models. Ongoing studies of genetically modified models have offered critical insight into possible common mechanisms arising from different mutations, but links between molecular abnormalities and behavioral phenotypes remain elusive. The challenges encountered in modeling autism in mice demand a new analytic paradigm that integrates behavioral assessment with circuit-level analysis in genetically modified models with strong construct validity.
Kurowska, Zuzanna; Jewett, Michael; Brattås, Per Ludvik; Jimenez-Ferrer, Itzia; Kenéz, Xuyian; Björklund, Tomas; Nordström, Ulrika; Brundin, Patrik; Swanberg, Maria
2016-08-23
Motor symptoms in Parkinson's disease are attributed to degeneration of midbrain dopaminergic neurons (DNs). Heterozygosity for Engrailed-1 (En1), one of the key factors for programming and maintenance of DNs, results in a parkinsonian phenotype featuring progressive degeneration of DNs in substantia nigra pars compacta (SNpc), decreased striatal dopamine levels and swellings of nigro-striatal axons in the SwissOF1-En1+/- mouse strain. In contrast, C57Bl/6-En1+/- mice do not display this neurodegenerative phenotype, suggesting that susceptibility to En1 heterozygosity is genetically regulated. Our goal was to identify quantitative trait loci (QTLs) that regulate the susceptibility to PD-like neurodegenerative changes in response to loss of one En1 allele. We intercrossed SwissOF1-En1+/- and C57Bl/6 mice to obtain F2 mice with mixed genomes and analyzed number of DNs in SNpc and striatal axonal swellings in 120 F2-En1+/- 17 week-old male mice. Linkage analyses revealed 8 QTLs linked to number of DNs (p = 2.4e-09, variance explained = 74%), 7 QTLs linked to load of axonal swellings (p = 1.7e-12, variance explained = 80%) and 8 QTLs linked to size of axonal swellings (p = 7.0e-11, variance explained = 74%). These loci should be of prime interest for studies of susceptibility to Parkinson's disease-like damage in rodent disease models and considered in clinical association studies in PD.
A prosthesis-specific multi-link segment model of lower-limb amputee sprinting.
Rigney, Stacey M; Simmons, Anne; Kark, Lauren
2016-10-03
Lower-limb amputees commonly utilize non-articulating energy storage and return (ESAR) prostheses for high impact activities such as sprinting. Despite these prostheses lacking an articulating ankle joint, amputee gait analysis conventionally features a two-link segment model of the prosthetic foot. This paper investigated the effects of the selected link segment model's marker-set and geometry on a unilateral amputee sprinter's calculated lower-limb kinematics, kinetics and energetics. A total of five lower-limb models of the Ottobock® 1E90 Sprinter were developed, including two conventional shank-foot models that each used a different version of the Plug-in-Gait (PiG) marker-set to test the effect of prosthesis ankle marker location. Two Hybrid prosthesis-specific models were then developed, also using the PiG marker-sets, with the anatomical shank and foot replaced by prosthesis-specific geometry separated into two segments. Finally, a Multi-link segment (MLS) model was developed, consisting of six segments for the prosthesis as defined by a custom marker-set. All full-body musculoskeletal models were tested using four trials of experimental marker trajectories within OpenSim 3.2 (Stanford, California, USA) to find the affected and unaffected hip, knee and ankle kinematics, kinetics and energetics. The geometry of the selected lower-limb prosthesis model was found to significantly affect all variables on the affected leg (p < 0.05), the marker-set to affect only a subset of the variables on the affected leg, and neither to affect the unaffected leg variables. The results indicate that the omission of prosthesis-specific spatial, inertial and elastic properties from full-body models significantly affects the calculated amputee gait characteristics, and we therefore recommend the implementation of a MLS model.
Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution.
Rohlfs, Rori V; Nielsen, Rasmus
2015-09-01
A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson Kreitman Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity for a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence…
Incorporation of caffeine into a quantitative model of fatigue and sleep.
Puckeridge, M; Fulcher, B D; Phillips, A J K; Robinson, P A
2011-03-21
A recent physiologically based model of human sleep is extended to incorporate the effects of caffeine on sleep-wake timing and fatigue. The model includes the sleep-active neurons of the hypothalamic ventrolateral preoptic area (VLPO), the wake-active monoaminergic brainstem populations (MA), their interactions with cholinergic/orexinergic (ACh/Orx) input to MA, and circadian and homeostatic drives. We model two effects of caffeine on the brain due to competitive antagonism of adenosine (Ad): (i) a reduction in the homeostatic drive and (ii) an increase in cholinergic activity. By comparing the model output to experimental data, constraints are determined on the parameters that describe the action of caffeine on the brain. In accord with experiment, the ranges of these parameters imply significant variability in caffeine sensitivity between individuals, with caffeine's effectiveness in reducing fatigue being highly dependent on an individual's tolerance, and past caffeine and sleep history. Although there are wide individual differences in caffeine sensitivity and thus in parameter values, once the model is calibrated for an individual it can be used to make quantitative predictions for that individual. A number of applications of the model are examined, using exemplar parameter values, including: (i) quantitative estimation of the sleep loss and the delay to sleep onset after taking caffeine for various doses and times; (ii) an analysis of the system's stable states showing that the wake state during sleep deprivation is stabilized after taking caffeine; and (iii) comparing model output successfully to experimental values of subjective fatigue reported in a total sleep deprivation study examining the reduction of fatigue with caffeine. This model provides a framework for quantitatively assessing optimal strategies for using caffeine, on an individual basis, to maintain performance during sleep deprivation.
The optimal hyperspectral quantitative models for chlorophyll-a of chlorella vulgaris
Cheng, Qian; Wu, Xiuju
2009-09-01
Chlorophyll-a of Chlorella vulgaris is related to its reflectance spectrum. Based on hyperspectral measurements of Chlorella vulgaris, the hyperspectral characteristics of Chlorella vulgaris and the optimal hyperspectral quantitative models for chlorophyll-a (Chl-a) estimation were investigated in an in situ experiment. The results showed that the optimal hyperspectral quantitative model for Chlorella vulgaris was Chla = 180.5 + 1125787(R700)' + 2.4×10^9[(R700)']^2 (P…). For Chlorella vulgaris, two reflectance crests appeared around 540 nm and 700 nm, and their locations shifted toward longer wavelengths as the Chl-a concentration increased. The reflectance of Chlorella vulgaris decreased with increasing Chl-a concentration at 540 nm, but increased at 700 nm.
Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa
We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on the propagation and combination of the effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and decide the effect on a destination node based on the divided ranges. In combining effects, we decide the effect of each arc using its contribution degree and sum all effects. Through application to practical models, it is confirmed that there are no differences between results obtained by quantitative relations and results obtained by the proposed method at the 5% risk level.
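The propagation-and-combination step can be caricatured in a few lines of Monte Carlo. This is a sketch under strong assumptions: the landmark-divided ranges are collapsed into a single scaling per arc, and the arc strengths and contribution degrees are invented.

```python
import random

def propagate(effect, strength):
    # Propagate an effect along one arc; the paper's landmark-based range
    # logic is reduced here to a simple scaling (illustrative assumption).
    return effect * strength

def simulate(n_trials, arcs, seed=1):
    # arcs: list of (strength, contribution_degree) pairs feeding one node.
    # Each trial samples a source effect, propagates it along every arc,
    # and combines the arc results weighted by contribution degree.
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_trials):
        src = rng.uniform(-1.0, 1.0)
        total = sum(contrib * propagate(src, s) for s, contrib in arcs)
        outcomes.append(total)
    return sum(outcomes) / n_trials
```

Repeating the trial many times yields the statistical values (here just the mean) on which the evaluation at a chosen risk rate would be based.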
From classical genetics to quantitative genetics to systems biology: modeling epistasis.
Directory of Open Access Journals (Sweden)
David L Aylor
2008-03-01
Gene expression data has been used in lieu of phenotype in both classical and quantitative genetic settings. These two disciplines have separate approaches to measuring and interpreting epistasis, which is the interaction between alleles at different loci. We propose a framework for estimating and interpreting epistasis from a classical experiment that combines the strengths of each approach. A regression analysis step accommodates the quantitative nature of expression measurements by estimating the effect of gene deletions plus any interaction. Effects are selected by significance such that a reduced model describes each expression trait. We show how the resulting models correspond to specific hierarchical relationships between two regulator genes and a target gene. These relationships are the basic units of genetic pathways and genomic system diagrams. Our approach can be extended to analyze data from a variety of experiments, multiple loci, and multiple environments.
Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model
Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi
2017-09-01
Residues of harmful dyes such as Auramine O (AO) in herb and food products threaten the health of people, so fast and sensitive detection techniques for the residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used for the quantitative determination of AO by combining it with an improved partial least-squares regression (PLSR) model in this paper. Absorbance of herbal samples with different concentrations was obtained by THz-TDS in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences of different concentrations, provided a clear criterion for the input interval selection, and improved the accuracy of the detection result. The experimental results indicated that the combination of THz spectroscopy and 2DCOS-PLSR is an excellent quantitative analysis method.
Jin, Yan; Huang, Jing-feng; Peng, Dai-liang
2009-04-01
Ecological compensation is becoming a key, multidisciplinary issue in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences of the ecological compensation were significant among all the counties or districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors.
Institute of Scientific and Technical Information of China (English)
Yoshito Hirata; Koichiro Akakura; Celestia S.Higano; Nicholas Bruchovsky; Kazuyuki Aihara
2012-01-01
If a mathematical model is to be used in the diagnosis, treatment, or prognosis of a disease, it must describe the inherent quantitative dynamics of the state. An ideal candidate disease is prostate cancer owing to the fact that it is characterized by an excellent biomarker, prostate-specific antigen (PSA), and also by a predictable response to treatment in the form of androgen suppression therapy. Despite a high initial response rate, the cancer will often relapse to a state of androgen independence which no longer responds to manipulations of the hormonal environment. In this paper, we present relevant background information and a quantitative mathematical model that potentially can be used in the optimal management of patients to cope with biochemical relapse as indicated by a rising PSA.
UML-based Modeling of the Simulation System for the Link16 Terminal
Institute of Scientific and Technical Information of China (English)
狄元博; 王运栋; 陆小龙; 罗壮─
2010-01-01
The Link16 data link is a communication, navigation, and identification system used to exchange real-time tactical information. The Link16 terminal simulation system models the working principles of the communication part of Link16 and implements the information-distribution functions that a platform in the data-link network must provide. Using UML (Unified Modeling Language) as the modeling tool to model the Link16 terminal simulation system gives a better description of the relationships among the objects inside the terminal simulation system, makes the system extensible, reusable and easy to maintain, and raises development efficiency.
Igor Shuryak; Ekaterina Dadachova
2016-01-01
Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil co...
DEFF Research Database (Denmark)
Han, Xue; Sandels, Claes; Zhu, Kun;
2013-01-01
…comprising distributed generation, active demand and electric vehicles. Subsequently, a quantitative analysis was made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation results show that in general the DER deployment brings in possibilities to reduce the power losses and voltage drops by compensating power from the local generation and optimizing the local load profiles.
Danielson, Steven R.; Held, Jason M.; Oo, May; Riley, Rebeccah; Gibson, Bradford W.; Andersen, Julie K.
2011-01-01
Differential cysteine oxidation within mitochondrial Complex I has been quantified in an in vivo oxidative stress model of Parkinson disease. We developed a strategy that incorporates rapid and efficient immunoaffinity purification of Complex I followed by differential alkylation and quantitative detection using sensitive mass spectrometry techniques. This method allowed us to quantify the reversible cysteine oxidation status of 34 distinct cysteine residues out of a total 130 present in muri...
Toxicity Mechanisms of the Food Contaminant Citrinin: Application of a Quantitative Yeast Model
Amparo Pascual-Ahuir; Elena Vanacloig-Pedros; Markus Proft
2014-01-01
Mycotoxins are important food contaminants and a serious threat for human nutrition. However, in many cases the mechanisms of toxicity for this diverse group of metabolites are poorly understood. Here we apply live cell gene expression reporters in yeast as a quantitative model to unravel the cellular defense mechanisms in response to the mycotoxin citrinin. We find that citrinin triggers a fast and dose dependent activation of stress responsive promoters such as GRE2 or SOD2. More specifical...
Hendriks, A.J.; Traas, T.P.; Huijbregts, M.A.J.
2005-01-01
To protect thousands of species from thousands of chemicals released in the environment, various risk assessment tools have been developed. Here, we link quantitative structure-activity relationships (QSARs) for response concentrations in water (LC50) to critical concentrations in organisms (C-50) b
Analytical model and figures of merit for filtered Microwave Photonic Links.
Gasulla, Ivana; Capmany, José
2011-09-26
The concept of filtered Microwave Photonic Links is proposed in order to provide the most general and versatile description of complex analog photonic systems. We develop a field propagation model where a global optical filter, characterized by its optical transfer function, embraces all the intermediate optical components in a linear link. We assume a non-monochromatic light source characterized by an arbitrary spectral distribution which has a finite linewidth spectrum and consider both intensity modulation and phase modulation with balanced and single detection. Expressions leading to the computation of the main figures of merit concerning the link gain, noise and intermodulation distortion are provided which, to our knowledge, are not available in the literature. The usefulness of this derivation resides in the capability to directly provide performance criteria results for complex links just by substituting in the overall closed-form formulas the numerical or measured optical transfer function characterizing the link. This theory is presented thus as a potential tool for a wide range of relevant microwave photonic application cases which is extendable to multiport radio over fiber systems.
Dynamic Modelling and Trajectory Tracking of Parallel Manipulator with Flexible Link
Directory of Open Access Journals (Sweden)
Chen Zhengsheng
2013-09-01
Full Text Available This paper focuses on dynamic modelling and real-time control of a parallel manipulator with flexible links. The Lagrange principle and the assumed modes method (AMM) substructure technique are used to formulate the dynamic model of a two-degrees-of-freedom (2-DOF) parallel manipulator with flexible links. Then, the singular perturbation technique (SPT) is used to decompose the nonlinear dynamic system into slow time-scale and fast time-scale subsystems. Furthermore, the SPT is employed to transform the differential algebraic equations (DAEs) for the kinematic constraints into explicit ordinary differential equations (ODEs), which makes real-time control possible. In addition, a novel composite control scheme is presented: computed torque control is applied to the slow subsystem and an H∞ technique to the fast subsystem, taking account of model uncertainty and outside disturbance. The simulation results show that the composite control can effectively achieve fast and accurate tracking control.
Integration of CFD codes and advanced combustion models for quantitative burnout determination
Energy Technology Data Exchange (ETDEWEB)
Javier Pallares; Inmaculada Arauzo; Alan Williams [University of Zaragoza, Zaragoza (Spain). Centre of Research for Energy Resources and Consumption (CIRCE)
2007-10-15
CFD codes and advanced kinetics combustion models are extensively used to predict coal burnout in large utility boilers. Modelling approaches based on CFD codes can accurately solve the fluid dynamics equations involved in the problem, but this is usually achieved by including simple combustion models. On the other hand, advanced kinetics combustion models can give a detailed description of coal combustion behaviour by using a simplified description of the flow field, usually obtained from a zone-method approach. Both approximations correctly describe general trends in coal burnout, but fail to predict quantitative values. In this paper a new methodology which takes advantage of both approximations is described. First, CFD solutions of the combustion conditions in the furnace of the Lamarmora power plant (ASM Brescia, Italy) were obtained for a number of different conditions and for three coals. These furnace conditions were then used as inputs for a more detailed chemical combustion model to predict coal burnout. In this model, devolatilization was modelled using a commercial macromolecular network pyrolysis model (FG-DVC). For char oxidation, an intrinsic reactivity approach including thermal annealing, ash inhibition and maceral effects was used. Results from the simulations were compared against plant experimental values, showing reasonable agreement in trends and quantitative values. 28 refs., 4 figs., 4 tabs.
A quantitative model of human DNA base excision repair. I. Mechanistic insights.
Sokhansanj, Bahrad A; Rodrigue, Garry R; Fitch, J Patrick; Wilson, David M
2002-04-15
Base excision repair (BER) is a multistep process involving the sequential activity of several proteins that cope with spontaneous and environmentally induced mutagenic and cytotoxic DNA damage. Quantitative kinetic data on single proteins of BER have been used here to develop a mathematical model of the BER pathway. This model was then employed to evaluate mechanistic issues and to determine the sensitivity of pathway throughput to altered enzyme kinetics. Notably, the model predicts considerably less pathway throughput than observed in experimental in vitro assays. This finding, in combination with the effects of pathway cooperativity on model throughput, supports the hypothesis of cooperation during abasic site repair and between the apurinic/apyrimidinic (AP) endonuclease, Ape1, and the 8-oxoguanine DNA glycosylase, Ogg1. The quantitative model also predicts that for 8-oxoguanine and hydrolytic AP site damage, short-patch Polbeta-mediated BER dominates, with minimal switching to the long-patch subpathway. Sensitivity analysis of the model indicates that the Polbeta-catalyzed reactions have the most control over pathway throughput, although other BER reactions contribute to pathway efficiency as well. The studies within represent a first step in a developing effort to create a predictive model for BER cellular capacity.
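The sequential hand-off this abstract describes can be sketched as a toy pathway simulation. The first-order kinetics, rate constants, and time step below are illustrative assumptions, not the paper's measured Michaelis-Menten parameters:

```python
def ber_throughput(k_glyc, k_ape1, k_pol, t_end=3600.0, dt=0.1, s0=1.0):
    """Toy sequential BER pathway: lesion -> abasic site -> nicked -> repaired.
    First-order hand-off between steps is an illustrative simplification of
    the paper's per-enzyme kinetics. Returns the repaired fraction at t_end."""
    lesion, ap_site, nicked, repaired = s0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        d1 = k_glyc * lesion * dt    # glycosylase removes the damaged base
        d2 = k_ape1 * ap_site * dt   # Ape1 incises the abasic site
        d3 = k_pol * nicked * dt     # Polbeta gap-filling plus ligation
        lesion -= d1
        ap_site += d1 - d2
        nicked += d2 - d3
        repaired += d3
    return repaired / s0
```

Slowing any single step depresses pathway throughput, which is the kind of sensitivity analysis the paper uses to conclude that the Polbeta-catalyzed reactions exert the most control.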
Assessment of sea ice-atmosphere links in CMIP5 models
Boland, Emma J. D.; Bracegirdle, Thomas J.; Shuckburgh, Emily F.
2016-09-01
The Arctic is currently undergoing drastic changes in climate, largely thought to be due to so-called `Arctic amplification', whereby local feedbacks enhance global warming. Recently, a number of observational and modelling studies have questioned what the implications of this change in Arctic sea ice extent might be for weather in Northern Hemisphere midlatitudes, and in particular whether recent extremely cold winters such as 2009/10 might be consistent with an influence from observed Arctic sea ice decline. However, the proposed mechanisms for these links have not been consistently demonstrated. In a uniquely comprehensive cross-season and cross-model study, we show that the CMIP5 models provide no support for a relationship between declining Arctic sea ice and a negative NAM, or between declining Barents-Kara sea ice and cold European temperatures. The lack of evidence for the proposed links is consistent with studies that report a low signal-to-noise ratio in these relationships. These results imply that, whilst links may exist between declining sea ice and extreme cold weather events in the Northern Hemisphere, the CMIP5 model experiments do not show this to be a leading order effect in the long-term. We argue that this is likely due to a combination of the limitations of the CMIP5 models and an indication of other important long-term influences on Northern Hemisphere climate.
Roy, Vivekananda; Evangelou, Evangelos; Zhu, Zhengyuan
2016-03-01
Spatial generalized linear mixed models (SGLMMs) are popular models for spatial data with a non-Gaussian response. Binomial SGLMMs with logit or probit link functions are often used to model spatially dependent binomial random variables. It is known that for independent binomial data, the robit regression model provides a more robust (against extreme observations) alternative to the more popular logistic and probit models. In this article, we introduce a Bayesian spatial robit model for spatially dependent binomial data. Since constructing a meaningful prior on the link function parameter as well as the spatial correlation parameters in SGLMMs is difficult, we propose an empirical Bayes (EB) approach for the estimation of these parameters as well as for the prediction of the random effects. The EB methodology is implemented by efficient importance sampling methods based on Markov chain Monte Carlo (MCMC) algorithms. Our simulation study shows that the robit model is robust against model misspecification, and our EB method results in estimates with less bias than a full Bayesian (FB) analysis. The methodology is applied to a Celastrus orbiculatus dataset and a Rhizoctonia root disease dataset. For the former, which is known to contain outlying observations, the robit model is shown to do better at predicting the spatial distribution of an invasive species. For the latter, our approach does as well as the classical models at predicting the severity of a root disease, as the probit link is shown to be appropriate. Though this article is written for binomial SGLMMs for brevity, the EB methodology is more general and can be applied to other types of SGLMMs. In the accompanying R package geoBayes, implementations for other SGLMMs such as Poisson and Gamma SGLMMs are provided.
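The robit link replaces the probit's Gaussian CDF with a Student-t CDF, whose heavier tails down-weight extreme observations. A minimal sketch (the degrees of freedom are fixed here purely for illustration; in the article the link parameter is handled within the empirical Bayes machinery):

```python
from scipy.stats import norm, t

def robit_inv_link(eta, df=7.0):
    """Robit inverse link: success probability = Student-t CDF of the linear
    predictor eta. df=7 is an arbitrary illustrative choice, not a value
    taken from the paper."""
    return t.cdf(eta, df)

# Heavier tails than probit: an extreme linear predictor still receives a
# non-negligible probability, which is what buys robustness to outliers.
tail_robit = robit_inv_link(-4.0)
tail_probit = norm.cdf(-4.0)
```

At eta = -4 the robit probability is roughly two orders of magnitude larger than the probit one, so a single extreme observation distorts the fit far less.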
Smith, M R; Nichols, S T; Constable, R T; Henkelman, R M
1991-05-01
The resolution of magnetic resonance images reconstructed using the discrete Fourier transform (DFT) algorithm is limited by the effective window generated by the finite data length. The transient error reconstruction approach (TERA) is an alternative reconstruction method based on autoregressive moving average (ARMA) modeling techniques. Quantitative measurements comparing the truncation artifacts present during DFT and TERA image reconstruction show that the modeling method substantially reduces these artifacts on "full" (256 × 256), "truncated" (256 × 192), and "severely truncated" (256 × 128) data sets without introducing the global amplitude distortion found in other modeling techniques. Two global measures for determining the success of modeling are suggested. Problem areas for one-dimensional modeling are examined and reasons for considering two-dimensional modeling are discussed. Analyses of both medical and phantom data reconstructions are presented.
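The window effect of finite data length can be reproduced in a few lines. This is a 1-D analogue of the zero-filled DFT reconstruction the paper compares TERA against (the box phantom and sizes are illustrative, not the paper's data):

```python
import numpy as np

def truncation_rmse(n_full=256, n_keep=128):
    """Reconstruct a sharp-edged 1-D phantom from full vs truncated k-space
    by zero-filling the missing high frequencies (the DFT analogue of the
    'severely truncated' 256 x 128 case). Returns the RMS reconstruction
    error, dominated by Gibbs ringing at the edges."""
    x = np.zeros(n_full)
    x[96:160] = 1.0                 # box phantom with sharp edges
    k = np.fft.fft(x)
    keep = n_keep // 2
    k_trunc = np.zeros_like(k)
    k_trunc[:keep] = k[:keep]       # keep only the low spatial frequencies
    k_trunc[-keep:] = k[-keep:]
    x_rec = np.fft.ifft(k_trunc).real
    return float(np.sqrt(np.mean((x_rec - x) ** 2)))
```

By Parseval's theorem the RMS error equals the energy of the discarded coefficients, so the "severely truncated" case is strictly worse than the "truncated" one, matching the trend the paper quantifies.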
Zhou, Yao; Wang, Huixuan; Lin, Wenshuang; Lin, Liqin; Gao, Yixian; Yang, Feng; Du, Mingming; Fang, Weiping; Huang, Jiale; Sun, Daohua; Li, Qingbiao
2013-10-01
Lacking quantitative experimental data and/or kinetic models that could mathematically depict the redox chemistry and the crystallization issue, the bottom-up formation kinetics of gold nanoparticles (GNPs) remains a challenge. We measured the dynamic regime of GNPs synthesized by l-ascorbic acid (representing a chemical approach) and/or foliar aqueous extract (a biogenic approach) via in situ spectroscopic characterization and established a redox-crystallization model which allows quantitative and separate parameterization of the nucleation and growth processes. The main results can be summarized as follows: (I) an efficient approach, i.e., dynamic in situ spectroscopic characterization assisted by the redox-crystallization model, was established for quantitative analysis of the overall formation kinetics of GNPs in solution; (II) formation of GNPs by both the chemical and the biogenic approaches proceeded through a slow nucleation stage followed by a growth stage that behaved as a mixed-order reaction, and, unlike the chemical approach, the biogenic method involved heterogeneous nucleation; (III) biosynthesis of flaky GNPs was a kinetically controlled process favored by relatively slow redox chemistry; and (IV) although GNP formation consists of two aspects, namely the redox chemistry and the crystallization issue, the latter was the rate-determining event that controls the dynamic regime of the whole physicochemical process.
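The "slow nucleation followed by growth" regime can be illustrated with the classical Finke-Watzky two-step mechanism, used here only as a stand-in for the authors' redox-crystallization model; the rate constants are invented for illustration, not fitted values:

```python
import numpy as np

def finke_watzky(a0=1.0, k1=1e-4, k2=0.5, t_end=200.0, n=20000):
    """Finke-Watzky two-step mechanism: slow nucleation A -> B (k1)
    followed by autocatalytic growth A + B -> 2B (k2), integrated with a
    simple forward-Euler scheme. Returns times and the particle signal B(t),
    the analogue of an in situ absorbance trace."""
    dt = t_end / n
    a, b = a0, 0.0
    bs = np.empty(n)
    for i in range(n):
        r_nuc = k1 * a          # nucleation
        r_gro = k2 * a * b      # autocatalytic surface growth
        a -= (r_nuc + r_gro) * dt
        b += (r_nuc + r_gro) * dt
        bs[i] = b
    return np.linspace(dt, t_end, n), bs
```

The output is the sigmoidal curve characteristic of such data: a long induction period while nuclei slowly form, then a rapid autocatalytic rise to full conversion.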
Quantitative Regression Models for the Prediction of Chemical Properties by an Efficient Workflow.
Yin, Yongmin; Xu, Congying; Gu, Shikai; Li, Weihua; Liu, Guixia; Tang, Yun
2015-10-01
Rapid safety assessment is increasingly needed for the growing number of chemicals handled by both chemical industries and regulators around the world. Traditional experimental methods can no longer meet this demand. With the development of information technology and the growth of experimental data, in silico modeling has become a practical and rapid alternative for the assessment of chemical properties, especially for the toxicity prediction of organic chemicals. In this study, a quantitative regression workflow was built in KNIME to predict chemical properties. With this regression workflow, quantitative values of chemical properties can be obtained, in contrast to binary- or multi-classification models that can only give qualitative results. To illustrate the usage of the workflow, two predictive models were constructed based on datasets of Tetrahymena pyriformis toxicity and aqueous solubility. The q²cv and q²test values of 5-fold cross-validation and external validation for both types of models were greater than 0.7, which implies that the models are robust and reliable, and that the workflow is convenient and efficient in predicting various chemical properties. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
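The q² statistics quoted here are cross-validated coefficients of determination. A minimal numpy sketch of 5-fold q² for an ordinary least-squares model (the actual workflow is built from KNIME nodes; this only shows the validation statistic):

```python
import numpy as np

def q2_cv(X, y, n_folds=5, seed=0):
    """Cross-validated q^2: 1 - PRESS / total sum of squares, where PRESS
    accumulates squared prediction errors on held-out folds of a simple
    linear model with intercept."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    press = 0.0
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        Xte = np.column_stack([np.ones(len(test)), X[test]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        press += np.sum((y[test] - Xte @ beta) ** 2)
    return 1.0 - press / np.sum((y - y.mean()) ** 2)
```

Because predictions are made only on held-out data, q² penalizes overfitting: a genuinely predictive model clears the 0.7 threshold, while a model fit to noise hovers near or below zero.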
Forward and adjoint radiance Monte Carlo models for quantitative photoacoustic imaging
Hochuli, Roman; Powell, Samuel; Arridge, Simon; Cox, Ben
2015-03-01
In quantitative photoacoustic imaging, the aim is to recover physiologically relevant tissue parameters such as chromophore concentrations or oxygen saturation. Obtaining accurate estimates is challenging due to the non-linear relationship between the concentrations and the photoacoustic images. Nonlinear least squares inversions designed to tackle this problem require a model of light transport, the most accurate of which is the radiative transfer equation. This paper presents a highly scalable Monte Carlo model of light transport that computes the radiance in 2D using a Fourier basis to discretise in angle. The model was validated against a 2D finite element model of the radiative transfer equation, and was used to compute gradients of an error functional with respect to the absorption and scattering coefficient. It was found that adjoint-based gradient calculations were much more robust to inherent Monte Carlo noise than a finite difference approach. Furthermore, the Fourier angular discretisation allowed very efficient gradient calculations as sums of Fourier coefficients. These advantages, along with the high parallelisability of Monte Carlo models, makes this approach an attractive candidate as a light model for quantitative inversion in photoacoustic imaging.
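The statistical character of Monte Carlo light transport is easy to see even in the simplest case, a purely absorbing slab with no scattering, which reproduces Beer-Lambert attenuation only up to sampling noise; that inherent noise is precisely what makes adjoint gradients preferable to finite differences. This sketch is far simpler than the paper's 2-D radiance model:

```python
import math
import random

def mc_transmittance(mu_a, thickness, n_photons=200_000, seed=42):
    """Monte Carlo estimate of transmittance through a purely absorbing
    slab at normal incidence: sample each photon's free path from an
    exponential with rate mu_a and count survivors. Converges to the
    Beer-Lambert value exp(-mu_a * thickness) with O(1/sqrt(N)) noise."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_photons):
        path = -math.log(1.0 - rng.random()) / mu_a  # sampled free path
        if path > thickness:
            survived += 1
    return survived / n_photons
```

A finite-difference gradient of such an estimator divides one noisy number by another, which is why the paper finds adjoint-based gradients far more robust to Monte Carlo noise.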
A linked hydrodynamic and water quality model for the Salton Sea
Chung, E.G.; Schladow, S.G.; Perez-Losada, J.; Robertson, D.M.
2008-01-01
A linked hydrodynamic and water quality model was developed and applied to the Salton Sea. The hydrodynamic component is based on the one-dimensional numerical model DLM. The water quality model is based on a new conceptual model for nutrient cycling in the Sea, and simulates temperature, total suspended sediment concentration, nutrient concentrations (PO4(3-), NO3(-) and NH4(+)), DO concentration and chlorophyll a concentration as functions of depth and time. Existing water temperature data from 1997 were used to verify that the model could accurately represent the onset and breakup of thermal stratification. 1999 is the only year with a near-complete dataset of water quality variables for the Salton Sea. The linked hydrodynamic and water quality model was run for 1999, and by adjustment of rate coefficients and other water quality parameters, a good match with the data was obtained. In this article, the model is fully described and model results for the effect of reductions in external phosphorus load on chlorophyll a distribution are presented. © 2008 Springer Science+Business Media B.V.
Section-level modeling of musical audio for linking performances to scores in Turkish makam music
Holzapfel, André; Simsekli, Umut; Sentürk, Sertan; Cemgil, Ali Taylan
2015-01-01
Section linking aims at relating structural units in the notation of a piece of music to their occurrences in a performance of the piece. In this paper, we address this task by presenting a score-informed hierarchical Hidden Markov Model (HHMM) for modeling musical audio signals on the temporal level of sections present in a composition, where the main idea is to explicitly model the long range and hierarchical structure of music signals. So far, approaches based on HHMM or similar methods we...
Influence of atmospheric turbulence on OAM-based FSO system with use of realistic link model
Li, Ming; Yu, Zhongyuan; Cvijetic, Milorad
2016-04-01
We study the influence of atmospheric turbulence on OAM-based free-space optical (FSO) communication by using the Pump turbulence spectrum model, which accurately characterizes a realistic FSO link. A comprehensive comparison is made between the Pump and Kolmogorov spectrum models with respect to the turbulence impact. The calculated results show that the turbulence-induced crosstalk obtained is lower, which means that a higher channel capacity is projected when the realistic Pump spectrum is used instead of the Kolmogorov spectrum. We believe these results show that the performance of a practical OAM-based FSO system is better than that predicted by the original Kolmogorov turbulence model.
Powell, Jeff R; Welsh, Allana; Hallin, Sara
2015-07-01
Microorganisms drive biogeochemical processes, but linking these processes to real changes in microbial communities under field conditions is not trivial. Here, we present a model-based approach to estimate independent contributions of microbial community shifts to ecosystem properties. The approach was tested empirically, using denitrification potential as our model process, in a spatial survey of arable land encompassing a range of edaphic conditions and two agricultural production systems. Soil nitrate was the most important single predictor of denitrification potential (the change in Akaike's information criterion, corrected for sample size, ΔAIC(c) = 20.29); however, the inclusion of biotic variables (particularly the evenness and size of denitrifier communities [ΔAIC(c) = 12.02], and the abundance of one denitrifier genotype [ΔAIC(c) = 18.04]) had a substantial effect on model precision, comparable to the inclusion of abiotic variables (biotic R2 = 0.28, abiotic R2 = 0.50, biotic + abiotic R2 = 0.76). This approach provides a valuable tool for explicitly linking microbial communities to ecosystem functioning. By making this link, we have demonstrated that including aspects of microbial community structure and diversity in biogeochemical models can improve predictions of nutrient cycling in ecosystems and enhance our understanding of ecosystem functionality.
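The ΔAICc values quoted in this abstract come from the small-sample corrected Akaike information criterion. For a least-squares model it reduces to a one-liner (k counts all estimated parameters, including the intercept and the residual variance):

```python
import math

def aicc(rss, n, k):
    """Small-sample corrected AIC for a least-squares fit:
    AIC = n*ln(RSS/n) + 2k, plus the correction 2k(k+1)/(n-k-1).
    Differences (delta-AICc) between candidate models are what rank
    predictors, as in the survey's comparison of abiotic and biotic terms."""
    return n * math.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)
```

A predictor earns its place only when the drop in residual sum of squares outweighs the penalty for the extra parameter; adding a parameter with no RSS improvement always raises AICc.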
Directory of Open Access Journals (Sweden)
Philip M. Jedrzejewski
2014-03-01
Full Text Available Glycoproteins represent the largest group of the growing number of biologically-derived medicines. The associated glycan structures and their distribution are known to have a large impact on pharmacokinetics. A modelling framework was developed to provide a link from the extracellular environment and its effect on intracellular metabolites to the distribution of glycans on the constant region of an antibody product. The main focus of this work is the mechanistic in silico reconstruction of the nucleotide sugar donor (NSD) metabolic network by means of 34 species mass balances and saturation kinetics for the 60 metabolic reactions involved. NSDs are the co-substrates of the glycosylation process in the Golgi apparatus, and their simulated dynamic intracellular concentration profiles were linked to an existing model describing the distribution of N-linked glycan structures on the antibody constant region. The modelling framework also describes the growth dynamics of the cell population by means of modified Monod kinetics. Simulation results match experimental data from a murine hybridoma cell line well. The result is a modelling platform which is able to describe the product glycoform based on extracellular conditions. It represents a first step towards the in silico prediction of the glycoform of a biotherapeutic and provides a platform for the optimisation of bioprocess conditions with respect to product quality.
Kim, Marlene T; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2016-01-01
Publicly available bioassay data often contain errors. Curating massive bioassay data, especially high-throughput screening (HTS) data, for Quantitative Structure-Activity Relationship (QSAR) modeling requires the assistance of automated data curation tools. Using automated data curation tools is beneficial to users, especially those without prior computer skills, because many platforms have been developed and optimized based on standardized requirements. As a result, users do not need to extensively configure the curation tool prior to the application procedure. In this chapter, a freely available automatic tool to curate and prepare HTS data for QSAR modeling purposes is described.
Ohno, Munekazu
2012-11-01
A quantitative phase-field model is developed for simulating microstructural pattern formation during nonisothermal solidification in dilute multicomponent alloys with arbitrary thermal and solutal diffusivities. By performing a matched asymptotic analysis, it is shown that the present model with antitrapping current terms reproduces the free-boundary problem of interest in the thin-interface limit. Convergence of the simulation outcome with decreasing interface thickness is demonstrated for nonisothermal free dendritic growth in binary alloys and for isothermal and nonisothermal free dendritic growth in a ternary alloy.
PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool
AlTurki, Musab
2011-01-01
Statistical model checking is an attractive formal analysis method for probabilistic systems such as cyber-physical systems, which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.
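The core estimator behind statistical model checking is embarrassingly parallel (simulation runs are independent), which is why parallelization pays off so well. A toy sketch for a bounded-reachability property of a two-state discrete Markov chain; the chain and property are invented for illustration, and PVeStA itself works on Maude specifications, not Python:

```python
import random

def smc_probability(step, init, target, horizon, n_runs=20_000, seed=7):
    """Monte Carlo statistical model checking: estimate the probability
    that a discrete-time Markov chain reaches `target` within `horizon`
    steps, by averaging over independent simulated runs."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        state = init
        for _ in range(horizon):
            state = step(state, rng)
            if state == target:
                hits += 1
                break
    return hits / n_runs

def step(state, rng):
    """Toy chain: state 0 moves to the absorbing state 1 with prob 0.5."""
    if state == 0 and rng.random() < 0.5:
        return 1
    return state
```

For this chain the exact bounded-reachability probability within 3 steps is 1 - 0.5³ = 0.875, and the estimate converges to it as the number of runs grows.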
Monakhova, Yulia B; Mushtakova, Svetlana P
2017-05-01
A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries generally between 95% and 105% were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on analysis of vitamins and caffeine in energy drinks and aromatic hydrocarbons in motor fuel with 10% error. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence/inaccessibility of reference materials.
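Once the pure-component spectral profiles have been resolved (by ICA in the paper), quantitation reduces to linear least squares under the Beer-Lambert-Bouguer law: a mixture spectrum is a concentration-weighted sum of pure spectra. A sketch with synthetic Gaussian bands standing in for real spectra:

```python
import numpy as np

def unmix(mix_spectrum, pure_spectra):
    """Least-squares concentrations for a mixture spectrum given resolved
    pure-component spectra (one per row). This is the quantitation step
    that follows the ICA resolution described in the abstract."""
    conc, *_ = np.linalg.lstsq(pure_spectra.T, mix_spectrum, rcond=None)
    return conc

# Synthetic example: two strongly overlapping Gaussian bands, 0.3/0.7 mix.
wl = np.linspace(0.0, 1.0, 200)
band1 = np.exp(-((wl - 0.40) / 0.08) ** 2)
band2 = np.exp(-((wl - 0.55) / 0.08) ** 2)
pure = np.vstack([band1, band2])
mixture = 0.3 * band1 + 0.7 * band2
```

Even with heavy spectral overlap the concentrations are recovered exactly, because the two pure spectra remain linearly independent; real data add noise and model error, hence the 95-105% recoveries reported.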
Kakimoto, Tetsuhiro; Okada, Kinya; Fujitaka, Keisuke; Nishio, Masashi; Kato, Tsuyoshi; Fukunari, Atsushi; Utsumi, Hiroyuki
2015-02-01
Podocytes are an essential component of the renal glomerular filtration barrier, and their injury plays an early and important role in progressive renal dysfunction. This makes quantification of podocyte marker immunoreactivity important for early detection of glomerular histopathological changes. Here we have applied a state-of-the-art automated computational method of glomerulus recognition, which we recently developed, to quantitatively study podocyte markers in a model of selective podocyte injury, namely the rat puromycin aminonucleoside (PAN) nephropathy model. We also retrospectively investigated mRNA expression levels of these markers in glomeruli isolated from the same formalin-fixed, paraffin-embedded kidney samples by laser microdissection. Among the examined podocyte markers, the immunopositive area and mRNA expression level of both podoplanin and synaptopodin were decreased in PAN glomeruli. The immunopositive area of podocin showed a slight decrease in PAN glomeruli, while its mRNA level showed no change. We also identified a novel podocyte injury marker, β-enolase, which was increased exclusively in podocytes of PAN glomeruli, similarly to another widely used marker, desmin. Thus, we have shown the specific application of a state-of-the-art computational method and retrospective mRNA expression analysis to quantitatively study the changes of various podocyte markers. The proposed methods will open new avenues for quantitative elucidation of renal glomerular histopathology. Copyright © 2014 Elsevier GmbH. All rights reserved.
Sensitive quantitative assays for tau and phospho-tau in transgenic mouse models
Acker, Christopher M.; Forest, Stefanie K.; Zinkowski, Ray; Davies, Peter; d’Abramo, Cristina
2012-01-01
Transgenic mouse models have been an invaluable resource in elucidating the complex roles of Aβ and tau in Alzheimer's disease. While many laboratories rely on qualitative or semi-quantitative techniques when investigating tau pathology, we have developed four Low-Tau Sandwich ELISAs that quantitatively assess different epitopes of tau relevant to Alzheimer's disease: total tau, pSer-202, pThr-231 and pSer-396/404. In this study, after comparing our assays to commercially available ELISAs, we demonstrate our assays' high specificity and quantitative capabilities using brain homogenates from tau transgenic mice: htau, JNPL3 and tau KO mice. All four ELISAs show excellent specificity for mouse and human tau, with no reactivity to tau KO animals. An age-dependent increase of serum tau in both tau transgenic models was also seen. Taken together, these assays are valuable methods to quantify tau and phospho-tau levels in transgenic animals, both by examining tau levels in brain and by measuring tau as a potential serum biomarker. PMID:22727277
González, Gloria M; Márquez, Jazmín; Treviño-Rangel, Rogelio de J; Palma-Nicolás, José P; Garza-González, Elvira; Ceceñas, Luis A; Gerardo González, J
2013-10-01
Systemic disease is the most severe clinical form of fusariosis, and its treatment poses a challenge due to the refractory response to antifungals. Treatment of murine Fusarium solani infection has been described in models that employ CFU quantitation in organs as a parameter of therapeutic efficacy. However, CFU counts do not precisely reproduce the amount of cells for filamentous fungi such as F. solani. In this study, we developed a murine model of disseminated fusariosis and compared the fungal burden with two methods: CFU counts and quantitative PCR. ICR and BALB/c mice received an intravenous injection of 1 × 10(7) conidia of F. solani per mouse. On days 2, 5, 7, and 9, mice of each strain were killed. The spleen and kidneys of each animal were removed and evaluated by qPCR and CFU determinations. Results from the CFU assay indicated that the spleen and kidneys had almost the same fungal burden in both BALB/c and ICR mice during the days of evaluation. In the qPCR assay, the spleen and kidney of each mouse strain showed increased fungal burden at each determination throughout the entire experiment. The fungal load determined by the qPCR assay was significantly greater than that determined from CFU measurements of tissue. qPCR could be considered a tool for quantitative evaluation of fungal burden in experimental disseminated F. solani infection.
Coupled dynamics of node and link states in complex networks: A model for language competition
Carro, Adrián; Miguel, Maxi San
2016-01-01
Inspired by language competition processes, we present a model of coupled evolution of node and link states. In particular, we focus on the interplay between the use of a language and the preference or attitude of the speakers towards it, which we model, respectively, as a property of the interactions between speakers (a link state) and as a property of the speakers themselves (a node state). Furthermore, we restrict our attention to the case of two socially equivalent languages and to socially inspired network topologies based on a mechanism of triadic closure. As opposed to most of the previous literature, where language extinction is an inevitable outcome of the dynamics, we find a broad range of possible asymptotic configurations, which we classify as: frozen extinction states, frozen coexistence states, and dynamically trapped coexistence states. Moreover, metastable coexistence states with very long survival times and displaying a non-trivial dynamics are found to be abundant. Interestingly, a system si...
Quantum Link Models with Many Rishon Flavors and with Many Colors
Bär, O; Schlittgen, B; Wiese, U J
2002-01-01
Quantum link models are a novel formulation of gauge theories in terms of discrete degrees of freedom. These degrees of freedom are described by quantum operators acting in a finite-dimensional Hilbert space. We show that for certain representations of the operator algebra, the usual Yang-Mills action is recovered in the continuum limit. The quantum operators can be expressed as bilinears of fermionic creation and annihilation operators called rishons. Using the rishon representation the quantum link Hamiltonian can be expressed entirely in terms of color-neutral operators. This allows us to study the large N_c limit of this model. In the 't Hooft limit we find an area law for the Wilson loop and a mass gap. Furthermore, the strong coupling expansion is a topological expansion in which graphs with handles and boundaries are suppressed.
Quantum link models with many rishon flavors and with many colors
Bär, O.; Brower, R.; Schlittgen, B.; Wiese, U.-J.
2002-03-01
Quantum link models are a novel formulation of gauge theories in terms of discrete degrees of freedom. These degrees of freedom are described by quantum operators acting in a finite-dimensional Hilbert space. We show that for certain representations of the operator algebra, the usual Yang-Mills action is recovered in the continuum limit. The quantum operators can be expressed as bilinears of fermionic creation and annihilation operators called rishons. Using the rishon representation the quantum link Hamiltonian can be expressed entirely in terms of color-neutral operators. This allows us to study the large N_c limit of this model. In the 't Hooft limit we find an area law for the Wilson loop and a mass gap. Furthermore, the strong coupling expansion is a topological expansion in which graphs with handles and boundaries are suppressed.
Simulink models for performance analysis of high speed DQPSK modulated optical link
Energy Technology Data Exchange (ETDEWEB)
Sharan, Lucky, E-mail: luckysharan@pilani.bits-pilani.ac.in; Rupanshi, E-mail: f2011222@pilani.bits-pilani.ac.in; Chaubey, V. K., E-mail: vkc@pilani.bits-pilani.ac.in [EEE Department, BITS-Pilani, Rajasthan, 333031 (India)
2016-03-09
This paper presents a design approach for developing simulation models to study and analyze the transmission of a 10 Gbps DQPSK signal over a single-channel peer-to-peer link using Matlab Simulink. The simulation model considers the different optical components used in link design, with their behavior first represented by theoretical interpretation, including the transmitter topology, the Mach-Zehnder Modulator (MZM) module, and the propagation model for optical fibers, thus allowing scope for direct realization in experimental configurations. It provides the flexibility to incorporate the various photonic components as either user-defined or fixed, and components can be added or removed from the model as the design requires. We describe the detailed operation and role of every component model and its representation in Simulink blocksets. Moreover, the developed model can be extended in future to support Dense Wavelength Division Multiplexing (DWDM) systems, thereby allowing high-speed transmission with N × 40 Gbps systems. The various compensation techniques and their influence on system performance can be easily investigated using such models.
A hierarchical statistical model for estimating population properties of quantitative genes
Directory of Open Access Journals (Sweden)
Wu Rongling
2002-06-01
Background: Earlier methods for detecting major genes responsible for a quantitative trait rely critically upon a well-structured pedigree in which the segregation pattern of genes exactly follows Mendelian inheritance laws. However, for many outcrossing species, such pedigrees are not available and genes also display population properties. Results: In this paper, a hierarchical statistical model is proposed to monitor the existence of a major gene based on its segregation and transmission across two successive generations. The model is implemented with an EM algorithm to provide maximum likelihood estimates for genetic parameters of the major locus. This new method is successfully applied to identify an additive gene having a large effect on stem height growth of aspen trees. The estimates of population genetic parameters for this major gene can be generalized to the original breeding population from which the parents were sampled. A simulation study is presented to evaluate finite sample properties of the model. Conclusions: A hierarchical model was derived for detecting major genes affecting a quantitative trait based on progeny tests of outcrossing species. The new model takes into account the population genetic properties of genes and is expected to enhance the accuracy, precision and power of gene detection.
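The EM-based estimation mentioned in this abstract can be illustrated with a deliberately simplified sketch: a two-component, equal-variance Gaussian mixture fitted by EM, standing in for the segregation of a major gene in a progeny test. The synthetic data, initialization, and equal-variance assumption are illustrative choices, not the paper's full hierarchical model.

```python
import numpy as np

def em_two_component(x, n_iter=200):
    """Minimal EM for a two-component Gaussian mixture with shared variance.

    Illustrates the kind of iterative likelihood maximization used to
    detect a major gene; the published hierarchical model is richer.
    """
    # Crude initialization from the data quantiles.
    mu = np.quantile(x, [0.25, 0.75]).astype(float)
    sigma, pi = x.std(), 0.5
    for _ in range(n_iter):
        # E-step: posterior probability each observation came from component 1.
        d0 = np.exp(-0.5 * ((x - mu[0]) / sigma) ** 2)
        d1 = np.exp(-0.5 * ((x - mu[1]) / sigma) ** 2)
        r = pi * d1 / ((1 - pi) * d0 + pi * d1)
        # M-step: update mixing proportion, component means, and shared sigma.
        pi = r.mean()
        mu[0] = np.sum((1 - r) * x) / np.sum(1 - r)
        mu[1] = np.sum(r * x) / np.sum(r)
        var = (np.sum((1 - r) * (x - mu[0]) ** 2)
               + np.sum(r * (x - mu[1]) ** 2)) / len(x)
        sigma = np.sqrt(var)
    return pi, mu, sigma

rng = np.random.default_rng(0)
# Synthetic phenotypes: two genotype classes with means 0 and 4.
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(4.0, 1.0, 500)])
pi, mu, sigma = em_two_component(x)
print(pi, sorted(mu), sigma)
```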
A quantitative model to assess Social Responsibility in Environmental Science and Technology.
Valcárcel, M; Lucena, R
2014-01-01
The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation and assessment are nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. The model is based on well-established written standards such as the EFQM Excellence model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data, and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to environmental research centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article.
A training set selection strategy for a universal near-infrared quantitative model.
Jia, Yan-Hua; Liu, Xu-Ping; Feng, Yan-Chun; Hu, Chang-Qin
2011-06-01
The purpose of this article is to propose an empirical solution to the problem of how many clusters of complex samples should be selected to construct the training set for a universal near infrared quantitative model based on the Naes method. The sample spectra were hierarchically classified into clusters by Ward's algorithm and Euclidean distance. If the sample spectra were classified into two clusters, the 1/50 of the largest Heterogeneity value in the cluster with larger variation was set as the threshold to determine the total number of clusters. One sample was then randomly selected from each cluster to construct the training set, and the number of samples in training set equaled the number of clusters. In this study, 98 batches of rifampicin capsules with API contents ranging from 50.1% to 99.4% were studied with this strategy. The root mean square errors of cross validation and prediction were 2.54% and 2.31% for the model for rifampicin capsules, respectively. Then, we evaluated this model in terms of outlier diagnostics, accuracy, precision, and robustness. We also used the strategy of training set sample selection to revalidate the models for cefradine capsules, roxithromycin tablets, and erythromycin ethylsuccinate tablets, and the results were satisfactory. In conclusion, all results showed that this training set sample selection strategy assisted in the quick and accurate construction of quantitative models using near-infrared spectroscopy.
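The training-set selection strategy described above (Ward's algorithm with Euclidean distance, then one randomly chosen sample per cluster) can be sketched as follows. The spectra are synthetic, and the heterogeneity-threshold rule for fixing the number of clusters is omitted for brevity.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def select_training_set(spectra, n_clusters, seed=0):
    """Pick one spectrum per Ward/Euclidean cluster as the training set."""
    Z = linkage(spectra, method="ward", metric="euclidean")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    rng = np.random.default_rng(seed)
    # One randomly selected sample index from each cluster.
    return [int(rng.choice(np.flatnonzero(labels == c)))
            for c in range(1, n_clusters + 1)]

rng = np.random.default_rng(1)
# Synthetic "spectra": three groups of 20 samples, 50 wavelengths each.
base = rng.normal(size=(3, 50))
spectra = np.vstack([b + 0.05 * rng.normal(size=(20, 50)) for b in base])
train_idx = select_training_set(spectra, n_clusters=3)
print(train_idx)
```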
A semi-quantitative model for risk appreciation and risk weighing
DEFF Research Database (Denmark)
Bos, Peter M.J.; Boon, Polly E.; van der Voet, Hilko
2009-01-01
Risk managers need detailed information on (1) the type of effect, (2) the size (severity) of the expected effect(s) and (3) the fraction of the population at risk to decide on well-balanced risk reduction measures. A previously developed integrated probabilistic risk assessment (IPRA) model … provides quantitative information on these three parameters. A semi-quantitative tool is presented that combines information on these parameters into easy-to-read charts that will facilitate risk evaluations of exposure situations and decisions on risk reduction measures. This tool is based on a concept … of health impact categorization that has been successfully in force for several years within several emergency planning programs. Four health impact categories are distinguished: No-Health Impact, Low-Health Impact, Moderate-Health Impact and Severe-Health Impact. Two different charts are presented …
Oliveira, Manuel Au-Yong; Ferreira, João José Pinto
2011-01-01
We intend to use multiple case studies to develop a theoretical model concerning the contemporary phenomenon of organizational innovativeness and its link to interoperability. We are interested in particular in interoperability as pertaining to people and organizations able to operate in conjunction (together) to produce innovation. Interoperability can be defined as “the ability of a system or an organization to work seamless[ly] with other systems or organization[s] without any special effo...
Modelling bacterial growth in quantitative microbiological risk assessment: is it possible?
Nauta, Maarten J
2002-03-01
Quantitative microbiological risk assessment (QMRA), predictive modelling and HACCP may be used as tools to increase food safety and can be integrated fruitfully for many purposes. However, when QMRA is applied for public health issues like the evaluation of the status of public health, existing predictive models may not be suited to model bacterial growth. In this context, precise quantification of risks is more important than in the context of food manufacturing alone. In this paper, the modular process risk model (MPRM) is briefly introduced as a QMRA modelling framework. This framework can be used to model the transmission of pathogens through any food pathway, by assigning one of six basic processes (modules) to each of the processing steps. Bacterial growth is one of these basic processes. For QMRA, models of bacterial growth need to be expressed in terms of probability, for example to predict the probability that a critical concentration is reached within a certain amount of time. In contrast, available predictive models are developed and validated to produce point estimates of population sizes and therefore do not fit with this requirement. Recent experience from a European risk assessment project is discussed to illustrate some of the problems that may arise when predictive growth models are used in QMRA. It is suggested that a new type of predictive models needs to be developed that incorporates modelling of variability and uncertainty in growth.
New ghost-node method for linking different models with varied grid refinement
James, S.C.; Dickinson, J.E.; Mehl, S.W.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Eddebbarh, A.-A.
2006-01-01
A flexible, robust method for linking grids of locally refined ground-water flow models constructed with different numerical methods is needed to address a variety of hydrologic problems. This work outlines and tests a new ghost-node model-linking method for a refined "child" model that is contained within a larger and coarser "parent" model that is based on the iterative method of Steffen W. Mehl and Mary C. Hill (2002, Advances in Water Res., 25, p. 497-511; 2004, Advances in Water Res., 27, p. 899-912). The method is applicable to steady-state solutions for ground-water flow. Tests are presented for a homogeneous two-dimensional system that has matching grids (parent cells border an integer number of child cells) or nonmatching grids. The coupled grids are simulated by using the finite-difference and finite-element models MODFLOW and FEHM, respectively. The simulations require no alteration of the MODFLOW or FEHM models and are executed using a batch file on Windows operating systems. Results indicate that when the grids are matched spatially so that nodes and child-cell boundaries are aligned, the new coupling technique has error nearly equal to that when coupling two MODFLOW models. When the grids are nonmatching, model accuracy is slightly increased compared to that for matching-grid cases. Overall, results indicate that the ghost-node technique is a viable means to couple distinct models because the overall head and flow errors relative to the analytical solution are less than if only the regional coarse-grid model was used to simulate flow in the child model's domain.
Majda, Andrew J; Gershgorin, Boris
2011-08-02
Understanding and improving the predictive skill of imperfect models for complex systems in their response to external forcing is a crucial issue in diverse applications such as climate change science. Equilibrium statistical fidelity of the imperfect model on suitable coarse-grained variables is a necessary but not sufficient condition for this predictive skill, and elementary examples demonstrating this are given here. With equilibrium statistical fidelity of the imperfect model, a direct link is developed between the predictive fidelity of specific test problems in the training phase, where the perfect natural system is observed, and the predictive skill for the forced response of the imperfect model, by combining appropriate concepts from information theory with concepts based on the fluctuation dissipation theorem. A suite of mathematically tractable models with nontrivial eddy diffusivity, variance, and intermittent non-Gaussian statistics, mimicking crucial features of atmospheric tracers, together with a stochastically forced standard eddy diffusivity approximation with model error, is used to illustrate this link.
Modelling the multidimensional niche by linking functional traits to competitive performance.
Maynard, Daniel S; Leonard, Kenneth E; Drake, John M; Hall, David W; Crowther, Thomas W; Bradford, Mark A
2015-07-22
Linking competitive outcomes to environmental conditions is necessary for understanding species' distributions and responses to environmental change. Despite this importance, generalizable approaches for predicting competitive outcomes across abiotic gradients are lacking, driven largely by the highly complex and context-dependent nature of biotic interactions. Here, we present and empirically test a novel niche model that uses functional traits to model the niche space of organisms and predict competitive outcomes of co-occurring populations across multiple resource gradients. The model makes no assumptions about the underlying mode of competition and instead applies to those settings where relative competitive ability across environments correlates with a quantifiable performance metric. To test the model, a series of controlled microcosm experiments were conducted using genetically related strains of a widespread microbe. The model identified trait microevolution and performance differences among strains, with the predicted competitive ability of each organism mapped across a two-dimensional carbon and nitrogen resource space. Areas of coexistence and competitive dominance between strains were identified, and the predicted competitive outcomes were validated in approximately 95% of the pairings. By linking trait variation to competitive ability, our work demonstrates a generalizable approach for predicting and modelling competitive outcomes across changing environmental contexts.
Rieger, TR; Musante, CJ
2016-01-01
Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed “virtual patients.” In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777
Precise Quantitative Analysis of Probabilistic Business Process Model and Notation Workflows
DEFF Research Database (Denmark)
Herbert, Luke Thomas; Sharp, Robin
2013-01-01
We present a framework for modeling and analysis of real-world business workflows. We present a formalized core subset of the business process modeling and notation (BPMN) and then proceed to extend this language with probabilistic nondeterministic branching and general-purpose reward annotations. … We present an algorithm for the translation of such models into Markov decision processes (MDP) expressed in the syntax of the PRISM model checker. This enables precise quantitative analysis of business processes for the following properties: transient and steady-state probabilities, the timing, occurrence and ordering of events, reward-based properties, and best- and worst-case scenarios. We develop a simple example of a medical workflow and demonstrate the utility of this analysis in accurate provisioning of drug stocks. Finally, we suggest a path to building upon these techniques to cover …
Influence of gender constancy and social power on sex-linked modeling.
Bussey, K; Bandura, A
1984-12-01
Competing predictions derived from cognitive-developmental theory and social learning theory concerning sex-linked modeling were tested. In cognitive-developmental theory, gender constancy is considered a necessary prerequisite for the emulation of same-sex models, whereas according to social learning theory, sex-role development is promoted through a vast system of social influences with modeling serving as a major conveyor of sex role information. In accord with social learning theory, even children at a lower level of gender conception emulated same-sex models in preference to opposite-sex ones. Level of gender constancy was associated with higher emulation of both male and female models rather than operating as a selective determinant of modeling. This finding corroborates modeling as a basic mechanism in the sex-typing process. In a second experiment we explored the limits of same-sex modeling by pitting social power against the force of collective modeling of different patterns of behavior by male and female models. Social power over activities and rewarding resources produced cross-sex modeling in boys, but not in girls. This unexpected pattern of cross-sex modeling is explained by the differential sex-typing pressures that exist for boys and girls and socialization experiences that heighten the attractiveness of social power for boys.
Quantitative performance metrics for stratospheric-resolving chemistry-climate models
Directory of Open Access Journals (Sweden)
D. W. Waugh
2008-06-01
A set of performance metrics is applied to stratospheric-resolving chemistry-climate models (CCMs) to quantify their ability to reproduce key processes relevant for stratospheric ozone. The same metrics are used to assign a quantitative measure of performance ("grade") to each model-observations comparison shown in Eyring et al. (2006). A wide range of grades is obtained, both for different diagnostics applied to a single model and for the same diagnostic applied to different models, highlighting the wide range in ability of the CCMs to simulate key processes in the stratosphere. No model scores high or low on all tests, but differences in the performance of models can be seen, especially for transport processes, where several models get low grades on multiple tests. The grades are used to assign relative weights to the CCM projections of 21st century total ozone. However, only small differences are found between weighted and unweighted multi-model mean total ozone projections. This study raises several issues with the grading and weighting of CCMs that need further examination, but it does provide a framework that will enable quantification of model improvements and assignment of relative weights to model projections.
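The grade-based weighting of multi-model projections can be illustrated in a few lines. The grades and ozone values below are invented for illustration, not taken from the study; as in the abstract, the weighted and unweighted means differ only slightly when grades are broadly distributed.

```python
import numpy as np

# Hypothetical model grades (0 to 1) and total-ozone projections (DU);
# the numbers are illustrative, not from Eyring et al. (2006).
grades = np.array([0.9, 0.6, 0.3, 0.8])
ozone  = np.array([305.0, 298.0, 310.0, 302.0])

# Unweighted multi-model mean versus grade-weighted mean.
unweighted = ozone.mean()
weighted = np.sum(grades * ozone) / grades.sum()
print(unweighted, weighted)
```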
Quantitative model of cell cycle arrest and cellular senescence in primary human fibroblasts.
Directory of Open Access Journals (Sweden)
Sascha Schäuble
Primary human fibroblasts in tissue culture undergo a limited number of cell divisions before entering a non-replicative "senescent" state. At early population doublings (PD), fibroblasts are proliferation-competent and display exponential growth. During further cell passaging, an increasing number of cells become cell cycle arrested and finally senescent. This transition from proliferating to senescent cells is driven by a number of endogenous and exogenous stress factors. Here, we have developed a new quantitative model for the stepwise transition from proliferating human fibroblasts (P) via reversibly cell cycle arrested (C) to irreversibly arrested senescent cells (S). In this model, the transition from P to C and to S is driven by a stress function γ and a cellular stress response function F, which describes the time-delayed cellular response to experimentally induced irradiation stress. Applying this model, based on senescence marker quantification at the single-cell level, allowed us to discriminate between the cellular states P, C, and S and delivers the transition rates between the P, C and S states for different human fibroblast cell types. Model-derived quantification unexpectedly revealed significant differences in the stress response of different fibroblast cell lines. Evaluating marker specificity, we found that SA-β-Gal is a good quantitative marker for cellular senescence in WI-38 and BJ cells, but much less so in MRC-5 cells. Furthermore, we found that WI-38 cells are more sensitive to stress than BJ and MRC-5 cells. Thus, the explicit separation of stress induction from the cellular stress response, and the differentiation between the three cellular states P, C and S, allows for the first time a quantitative assessment of the response of primary human fibroblasts to endogenous and exogenous stress during cellular ageing.
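A minimal numerical sketch of the stepwise P to C to S transition model: first-order kinetics with constant rates, integrated by forward Euler. The constant rates here replace the paper's stress function γ and time-delayed response function F, so this is only a structural illustration under assumed rate values.

```python
def simulate_pcs(k_pc, k_cs, t_end=10.0, dt=0.01):
    """Forward-Euler sketch of the P -> C -> S transition model.

    Assumes simple first-order transitions with constant rates k_pc
    (P to C) and k_cs (C to S); the published model uses a stress
    function and a time-delayed stress response instead.
    """
    n = int(round(t_end / dt))
    P, C, S = 1.0, 0.0, 0.0  # start with all cells proliferating
    for _ in range(n):
        dP = -k_pc * P
        dC = k_pc * P - k_cs * C
        dS = k_cs * C
        P, C, S = P + dt * dP, C + dt * dC, S + dt * dS
    return P, C, S

P, C, S = simulate_pcs(k_pc=0.5, k_cs=0.2)
print(P, C, S, P + C + S)
```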
Korteland, Suze-Anne; Heimovaara, Timo
2015-03-01
Electrical resistivity tomography (ERT) is a geophysical technique that can be used to obtain three-dimensional images of the bulk electrical conductivity of the subsurface. Because the electrical conductivity is strongly related to properties of the subsurface and the flow of water it has become a valuable tool for visualization in many hydrogeological and environmental applications. In recent years, ERT is increasingly being used for quantitative characterization, which requires more detailed prior information than a conventional geophysical inversion for qualitative purposes. In addition, the careful interpretation of measurement and modelling errors is critical if ERT measurements are to be used in a quantitative way. This paper explores the quantitative determination of the electrical conductivity distribution of a cylindrical object placed in a water bath in a laboratory-scale tank. Because of the sharp conductivity contrast between the object and the water, a standard geophysical inversion using a smoothness constraint could not reproduce this target accurately. Better results were obtained by using the ERT measurements to constrain a model describing the geometry of the system. The posterior probability distributions of the parameters describing the geometry were estimated with the Markov chain Monte Carlo method DREAM(ZS). Using the ERT measurements this way, accurate estimates of the parameters could be obtained. The information quality of the measurements was assessed by a detailed analysis of the errors. Even for the uncomplicated laboratory setup used in this paper, errors in the modelling of the shape and position of the electrodes and the shape of the domain could be identified. The results indicate that the ERT measurements have a high information content which can be accessed by the inclusion of prior information and the consideration of measurement and modelling errors.
Directory of Open Access Journals (Sweden)
Zehui Wu
2017-01-01
SDN-based controllers, which are responsible for the configuration and management of the network, are the core of Software-Defined Networks. Current methods, which focus on the security mechanism, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. From an analysis of the controller threat model, we give formal model results for the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN-based controllers: POX, OpenDaylight, Floodlight, and Ryu. The test, which shows similar outcomes to traditional qualitative analysis, demonstrates that with our approach we can obtain specific security values for different controllers and present more accurate results.
Flow assignment model for quantitative analysis of diverting bulk freight from road to railway.
Liu, Chang; Lin, Boliang; Wang, Jiaxi; Xiao, Jie; Liu, Siqi; Wu, Jianping; Li, Jian
2017-01-01
Since railway transport possesses the advantages of high volume and low carbon emissions, diverting some freight from road to railway will help reduce the negative environmental impacts associated with transport. This paper develops a flow assignment model for quantitative analysis of diverting truck freight to railway. First, a general network that considers road transportation, railway transportation, handling and transferring is established according to all the steps in the whole transportation process. Then general cost functions are formulated that embody the factors shippers pay attention to when choosing mode and path. These functions contain the congestion cost on road and the capacity constraints of railways and freight stations. Based on the general network and general cost function, a user equilibrium flow assignment model is developed to simulate the flow distribution on the general network under the condition that all shippers choose transportation mode and path independently. Since the model is nonlinear and challenging, we adopt a method that uses tangent lines to construct an envelope curve to linearize it. Finally, a numerical example is presented to test the model and illustrate the quantitative analysis of bulk freight modal shift between road and railway.
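The congestion cost on road mentioned in this abstract is commonly modeled with a BPR-type volume-delay function; the functional form and parameter values below are standard assumptions from the traffic-assignment literature, not necessarily the paper's exact cost function.

```python
def road_link_cost(flow, free_time, capacity, alpha=0.15, beta=4):
    """BPR-style congested travel time for a road link.

    Cost grows slowly at low flow and steeply as flow approaches
    capacity, which is the behavior a user-equilibrium assignment
    needs from its road cost term.
    """
    return free_time * (1.0 + alpha * (flow / capacity) ** beta)

# Cost rises steeply once flow approaches capacity.
print(road_link_cost(500, free_time=1.0, capacity=1000))
print(road_link_cost(1000, free_time=1.0, capacity=1000))
```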
Chang, Su-Wei; Choi, Seung Hoan; Li, Ke; Fleur, Rose Saint; Huang, Chengrui; Shen, Tong; Ahn, Kwangmi; Gordon, Derek; Kim, Wonkuk; Wu, Rongling; Mendell, Nancy R; Finch, Stephen J
2009-12-15
We examined the properties of growth mixture modeling in finding longitudinal quantitative trait loci in a genome-wide association study. Two software packages are commonly used in these analyses: Mplus and the SAS TRAJ procedure. We analyzed the 200 replicates of the simulated data with these programs using three tests: the likelihood-ratio test statistic, a direct test of genetic model coefficients, and the chi-square test classifying subjects based on the trajectory model's posterior Bayesian probability. The Mplus program was not effective in this application due to its computational demands. The distributions of these tests applied to genes not related to the trait were sensitive to departures from Hardy-Weinberg equilibrium. The likelihood-ratio test statistic was not usable in this application because its distribution was far from the expected asymptotic distributions when applied to markers with no genetic relation to the quantitative trait. The other two tests were satisfactory. Power was still substantial when we used markers near the gene rather than the gene itself. That is, growth mixture modeling may be useful in genome-wide association studies. For markers near the actual gene, there was somewhat greater power for the direct test of the coefficients and lesser power for the posterior Bayesian probability chi-square test.
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
Research on quantitative models of suspended sediment concentration (SSC) using remote sensing technology is very important for understanding scouring and siltation variation in harbors and water channels. Based on a laboratory study of the relationship between different suspended sediment concentrations and synchronously measured reflectance spectra, quantitative inversion models of SSC based on a single factor, band ratios and a sediment parameter were developed, which provides an effective method to retrieve SSC from satellite images. Results show that b1 (430-500 nm) and b3 (670-735 nm) are the optimal wavelengths for estimating lower SSC, and b4 (780-835 nm) is the optimal wavelength for estimating higher SSC. Furthermore, the band ratio B2/B3 better simulates the variation of lower SSC, while B4/B1 estimates higher SSC accurately. The inversion models developed with sediment parameters for higher and lower SSC also achieve higher accuracy than the single-factor and band-ratio models.
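A band-ratio inversion model of the kind described can be calibrated by simple least squares. The ratio and SSC values below are hypothetical calibration data for illustration, not measurements from the study.

```python
import numpy as np

# Hypothetical calibration data: band ratio B2/B3 versus measured SSC (mg/L).
ratio = np.array([0.8, 0.9, 1.0, 1.1, 1.2, 1.3])
ssc   = np.array([45.0, 62.0, 78.0, 96.0, 113.0, 130.0])

# Fit a linear band-ratio model SSC = a * (B2/B3) + b by least squares.
a, b = np.polyfit(ratio, ssc, 1)

def predict_ssc(r):
    """Retrieve SSC from an observed band ratio using the fitted model."""
    return a * r + b

print(a, b, predict_ssc(1.05))
```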
Directory of Open Access Journals (Sweden)
Zhenpo Wang
2013-01-01
To meet the matching and planning requirements of charging stations as electric vehicles (EVs) enter the market, and drawing on layout theories for gas stations, a location model of charging stations is established based on electricity consumption along the roads among cities, and a quantitative model of charging stations is presented based on the conversion of oil sales in a given area. Both combine the principle of energy-consumption equivalence when replacing traditional vehicles with EVs. Defined data are adopted in the example analysis of two numerical case models, and the influence of factors such as the proportion of vehicle types and EV energy consumption on charging station layout and quantity is analyzed. The results show that the quantitative model of charging stations is reasonable and feasible. The number of EVs and their energy consumption have a more significant impact on the number of charging stations than the vehicle type proportion, which provides a basis for decision making on charging station construction layout in practice.
Regnier, P.; Dale, A. W.; Arndt, S.; LaRowe, D. E.; Mogollón, J.; Van Cappellen, P.
2011-05-01
Recent developments in the quantitative modeling of methane dynamics and anaerobic oxidation of methane (AOM) in marine sediments are critically reviewed. The first part of the review begins with a comparison of alternative kinetic models for AOM. The roles of bioenergetic limitations, intermediate compounds and biomass growth are highlighted. Next, the key transport mechanisms in multi-phase sedimentary environments affecting AOM and methane fluxes are briefly treated, while attention is also given to additional controls on methane and sulfate turnover, including organic matter mineralization, sulfur cycling and methane phase transitions. In the second part of the review, the structure, forcing functions and parameterization of published models of AOM in sediments are analyzed. The six-orders-of-magnitude range in rate constants reported for the widely used bimolecular rate law for AOM emphasizes the limited transferability of this simple kinetic model and, hence, the need for more comprehensive descriptions of the AOM reaction system. The derivation and implementation of more complete reaction models, however, are limited by the availability of observational data. In this context, we attempt to rank the relative benefits of potential experimental measurements that should help to better constrain AOM models. The last part of the review presents a compilation of reported depth-integrated AOM rates (ΣAOM). These rates reveal the extreme variability of ΣAOM in marine sediments. The model results are further used to derive quantitative relationships between ΣAOM and the magnitude of externally impressed fluid flow, as well as between ΣAOM and the depth of the sulfate-methane transition zone (SMTZ). This review contributes to an improved understanding of the global significance of the AOM process, and helps identify outstanding questions and future directions in the modeling of methane cycling and AOM in marine sediments.
Viscoelastic Model of Cross-Linked Polyethylene Including Effects of Temperature and Crystallinity
Olasz, L.; Gudmundson, P.
2005-12-01
Characterization of the mechanical behavior of cross-linked polyethylene (XLPE) commonly used in high voltage cable insulation was performed by an extensive set of isothermal uniaxial tensile relaxation tests. Tensile relaxation experiments were complemented by pressure-volume-temperature experiments as well as density and crystallinity measurements. Based on the experimental results, a viscoelastic power law model with four parameters was formulated, incorporating temperature and crystallinity dependence. It was found that a master curve can be developed by both horizontal and vertical shifting of the relaxation curves. The model was evaluated by making comparisons of the predicted stress responses with the measured responses in relaxation tests with transient temperature histories.
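A minimal sketch of a power-law relaxation modulus mapped onto a master curve by horizontal and vertical shifting, as described above. The functional form and parameter values here are assumptions for illustration, not the paper's fitted four-parameter model:

```python
def shifted_relaxation_modulus(t, e0, n, a_t, b_t):
    """Power-law tensile relaxation modulus E(t) = E0 * t**(-n),
    mapped onto a master curve via a horizontal shift a_t (time axis)
    and a vertical shift b_t (modulus axis). Illustrative form only."""
    return b_t * e0 * (t / a_t) ** (-n)

e_early = shifted_relaxation_modulus(1.0, 100.0, 0.1, 1.0, 1.0)
e_late = shifted_relaxation_modulus(1000.0, 100.0, 0.1, 1.0, 1.0)
```

In the paper, the shift factors carry the temperature and crystallinity dependence, so a single reference curve covers all test conditions.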
Linking individual-tree and whole-stand models for forest growth and yield prediction
Directory of Open Access Journals (Sweden)
Quang V Cao
2014-10-01
Background: Different types of growth and yield models provide essential information for making informed decisions on how to manage forests. Whole-stand models often provide well-behaved outputs at the stand level, but lack information on stand structures. Detailed information from individual-tree models and size-class models typically suffers from accumulation of errors. The disaggregation method, in assuming that predictions from a whole-stand model are reliable, partitions these outputs to individual trees. On the other hand, the combination method seeks to improve stand-level predictions from both whole-stand and individual-tree models by combining them. Methods: Data from 100 plots randomly selected from the Southwide Seed Source Study of loblolly pine (Pinus taeda L.) were used to evaluate the unadjusted individual-tree model against the disaggregation and combination methods. Results: Compared to the whole-stand model, the combination method did not show improvements in predicting stand attributes in this study. The combination method also did not perform as well as the disaggregation method in tree-level predictions. The disaggregation method provided the best predictions of tree- and stand-level survival and growth. Conclusions: The disaggregation approach provides a link between individual-tree models and whole-stand models, and should be considered as a better alternative to the unadjusted tree model.
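The disaggregation idea, trusting the whole-stand output and partitioning it to trees, can be sketched as simple proportional allocation; the actual adjustment functions in the study are more elaborate, and the numbers below are made up:

```python
def disaggregate(stand_prediction, tree_predictions):
    """Scale unadjusted individual-tree predictions so that their sum
    matches the (assumed reliable) whole-stand prediction."""
    total = sum(tree_predictions)
    return [stand_prediction * t / total for t in tree_predictions]

# Stand model predicts 120 units; unadjusted tree model sums to only 60
trees = disaggregate(120.0, [10.0, 20.0, 30.0])
```

The scaling preserves the relative sizes from the tree model while forcing stand-level consistency, which is why accumulated tree-level errors no longer propagate to the stand totals.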
Quantitative modelling and analysis of a Chinese smart grid: a stochastic model checking case study
DEFF Research Database (Denmark)
Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming
2014-01-01
Cyber-physical systems integrate information and communication technology with the physical elements of a system, mainly for monitoring and controlling purposes. The conversion of the traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues … that require novel methods and applications. One of the important issues in this context is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese smart grid implementation as a case study and address the verification problem for performance and energy …
A Quantitative Model of Keyhole Instability Induced Porosity in Laser Welding of Titanium Alloy
Pang, Shengyong; Chen, Weidong; Wang, Wen
2014-06-01
Quantitative prediction of the porosity defects in deep penetration laser welding has generally been considered as a very challenging task. In this study, a quantitative model of porosity defects induced by keyhole instability in partial penetration CO2 laser welding of a titanium alloy is proposed. The three-dimensional keyhole instability, weld pool dynamics, and pore formation are determined by direct numerical simulation, and the results are compared to prior experimental results. It is shown that the simulated keyhole depth fluctuations could represent the variation trends in the number and average size of pores for the studied process conditions. Moreover, it is found that it is possible to use the predicted keyhole depth fluctuations as a quantitative measure of the average size of porosity. The results also suggest that due to the shadowing effect of keyhole wall humps, the rapid cooling of the surface of the keyhole tip before keyhole collapse could lead to a substantial decrease in vapor pressure inside the keyhole tip, which is suggested to be the mechanism by which shielding gas enters into the porosity.
Lackey, Daniel P; Carruth, Eric D; Lasher, Richard A; Boenisch, Jan; Sachse, Frank B; Hitchcock, Robert W
2011-11-01
Gap junctions play a fundamental role in intercellular communication in cardiac tissue. Various types of heart disease including hypertrophy and ischemia are associated with alterations of the spatial arrangement of gap junctions. Previous studies applied two-dimensional optical and electron-microscopy to visualize gap junction arrangements. In normal cardiomyocytes, gap junctions were primarily found at cell ends, but can be found also in more central regions. In this study, we extended these approaches toward three-dimensional reconstruction of gap junction distributions based on high-resolution scanning confocal microscopy and image processing. We developed methods for quantitative characterization of gap junction distributions based on analysis of intensity profiles along the principal axes of myocytes. The analyses characterized gap junction polarization at cell ends and higher-order statistical image moments of intensity profiles. The methodology was tested in rat ventricular myocardium. Our analysis yielded novel quantitative data on gap junction distributions. In particular, the analysis demonstrated that the distributions exhibit significant variability with respect to polarization, skewness, and kurtosis. We suggest that this methodology provides a quantitative alternative to current approaches based on visual inspection, with applications in particular in characterization of engineered and diseased myocardium. Furthermore, we propose that these data provide improved input for computational modeling of cardiac conduction.
Quantitative model for the generic 3D shape of ICMEs at 1 AU
Démoulin, P.; Janvier, M.; Masías-Meza, J. J.; Dasso, S.
2016-10-01
Context. Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs), while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, that is, along a 1D cut. The data therefore only give a partial view of the 3D structures of ICMEs. Aims: By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. Methods: In a first approach we theoretically obtained the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compared their compatibility with observed distributions. In a second approach we used the shock normal and the flux rope axis orientations together with the impact parameter to provide statistical information across the spacecraft trajectory. Results: The study of different 3D shock models shows that the observations are compatible with a shock that is symmetric around the Sun-apex line as well as with an asymmetry up to an aspect ratio of around 3. Moreover, flat or dipped shock surfaces near their apex can only be rare cases. Next, the sheath thickness and the ICME velocity have no global trend along the ICME front. Finally, regrouping all these new results and those of our previous articles, we provide a quantitative ICME generic 3D shape, including the global shape of the shock, the sheath, and the flux rope. Conclusions: The obtained quantitative generic ICME shape will have implications for several aims. For example, it constrains the output of typical ICME numerical simulations. It is also a base for studying the transport of high-energy solar and cosmic particles during an ICME propagation as well as for modeling and forecasting space weather conditions near Earth.
Linking Simple Economic Theory Models and the Cointegrated Vector AutoRegressive Model
DEFF Research Database (Denmark)
Møller, Niels Framroze
This paper attempts to clarify the connection between simple economic theory models and the approach of the Cointegrated Vector Auto-Regressive model (CVAR). By considering (stylized) examples of simple static equilibrium models, it is illustrated in detail how the theoretical model and its …
Parametric links among Monte Carlo, phase-field, and sharp-interface models of interfacial motion.
Liu, Pu; Lusk, Mark T
2002-12-01
Parametric links are made among three mesoscale simulation paradigms: phase-field, sharp-interface, and Monte Carlo. A two-dimensional, square lattice, 1/2 Ising model is considered for the Monte Carlo method, where an exact solution for the interfacial free energy is known. The Monte Carlo mobility is calibrated as a function of temperature using Glauber kinetics. A standard asymptotic analysis relates the phase-field and sharp-interface parameters, and this allows the phase-field and Monte Carlo parameters to be linked. The result is derived without bulk effects but is then applied to a set of simulations with the bulk driving force included. An error analysis identifies the domain over which the parametric relationships are accurate.
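A minimal sketch of Glauber kinetics on the square-lattice Ising model described above. Lattice size and temperature are illustrative, and no bulk driving force is included:

```python
import math
import random

def glauber_sweep(spins, beta, rng=random):
    """One sweep of Glauber dynamics on a periodic square lattice:
    each proposed spin flip is accepted with probability
    1 / (1 + exp(beta * dE))."""
    n = len(spins)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
              + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        d_e = 2 * spins[i][j] * nb  # energy change if spin (i, j) flips
        if rng.random() < 1.0 / (1.0 + math.exp(beta * d_e)):
            spins[i][j] *= -1
    return spins

lattice = [[1] * 8 for _ in range(8)]  # start from a fully ordered state
glauber_sweep(lattice, beta=0.6)
```

Measuring how fast a curved domain boundary shrinks under such sweeps, at known temperature, is one way the Monte Carlo mobility can be calibrated against the sharp-interface parameters.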
DEFF Research Database (Denmark)
Rørbech, Jakob Thaysen; Vadenbo, Carl; Hellweg, Stefanie
2014-01-01
Resources have received significant attention in recent years, resulting in the development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment … results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247 … groups, according to method focus and modeling approach, to aid method selection within LCA …
Multi-factor models and signal processing techniques application to quantitative finance
Darolles, Serges; Jay, Emmanuelle
2013-01-01
With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices.This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere
An equivalent magnetic dipoles model for quantitative damage recognition of broken wire
Institute of Scientific and Technical Information of China (English)
TAN Ji-wen; ZHAN Wei-xia; LI Chun-jing; WEN Yan; SHU Jie
2005-01-01
By representing a magnetically saturated wire rope as magnetic dipoles of equal magnetic field strength, an equivalent magnetic dipoles model is developed and the measuring principle for recognizing broken-wire damage is presented. The relevant calculation formulas are also deduced, and a composite solution method for the resulting nonlinear optimization is given. A worked example illustrates the use of the equivalent magnetic dipoles method for quantitative damage recognition and demonstrates that its results are consistent with the real situation, so the method is valid and practical.
Model development for quantitative evaluation of proliferation resistance of nuclear fuel cycles
Energy Technology Data Exchange (ETDEWEB)
Ko, Won Il; Kim, Ho Dong; Yang, Myung Seung
2000-07-01
This study addresses the quantitative evaluation of proliferation resistance, an important attribute of alternative nuclear fuel cycle systems. A model was developed to quantitatively evaluate the proliferation resistance of nuclear fuel cycles. The proposed models were then applied to the Korean environment as a sample study to provide better references for the determination of a future nuclear fuel cycle system in Korea. In order to quantify the proliferation resistance of a nuclear fuel cycle, a proliferation resistance index was defined by analogy with an electrical circuit comprising an electromotive force and various electrical resistance components. The analysis of the proliferation resistance of nuclear fuel cycles has shown that the resistance index as defined herein can be used as an international measure of the relative risk of nuclear proliferation, provided the motivation index is appropriately defined. It has also shown that the proposed model can include political as well as technical issues relevant to proliferation resistance, and can consider all facilities and activities in a specific nuclear fuel cycle (from mining to disposal). In addition, sensitivity analyses on the sample study indicate that the direct disposal option in a country with high nuclear propensity may give rise to a higher risk of nuclear proliferation than the reprocessing option in a country with low nuclear propensity.
Directory of Open Access Journals (Sweden)
Qian Wang
2016-01-01
Spectroscopy is an efficient and widely used quantitative analysis method. In this paper, a spectral quantitative analysis model combining wavelength selection and topology structure optimization is proposed. In the proposed method, a backpropagation neural network is adopted for building the component prediction model, and the simultaneous optimization of the wavelength selection and the neural network topology is realized by nonlinear adaptive evolutionary programming (NAEP). The hybrid chromosome in the binary scheme of NAEP has three parts: the first represents the topology of the neural network, the second the selection of wavelengths in the spectral data, and the third the mutation parameters of NAEP. Two real flue gas datasets are used in the experiments. To demonstrate the effectiveness of the method, partial least squares with the full spectrum, partial least squares combined with a genetic algorithm, the uninformative variable elimination method, a backpropagation neural network with the full spectrum, a backpropagation neural network combined with a genetic algorithm, and the proposed method are each used to build the component prediction model. Experimental results verify that the proposed method predicts more accurately and robustly, making it a practical spectral analysis tool.
Sun, Lidan; Sang, Mengmeng; Zheng, Chenfei; Wang, Dongyang; Shi, Hexin; Liu, Kaiyue; Guo, Yanfang; Cheng, Tangren; Zhang, Qixiang; Wu, Rongling
2017-05-30
Heterochrony is known as a developmental change in the timing or rate of ontogenetic events across phylogenetic lineages. It is a key concept synthesizing development into ecology and evolution to explore the mechanisms of how developmental processes impact on phenotypic novelties. A number of molecular experiments using contrasting organisms in developmental timing have identified specific genes involved in heterochronic variation. Beyond these classic approaches that can only identify single genes or pathways, quantitative models derived from current next-generation sequencing data serve as a more powerful tool to precisely capture heterochronic variation and systematically map a complete set of genes that contribute to heterochronic processes. In this opinion note, we discuss a computational framework of genetic mapping that can characterize heterochronic quantitative trait loci that determine the pattern and process of development. We propose a unifying model that charts the genetic architecture of heterochrony that perceives and responds to environmental perturbations and evolves over geologic time. The new model may potentially enhance our understanding of the adaptive value of heterochrony and its evolutionary origins, providing a useful context for designing new organisms that can best use future resources.
Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models.
Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao
2015-08-01
Meta-analysis of genetic data must account for differences among studies including study designs, markers genotyped, and covariates. The effects of genetic variants may differ from population to population, i.e., heterogeneity. Thus, meta-analysis of combining data of multiple studies is difficult. Novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies.
Meyer, Karin
2007-11-01
WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models, and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program (available for LINUX and WINDOWS environments), a manual, and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html.
Modeling of microfluidic microbial fuel cells using quantitative bacterial transport parameters
Mardanpour, Mohammad Mahdi; Yaghmaei, Soheila; Kalantar, Mohammad
2017-02-01
The objective of present study is to analyze the dynamic modeling of bioelectrochemical processes and improvement of the performance of previous models using quantitative data of bacterial transport parameters. The main deficiency of previous MFC models concerning spatial distribution of biocatalysts is an assumption of initial distribution of attached/suspended bacteria on electrode or in anolyte bulk which is the foundation for biofilm formation. In order to modify this imperfection, the quantification of chemotactic motility to understand the mechanisms of the suspended microorganisms' distribution in anolyte and/or their attachment to anode surface to extend the biofilm is implemented numerically. The spatial and temporal distributions of the bacteria, as well as the dynamic behavior of the anolyte and biofilm are simulated. The performance of the microfluidic MFC as a chemotaxis assay is assessed by analyzing the bacteria activity, substrate variation, bioelectricity production rate and the influences of external resistance on the biofilm and anolyte's features.
Modeling the pairwise key distribution scheme in the presence of unreliable links
Yagan, Osman
2011-01-01
We investigate the secure connectivity of wireless sensor networks under the pairwise key distribution scheme of Chan et al. Unlike recent work, which was carried out under the assumption of full visibility, here we assume a (simplified) communication model in which unreliable wireless links are represented as on/off channels. We present conditions on how to scale the model parameters so that the network (i) has no secure node that is isolated and (ii) is securely connected, both with high probability as the number of sensor nodes becomes large. The results are given in the form of zero-one laws and exhibit significant differences from corresponding results in the full-visibility case. Through simulations, these zero-one laws are shown to hold also under a more realistic communication model, i.e., the disk model.
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
It is difficult to identify the source(s) of mixed oils derived from multiple source rocks, and in particular the relative contribution of each source rock. Artificial mixing experiments using typical crude oils and ratios of different biomarkers show that the relative contributions change non-linearly when two oils with different biomarker concentrations are mixed. This may lead to incorrect conclusions if biomarker ratios and a simple binary linear equation are used to calculate the contribution of each end-member to the mixed oil. The changes of biomarker ratios with the mixing proportions of end-member oils in the ternary mixing model are more complex than in the binary mixing model. When four or more oils mix, the contribution of each end-member oil to the mixed oil cannot be calculated from biomarker ratios and a simple formula. Artificial mixing experiments on typical oils reveal that the absolute concentrations of biomarkers in the mixed oil change linearly with the mixing proportion of each end-member, and mathematical inference verifies this linearity. Methods for quantitatively determining the proportion of each end-member in a mixed oil from the absolute concentrations or ratios of biomarkers are deduced from the experimental results and by theoretical inference. The ratio of two biomarker compounds changes as a hyperbola with the mixing proportion in the binary mixing model, as a hyperboloid in the ternary mixing model, and as a hypersurface when more than three end-members are mixed. The mixing proportion of each end-member can be quantitatively determined with these mathematical models, using the absolute concentrations and ratios of biomarkers. The mathematical calculation approach is more economical, convenient, accurate, and reliable than conventional artificial mixing methods.
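Because absolute biomarker concentrations mix linearly, the end-member proportions can be recovered by least squares. A sketch with made-up concentrations (two biomarkers, two end-members; the sum-to-one constraint is appended as an extra equation):

```python
import numpy as np

def mixing_proportions(concentrations, mixed):
    """Solve C @ f = c_mix for end-member fractions f, where column k of
    `concentrations` holds the absolute biomarker concentrations of
    end-member k. The constraint sum(f) = 1 is appended as a row."""
    m, k = concentrations.shape
    a = np.vstack([concentrations, np.ones(k)])
    b = np.append(mixed, 1.0)
    f, *_ = np.linalg.lstsq(a, b, rcond=None)
    return f

c = np.array([[10.0, 2.0],
              [1.0, 8.0]])            # rows: biomarkers, columns: end-member oils
mix = c @ np.array([0.3, 0.7])        # synthetic 30/70 blend
fractions = mixing_proportions(c, mix)
```

This is why the abstract stresses absolute concentrations over ratios: ratios of linear blends trace hyperbolas and hypersurfaces, whereas the concentrations themselves stay in a linear system that any number of end-members can extend.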
Landis, Joshua D.; Renshaw, Carl E.; Kaste, James M.
2016-05-01
Soil systems are known to be repositories for atmospheric carbon and metal contaminants, but the complex processes that regulate the introduction, migration and fate of atmospheric elements in soils are poorly understood. This gap in knowledge is attributable, in part, to the lack of an established chronometer that is required for quantifying rates of relevant processes. Here we develop and test a framework for adapting atmospheric lead-210 chronometry (210Pb; half-life 22 years) to soil systems. We propose a new empirical model, the Linked Radionuclide aCcumulation model (LRC, aka "lark"), that incorporates measurements of beryllium-7 (7Be; half-life 54 days) to account for 210Pb penetration of the soil surface during initial deposition, a process which is endemic to soils but omitted from conventional 210Pb models (e.g., the Constant Rate of Supply, CRS model) and their application to sedimentary systems. We validate the LRC model using the 1963-1964 peak in bomb-fallout americium-241 (241Am; half-life of 432 years) as an independent, corroborating time marker. In three different soils we locate a sharp 241Am weapons horizon at disparate depths ranging from 2.5 to 6 cm, but with concordant ages averaging 1967 ± 4 via the LRC model. Similarly, at one site contaminated with mercury (HgT) we find that the LRC model is consistent with the recorded history of Hg emission. The close agreement of Pb, Am and Hg behavior demonstrated here suggests that organo-metallic colloid formation and migration incorporates many trace metals in universal soil processes and that these processes may be described quantitatively using atmospheric 210Pb chronometry. The 210Pb models evaluated here show that migration rates of soil colloids on the order of 1 mm yr-1 are typical, but also that these rates vary systematically with depth and are attributable to horizon-specific processes of leaf-litter decay, eluviation and illuviation. We thus interpret 210Pb models to quantify (i) exposure
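In its simplest constant-initial-activity form, 210Pb chronometry reduces to first-order decay. This sketch shows only that baseline arithmetic, not the authors' LRC model, which additionally corrects for 7Be-constrained surface penetration:

```python
import math

PB210_HALF_LIFE_YR = 22.3
DECAY_CONST = math.log(2) / PB210_HALF_LIFE_YR  # yr^-1

def pb210_age(surface_activity, layer_activity):
    """Apparent age (years) of a soil layer from the decay of unsupported
    210Pb, assuming a constant initial activity at deposition."""
    return math.log(surface_activity / layer_activity) / DECAY_CONST

age = pb210_age(100.0, 50.0)  # activity halved -> one half-life elapsed
```

An age-depth profile built this way, divided into the layer depths, yields migration rates of the order the authors report (about 1 mm per year).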
Wilson, J. P.; Fischer, W. W.
2010-12-01
Fossil plants provide useful proxies of Earth’s climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves--even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 / MPa * s, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptations for transport safety, placing the plant’s vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon’s tracheids suggests that environmental conditions of reduced relative
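The hydraulic core of such models is Hagen-Poiseuille flow through the conduit lumen. A sketch of the single-conduit conductivity; the water viscosity value is an assumption, and the published model couples this to detailed anatomical measurements:

```python
import math

def lumen_conductivity(radius, viscosity=1.0e-3):
    """Hagen-Poiseuille volumetric conductivity of a single cylindrical
    conduit, pi * r**4 / (8 * mu). Viscosity of 1e-3 Pa*s (water at
    ~20 C) is an illustrative assumption."""
    return math.pi * radius ** 4 / (8.0 * viscosity)

k_narrow = lumen_conductivity(10e-6)  # 10-micron tracheid radius
k_wide = lumen_conductivity(30e-6)    # conduit three times wider
```

The fourth-power dependence on radius is why a conifer tracheid must be several times wider than an Asteroxylon tracheid to match its conductivity, as the abstract notes.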
The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...
Modeling Morphogenesis in silico and in vitro: Towards Quantitative, Predictive, Cell-based Modeling
R.M.H. Merks (Roeland); P. Koolwijk
2009-01-01
Cell-based, mathematical models help make sense of morphogenesis—i.e. cells organizing into shape and pattern—by capturing cell behavior in simple, purely descriptive models. Cell-based models then predict the tissue-level patterns the cells produce collectively. The first …
Quantitative computational models of molecular self-assembly in systems biology
Thomas, Marcus; Schwartz, Russell
2017-06-01
Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.
Quantitative Validation of a Human Body Finite Element Model Using Rigid Body Impacts.
Vavalle, Nicholas A; Davis, Matthew L; Stitzel, Joel D; Gayzik, F Scott
2015-09-01
Validation is a critical step in finite element model (FEM) development. This study focuses on the validation of the Global Human Body Models Consortium full body average male occupant FEM in five localized loading regimes-a chest impact, a shoulder impact, a thoracoabdominal impact, an abdominal impact, and a pelvic impact. Force and deflection outputs from the model were compared to experimental traces and corridors scaled to the 50th percentile male. Predicted fractures and injury severity measures were compared to evaluate the model's injury prediction capabilities. The methods of ISO/TS 18571 were used to quantitatively assess the fit of model outputs to experimental force and deflection traces. The model produced peak chest, shoulder, thoracoabdominal, abdominal, and pelvis forces of 4.8, 3.3, 4.5, 5.1, and 13.0 kN compared to 4.3, 3.2, 4.0, 4.0, and 10.3 kN in the experiments, respectively. The model predicted rib and pelvic fractures related to Abbreviated Injury Scale scores within the ranges found experimentally in all cases except the abdominal impact. ISO/TS 18571 scores for the impacts studied had a mean score of 0.73 with a range of 0.57-0.83. Well-validated FEMs are important tools used by engineers in advancing occupant safety.
Institute of Scientific and Technical Information of China (English)
QIN Hong; CHEN JingWen; WANG Ying; WANG Bin; LI XueHua; LI Fei; WANG YaNan
2009-01-01
Bioconcentration factors (BCFs) are of great importance for ecological risk assessment of organic chemicals. In this study, a quantitative structure-activity relationship (QSAR) model for fish BCFs of 8 groups of compounds was developed employing partial least squares (PLS) regression, based on linear solvation energy relationship (LSER) theory and theoretical molecular structural descriptors. The guidelines for development and validation of QSAR models proposed by the Organization for Economic Co-operation and Development (OECD) were followed. The model results show that the main factors governing logBCF are Connolly molecular area (CMA), average molecular polarizability (α) and molecular weight (Mw). Thus molecular size plays a critical role in affecting the bioconcentration of organic pollutants in fish. For the established model, the squared multiple correlation coefficient R²Y = 0.868, the root mean square error (RMSE) = 0.553 log units, and the leave-many-out cross-validated Q²CUM = 0.860, indicating good goodness-of-fit and robustness. The model predictivity was evaluated by external validation, with the external explained variance Q²EXT = 0.755 and RMSE = 0.647 log units. Moreover, the applicability domain of the developed model was assessed and visualized by the Williams plot. The developed QSAR model can be used to predict fish logBCF for organic chemicals within the applicability domain.
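The external explained variance used above for QSAR validation is a simple ratio of prediction error to variance about the training mean. A sketch with synthetic numbers (the observed/predicted values below are made up):

```python
def q2_external(y_obs, y_pred, y_train_mean):
    """External explained variance for QSAR predictivity:
    Q2_ext = 1 - PRESS / SS, where PRESS is the prediction error sum of
    squares on the external set and SS is taken about the training mean."""
    press = sum((o - p) ** 2 for o, p in zip(y_obs, y_pred))
    ss = sum((o - y_train_mean) ** 2 for o in y_obs)
    return 1.0 - press / ss

perfect = q2_external([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], 2.0)
decent = q2_external([1.0, 2.0, 3.0], [1.1, 2.0, 2.9], 2.0)
```

A value near 1 means the model predicts external compounds far better than simply quoting the training mean, which is the sense in which Q²EXT = 0.755 indicates useful predictivity.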
Turco, Simona; Tardy, Isabelle; Frinking, Peter; Wijkstra, Hessel; Mischi, Massimo
2017-03-21
Ultrasound molecular imaging (USMI) is an emerging technique to monitor diseases at the molecular level by the use of novel targeted ultrasound contrast agents (tUCA). These consist of microbubbles functionalized with targeting ligands with high-affinity for molecular markers of specific disease processes, such as cancer-related angiogenesis. Among the molecular markers of angiogenesis, the vascular endothelial growth factor receptor 2 (VEGFR2) is recognized to play a major role. In response, the clinical-grade tUCA BR55 was recently developed, consisting of VEGFR2-targeting microbubbles which can flow through the entire circulation and accumulate where VEGFR2 is over-expressed, thus causing selective enhancement in areas of active angiogenesis. Discrimination between bound and free microbubbles is crucial to assess cancer angiogenesis. Currently, this is done non-quantitatively by looking at the late enhancement, about 10 min after injection, or by calculation of the differential targeted enhancement, requiring the application of a high-pressure ultrasound (US) burst to destroy all the microbubbles in the acoustic field and isolate the signal coming only from bound microbubbles. In this work, we propose a novel method based on mathematical modeling of the binding kinetics during the tUCA first pass, thus reducing the acquisition time and with no need for a destructive US burst. Fitting time-intensity curves measured with USMI by the proposed model enables the assessment of cancer angiogenesis at both the vascular and molecular levels. This is achieved by estimation of quantitative parameters related to the microvascular architecture and microbubble binding. The proposed method was tested in 11 prostate-tumor bearing rats by performing USMI after injection of BR55, and showed good agreement with current USMI methods. The novel information provided by the proposed method, possibly combined with the current non-quantitative methods, may bring deeper insight into
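The first-pass binding idea described above can be illustrated with a toy time-intensity curve (TIC). This is a hedged sketch, not the authors' model: the log-normal transit shape, the binding constant kb, and all parameter values are assumptions chosen only to show the qualitative behavior (first-pass peak followed by a bound-signal plateau).

```python
import numpy as np

def tic_model(t, mu=2.0, sigma=0.5, kb=0.05):
    """Toy first-pass binding model: free microbubbles follow a log-normal
    bolus transit, and bound signal accumulates from the free pool."""
    free = np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2)) \
        / (t * sigma * np.sqrt(2 * np.pi))
    bound = kb * np.cumsum(free) * (t[1] - t[0])  # simple irreversible binding
    return free + bound

t = np.linspace(0.1, 60.0, 600)
tic = tic_model(t)
# The curve peaks during the first pass, then settles to a late plateau of
# roughly kb (here ~0.05): the bound signal left after the bolus has passed.
```

Fitting such a parametric curve to measured TICs is what allows binding-related parameters to be estimated without a destructive ultrasound burst.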
Quantitative genetic modeling and inference in the presence of nonignorable missing data.
Steinsland, Ingelin; Larsen, Camilla Thorrud; Roulin, Alexandre; Jensen, Henrik
2014-06-01
Natural selection is typically exerted at some specific life stages. If natural selection takes place before a trait can be measured, using conventional models can cause wrong inference about population parameters. When the missing data process relates to the trait of interest, a valid inference requires explicit modeling of the missing process. We propose a joint modeling approach, a shared parameter model, to account for nonrandom missing data. It consists of an animal model for the phenotypic data and a logistic model for the missing process, linked by the additive genetic effects. A Bayesian approach is taken and inference is made using integrated nested Laplace approximations. From a simulation study we find that wrongly assuming that missing data are missing at random can result in severely biased estimates of additive genetic variance. Using real data from a wild population of Swiss barn owls Tyto alba, our model indicates that the missing individuals would display large black spots; and we conclude that genes affecting this trait are already under selection before it is expressed. Our model is a tool to correctly estimate the magnitude of both natural selection and additive genetic variance.
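The core claim, that treating trait-dependent missingness as missing-at-random biases variance estimates, can be reproduced in a few lines. This is a toy simulation with invented variances and a hard selection threshold, not the paper's Bayesian animal model with INLA:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
breeding_value = rng.normal(0.0, 1.0, n)           # additive genetic effect, Va = 1
trait = breeding_value + rng.normal(0.0, 1.0, n)   # phenotype, Vp = 2

# Selection acts before the trait is measured: individuals below a threshold
# (e.g. small-spotted owls) are never observed, so the data are missing
# NOT at random -- the missingness depends on the trait itself.
observed = trait > 0.0

var_true = trait.var()             # estimate with complete data
var_naive = trait[observed].var()  # "missing at random" assumption, biased down
print(round(var_true, 2), round(var_naive, 2))
```

The naive variance from the survivors is less than half the true phenotypic variance, which is the direction of bias the simulation study in the abstract reports for additive genetic variance.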
Linking the Power and Transport Sectors—Part 2: Modelling a Sector Coupling Scenario for Germany
Directory of Open Access Journals (Sweden)
Martin Robinius
2017-07-01
Full Text Available “Linking the power and transport sectors—Part 1” describes the general principle of “sector coupling” (SC), develops a working definition of the concept intended to be of utility to the international scientific community, provides a literature review giving an overview of relevant scientific papers on this topic, and conducts a rudimentary analysis of the linking of the power and transport sectors on a worldwide, EU and German level. The aim of this follow-on paper is to outline an approach to the modelling of SC, with Germany as a case study. This study assumes a high share of renewable energy sources (RES) contributing to the grid and a significant proportion of fuel cell vehicles (FCVs) in the year 2050, along with a dedicated hydrogen pipeline grid to meet hydrogen demand. To construct a model of this nature, the model environment “METIS” (models for energy transformation and integration systems) that we developed is described in more detail in this paper. Within this framework, a detailed model of the power and transport sectors in Germany is presented and the rationale behind its assumptions described. Furthermore, an intensive analysis of the results for the power surplus, the utilization of electrolysis, the hydrogen pipeline and economic considerations has been conducted to show the potential outcomes of modelling SC. It is hoped that this will serve as a basis for researchers to apply this framework in future models and analyses with an international focus.
Quantitative Modeling of Acid Wormholing in Carbonates: What Are the Gaps to Bridge
Qiu, Xiangdong
2013-01-01
Carbonate matrix acidization extends a well's effective drainage radius by dissolving rock and forming conductive channels (wormholes) from the wellbore. Wormholing is a dynamic process that involves a balance between the acid injection rate and the reaction rate. Generally, the injection rate is well defined where injection profiles can be controlled, whereas the reaction rate can be difficult to obtain due to its complex dependency on interstitial velocity, fluid composition, rock surface properties, etc. Conventional wormhole propagation models largely ignore the impact of reaction products; when implemented in a job design, this can result in significant errors in treatment fluid schedule, rate, and volume. A more accurate method to simulate carbonate matrix acid treatments would accommodate the effect of reaction products on reaction kinetics, and it is the purpose of this work to properly account for these effects. This is an important step in achieving quantitative predictability of wormhole penetration during an acidizing treatment. This paper describes the laboratory procedures taken to obtain the reaction-product-impacted kinetics at downhole conditions using a rotating disk apparatus, and how this new set of kinetics data was implemented in a 3D wormholing model to predict wormhole morphology and penetration velocity. The model explains some of the differences in wormhole morphology observed in limestone core flow experiments where injection pressure impacts the mass transfer of hydrogen ions to the rock surface. The model uses a CT-scan-rendered porosity field to capture the finer details of the rock fabric and then simulates the fluid flow through the rock coupled with reactions. Such a validated model can serve as a base to scale up to the near-wellbore reservoir and 3D radial flow geometry, allowing a more quantitative acid treatment design.
Geerts, Hugo; Roberts, Patrick; Spiros, Athan; Potkin, Steven
2015-04-01
The concept of targeted therapies remains a holy grail for the pharmaceutical industry, whether for identifying responder populations or new drug targets. Here we provide quantitative systems pharmacology as an alternative to the more traditional approach of retrospective responder pharmacogenomics analysis, and apply it to the case of iloperidone in schizophrenia. This approach implements the actual neurophysiological effect of genotypes in a computer-based, biophysically realistic model of human neuronal circuits, is parameterized with human imaging and pathology data, and is calibrated against clinical data. We keep the drug pharmacology constant but allow the biological model coupling values to fluctuate in a restricted range around their calibrated values, thereby simulating random genetic mutations and representing variability in patient response. Using hypothesis-free Design of Experiments methods, the dopamine D4R-AMPA (alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) receptor coupling in cortical neurons was found to drive the beneficial effect of iloperidone, likely corresponding to the rs2513265 polymorphism upstream of the GRIA4 gene identified in a traditional pharmacogenomics analysis. The serotonin 5-HT3 receptor-mediated effect on interneuron gamma-aminobutyric acid conductance was identified as the process that moderately drove the differentiation of iloperidone versus ziprasidone. This paper suggests that reverse-engineered quantitative systems pharmacology is a powerful alternative tool for characterizing the underlying neurobiology of a responder population and possibly identifying new targets. © The Author(s) 2015.
Directory of Open Access Journals (Sweden)
Franceschini Barbara
2005-02-01
Full Text Available Abstract Background Modeling the complex development and growth of tumor angiogenesis using mathematics and biological data is a burgeoning area of cancer research. Architectural complexity is the main feature of every anatomical system, including organs, tissues, cells and sub-cellular entities. The vascular system is a complex network whose geometrical characteristics cannot be properly defined using the principles of Euclidean geometry, which is only capable of interpreting regular and smooth objects that are almost impossible to find in Nature. However, fractal geometry is a more powerful means of quantifying the spatial complexity of real objects. Methods This paper introduces the surface fractal dimension (Ds) as a numerical index of the two-dimensional (2-D) geometrical complexity of tumor vascular networks, and their behavior during computer-simulated changes in vessel density and distribution. Results We show that Ds significantly depends on the number of vessels and their pattern of distribution. This demonstrates that the quantitative evaluation of the 2-D geometrical complexity of tumor vascular systems can be useful not only to measure its complex architecture, but also to model its development and growth. Conclusions Studying the fractal properties of neovascularity induces reflections upon the real significance of the complex form of branched anatomical structures, in an attempt to define more appropriate methods of describing them quantitatively. This knowledge can be used to predict the aggressiveness of malignant tumors and design compounds that can halt the process of angiogenesis and influence tumor growth.
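The surface fractal dimension of a binary 2-D pattern is typically estimated by box counting. Below is a minimal sketch of that estimator; the paper applies it to vascular images, whereas here a filled plane of known dimension 2 is used purely as a sanity check.

```python
import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a 2-D binary pattern:
    the slope of log N(s) against log(1/s), where N(s) is the number
    of s-by-s boxes containing at least one foreground pixel."""
    counts = []
    for s in sizes:
        side = img.shape[0] // s * s            # crop to a multiple of s
        blocks = img[:side, :side].reshape(side // s, s, side // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check on a pattern of known dimension: a completely filled plane
# has Ds = 2; a real vascular network would fall between 1 and 2.
ds_est = box_count_dimension(np.ones((256, 256), dtype=bool))
print(round(ds_est, 2))  # → 2.0
```

On a segmented vessel image, the same function would return an intermediate value that rises with vessel density, which is the dependence the Results section reports.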
Sellbom, Martin; Arbisi, Paul A
2017-01-01
This special section considers 9 independent articles that seek to link the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008/2011) to contemporary models of psychopathology. Sellbom (this issue) maps the Specific Problems scales onto hierarchical psychopathology structures, whereas Romero, Toorabally, Burchett, Tarescavage, and Glassmire (this issue) and Shkalim, Almagor, and Ben-Porath (this issue) show evidence of linking the instrument's scales to diagnostic representations of common higher order psychopathology constructs. McCord, Achee, Cannon, Harrop, and Poynter (this issue) link the MMPI-2-RF scales to psychophysiological constructs inspired by the National Institute of Mental Health (NIMH) Research Domain Criteria. Sellbom and Smith (this issue) find support for MMPI-2-RF scale hypotheses in covering personality psychopathology in general, whereas Klein Haneveld, Kamphuis, Smid, and Forbey (this issue) and Kutchen et al. (this issue) demonstrate the utility of the MMPI-2-RF in capturing contemporary conceptualizations of the psychopathic personality. Finally, Franz, Harrop, and McCord (this issue) and Rogers et al. (this issue) map the MMPI-2-RF scales onto more specific transdiagnostic constructs reflecting interpersonal functioning and suicide behavior proneness, respectively.
Saeed, Aamer; Zaib, Sumera; Ashraf, Saba; Iftikhar, Javeria; Muddassar, Muhammad; Zhang, Kam Y J; Iqbal, Jamshed
2015-12-01
Alzheimer's disease is among the most widespread neurodegenerative disorders. Cholinesterases (ChEs) play an indispensable role in the control of cholinergic transmission, and thus the acetylcholine level in the brain is enhanced by inhibition of ChEs. Coumarin-linked thiourea derivatives were designed, synthesized and evaluated biologically in order to determine their inhibitory activity against acetylcholinesterase (AChE) and butyrylcholinesterase (BChE). The synthesized coumarin-linked thiourea compounds showed potential inhibitory activity against AChE and BChE. Among all the synthesized compounds, 1-(2-Oxo-2H-chromene-3-carbonyl)-3-(3-chlorophenyl)thiourea (2e) was the most potent inhibitor against AChE with an IC50 value of 0.04±0.01μM, while 1-(2-Oxo-2H-chromene-3-carbonyl)-3-(2-methoxyphenyl)thiourea (2b) showed the most potent inhibitory activity with an IC50 value of 0.06±0.02μM against BChE. Molecular docking simulations were performed using homology models of both cholinesterases in order to explore the probable binding modes of the inhibitors. The results show that the novel coumarin-linked thiourea derivatives are potential candidates for development into potent and efficacious AChE and BChE inhibitors.
Application of cross-linked and hydrolyzed arabinoxylans in baking of model rye bread.
Buksa, Krzysztof; Nowotna, Anna; Ziobro, Rafał
2016-02-01
The role of water extractable arabinoxylan with varying molar mass and structure (cross-linked vs. hydrolyzed) in the structure formation of rye bread was examined using a model bread. Instead of the normal flour, the dough contained starch, arabinoxylan and protein, which were isolated from rye wholemeal. It was observed that the applied mixes of these constituents result in a product closely resembling typical rye bread, even if arabinoxylan was modified (by cross-linking or hydrolysis). The levels of arabinoxylan required for bread preparation depended on its modification and mix composition. At 3% protein, the maximum applicable level of poorly soluble cross-linked arabinoxylan was 3%, as higher amounts of this preparation resulted in an extensively viscous dough and diminished bread volume. On the other hand highly soluble, hydrolyzed arabinoxylan could be used at a higher level (6%) together with larger amounts of rye protein (3% or 6%). Further addition of arabinoxylan leads to excessive water absorption, resulting in a decreased viscosity of the dough during baking and insufficient gas retention. Copyright © 2015 Elsevier Ltd. All rights reserved.
An atomistic model for cross-linked HNBR elastomers used in seals
Molinari, Nicola; Sutton, Adrian; Stevens, John; Mostofi, Arash
2015-03-01
Hydrogenated nitrile butadiene rubber (HNBR) is one of the most common elastomeric materials used for seals in the oil and gas industry. These seals sometimes suffer "explosive decompression," a costly problem in which gases permeate a seal at the elevated temperatures and pressures pertaining in oil and gas wells, leading to rupture when the seal is brought back to the surface. The experimental evidence that HNBR and its unsaturated parent NBR have markedly different swelling properties suggests that cross-linking may occur during hydrogenation of NBR to produce HNBR. We have developed a code compatible with the LAMMPS molecular dynamics package to generate fully atomistic HNBR configurations by hydrogenating initial NBR structures. This can be done with any desired degree of cross-linking. The code uses a model of atomic interactions based on the OPLS-AA force field. We present calculations of the dependence of a number of bulk properties on the degree of cross-linking. Using our atomistic representations of HNBR and NBR, we hope to develop a better molecular understanding of the mechanisms that result in explosive decompression.
Wen, Bin; Evans, David A. D.; Li, Yong-Xiang
2017-01-01
Recent reconstructions of the Rodinia supercontinent and its breakup incorporate South China as a "missing link" between Australia and Laurentia, and place the Tarim craton adjacent to northwestern Australia on the supercontinent's periphery. However, subsequent kinematic evolution toward Gondwana amalgamation requires complex geometric shuffling between South China and Tarim, which cannot be easily resolved with the stratigraphic records of those blocks. Here we present new paleomagnetic data from early Ediacaran strata of northwest Tarim, and document large-scale rotation at near-constant paleolatitudes during Cryogenian time. The rotation is coeval with Rodinia breakup, and Tarim's paleolatitudes are compatible with its placement between Australia and Laurentia, either by itself as an alternative "missing link" or joined with South China in that role. At the same time, indications of subduction-related magmatism in Tarim's Neoproterozoic record suggest that Rodinia breakup was dynamically linked to subduction retreat along its northern margin. Such a model is akin to the early stages of Jurassic fragmentation within southern Gondwana, and implies more complicated subduction-related dynamics of supercontinent breakup than superplume impingement alone.
LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (IBM PC VERSION)
Manning, R. M.
1994-01-01
The frequency and intensity of rain attenuation affecting the communication between a satellite and an earth terminal is an important consideration in planning satellite links. The NASA Lewis Research Center Satellite Link Attenuation Model Program (LeRC-SLAM) provides a static and dynamic statistical assessment of the impact of rain attenuation on a communications link established between an earth terminal and a geosynchronous satellite. The program is designed for use in the specification, design and assessment of satellite links for any terminal location in the continental United States. The basis for LeRC-SLAM is the ACTS Rain Attenuation Prediction Model, which uses a log-normal cumulative probability distribution to describe the random process of rain attenuation on satellite links. The derivation of the statistics for the rainrate process at the specified terminal location relies on long term rainfall records compiled by the U.S. Weather Service during time periods of up to 55 years in length. The theory of extreme value statistics is also utilized. The user provides 1) the longitudinal position of the satellite in geosynchronous orbit, 2) the geographical position of the earth terminal in terms of latitude and longitude, 3) the height above sea level of the terminal site, 4) the yearly average rainfall at the terminal site, and 5) the operating frequency of the communications link (within 1 to 1000 GHz, inclusive). Based on the yearly average rainfall at the terminal location, LeRC-SLAM calculates the relevant rain statistics for the site using an internal data base. The program then generates rain attenuation data for the satellite link. This data includes a description of the static (i.e., yearly) attenuation process, an evaluation of the cumulative probability distribution for attenuation effects, and an evaluation of the probability of fades below selected fade depths. In addition, LeRC-SLAM calculates the elevation and azimuth angles of the terminal
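The log-normal fade statistics underlying LeRC-SLAM can be illustrated with a short calculation. This is a sketch of the generic log-normal exceedance formula only, not the ACTS model itself; the median attenuation and the sigma value below are hypothetical link parameters.

```python
import math

def exceedance_probability(fade_db, median_db, sigma_ln):
    """P(attenuation > fade_db) when attenuation is log-normal, i.e.
    ln(A) ~ Normal(ln(median_db), sigma_ln^2)."""
    z = (math.log(fade_db) - math.log(median_db)) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical link: 1 dB median rain attenuation, sigma_ln = 1.0.
for depth in (1.0, 3.0, 10.0):
    print(depth, round(exceedance_probability(depth, 1.0, 1.0), 4))
```

Evaluating this function over a grid of fade depths gives the "probability of fades below selected fade depths" table that the program reports, once the site-specific rain statistics fix the median and sigma.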
Quantitative assessment of changes in landslide risk using a regional scale run-out model
Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone
2015-04-01
The risk of landslide hazard continuously changes in time and space and is rarely a static or constant phenomenon in an affected area. However, one of the main challenges of quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires the modeling of the landslide intensity (e.g. flow depth, velocities or impact pressures) affecting the elements at risk; such a quantitative approach is often lacking in medium to regional scale studies in the scientific literature, or is left out altogether. In this research we modelled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the North Eastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional scale model. In order to determine debris flow intensities, we used a linear relationship that was found between back-calibrated, physically based Flo-2D simulations (local scale models of five debris flows from 2003) and the probability values of the Flow-R software. This gave us the possibility to assign flow depth to a total of 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature, and one curve derived specifically for our case study area, were used to determine the damage for different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) with the building area and number of floors
Barua, Bobby; Islam, Md Rezwan
2012-01-01
Free space optics (FSO) is a promising solution to the need for very high data rate point-to-point communication. FSO communication technology became popular due to its large bandwidth potential, unlicensed spectrum, excellent security and quick and inexpensive setup. Unfortunately, atmospheric turbulence-induced fading is one of the main impairments affecting FSO communications. To design a high performance communication link for the atmospheric FSO channel, it is of great importance to characterize the channel with a proper model. In this paper, Q-ary PPM modulation across lasers, with intensity modulation and ideal photodetectors, is assumed in order to investigate the most efficient PDF models for FSO communication under turbulent conditions. The performance results are evaluated in terms of symbol error probability (SEP) for different types of channel models, and the simulation results confirm the analytical findings.
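A minimal Monte-Carlo estimate of the SEP for Q-ary PPM with ideal photon-counting detection can be sketched as follows. This is an assumption-laden toy, not the paper's analysis: the log-normal fading, the Poisson detection statistics, and every parameter value are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def ppm_sep(q_ary=4, mean_signal=20.0, mean_bg=1.0, sigma_ln=0.3, trials=20000):
    """Monte-Carlo symbol error probability for Q-ary PPM with ideal
    photon-counting detection under log-normal intensity fading.
    All parameter values here are illustrative, not from the paper."""
    # Unit-mean log-normal fade scales the signal intensity each symbol.
    fade = rng.lognormal(mean=-sigma_ln ** 2 / 2, sigma=sigma_ln, size=trials)
    errors = 0
    for f in fade:
        counts = rng.poisson(mean_bg, q_ary)       # background counts, all slots
        counts[0] += rng.poisson(mean_signal * f)  # signal sent in slot 0
        # argmax resolves ties in favor of slot 0 (slightly optimistic).
        if np.argmax(counts) != 0:
            errors += 1
    return errors / trials

sep = ppm_sep()
print(sep)
```

Swapping the fade distribution for other candidate channel PDFs and re-running is the comparison the abstract's simulations perform.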
Energy Technology Data Exchange (ETDEWEB)
Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.
1998-12-01
The study comprised four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.
Directory of Open Access Journals (Sweden)
Xian Xu
2015-01-01
Full Text Available A three-dimensional tensegrity structure is used as a computational model for cross-linked actin networks. The postbuckling behavior of the members under compression is considered, and the constitutive relation of the postbuckling members is modeled as a second-order polynomial. A numerical scheme incorporating the equivalent constitutive relation of the postbuckling members is used to predict the structural response of the tensegrity model under compression loads. The numerical simulation shows that the stiffness of the tensegrity structure increases nonlinearly before member buckling and abruptly decreases to a lower level as soon as members buckle. This result qualitatively mimics the experimentally observed stiffness versus compression stress response of cross-linked actin networks. In order to take member length variety into account, a large number of simulations with the length of buckling members varying in the given range were also carried out. It is found that the mean response of the simulations using different buckling member lengths bears closer resemblance to the experimental observation.
Probabilistic Model for Free-Space Optical Links Under Continental Fog Conditions
Directory of Open Access Journals (Sweden)
Marzuki
2010-09-01
Full Text Available The error characteristics of a free-space optical (FSO) channel are significantly different from those of fiber-based optical links and thus require a deep physical understanding of the propagation channel. In particular, different fog conditions greatly influence optical transmissions, and thus a channel model is required to estimate the detrimental fog effects. In this paper we present a probabilistic model for radiation fog from data measured over an 80 m FSO link installed at Graz, Austria. The fog events are classified into thick fog, moderate fog, light fog and general fog based on the international code of visibility range. We applied several probability distribution functions (PDFs), such as the Kumaraswamy, Johnson SB and Logistic distributions, to the actual measured optical attenuations. The performance of each distribution is evaluated by Q-Q and P-P plots. It is found that the Kumaraswamy distribution is the best fit for general fog, while the Logistic distribution is the optimum choice for thick fog. On the other hand, the Johnson SB distribution best fits the moderate and light fog related attenuation data. The difference in these probabilistic models, and the resultant variation in the received signal strength under different fog types, needs to be considered in designing an efficient FSO system.
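Fitting a candidate distribution to attenuation data, as done above for thick fog, can be sketched with a simple method-of-moments Logistic fit. The data below are synthetic stand-ins (the Graz measurements are not reproduced in the abstract), and method of moments is a simplified substitute for the paper's fitting and Q-Q/P-P evaluation.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in for measured thick-fog attenuation in dB.
atten = rng.logistic(loc=30.0, scale=4.0, size=5000)

# Method-of-moments fit of a Logistic distribution: the sample mean
# estimates loc, and std = scale * pi / sqrt(3) gives the scale estimate.
loc_hat = atten.mean()
scale_hat = atten.std() * np.sqrt(3.0) / np.pi
print(round(loc_hat, 1), round(scale_hat, 2))
```

With real data one would fit each candidate PDF this way (or by maximum likelihood) and compare empirical versus model quantiles, which is what the Q-Q and P-P plots in the paper assess.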
Linking Simple Economic Theory Models and the Cointegrated Vector AutoRegressive Model
DEFF Research Database (Denmark)
Møller, Niels Framroze
This paper attempts to clarify the connection between simple economic theory models and the approach of the Cointegrated Vector-Auto-Regressive model (CVAR). By considering (stylized) examples of simple static equilibrium models, it is illustrated in detail how the theoretical model and its structure can be linked to the CVAR. It is also demonstrated how controversial hypotheses such as Rational Expectations can be formulated directly as restrictions on the CVAR parameters. A simple example of a "Neoclassical synthetic" AS-AD model is also formulated, and the partial versus general equilibrium distinction is related to the CVAR as well. Further fundamental extensions and advances to more sophisticated theory models, such as those related to dynamics and expectations (in the structural relations), are left for future papers.
Modelling and Quantitative Analysis of LTRACK–A Novel Mobility Management Algorithm
Directory of Open Access Journals (Sweden)
Benedek Kovács
2006-01-01
Full Text Available This paper discusses the improvements and parameter optimization issues of LTRACK, a recently proposed mobility management algorithm. Mathematical modelling of the algorithm and of the behavior of the Mobile Node (MN) is used to optimize the parameters of LTRACK, and a numerical method is given to determine their optimal values. Markov chains are used to model both the base algorithm and the so-called loop removal effect. An extended qualitative and quantitative analysis is carried out to compare LTRACK to existing handover mechanisms such as MIP, Hierarchical Mobile IP (HMIP), Dynamic Hierarchical Mobility Management Strategy (DHMIP), Telecommunication Enhanced Mobile IP (TeleMIP), Cellular IP (CIP) and HAWAII. LTRACK is sensitive to network topology and MN behavior, so MN movement modelling is also introduced and discussed with different topologies. The techniques presented here can be used to model not only the LTRACK algorithm but other algorithms too. Extensive discussions and calculations support the mathematical model and show that it is adequate in many cases. The model is valid on various network levels, scales vertically across the ISO-OSI layers and also scales well with the number of network elements.
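The Markov-chain machinery behind such mobility analyses reduces to computing stationary state probabilities. Here is a minimal sketch on a toy 3-state chain; the actual LTRACK transition structure is not reproduced in the abstract, so the matrix below is purely illustrative.

```python
import numpy as np

# A toy 3-state chain standing in for MN mobility/location-update states.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# Stationary distribution: solve pi P = pi together with sum(pi) = 1,
# stacked as an overdetermined linear system.
A = np.vstack([P.T - np.eye(3), np.ones((1, 3))])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(pi, 3))
```

Expected costs such as signalling load per handover are then weighted sums over this stationary distribution, which is how parameter optimization of an algorithm like LTRACK proceeds.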
Abram, Sean R; Hodnett, Benjamin L; Summers, Richard L; Coleman, Thomas G; Hester, Robert L
2007-06-01
We have developed Quantitative Circulatory Physiology (QCP), a mathematical model of integrative human physiology containing over 4,000 variables of biological interactions. This model provides a teaching environment that mimics clinical problems encountered in the practice of medicine. The model structure is based on documented physiological responses within peer-reviewed literature and serves as a dynamic compendium of physiological knowledge. The model is solved using a desktop, Windows-based program, allowing students to calculate time-dependent solutions and interactively alter over 750 parameters that modify physiological function. The model can be used to understand proposed mechanisms of physiological function and the interactions among physiological variables that may not be otherwise intuitively evident. In addition to open-ended or unstructured simulations, we have developed 30 physiological simulations, including heart failure, anemia, diabetes, and hemorrhage. Additional simulations include 29 patients in which students are challenged to diagnose the pathophysiology based on their understanding of integrative physiology. In summary, QCP allows students to examine, integrate, and understand a host of physiological factors without causing harm to patients. This model is available as a free download for Windows computers at http://physiology.umc.edu/themodelingworkshop.
An integrated qualitative and quantitative modeling framework for computer‐assisted HAZOP studies
DEFF Research Database (Denmark)
Wu, Jing; Zhang, Laibin; Hu, Jinqiu
2014-01-01
and validated on a case study concerning a three‐phase separation process. The multilevel flow modeling (MFM) methodology is used to represent the plant goals and functions. First, means‐end analysis is used to identify and formulate the intention of the process design in terms of components, functions...... safety critical operations, its causes and consequences. The outcome is a qualitative hazard analysis of selected process deviations from normal operations and their consequences as input to a traditional HAZOP table. The list of unacceptable high risk deviations identified by the qualitative HAZOP...... analysis is used as input for rigorous analysis and evaluation by the quantitative analysis part of the framework. To this end, dynamic first‐principles modeling is used to simulate the system behavior and thereby complement the results of the qualitative analysis part. The practical framework for computer...
Formal modeling and quantitative evaluation for information system survivability based on PEPA
Institute of Scientific and Technical Information of China (English)
WANG Jian; WANG Hui-qiang; ZHAO Guo-sheng
2008-01-01
Survivability should be considered beyond security for information systems. To assess system survivability accurately and to guide its improvement, a formal modeling and analysis method based on stochastic process algebra is proposed in this article. By abstracting the interactive behaviors between intruders and the information system, a state-transition graph oriented to survivability is constructed. On that basis, parameters are defined and system behaviors are characterized precisely with performance evaluation process algebra (PEPA), simultaneously considering the influence of different attack modes. Ultimately, the formal model for survivability is established, and quantitative analysis results are obtained with the PEPA Workbench tool. Simulation experiments show the effectiveness and feasibility of the developed method, which can help direct the design of survivable systems.
Quantitative model for the generic 3D shape of ICMEs at 1 AU
Démoulin, P; Masías-Meza, J J; Dasso, S
2016-01-01
Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs) while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, so along a 1D cut. As such, the data only give a partial view of their 3D structures. By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. In a first approach we theoretically obtain the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compare their compatibility with observed distributions. In a second approach we use the shock normal and the flux rope axis orientations, as well as the impact parameter, to provide statistical information across the spacecraft trajectory. The study of different 3D shock models shows that the observations are compatible with a ...
Model exploration and analysis for quantitative safety refinement in probabilistic B
Ndukwu, Ukachukwu; 10.4204/EPTCS.55.7
2011-01-01
The role played by counterexamples in standard system analysis is well known; but less common is a notion of counterexample in probabilistic systems refinement. In this paper we extend previous work using counterexamples to inductive invariant properties of probabilistic systems, demonstrating how they can be used to extend the technique of bounded model checking-style analysis for the refinement of quantitative safety specifications in the probabilistic B language. In particular, we show how the method can be adapted to cope with refinements incorporating probabilistic loops. Finally, we demonstrate the technique on pB models summarising a one-step refinement of a randomised algorithm for finding the minimum cut of undirected graphs, and on one summarising the dependability analysis of a controller design.
Hasegawa, K; Funatsu, K
2000-01-01
Quantitative structure-activity relationship (QSAR) studies based on chemometric techniques are reviewed. Partial least squares (PLS) is introduced as a robust method to replace classical methods such as multiple linear regression (MLR). Advantages of PLS compared to MLR are illustrated with typical applications. The genetic algorithm (GA) is a novel optimization technique that can be used as a search engine in variable selection. A hybrid approach comprising GA and PLS for variable selection developed in our group (GAPLS) is described. A more advanced method for comparative molecular field analysis (CoMFA) modeling, called GA-based region selection (GARGS), is described as well. Applications of GAPLS and GARGS to QSAR and 3D-QSAR problems are shown with some representative examples. GA can be hybridized with nonlinear modeling methods such as artificial neural networks (ANN) to provide useful tools for chemometrics and QSAR.
Estimation of financial loss ratio for E-insurance:a quantitative model
Institute of Scientific and Technical Information of China (English)
钟元生; 陈德人; 施敏华
2002-01-01
In view of the risk of E-commerce and the response of the insurance industry to it, this paper is aimed at one important point of insurance, namely the estimation of the financial loss ratio, which is one of the most difficult problems facing the E-insurance industry. This paper proposes a quantitative model for estimating the E-insurance financial loss ratio. The model is based on gross income per enterprise and the CSI/FBI computer crime and security survey. The analysis results presented are reasonable and valuable for both insurer and insured and thus can be accepted by both of them. We must point out that, under our assumptions, the financial loss ratio varied very little (0.233% in 1999 and 0.236% in 2000), although there was much variation in the main data of the CSI/FBI survey.
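The kind of loss-ratio arithmetic this abstract describes is simple to sketch. All figures below are invented placeholders (chosen only so the ratio lands near the 0.236% the abstract reports), not the paper's survey data.

```python
# Hedged sketch: expected annual financial loss per enterprise divided by
# gross income per enterprise. Every number here is a made-up placeholder.
gross_income_per_enterprise = 50_000_000   # e.g. USD per year (assumed)
prob_incident = 0.4                        # share of firms reporting a loss (assumed)
mean_loss_given_incident = 295_000         # mean reported loss, USD (assumed)

expected_loss = prob_incident * mean_loss_given_incident
loss_ratio = expected_loss / gross_income_per_enterprise  # -> 0.00236, i.e. 0.236%
```
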
Institute of Scientific and Technical Information of China (English)
H.P. Zhu; Z.Y. Zhou; Q.F. Hou; A.B. YU
2011-01-01
Two approaches are widely used to describe particle systems: the continuum approach at the macroscopic scale and the discrete approach at the particle scale. Each has its own advantages and disadvantages in the modelling of particle systems. It is of paramount significance to develop a theory to overcome the disadvantages of the two approaches. The averaging method to link the discrete to the continuum approach is a potential technique to develop such a theory. This paper introduces an averaging method, including the theory and its application to the particle flow in a hopper and the particle-fluid flow in an ironmaking blast furnace.
Modeling channel interference in an orbital angular momentum-multiplexed laser link
Anguita, Jaime A.; Neifeld, Mark A.; Vasic, Bane V.
2009-08-01
We study the effects of optical turbulence on the energy crosstalk among constituent orbital angular momentum (OAM) states in a vortex-based multi-channel laser communication link and determine channel interference in terms of turbulence strength and OAM state separation. We characterize the channel interference as a function of the refractive-index structure parameter Cn2 and the transmitted OAM state, and propose probability models to predict the random fluctuations in the received signals for such an architecture. Simulations indicate that turbulence-induced channel interference is mutually correlated across receive channels.
Study of the linked dipole chain model in heavy quark production at the Tevatron
Energy Technology Data Exchange (ETDEWEB)
Lipatov, Artem V. [Physical Department, M.V. Lomonosov Moscow State University, Moscow (Russian Federation)]. E-mail: lipatov@theory.sinp.msu.ru; Leif Loennblad [Dept. of Theoretical Physics, Lund (Sweden)]. E-mail: Leif.Lonnblad@thep.lu.se; Zotov, Nikolai P. [D.V. Skobeltsyn Institute of Nuclear Physics, M.V. Lomonosov Moscow State University, Moscow (Russian Federation)]. E-mail: zotov@theory.sinp.msu.ru
2004-01-01
We present calculations of charm and beauty production at the Tevatron within the framework of k{sub T}-factorization, using the unintegrated gluon distributions obtained from the Linked Dipole Chain model. The analysis covers transverse momentum and rapidity distributions and the azimuthal correlations between b and b-bar quarks (or rather the muons from their decays), which are powerful tests for the different unintegrated gluon distributions. We compare the theoretical results with recent experimental data taken by the D0 and CDF collaborations at the Tevatron Run I and II. (author)
Supporting Consistency in Linked Specialized Engineering Models Through Bindings and Updating
Institute of Scientific and Technical Information of China (English)
Albertus H. Olivier; Gert C. van Rooyen; Berthold Firmenich; Karl E. Beucke
2008-01-01
Currently, some commercial software applications support users to work in an integrated environment. However, this is limited to the suite of models provided by the software vendor, and consequently it forces all the parties to use the same software. In contrast, the research described in this paper investigates ways of using standard software applications, which may be specialized for different professional domains. These are linked for effective transfer of information, and a binding mechanism is provided to support consistency. The proposed solution was implemented using a CAD application and an independent finite element application in order to verify the theoretical aspects of this work.
Performance evaluation of generalized M-modeled atmospheric optical communications links
DEFF Research Database (Denmark)
Lopez-Gonzalez, Francisco J.; Garrido-Balsells, José María; Jurado-Navas, Antonio
2016-01-01
In this paper, the performance of atmospheric optical communications links is analyzed in terms of the average bit error rate. To this end, the optical irradiance scintillation due to turbulence effects is modeled by a generalization of the Málaga or M distribution. In particular, the behavior of the atmospheric optical channel is treated as a superposition of a finite number of Generalized-K distributed sub-channels, controlled by a discrete Negative-Binomial distribution dependent on the turbulence parameters. Unlike other studies, here, the closed-form mathematical expressions......
Directory of Open Access Journals (Sweden)
Cağlar Melda
2006-02-01
Full Text Available Abstract Background: The aim of the present study is to investigate the short- and long-term histopathological alterations caused by submucosal injection of glutaraldehyde cross-linked bovine collagen in an experimental rat model. Methods: Sixty Sprague-Dawley rats were assigned to two groups, group I and II, each containing 30 rats. 0.1 ml of saline solution and 0.1 ml of glutaraldehyde cross-linked bovine collagen were injected into the bladder submucosa of the first (control) and second groups, respectively. Both groups I and II were further subdivided into three groups, Groups IA, IB, IC and Groups IIA, IIB, IIC, according to the sacrifice period. Groups IA and IIA, IB and IIB, and IC and IIC (10 rats each) were sacrificed 3, 6, and 12 months after the surgical procedure, respectively. Two slides prepared from the injection site of the bladder were evaluated completely for each rat, at random and blinded to group assignment, by two independent senior pathologists to determine fibroblast invasion, collagen formation, capillary ingrowth and inflammatory reaction. Additionally, randomized brain sections from each rat were also examined to detect migration of the injection material. The measurements were made using an ocular micrometer at ×10 magnification. The results were assessed using t-tests for paired and independent samples. Results: Migration to the brain was not detected in any group. Significant histopathological changes in the glutaraldehyde cross-linked bovine collagen injected groups were fibroblast invasion in 93.3%, collagen formation in 73.3%, capillary ingrowth in 46.6%, and inflammatory reaction in 20%. Conclusion: We emphasize that the usage of glutaraldehyde cross-linked bovine collagen in children appears to be safe for endoscopic treatment of vesicoureteral reflux.
Directory of Open Access Journals (Sweden)
Claude Gérard
2014-01-01
Full Text Available Recently, a molecular pathway linking inflammation to cell transformation has been discovered. This molecular pathway rests on a positive inflammatory feedback loop between NF-κB, Lin28, Let-7 microRNA and IL6, which leads to an epigenetic switch allowing cell transformation. A transient activation of an inflammatory signal, mediated by the oncoprotein Src, activates NF-κB, which elicits the expression of Lin28. Lin28 decreases the expression of Let-7 microRNA, which results in higher level of IL6 than achieved directly by NF-κB. In turn, IL6 can promote NF-κB activation. Finally, IL6 also elicits the synthesis of STAT3, which is a crucial activator for cell transformation. Here, we propose a computational model to account for the dynamical behavior of this positive inflammatory feedback loop. By means of a deterministic model, we show that an irreversible bistable switch between a transformed and a non-transformed state of the cell is at the core of the dynamical behavior of the positive feedback loop linking inflammation to cell transformation. The model indicates that inhibitors (tumor suppressors or activators (oncogenes of this positive feedback loop regulate the occurrence of the epigenetic switch by modulating the threshold of inflammatory signal (Src needed to promote cell transformation. Both stochastic simulations and deterministic simulations of a heterogeneous cell population suggest that random fluctuations (due to molecular noise or cell-to-cell variability are able to trigger cell transformation. Moreover, the model predicts that oncogenes/tumor suppressors respectively decrease/increase the robustness of the non-transformed state of the cell towards random fluctuations. Finally, the model accounts for the potential effect of competing endogenous RNAs, ceRNAs, on the dynamics of the epigenetic switch. Depending on their microRNA targets, the model predicts that ceRNAs could act as oncogenes or tumor suppressors by regulating
Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.
Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong
2015-05-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case.
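The three multivariate test statistics named in this abstract (Pillai-Bartlett trace, Hotelling-Lawley trace, Wilks's Lambda) are all simple functions of the eigenvalues of E⁻¹H, where H and E are the hypothesis and error SSCP matrices. The sketch below computes them for a one-way MANOVA on synthetic data; the genotype coding and effect sizes are invented toy stand-ins for a pleiotropy analysis, not the paper's method or data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: 90 subjects, genotype groups 0/1/2, three quantitative
# traits sharing a genetic effect (a toy stand-in for pleiotropy).
g = rng.integers(0, 3, size=90)
effect = np.array([0.8, 0.5, -0.6])          # per-trait genotype effect (assumed)
Y = g[:, None] * effect + rng.normal(size=(90, 3))

# One-way MANOVA: between-group (H) and within-group (E) SSCP matrices.
grand = Y.mean(axis=0)
H = np.zeros((3, 3))
E = np.zeros((3, 3))
for k in np.unique(g):
    Yk = Y[g == k]
    d = (Yk.mean(axis=0) - grand)[:, None]
    H += len(Yk) * d @ d.T
    E += (Yk - Yk.mean(axis=0)).T @ (Yk - Yk.mean(axis=0))

# The classical statistics from the eigenvalues of E^{-1} H.
lam = np.linalg.eigvals(np.linalg.solve(E, H)).real
pillai = np.sum(lam / (1 + lam))          # Pillai-Bartlett trace
hotelling = np.sum(lam)                   # Hotelling-Lawley trace
wilks = np.prod(1 / (1 + lam))            # Wilks's Lambda
```

Each statistic is then referred to an approximate F distribution to obtain a p-value, which is the step the abstract's tests generalize to functional genetic-variant data.
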
A quantitative quasispecies theory-based model of virus escape mutation under immune selection.
Woo, Hyung-June; Reifman, Jaques
2012-08-07
Viral infections involve a complex interplay of the immune response and escape mutation of the virus quasispecies inside a single host. Although fundamental aspects of such a balance of mutation and selection pressure have been established by the quasispecies theory decades ago, its implications have largely remained qualitative. Here, we present a quantitative approach to model the virus evolution under cytotoxic T-lymphocyte immune response. The virus quasispecies dynamics are explicitly represented by mutations in the combined sequence space of a set of epitopes within the viral genome. We stochastically simulated the growth of a viral population originating from a single wild-type founder virus and its recognition and clearance by the immune response, as well as the expansion of its genetic diversity. Applied to the immune escape of a simian immunodeficiency virus epitope, model predictions were quantitatively comparable to the experimental data. Within the model parameter space, we found two qualitatively different regimes of infectious disease pathogenesis, each representing alternative fates of the immune response: It can clear the infection in finite time or eventually be overwhelmed by viral growth and escape mutation. The latter regime exhibits the characteristic disease progression pattern of human immunodeficiency virus, while the former is bounded by maximum mutation rates that can be suppressed by the immune response. Our results demonstrate that, by explicitly representing epitope mutations and thus providing a genotype-phenotype map, the quasispecies theory can form the basis of a detailed sequence-specific model of real-world viral pathogens evolving under immune selection.
Wang, Ling; Zhao, Geng-xing; Zhu, Xi-cun; Lei, Tong; Dong, Fang
2010-10-01
Hyperspectral techniques have become a basis of quantitative remote sensing. The hyperspectrum of an apple tree canopy at the prosperous fruit stage carries the complex information of fruits, leaves, stocks, soil and reflecting films, and is mostly affected by the component features of the canopy at this stage. First, the hyperspectra of 18 sample apple trees with reflecting films were compared with those of 44 trees without reflecting films. The impact of reflecting films on reflectance was obvious, so the sample trees with ground reflecting films were analyzed separately from those without. Secondly, nine indexes of canopy components were built based on classified digital photos of the 44 apple trees without ground films. Thirdly, the correlation between the nine indexes and canopy reflectance, including several kinds of converted data, was analyzed. The results showed that reflectance correlated best with the ratio of fruit to leaf, with a maximum coefficient of 0.815, and the correlation with the ratio of leaf was slightly better than that with the density of fruit. Then, models based on correlation analysis, linear regression, BP neural networks and support vector regression were built to describe the quantitative relationship between hyperspectral reflectance and the ratio of fruit to leaf, using the DPS and LIBSVM software packages. All four models in the 611-680 nm characteristic band were feasible for prediction, while the BP neural network and support vector regression models were more accurate than one-variable and multi-variable linear regression, and the support vector regression model was the most accurate. This study serves as a reliable theoretical reference for the yield estimation of apples based on remote sensing data.
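The best-performing model family named in this abstract, support vector regression on band reflectances, can be sketched as follows. The data are synthetic (a made-up linear link between a fruit-to-leaf ratio and 20 reflectance channels), so this only illustrates the modelling step, not the paper's results.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
# Synthetic stand-in: 120 canopies, 20 reflectance channels in a 611-680 nm
# band, loosely driven by a fruit-to-leaf ratio. None of this is real data.
ratio = rng.uniform(0.1, 1.0, size=120)
X = 0.3 + 0.4 * ratio[:, None] + 0.02 * rng.normal(size=(120, 20))
X_tr, X_te, y_tr, y_te = train_test_split(X, ratio, test_size=0.25,
                                          random_state=0)

# RBF-kernel support vector regression, as used (via LIBSVM) in the study.
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))
```
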
Linking market interaction intensity of 3D Ising type financial model with market volatility
Fang, Wen; Ke, Jinchuan; Wang, Jun; Feng, Ling
2016-11-01
Microscopic interaction models in physics have been used to investigate the complex phenomena of economic systems. The simple interactions involved can lead to complex behaviors and help the understanding of mechanisms in the financial market at a systemic level. This article aims to develop a financial time series model through a 3D (three-dimensional) Ising dynamic system, which is widely used as an interacting-spins model to explain ferromagnetism in physics. Through Monte Carlo simulations of the financial model and numerical analysis of both the simulated return time series and historical return data of the Hushen 300 (HS300) index in the Chinese stock market, we show that despite its simplicity, this model displays stylized facts similar to those seen in real financial markets. We demonstrate a possible underlying link between the volatility fluctuations of the real stock market and changes in the interaction strengths of market participants in the financial model. In particular, the stochastic interaction strength in our model suggests that the real market may be consistently operating near the critical point of the system.
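The core machinery of such Ising-type market models is a Metropolis Monte Carlo update on a 3D spin lattice, with aggregate spin imbalance mapped to price changes. The sketch below uses a small lattice and an illustrative temperature; the mapping from magnetization changes to "returns" is the usual toy convention, not the paper's specific construction.

```python
import numpy as np

rng = np.random.default_rng(2)
L, T = 6, 4.6  # lattice side; temperature near the 3D Ising Tc (~4.51 J/kB)
spins = rng.choice([-1, 1], size=(L, L, L))

def sweep(spins, beta):
    """One Metropolis sweep over the 3D lattice with periodic boundaries."""
    n = spins.shape[0]
    for _ in range(n ** 3):
        i, j, k = rng.integers(0, n, size=3)
        nb = (spins[(i + 1) % n, j, k] + spins[(i - 1) % n, j, k]
              + spins[i, (j + 1) % n, k] + spins[i, (j - 1) % n, k]
              + spins[i, j, (k + 1) % n] + spins[i, j, (k - 1) % n])
        dE = 2 * spins[i, j, k] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j, k] *= -1

# In Ising-type market models the buy/sell imbalance of agents drives price
# changes; here the change in mean magnetization serves as a toy return.
mags = []
for _ in range(60):
    sweep(spins, beta=1.0 / T)
    mags.append(spins.mean())
returns = np.diff(mags)
```
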
Liu, L.; Hu, J.; Zhou, Q.
2016-12-01
The rapid accumulation of geophysical and geological data sets poses an increasing demand for the development of geodynamic models to better understand the evolution of the solid Earth. Consequently, the earlier qualitative physical models are no longer satisfactory. Recent efforts are focusing on more quantitative simulations and more efficient numerical algorithms. Among these, a particular line of research is the implementation of data-oriented geodynamic modeling, with the purpose of building an observationally consistent and physically correct geodynamic framework. Such models can catalyze new insights into the functioning mechanisms of the various aspects of plate tectonics, and their predictive nature can also guide future research in a deterministic fashion. Over the years, we have been constructing large-scale geodynamic models with both sequential and variational data assimilation techniques. These models act as a bridge between different observational records, and the superposition of the constraining power of different data sets helps reveal unknown processes and mechanisms of the dynamics of the mantle and lithosphere. We simulate the post-Cretaceous subduction history in South America using a forward (sequential) approach. The model is constrained using past subduction history, seafloor age evolution, the tectonic architecture of continents, and present-day geophysical observations. Our results quantify the various driving forces shaping the present South American flat slabs, which we found are all internally torn. The 3-D geometry of these torn slabs further explains the abnormal seismicity pattern and enigmatic volcanic history. An inverse (variational) model simulating the late Cenozoic western U.S. mantle dynamics with similar constraints reveals a mechanism for the formation of Yellowstone-related volcanism that differs from the traditional understanding. Furthermore, important insights on the mantle density and viscosity structures
Unbiased Quantitative Models of Protein Translation Derived from Ribosome Profiling Data.
Directory of Open Access Journals (Sweden)
Alexey A Gritsenko
2015-08-01
Full Text Available Translation of RNA to protein is a core process for any living organism. While for some steps of this process the effect on protein production is understood, a holistic understanding of translation still remains elusive. In silico modelling is a promising approach for elucidating the process of protein synthesis. Although a number of computational models of the process have been proposed, their application is limited by the assumptions they make. Ribosome profiling (RP), a relatively new sequencing-based technique capable of recording snapshots of the locations of actively translating ribosomes, is a promising source of information for deriving unbiased data-driven translation models. However, quantitative analysis of RP data is challenging due to high measurement variance and the inability to discriminate between the number of ribosomes measured on a gene and their speed of translation. We propose a solution in the form of a novel multi-scale interpretation of RP data that allows for deriving models with translation dynamics extracted from the snapshots. We demonstrate the usefulness of this approach by simultaneously determining for the first time per-codon translation elongation and per-gene translation initiation rates of Saccharomyces cerevisiae from RP data for two versions of the Totally Asymmetric Exclusion Process (TASEP) model of translation. We do this in an unbiased fashion, by fitting the models using only RP data with a novel optimization scheme based on Monte Carlo simulation to keep the problem tractable. The fitted models match the data significantly better than existing models and their predictions show better agreement with several independent protein abundance datasets than existing models. Results additionally indicate that the tRNA pool adaptation hypothesis is incomplete, with evidence suggesting that tRNA post-transcriptional modifications and codon context may play a role in determining codon elongation rates.
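The TASEP model underlying this work can be sketched as a stochastic simulation on a one-dimensional codon lattice: ribosomes initiate at a rate, hop rightward only into empty sites, and terminate at the last codon. The lattice length, rates and update scheme below are illustrative choices, not the parameters fitted in the paper (which uses per-codon rates inferred from RP data).

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy TASEP: initiation rate alpha at codon 0, uniform per-codon hop rates
# (the paper instead fits per-codon elongation rates), termination at the
# last codon. All values here are illustrative.
n_codons, alpha, steps = 50, 0.3, 20000
hop = np.full(n_codons, 1.0)
lattice = np.zeros(n_codons, dtype=bool)   # True = ribosome present
completed = 0

for _ in range(steps):
    site = rng.integers(-1, n_codons)       # -1 means an initiation attempt
    if site == -1:
        if not lattice[0] and rng.random() < alpha:
            lattice[0] = True
    elif lattice[site] and rng.random() < hop[site]:
        if site == n_codons - 1:            # termination: protein completed
            lattice[site] = False
            completed += 1
        elif not lattice[site + 1]:         # elongation with excluded volume
            lattice[site] = False
            lattice[site + 1] = True

density = lattice.mean()  # ribosome density, the quantity an RP profile probes
```

The key point mirrored from the abstract is that a measured density alone cannot separate slow elongation from frequent initiation; fitting both rate types requires the dynamics, not just the snapshot.
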
Quantitative Agent Based Model of Opinion Dynamics: Polish Elections of 2015.
Directory of Open Access Journals (Sweden)
Pawel Sobkowicz
Full Text Available We present results of an abstract, agent based model of opinion dynamics simulations based on the emotion/information/opinion (E/I/O) approach, applied to a strongly polarized society, corresponding to the Polish political scene between 2005 and 2015. Under certain conditions the model leads to metastable coexistence of two subcommunities of comparable size (supporting the corresponding opinions), which corresponds to the bipartisan split found in Poland. Spurred by the recent breakdown of this political duopoly, which occurred in 2015, we present a model extension that describes both the long term coexistence of the two opposing opinions and a rapid, transitory change due to the appearance of a third party alternative. We provide quantitative comparison of the model with the results of polls and elections in Poland, testing the assumptions related to the modeled processes and the parameters used in the simulations. It is shown that when the propaganda messages of the two incumbent parties differ in emotional tone, the political status quo may be unstable. The asymmetry of the emotions within the support bases of the two parties allows one of them to be 'invaded' by a newcomer third party very quickly, while the second remains immune to such invasion.
Quantitative MR application in depression model of rats: a preliminary study
Institute of Scientific and Technical Information of China (English)
Wei Wang; Wenxun Li; Fang Fang; Hao Lei; Xiaoping Yin; Jianpin Qi; Baiseng Wang; Chengyuan Wang
2005-01-01
Objective: To investigate the findings and value of quantitative MR in a rat model of depression. Methods: Twenty male SD rats were divided randomly into a model group and a control group (10 rats in each group). The rat depression model was established by separation and chronic unpredictable stress. Behavior was assessed by the open-field test and sucrose consumption. MR images of brain tissue were acquired in vivo with T2- and diffusion-weighted imaging. The changes in body weight and behavior score and the values of T2 and ADC in the ROIs were compared between the two groups. Histological verification of hippocampal neuron damage was also performed under ultramicroscopy. Results: Compared with the control group, T2 values in the hippocampus were prolonged by 5.5% (P < 0.05), and ADC values in the hippocampus and temporal lobe cortex decreased by 11.7% and 10.9% (P < 0.01), respectively, in the model group. Histologic data confirmed severe neuronal damage in the hippocampus of the model group. Conclusion: This study capitalized on diffusion-weighted imaging as a sensitive technique for the identification of neuronal damage in depression, and it provides experimental evidence for MRI in depression research and clinical application.
Directory of Open Access Journals (Sweden)
Aaron Smith
2014-12-01
The accurate characterization of three-dimensional (3D) root architecture, volume, and biomass is important for a wide variety of applications in forest ecology and for better understanding tree and soil stability. Technological advancements have led to increasingly digitized and automated procedures, which have been used to describe the 3D structure of root systems more accurately and quickly. Terrestrial laser scanners (TLS) have successfully been used to describe the aboveground structure of individual trees and stands, but have only recently been applied to the 3D characterization of whole root systems. In this study, 13 recently harvested Norway spruce root systems were mechanically pulled from the soil, cleaned, and their volumes measured by displacement. The root systems were suspended, scanned with TLS from three different angles, and the root surfaces from the co-registered point clouds were modeled with a 3D Quantitative Structure Model to determine root architecture and volume. The modeling procedure facilitated the rapid derivation of root volume, diameters, break-point diameters, linear root length, cumulative percentages, and root fraction counts. The modeled root systems underestimated root system volume by 4.4%. The modeling procedure is widely applicable and easily adapted to derive other important topological and volumetric root variables.
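Quantitative Structure Models typically represent woody structure as chains of fitted cylinders, so total volume reduces to summing cylinder volumes; the error against the displacement measurement is then a simple percent difference. The cylinder representation below is a generic QSM-style sketch, not the specific software the study used.

```python
import math

def qsm_volume(cylinders):
    """Total volume of a root system represented, QSM-style, as a list
    of (radius, length) cylinder segments fitted to the point cloud."""
    return sum(math.pi * r * r * length for r, length in cylinders)

def volume_error_pct(modeled, measured):
    """Signed percent error of modeled volume vs. the displacement
    measurement; the study reports roughly a 4.4% underestimate."""
    return 100.0 * (modeled - measured) / measured

# Hypothetical three-segment root: radii and lengths in metres
v = qsm_volume([(0.05, 1.2), (0.03, 0.8), (0.02, 0.5)])
```

Other reported variables (linear root length, break-point diameters, root fraction counts) fall out of the same cylinder list by summing lengths or filtering on radius.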
A bivariate quantitative genetic model for a linear Gaussian trait and a survival trait
Directory of Open Access Journals (Sweden)
Damgaard Lars
2005-12-01
With the increasing use of survival models in animal breeding to address the genetic aspects of mainly longevity of livestock but also disease traits, the need for methods to infer genetic correlations and to do multivariate evaluations of survival traits and other types of traits has become increasingly important. In this study we derived and implemented a bivariate quantitative genetic model for a linear Gaussian trait and a survival trait that are genetically and environmentally correlated. For the survival trait, we considered the Weibull log-normal animal frailty model. A Bayesian approach using Gibbs sampling was adopted. Model parameters were inferred from their marginal posterior distributions. The required fully conditional posterior distributions were derived, and issues of implementation are discussed. The two Weibull baseline parameters were updated jointly using a Metropolis-Hastings step. The remaining model parameters with non-normalized fully conditional distributions were updated univariately using adaptive rejection sampling. Simulation results showed that the estimated marginal posterior distributions covered well and placed high density at the true parameter values used in the simulation of data. In conclusion, the proposed method allows inferring additive genetic and environmental correlations, and doing multivariate genetic evaluation of a linear Gaussian trait and a survival trait.
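The joint Metropolis-Hastings update of the two Weibull baseline parameters is the one concrete algorithmic block named here; a toy version under a flat prior can be sketched as below. The random-walk proposal, step size, and flat prior are assumptions; the paper's sampler also conditions on genetic effects, which are omitted.

```python
import math
import random

def weibull_loglik(shape, scale, data):
    """Log-likelihood of i.i.d. Weibull(shape, scale) survival times."""
    if shape <= 0 or scale <= 0:
        return float("-inf")  # out of the parameter space
    return sum(
        math.log(shape / scale)
        + (shape - 1) * math.log(t / scale)
        - (t / scale) ** shape
        for t in data
    )

def mh_weibull(data, n_iter=2000, step=0.1, seed=0):
    """Joint Metropolis-Hastings update of the two Weibull baseline
    parameters -- a toy stand-in for one block of the paper's
    Metropolis-within-Gibbs scheme (flat prior, illustrative step)."""
    rng = random.Random(seed)
    shape, scale = 1.0, 1.0
    ll = weibull_loglik(shape, scale, data)
    samples = []
    for _ in range(n_iter):
        prop = (shape + rng.gauss(0, step), scale + rng.gauss(0, step))
        ll_prop = weibull_loglik(*prop, data)
        if math.log(rng.random()) < ll_prop - ll:  # accept/reject
            (shape, scale), ll = prop, ll_prop
        samples.append((shape, scale))
    return samples
```

In the full scheme this joint update alternates with univariate adaptive-rejection-sampling updates of the remaining parameters, cycling until the chain mixes over the posterior.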
A bivariate quantitative genetic model for a linear Gaussian trait and a survival trait.
Damgaard, Lars Holm; Korsgaard, Inge Riis
2006-01-01
With the increasing use of survival models in animal breeding to address the genetic aspects of mainly longevity of livestock but also disease traits, the need for methods to infer genetic correlations and to do multivariate evaluations of survival traits and other types of traits has become increasingly important. In this study we derived and implemented a bivariate quantitative genetic model for a linear Gaussian and a survival trait that are genetically and environmentally correlated. For the survival trait, we considered the Weibull log-normal animal frailty model. A Bayesian approach using Gibbs sampling was adopted. Model parameters were inferred from their marginal posterior distributions. The required fully conditional posterior distributions were derived and issues on implementation are discussed. The two Weibull baseline parameters were updated jointly using a Metropolis-Hastings step. The remaining model parameters with non-normalized fully conditional distributions were updated univariately using adaptive rejection sampling. Simulation results showed that the estimated marginal posterior distributions covered well and placed high density at the true parameter values used in the simulation of data. In conclusion, the proposed method allows inferring additive genetic and environmental correlations, and doing multivariate genetic evaluation of a linear Gaussian trait and a survival trait.